added (string) | created (string) | id (string) | metadata (dict) | source (string) | text (string) | version (string)
---|---|---|---|---|---|---
2018-12-09T12:06:41.819Z
|
2013-11-06T00:00:00.000
|
54979359
|
{
"extfieldsofstudy": [
"Mathematics"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://downloads.hindawi.com/journals/mpe/2013/605981.pdf",
"pdf_hash": "e8df07dd772adc40592aa9ea199709e2a4a57e4c",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46576",
"s2fieldsofstudy": [
"Engineering",
"Environmental Science"
],
"sha1": "e8df07dd772adc40592aa9ea199709e2a4a57e4c",
"year": 2013
}
|
pes2o/s2orc
|
An Adaptive UKF Based SLAM Method for Unmanned Underwater Vehicle
This work proposes an improved unscented Kalman filter (UKF)-based simultaneous localization and mapping (SLAM) algorithm built on an adaptive unscented Kalman filter (AUKF) with a noise statistic estimator. The algorithm addresses the problem that conventional UKF-SLAM algorithms lose accuracy, and may even diverge, when the prior noise statistics are unknown and time-varying. The new SLAM algorithm performs online estimation of the statistical parameters of the unknown system noise by introducing a modified Sage-Husa noise statistic estimator. The algorithm also judges whether the filter is divergent and restrains potential filtering divergence using a covariance matching method. This approach reduces state estimation error and effectively improves the navigation accuracy of the SLAM system. Line feature extraction is implemented through a Hough transform based on the ranging sonar model. Test results based on unmanned underwater vehicle (UUV) sea trial data indicate that the proposed AUKF-SLAM algorithm is valid and feasible and provides better accuracy than the standard UKF-SLAM system.
Introduction
The simultaneous localization and mapping (SLAM) algorithm [1,2] was first proposed by Smith, Self, and Cheeseman in 1988 to provide localization and map building for mobile robots and is now widely used in many different mobile robot systems. The SLAM algorithm was first used for unmanned underwater vehicle (UUV) navigation in September 1997 in a collaborative project between the Naval Undersea Warfare Center (NUWC) and Groupe d'Etudes Sous-Marines de l'Atlantique (GESMA). The objective of the trial using SLAM was to have a UUV, starting in an unknown location and without previous knowledge of the environment, build a map using its onboard sensors and then use the same map to compute the robot's location.
Given the recent wider use of UUVs in the marine environment, it is notable that truly autonomous SLAM-based UUV navigation is still lacking. Developing SLAM-based UUV navigation is challenging due to factors such as system complexity, weak perception, unstructured environments, and system noise that increases and has unknown statistical characteristics.
SLAM solutions can be divided into two categories: nonprobabilistic estimation methods and methods based primarily on probability estimation; Table 1 gives a summary of SLAM methods. The first probability estimation method developed [3] was the EKF-based SLAM algorithm, which suffers from difficulty in solving data association problems, high computational costs due to the calculation of Jacobian matrices, and inconsistency due to errors introduced during linearization. In an effort to reduce storage and computational requirements, Thrun et al. [4] proposed a SLAM algorithm based on a sparse extended information filter. However, this method is only applicable to creating feature maps and requires features that are easy to extract and distinguish in the environment, such as point, line, and face features. More recently, Montemerlo et al. [5] proposed a Rao-Blackwellized particle filter based SLAM method (FastSLAM), in which each particle stores its own map and robot positioning result. However, this algorithm incurs calculation and storage costs proportional to the number of particles and cannot avoid the disadvantages of particle degradation and sample dilution.
The unscented Kalman filter (UKF) [6,7] is a nonlinear filter based on the unscented transform (UT). For nonlinear systems, the UKF avoids linearization of the state and measurement equations. Additionally, the UKF principle is simple and easy to implement because it does not require the calculation of Jacobians at each time step, and the UKF is accurate up to second-order moments in the propagation of the probability distribution, whereas the EKF is accurate only up to the first-order moment [8]. However, when a UKF is used in underwater SLAM, it requires an accurate mathematical model of the system and a priori knowledge of the noise statistics. In many practical applications, the prior statistics of the noise are unknown or inaccurate. Even if this information is known, the statistical characteristics easily change due to internal and external uncertainties, reflecting strong time-varying behavior. Thus, a conventional UKF lacks the adaptive ability to respond to changes in the noise statistics, which can lead to large estimation errors and even divergence when the noise statistics are unknown and time-varying [9][10][11].
To solve this problem, we apply an adaptive UKF (AUKF) filtering algorithm to underwater SLAM. By introducing a modified (i.e., suboptimal and unbiased) Sage-Husa maximum a posteriori (MAP) noise statistic estimator, the new algorithm provides online estimation of the statistical parameters of the unknown system noise and restrains filtering divergence. In addition, the method uses a covariance matching criterion to determine whether the filter is converging. When the filter diverges, the proposed method introduces an adaptive fading factor to correct the prediction error covariance, adjust the filter gain matrix K, and suppress filter divergence, thus enhancing the fast tracking capability of the filter. Test results based on UUV sea trial data indicate that the proposed AUKF-SLAM algorithm provides better navigational accuracy than a conventional UKF-SLAM algorithm.
Adaptive UKF Algorithm
2.1. UKF Algorithm. Unscented Kalman filters were first proposed by Julier and Uhlmann [12]. The algorithm's main principle is to select a set of sampling points (sigma points) in the state distribution that completely capture the true mean and covariance of the state distribution. These sigma points are then propagated through the nonlinear function to obtain the corresponding transformed point set, from which the mean and covariance after the transformation are computed.
The mean, estimate variance, and measurement variance obtained from the unscented transform are introduced into the recursive Kalman filtering process to obtain the UKF. The main steps of a UKF algorithm are as follows.
(1) Initialization. (2) For k ∈ {1, . . ., ∞}: (i) calculate sigma points; (ii) UKF prediction; (iii) UKF update, where Q is the system noise covariance, R is the observation noise covariance, K is the Kalman gain, and W is the weight of the mean and covariance.
Here x_k represents the state vector of the system at time k, u_{k-1} represents the control input, z_k represents the measurement of the state at time k, and w_{k-1} and v_k are independent white Gaussian noise sequences with time-varying means q and r and covariances Q and R, respectively. Note that Q is a nonnegative definite symmetric matrix, while R is a positive definite symmetric matrix.
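To make the prediction/update cycle above concrete, the following is a minimal sketch of one UKF step, assuming the standard Julier-Uhlmann sigma-point formulation with additive noise; the function names, the simple kappa weighting scheme, and the placeholder models f and h are illustrative assumptions rather than the authors' implementation.

```python
# Minimal sketch of one UKF prediction/update cycle (additive-noise form).
import numpy as np

def sigma_points(x, P, kappa=0.0):
    """Generate 2n+1 sigma points and their weights for mean x, covariance P."""
    n = x.size
    S = np.linalg.cholesky((n + kappa) * P)       # matrix square root of (n+kappa)*P
    pts = [x] + [x + S[:, i] for i in range(n)] + [x - S[:, i] for i in range(n)]
    w = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    return np.array(pts), w

def ukf_step(x, P, z, f, h, Q, R, kappa=0.0):
    """One UKF cycle with process model f, measurement model h, noises Q and R."""
    # --- prediction ---
    X, w = sigma_points(x, P, kappa)
    Xp = np.array([f(xi) for xi in X])            # propagate sigma points through f
    x_pred = w @ Xp
    P_pred = Q + sum(wi * np.outer(d, d) for wi, d in zip(w, Xp - x_pred))
    # --- update ---
    Zp = np.array([h(xi) for xi in Xp])           # predicted measurements
    z_pred = w @ Zp
    Pzz = R + sum(wi * np.outer(d, d) for wi, d in zip(w, Zp - z_pred))
    Pxz = sum(wi * np.outer(dx, dz) for wi, dx, dz in zip(w, Xp - x_pred, Zp - z_pred))
    K = Pxz @ np.linalg.inv(Pzz)                  # Kalman gain
    x_new = x_pred + K @ (z - z_pred)
    P_new = P_pred - K @ Pzz @ K.T
    return x_new, P_new
```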
Emphasis should be placed on recent data when estimating time-varying noise statistics; that is, the algorithm should gradually forget data that are too old. In this paper, we adopt a fading-memory weighted-exponent method to design the time-varying noise statistics estimator. Following [13], the weighting coefficients are chosen to decay exponentially with the age of the data. The resulting recursive fading-memory estimator of the time-varying noise statistics q_k, Q_k, r_k, and R_k is driven by the output residual sequence of the UKF, ε_k = z_k − h(x̂_{k|k−1}).
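For illustration, the following is a minimal sketch of a fading-memory (Sage-Husa type) estimator for the measurement-noise mean and covariance, assuming one common suboptimal, unbiased variant in which the weight d_k = (1 − b)/(1 − b^(k+1)) decays the influence of old data; the exact recursion used in the paper is its equation (16) and may differ in detail.

```python
# Fading-memory estimator of the measurement-noise statistics (common variant).
import numpy as np

class FadingMemoryNoiseEstimator:
    def __init__(self, r0, R0, b=0.95):
        self.r = np.asarray(r0, float)   # estimated measurement-noise mean
        self.R = np.asarray(R0, float)   # estimated measurement-noise covariance
        self.b = b                       # forgetting factor, 0 < b < 1
        self.k = 0                       # step counter

    def update(self, residual, Pzz_no_R):
        """residual: z_k - h(x_pred); Pzz_no_R: predicted innovation covariance
        from the sigma points, excluding the R term."""
        self.k += 1
        d = (1.0 - self.b) / (1.0 - self.b ** (self.k + 1))   # fading-memory weight
        eps = residual - self.r
        self.r = (1.0 - d) * self.r + d * residual
        self.R = (1.0 - d) * self.R + d * (np.outer(eps, eps) - Pzz_no_R)
        return self.r, self.R
```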
Filter Divergence Suppression Method.
Since suboptimal Sage-Husa filters often diverge, in this paper we judge whether filtering divergence is occurring according to convergence conditions derived from the covariance matching criterion. If the convergence conditions are satisfied, the Sage-Husa algorithm is applied. If filtering divergence occurs, the proposed method calculates an adaptive weighting coefficient through a fading factor formula and applies this coefficient to correct P_{k|k−1}; thus, the role of the observations is strengthened and the filter divergence is suppressed.
The convergence condition can be written in terms of the residual sequence, where the threshold involves a preset adjustable coefficient (≥ 1) and the residual v_k = z_k − h(x̂_{k|k−1}).
The prediction error covariance P_{k|k−1} is corrected by an adaptive weighting coefficient calculated from the fading factor formula [14,15], where tr(⋅) denotes the matrix trace. Here, 0 < b ≤ 1 is a forgetting factor (typically about 0.95) used to increase the filter's tracking ability: larger values of the factor give a smaller weight to information before time k and make the effect of the residual vector more prominent. This method tracks sudden state changes strongly while still tracking slowly varying states and abrupt changes once the filter has reached a steady state.
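A hedged sketch of the covariance matching convergence test and the fading-factor correction is given below; the specific criterion, the running residual-covariance estimate, and the default constants are common forms from the adaptive-filtering literature (see the paper's references [14,15]) and are assumptions rather than the paper's exact equations (17)∼(19).

```python
# Covariance-matching convergence test and adaptive fading factor (sketch).
import numpy as np

def is_convergent(residual, Pzz, gamma=1.0):
    """Covariance matching criterion: the squared residual should not exceed
    gamma times the trace of its theoretical covariance."""
    return float(residual @ residual) <= gamma * np.trace(Pzz)

def fading_factor(residual, Pzz, prev_C=None, b=0.95):
    """Adaptive fading factor lambda >= 1 computed from the residual sequence;
    prev_C is the running estimate of the residual covariance."""
    C = (np.outer(residual, residual) if prev_C is None
         else (b * prev_C + np.outer(residual, residual)) / (1.0 + b))
    lam = max(1.0, np.trace(C) / np.trace(Pzz))
    return lam, C

# Usage inside a filter step (sketch):
# if not is_convergent(res, Pzz, gamma):
#     lam, C = fading_factor(res, Pzz, C)
#     P_pred = lam * P_pred        # strengthen the role of new observations
```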
(3) Convergence Judgment. At this stage, the method uses (17) to judge whether the filter is converging. If the filter is converging, move to the next step; otherwise correct P_{k|k−1} using (18)∼(19).
(5) Recursively Estimate System Statistical Noise Characteristics. Recursively estimate the system's statistical noise characteristics according to (16).
AUV Nonlinear Dynamic Model.
As seen from Figure 1, the global coordinate system is established at the initial location and bow direction of the UUV, and the position and heading of the UUV are described within this system. The UUV vessel frame and the sonar coordinate system are also defined, together with a North-East coordinate system whose North direction is based on magnetic North. The heading in the global frame is obtained from the heading of the UUV as measured by its OCTANS plus the initial heading offset.
Note that there is a distance offset (1.85 m in one direction and 0.65 m in the other, with a negligible deviation in the third direction) between the mounting positions of the three ranging sonar and the UUV's center of gravity. The coordinate systems are shown in more detail in Figure 2. We assume that the three ranging sonar are mounted together (i.e., their mounting positions coincide), which gives a mounting deviation in the vehicle frame of 1.85 m along one axis and 0.65 m along the other. The left and right sonar are mounted at ±7.5° relative to the middle sonar. According to Figure 2, using the ranging sonar as a reference, the corresponding frame transformation is obtained. The method uses a simple 4-DOF (degree of freedom) constant velocity kinematic model to predict how the state will evolve from time k − 1 to time k, where the state comprises the position and heading of the UUV in the global frame and the linear and angular velocities of the UUV in the vehicle frame, with a fixed sample time. In this model, n(k) is the portion of the system noise with time-varying mean and covariance, and the covariance of the noise vector is expressed using a delta function.
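As an illustration of such a model, the sketch below implements a generic 4-DOF constant-velocity prediction, assuming the state is the pose [x, y, z, psi] in the global frame and the body-frame velocities are [u, v, w, r]; the symbol names and state ordering are assumptions, not necessarily those of the paper.

```python
# Generic 4-DOF constant-velocity prediction step (sketch).
import numpy as np

def predict_pose(pose, vel_body, dt, noise=None):
    x, y, z, psi = pose
    u, v, w, r = vel_body
    c, s = np.cos(psi), np.sin(psi)
    pose_next = np.array([
        x + (u * c - v * s) * dt,   # body velocities rotated into the global frame
        y + (u * s + v * c) * dt,
        z + w * dt,
        psi + r * dt,
    ])
    if noise is not None:           # additive system noise with time-varying mean
        pose_next += noise
    return pose_next
```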
Feature Model.
The feature data used in this paper are derived from measurements of the structured port environment, so the algorithm selects line features with which to build the feature map. The line feature model used in the proposed method represents each line by its range and bearing parameters.
3.1.3. Observation Model. The UUV uses a Doppler velocity log (DVL), compass, and pressure sensor to provide direct measurements of the vehicle's velocity, heading, and depth, respectively. Thus, the observation model is linear, and the common linear model applies, where z is the observation vector, m is white Gaussian noise with zero mean, and H varies with changes in the measurement.
3.1.4. Ranging Sonar Model. The transmission beam of a ranging sonar creates a conical surface, which reduces to a fan in a two-dimensional plane. Ribas et al. [16] proposed an underwater mechanical scanning imaging sonar model based on the terrestrial single-beam ranging sonar model [17]. In this paper, we determine the location of line features in the environment using the measurement data from the ranging sonar.
In Figure 3, the horizontal beam width, the incidence angle, and the range at which the bin was measured from the sensor are indicated. A reference frame defines the position of the transducer head at the moment the sensor obtains a particular bin, and a transformation defines the position of this frame with respect to the chosen base reference. Both the transformation and the measured range are obtained from information in the data buffer.
To emulate the effect of the horizontal beam width, the model uses a set of bearing values at a given resolution within an aperture of half the beam width on either side of the direction in which the transducer is oriented. Each value represents a bearing parameter for a line tangent to the arc that models the horizontal beam width. As stated earlier, not only are all lines tangent to the arc candidates for line features, but lines within the maximum incidence angle limits are also considered candidates. For this reason, the algorithm takes each bearing value at a given resolution within the incidence-angle aperture for each value within the beam-width aperture. The result is a two-dimensional set of candidate bearings for the given apertures; these are the bearings for a set of lines representing all the possible candidates compatible with the bin. Given the geometry of the problem, the range parameter corresponding to each candidate bearing follows directly.
Figure 3: Ranging sonar model to distinguish line features.
Feature Extraction.
In the sea trial, the application environment of the SLAM algorithm is a cross-section of ports, dams, and other structured environments. Note that the intersection of the sonar scanning surface with a vertical wall or other vertical extension of a surface creates line features in the resulting acoustic image; the parameters of such static line features do not change as the sonar position changes. The most popular line feature extraction methods include the split-and-merge method [18] proposed by Pavlidis, the RANSAC method [19] proposed by Fischler and Bolles, and the Hough transform (HT) [20] proposed by Illingworth and Kittler. Among these, the HT is the most widely used, and many improvements to HT line feature extraction have been developed [21][22][23]. In this paper, we use the HT method to extract line features from the ranging sonar data. The HT is a voting scheme in which the distance values of each ranging sonar image accumulate evidence for the presence of a line feature. Cells that receive the most votes in the HT space correspond to line features in the actual environment.
Data Processing of Ranging Sonar Data.
A data buffer helps to separate and manage the stream of measurements produced by the continuous arrival of the sonar beams. The buffer stores variables such as the range and bearing for each bin used in the voting, the position and heading in the North-East coordinate system, and the transmit angle of the beams, so that the range-bearing parameter pairs used to represent line features can be extracted with the HT.
The steps to process the data set from each ranging sonar scan [24,25] are given below, using the left sonar as an example. First, the data buffer is set up, the data are loaded with the range values, and a 0-1 matrix is built in which units without range values are set to 0 and units with range values are set to 1. In the second step, the transmit angle of the sonar, the time, the position and heading of the UUV, and the position and transmit angle of the sonar in the corresponding coordinate frames are stored in the data buffer. The third step defines the base frame as the current position of the sonar head when the voting is performed. Finally, the position and heading of the sonar instrument in the base frame at every moment are acquired and stored in the data buffer.
Hough Transform.
There are three steps to extract line features from the sonar image data. First, the data from all three sonar instruments are loaded, and the distance resolution, angle resolution, and threshold value are defined. Second, the accumulator is defined, and the index values of the nonzero elements of the accumulator are found. Finally, we use (27)∼(29) to vote, and the range-bearing parameter pairs that receive the most votes are used to represent the detected line features in the base frame.
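The voting scheme can be sketched as follows, assuming the sonar returns have already been projected to (x, y) points in a common base frame; the accumulator layout and the 5°/1 m resolutions follow the test settings described later in the paper, while the function name and the maximum range are illustrative assumptions.

```python
# Hough-transform voting over (rho, theta) cells for line detection (sketch).
import numpy as np

def hough_lines(points_xy, rho_res=1.0, theta_res_deg=5.0, rho_max=100.0):
    thetas = np.deg2rad(np.arange(0.0, 180.0, theta_res_deg))
    rhos = np.arange(-rho_max, rho_max + rho_res, rho_res)
    acc = np.zeros((len(rhos), len(thetas)), dtype=int)
    for x, y in points_xy:                          # each sonar return votes
        for j, th in enumerate(thetas):
            rho = x * np.cos(th) + y * np.sin(th)   # normal (rho, theta) form of a line
            i = int(round((rho + rho_max) / rho_res))
            if 0 <= i < len(rhos):
                acc[i, j] += 1
    i, j = np.unravel_index(np.argmax(acc), acc.shape)
    return rhos[i], np.rad2deg(thetas[j]), acc      # strongest (rho, theta) pair
```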
Data Association.
Once the model has extracted line features in the environment based on the HT algorithm, it needs to create an environment map and improve the state estimation of the UUV by fusing the detected line features. The next step is therefore data association [26]: determining whether a measured line corresponds to one of the features already existing in the map, in which case it is used to update the system, or whether it is a new line that has to be incorporated into the map. To make this distinction, the widely used individual compatibility nearest neighbour data association algorithm is applied to select the best candidate.
Given the transformation between the base and vehicle frames, the position of the ith line measurement can be expressed in the vehicle frame. If a line feature already exists in the map, its position can likewise be expressed in the vehicle frame, and the line feature corresponds to the ith line measurement when the two agree. Here the line feature parameters are expressed in the respective frames of reference, and s is zero-mean white noise with covariance R affecting the line feature observation.
The proposed method uses an innovation term to quantify the discrepancy between the measurement and its prediction, with its associated covariance matrix S. To determine whether the correspondence is valid, an individual compatibility (IC) test [27] using the Mahalanobis distance is carried out, with degrees of freedom equal to the dimension of the measurement function and a desired confidence level. Data association is only performed when a line feature is detected by the HT. If the data association is successful, that is, the line feature already exists in the map, then the model updates the state estimate. Otherwise, state augmentation is carried out, and the new measurement is added to the current state vector as a new feature. However, the algorithm cannot do this directly because the new feature is represented in the base frame, so it must first perform a change of reference.
. . . The DVL can measure a UUV's current velocity, bottom tracking speed, and so on. However, in the sea trial the DVL was only used to measure the bottom track speed, while OCTANS was used to measure the UUV's heading in real time, that is, the angle between the bow of the UUV and magnetic North. The pressure sensor provided depth data by measuring the water pressure, and three ranging sonar provided online environment perception and measurement. The three ranging sonar mounted in the horizontal frame are shown in Figure 5.
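A minimal sketch of the IC gate is shown below, assuming the innovation and its covariance S have already been computed; the use of SciPy's chi-square quantile for the threshold is an implementation choice, not the authors' code.

```python
# Individual compatibility gate using the Mahalanobis distance (sketch).
import numpy as np
from scipy.stats import chi2

def individually_compatible(innovation, S, confidence=0.95):
    d2 = float(innovation @ np.linalg.solve(S, innovation))   # squared Mahalanobis distance
    gate = chi2.ppf(confidence, df=innovation.size)           # threshold at the desired level
    return d2 <= gate

# Nearest-neighbour association (sketch): among compatible map features,
# pick the one with the smallest d2; otherwise treat the line as a new feature.
```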
AUKF-SLAM Algorithm Verification Based on Sea Trial Data
A general description of the AUKF-SLAM algorithm is given in Figure 6.
Acquisition of Embankment Measurement Points Using Ranging Sonar.
Given a true trajectory as provided by GPS, we can obtain embankment measurement features by fusing this GPS information with the ranging sonar data, using the distance values and mounting angles of the three sonar instruments. For the sea trial, the mounting angles were 7.5°, 0°, and −7.5°, corresponding to the left, middle, and right sonar, respectively. The relationship between the sonar and vehicle frames is shown in Figure 2; from the position of a measured point in the sonar frame and the position of the sonar relative to the vehicle, the position of the point in the global frame is obtained with the corresponding coordinate transformation. Figure 7 gives the embankment measurement, where the green, red, and blue points were acquired by the left, middle, and right sonar, respectively.
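As an illustration, the sketch below chains the two transformations (sonar frame to vehicle frame using the mounting offset and angle, then vehicle frame to global frame using the UUV pose); the 1.85 m/0.65 m offsets and ±7.5° mounting angles are taken from the text, while the function name and argument layout are assumptions.

```python
# Project a ranging-sonar return into the global frame (sketch).
import numpy as np

def sonar_hit_to_global(rng, mount_angle_deg, vehicle_pose,
                        mount_offset=(1.85, 0.65)):
    """rng: measured range; mount_angle_deg: sonar mounting angle in the vehicle
    frame; vehicle_pose: (x, y, psi) of the UUV in the global frame (psi in rad)."""
    a = np.deg2rad(mount_angle_deg)
    # point in the vehicle frame: along the sonar axis, shifted by the mounting offset
    px = mount_offset[0] + rng * np.cos(a)
    py = mount_offset[1] + rng * np.sin(a)
    x, y, psi = vehicle_pose
    c, s = np.cos(psi), np.sin(psi)
    return np.array([x + c * px - s * py, y + s * px + c * py])

# e.g. left sonar (+7.5 deg), 12 m return, vehicle at (10, 5) heading 30 deg:
# sonar_hit_to_global(12.0, 7.5, (10.0, 5.0, np.deg2rad(30.0)))
```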
Line Feature Extraction of the Embankment Based on the Ranging Sonar Model.
The proposed model extracts line features using an HT derived from the ranging sonar model under the following assumptions: (a) the angle resolution of the HT space is 5° and the distance resolution is 1 m; (b) the largest number of votes is selected as the threshold; (c) a 10° arc is used to model the horizontal beam width of the ranging sonar; (d) uncertainty in the vertical beam width is not considered; (e) an echo signal is assumed to exist only when the transmit beam of the ranging sonar is parallel to the vertical extension surface, that is, when the incidence angle is 0°.
In the sea trial the algorithm extracted six line features, L1∼L6. The range-bearing parameters, time, and position of the UUV when each feature was detected are given in Table 2.
Test Validation Based on Sea Trial Data.
To verify the performance of the AUKF-SLAM algorithm, we compared the results of tests using the AUKF-SLAM, UKF-SLAM, and EKF-SLAM algorithms against UUV trial data, assuming in all cases that the statistical properties of the system noise were unknown.
Test Conditions.
The actual statistical characteristics of the system noise during the field trial were unknown, so for this work we assumed that the actual system noise followed two laws, one time-varying and one constant. Using these laws, we conducted both a time-varying noise test and a constant noise test; Table 3 gives the noise laws used. The heading measured during the trial is referenced to magnetic North and must be transformed into the heading in the global coordinate system for state updating; the heading is shown in Figure 16.
(1) Performance Analysis of the AUKF-SLAM Algorithm. We calculate the estimation error of the algorithm in the East and North directions at each time step. The results indicate that once the Sage-Husa UKF-SLAM tends to diverge according to the convergence conditions, the adaptive weighting coefficient is introduced to correct the covariance and preserve the tracking ability.
(2) Root-Mean Square Error. We use the root-mean square error (RMSE) of the position to compare the performance of the various nonlinear filters, where N is the total number of running steps and (x_k, y_k) and (x̂_k, ŷ_k) are the true position and estimated position, respectively, of the UUV at time k. In the time-varying noise test, the RMSE of the AUKF-SLAM algorithm is smaller than the RMSE of the UKF-SLAM algorithm by 1.9152 m; in the constant noise test, the AUKF-SLAM RMSE is smaller by 0.9855 m.
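For reference, a minimal sketch of the position RMSE, assuming the usual definition over N steps with true positions (x, y) and estimates (x̂, ŷ):

```python
# Position RMSE over a trajectory (sketch).
import numpy as np

def position_rmse(xy_true, xy_est):
    err = np.asarray(xy_true, float) - np.asarray(xy_est, float)   # shape (N, 2)
    return float(np.sqrt(np.mean(np.sum(err ** 2, axis=1))))
```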
From the above analysis, we can see that the AUKF-SLAM algorithm has good tracking ability and produces an RMSE smaller than what the other algorithms achieve for both time-varying and constant system noise. Moreover, the AUKF-SLAM algorithm produces a smaller RMSE in the presence of time-varying system noise than when operating in a system with constant noise.
Figure 17 shows a comparison of the line features extracted from the environment and the feature measurement of the embankment.
Conclusion
The proposed AUKF-SLAM algorithm adopts an improved Sage-Husa suboptimal unbiased maximum a posteriori noise statistic estimator to estimate the unknown system noise. The algorithm estimates and corrects the statistical character of the noise in real time and decreases the estimation error. At the same time, the algorithm judges whether the filter is converging and, when it diverges, introduces an adaptive forgetting factor to correct the predicted covariance, adjust the Kalman gain, and restrain the divergence of the filter, thereby increasing the algorithm's fast tracking ability. The AUKF-SLAM algorithm provides a new method for simultaneous underwater localization and mapping in an unknown environment. Navigation assistance based on the proposed AUKF-SLAM algorithm can help UUVs fulfil missions requiring marine environment monitoring, marine terrain inspection, and long-term underwater tasks.
Figure 1: Global coordinate system, the UUV vessel frame system and the sonar coordinate system.
4.1. Test Conditions. The data used in the tests conducted in this work come from a sea trial completed in October 2010 near Dalian Xiaoping Island using a UUV developed by the authors. The navigation route of the UUV was from point A to point B in Figure 4. As configured for the trials, the UUV carried a number of different sensors including a DVL, OCTANS, a depth sensor, and three ranging sonar mounted in the horizontal frame as a single set to observe the environment. The initial position of the UUV (point A) was longitude 121.5231°, north latitude 38.8271°, and the trial ended at longitude 121.5083°, north latitude 38.8328° (point B). The initial heading of the UUV was −70.70°. During the trial, the UUV stayed near the surface of the water so that GPS was available throughout. The total navigation time was 17 minutes and 6 seconds.
Figure 4: Satellite image of the sea trial test area.
Figure 5: The ranging sonar mounted in the horizontal frame.
Figure 7: Feature measurement of the embankment.
Figure 10: Comparison of position error in the North direction with time-varying noise.
Figure 11: Adaptive weighting coefficient of AUKF-SLAM algorithm in time-varying noise test.
Figure 12: Comparison of the trajectory estimations with constant noise.
Figure 14: Comparison of position error in the North direction with constant noise.
Figure 16: Heading of the UUV during the field trial.
Figure 17: Comparison of line features extracted from the environment and the feature measurement of the embankment.
Table 1: Summary of SLAM methods.
Table 3 gives the results of the system noise tests and the resulting laws, where k is the step number. The observation noise is R = diag([0.04², 0.04², 0.04², 0.04², 0.04²]).
The estimation errors in the East and North directions at time k are defined as the differences between the true position (x_k, y_k) and the estimated position (x̂_k, ŷ_k) of the UUV at that time; their absolute values give the error magnitudes in the East and North directions. Obviously, small error magnitudes indicate higher accuracy of the filtering algorithm. Figures 8∼10 and Figures 11∼14 show that the East and North errors for the proposed AUKF-SLAM algorithm are lower than the estimation errors given by the other algorithms. From Figures 11 and 15, it can be seen that the adaptive weighting coefficient is greater than one at certain times.
Table 2: Extracted line feature parameters.
Table 3: Change law of system noise.
Table 4: Comparison of RMSE.
Table 4 gives the RMSE for each of the tested algorithms. From the table, we can see that the RMSE of the AUKF-SLAM algorithm is the smallest for both time-varying and constant system noise. The RMSE of the AUKF-SLAM algorithm in the time-varying noise case is smaller than its RMSE in the constant noise scenario by 2.3534 m.
|
v3-fos-license
|
2018-04-03T04:46:08.218Z
|
2015-02-03T00:00:00.000
|
5044579
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0117443&type=printable",
"pdf_hash": "e27fd3f23a2fe61f11c63748556897fdbf27f9ac",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46577",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "e27fd3f23a2fe61f11c63748556897fdbf27f9ac",
"year": 2015
}
|
pes2o/s2orc
|
Predictors of Barefoot Plantar Pressure during Walking in Patients with Diabetes, Peripheral Neuropathy and a History of Ulceration
Objective Elevated dynamic plantar foot pressures significantly increase the risk of foot ulceration in diabetes mellitus. The aim was to determine which factors predict plantar pressures in a population of diabetic patients who are at high-risk of foot ulceration. Methods Patients with diabetes, peripheral neuropathy and a history of ulceration were eligible for inclusion in this cross sectional study. Demographic data, foot structure and function, and disease-related factors were recorded and used as potential predictor variables in the analyses. Barefoot peak pressures during walking were calculated for the heel, midfoot, forefoot, lesser toes, and hallux regions. Potential predictors were investigated using multivariate linear regression analyses. 167 participants with mean age of 63 years contributed 329 feet to the analyses. Results The regression models were able to predict between 6% (heel) and 41% (midfoot) of the variation in peak plantar pressures. The largest contributing factor in the heel model was glycosylated haemoglobin concentration, in the midfoot Charcot deformity, in the forefoot prominent metatarsal heads, in the lesser toes hammer toe deformity and in the hallux previous ulceration. Variables with local effects (e.g. foot deformity) were stronger predictors of plantar pressure than global features (e.g. body mass, age, gender, or diabetes duration). Conclusion The presence of local deformity was the largest contributing factor to barefoot dynamic plantar pressure in high-risk diabetic patients and should therefore be adequately managed to reduce plantar pressure and ulcer risk. However, a significant amount of variance is unexplained by the models, which advocates the quantitative measurement of plantar pressures in the clinical risk assessment of the patient.
Introduction
In patients with diabetes mellitus, foot ulceration as a complication of the disease is associated with significant burden and increased mortality [1]. Elevated plantar pressures during locomotion are known to contribute to the development of diabetic foot ulcers [2][3][4][5][6][7][8][9]. After healing of a foot ulcer, many patients experience ulcer recurrence, and emerging evidence suggests that elevated plantar pressures are also a significant determinant of foot ulcer recurrence [10]. It is therefore recommended that interventions should routinely include targeting of abnormal pressures [11]. But despite the significance of the role of high plantar pressure in ulcer development, its determinants are not well understood in diabetes and we are therefore currently poor at explaining which patients have or will develop increased plantar pressures.
A number of studies have investigated the relationship between clinical and structural variables and either in-shoe or barefoot plantar pressure in diverse diabetes populations demonstrating contrasting results [12][13][14][15][16][17][18][19]. Important factors include presence of foot deformity [12,14,15,18,19], limited joint mobility at the ankle and metatarso-phalangeal joints [13,16], variables related to the presence of peripheral neuropathy [13,17], presence of callus [12,18] and soft tissue thickness [20]. There is controversy surrounding the contribution of body mass as it was reported to have limited predictive value in terms of plantar pressure in subjects with diabetes in one study [14] but an important contribution in another study [12]. It is difficult to directly compare the findings from studies in this area due to the varied methodologies employed. To date, none of the studies in diabetic populations have included prediction of pressures in the midfoot region with the majority focusing on plantar pressure in the forefoot region. Moreover, the types of patient were varied in terms of risk level for ulceration with none of the previous studies specifically investigating patients with a confirmed history of foot ulceration. This group merits close attention in order to better understand those at risk from ulceration as they constitute the highest risk of developing a foot ulcer.
Off-loading plantar pressures is a key target in healing and preventing ulceration in diabetes [21]. After healing of a foot ulcer, many patients experience ulcer recurrence yet there is little evidence on risk factors for this event [22,23]. Emerging evidence suggests that barefoot pressures are a significant determinant of ulcer recurrence that has been identified as related to repetitive stress on the foot [10]. As with the first ulcer episode, more knowledge is required on the underlying mechanisms of high barefoot plantar pressure to improve understanding of ulcer recurrence. Furthermore, despite its clinical importance, the measurement of pressure is not widely implemented in clinical practice. But whether plantar pressure and ulcer risk can be predicted from standard clinical measures, or should be directly measured in a high-risk population remains a question of interest. Therefore the aims of this study were to determine which factors can predict barefoot dynamic plantar pressure in an at-risk population with diabetic neuropathy and a history of ulceration, and to establish recommendations for foot screening and management in this high-risk group.
Patients
Patients with diabetes mellitus, peripheral neuropathy, and a recent history of plantar foot ulceration were eligible for inclusion. The study is a cross sectional study and a sub analysis of data collected in the DIAbetic Foot Orthopedic Shoe (DIAFOS) trial [22]. Exclusion criteria were inability to walk 100m unaided and bilateral amputation proximal to the metatarsals.
Ethics Statement
Ethical approval was obtained from the medical ethics committee of the Academic Medical Centre, University of Amsterdam and all participants provided written informed consent prior to study entry.
Demographic, disease and foot assessment
Demographic information was recorded at study entry including: age, gender, duration of diabetes, glycosylated haemoglobin levels and body mass. Foot deformity was recorded as present or absent with regard to the following: Charcot midfoot deformity, pes planus, pes cavus, hammer toes, claw toes, hallux abducto valgus and amputation (i.e. digit, ray, or forefoot). The scoring of deformity was undertaken during physical examination by one of three trained investigators and confirmed by two teams of two trained observers who scored standardised images of the feet and reached consensus on outcome. The presence of midfoot deformity based on Charcot neuro-osteoarthropathy was additionally verified from the medical records of affected patients. Hallux abducto valgus was defined as lateral deviation of the hallux relative to the first metatarsal, hammer toes as hyperflexion at the proximal interphalangeal joints with corresponding apical ground contact and claw toes as hyperextension at the metatarso-phalangeal (MTP) joints with hyperflexion at the interphalangeal joints of the lesser toes. Pes cavus was defined as a high medial-longitudinal arch and pes planus as a lowered medial-longitudinal arch; both were assessed weight-bearing. Presence of abundant callus at study entry and prior ulceration specific to each region were recorded dichotomously.
Prominent metatarsal heads, defined as palpable bony prominences, were diagnosed based on physical assessment by one of the three trained investigators. Ankle joint range of motion (ROM) was recorded via goniometry in the supine position. The range of dorsiflexion of the hallux was recorded relative to the first metatarsal shaft; bisection lines were drawn medially along the shaft of the first metatarsal and the proximal phalanx of the hallux and measured with a goniometer in the supine position [24]. Weight-bearing dorsiflexion of the hallux was recorded as the maximal angle of dorsiflexion passively achieved relative to the weight-bearing surface. All goniometric measurements were recorded twice and the mean was entered into the analysis. Peripheral neuropathy was confirmed present in each patient by the inability to sense the pressure of a 10 gram Semmes-Weinstein monofilament at minimum one of three locations on the plantar foot or by a vibration perception threshold at the dorsal aspect of the hallux greater than 25 Volts recorded using a Bio-Thesiometer (Biomedical Instrument Company, Newbury, OH, USA) [2].
Plantar pressure analysis
Barefoot plantar pressures during normal walking were recorded using an EMED-X (Novel GmbH, Munich, Germany) pressure platform using the two-step method [25] from four walking trials. The platform has a spatial resolution of four sensors per cm² and was sampled at 70 Hz. Pressures were analysed using Novel multimask software (version 13.3.65) in five distinct regions of the foot based on functional regions and areas susceptible to local deformity: the heel, midfoot, forefoot (i.e. metatarsals), lesser toes and hallux. The mean peak pressure from the four trials in each of the regions was used in the analysis as the outcome variable.
Potential predictor variables
Only variables with a realistic potential contribution to the outcome variable based on indications from the scientific literature were included. Therefore, for example, forefoot deformities were not considered relevant to the heel model [26]. Potential predictor variables included: age, gender, body mass, duration of diabetes, glycosylated haemoglobin levels, vibration perception threshold, presence of abundant callus, ankle joint ROM, hallux dorsiflexion range of dorsiflexion, and the following foot deformities: Charcot midfoot deformity, pes planus, pes cavus, hammer toes, claw toes, prominent metatarsal heads, hallux abducto valgus, and amputation.
Statistical analysis
Statistical analyses were performed using SPSS 20.0 (SPSS Inc., Chicago, IL, USA). Demographic and group characteristics were summarised with the mean (standard deviation, [SD]), median (interquartile range) or number of cases. Differences in peak plantar pressures between presence and absence of dichotomous predictor variables (i.e. all deformities where appropriate, gender, prior ulceration and presence of abundant callus) were assessed for each foot region using Mann-Whitney U tests. Relationships were explored between peak plantar pressures in each region and continuous predictor variables (body mass, age, diabetes duration, vibration perception threshold, ankle and hallux dorsiflexion ROM and glycosylated haemoglobin) using Spearman's correlation. Data were pooled for left and right limbs to increase statistical power and avoid missing relevant feet. The majority of independent predictors entered into the model are at the foot rather than the person level, therefore the anticipated interdependency between limbs is low.
Pressure variables with skewed data were log transformed. Univariate regression analyses were used to explore the relationship of each potential predictor variable with peak plantar pressure in each of the five regions of the foot. Factors with a value of P<0.20 were included in the multivariable linear regression model with backward selection and considered significant at P<0.05. All results were checked with regard to the assumptions of multivariate regression analysis including multicollinearity, normality, homoscedasticity and independence of residuals.
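As an illustration of the screening and backward-selection procedure described above, the following sketch uses ordinary least squares on log-transformed peak pressure; the DataFrame layout, column names, and use of statsmodels are assumptions and only approximate the SPSS workflow reported by the authors.

```python
# Univariate screen (P < 0.20) then backward elimination (retain P < 0.05); sketch.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def backward_selection(df, outcome, candidates, p_enter=0.20, p_remove=0.05):
    y = np.log(df[outcome])                      # log transform skewed pressures
    # univariate screen: keep predictors with P < p_enter
    kept = [c for c in candidates
            if sm.OLS(y, sm.add_constant(df[[c]])).fit().pvalues[c] < p_enter]
    # backward elimination on the multivariate model
    while kept:
        fit = sm.OLS(y, sm.add_constant(df[kept])).fit()
        pvals = fit.pvalues.drop("const")
        worst = pvals.idxmax()
        if pvals[worst] < p_remove:
            return fit                           # all remaining terms significant
        kept.remove(worst)
    return None

# usage (hypothetical column names):
# fit = backward_selection(feet, "peak_pressure_midfoot",
#                          ["charcot", "body_mass", "ankle_rom", "age"])
```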
Group characteristics
171 patients (141 male, 30 female) were recruited from 10 Dutch hospitals between January 2008 and October 2010. Four participants were unable to perform barefoot pressure measurement and five participants had below knee or foot amputation affecting one limb and were unable to provide barefoot plantar pressures on one side. Therefore, a total of four participants and an additional five feet were excluded resulting in 167 participants (138 male, 29 female) contributing 329 feet to the barefoot pressure analysis. The included participants had a mean (SD) age of 63 (10) years with mean (SD) duration of diabetes of 17 (13) years. A variety of foot deformities were present with hammer toes being the most prevalent deformity. Twenty-two feet had Charcot midfoot deformity and amputations ranged from partial digital to transmetatarsal. Prior ulceration occurred across all foot regions; only one case occurred in the heel, therefore this variable was not included in the regression analyses for the heel region. Results for demographic and disease related predictor variables are summarised in Table 1.
Plantar pressure features
Barefoot peak pressures recorded were highest in the forefoot region, median (IQR) 830 kPa (526, 1112), and lowest in the midfoot region, 141 kPa (94, 214) for the entire cohort. Plantar pressure data were not normally distributed in three of the five regions (heel, midfoot and lesser toes) and were log transformed prior to regression analyses. A descriptive summary of all outcome variables is presented for the entire cohort and also grouped by categorical predictor variable for each region of the foot in Table 2. Feet may have presented with multiple deformities, therefore absence of a single deformity in Table 2 does not equate to a foot without deformity. The association of the continuous predictor variables to plantar pressure in each region is presented in Table 3. Due to the distribution characteristics of some of the variables, for consistency all data are presented as median (inter-quartile range) and results are from nonparametric tests.
Regression analyses
In the univariate analyses, all factors, with the exception of vibration perception threshold, were associated (P<0.20) with peak plantar pressure in at least one of the five studied regions of the foot and were therefore entered into at least one of the multivariate models. The factors that remained significant in the multivariate regression analyses for each region are shown in Table 4; the standardised beta weights are presented, which represent the relative contribution of each variable to the explanation of variance in plantar pressure.
Discussion
The aims of this study were to determine which factors could predict barefoot plantar pressures in a high-risk population with diabetic neuropathy and a history of ulceration and to establish recommendations for foot screening and management in this high-risk group. The predictor variables were capable of explaining between 6% and 41% of the variance in peak pressures in the multivariate regression analyses in the five different foot regions. In the midfoot region the most variation in plantar pressures (41%) was explained of all foot regions, with the most significant contribution from the presence of Charcot midfoot deformity. Furthermore, Charcot deformity showed the highest predictor value (Beta coefficient 0.504) of any factor in any of the foot regions studied. In the forefoot region, 31% of variation in pressure was explained with the largest contribution from the presence of prominent metatarsal heads, followed by claw toes. These factors were expected and confirm findings from previous studies with regard to the role of foot deformity in plantar pressure in diabetic patients [12,15,18,19]. In the majority of the five models, local factors such as the presence of foot deformity or a prior foot ulcer were clearly stronger predictors of plantar pressure than global features (age, gender, body mass, duration of diabetes, or vibration perception threshold). Therefore, deformity should be adequately managed in clinical practice to reduce plantar pressure and ulcer risk. Furthermore, the data stress and confirm that the region where the previous foot ulcer was present, remains an important target for pressure relief. However, a large amount of variance in barefoot plantar peak pressure remains unexplained in this high-risk population with diabetes. This suggests that measurement of plantar pressure as a 'surrogate' of foot injury should be an integral part of foot screening for these high-risk patients. Few global factors were significantly associated with pressure in the multivariate models: age in the forefoot model, body mass in the midfoot model and HbA1c in the heel model. Age related changes to stiffness of the plantar soft tissues [27] and reduction in joint motion [28] have been reported and may impact upon forefoot plantar pressure. The general lack of significant relationship found between body mass and plantar pressures is in agreement with previously reported research [14,18]. In one previous study, body mass index showed no significant relationship with peak plantar pressure in the forefoot whereas soft tissue thickness demonstrated a significant inverse association with plantar pressures [20]. The authors postulated this was perhaps a result of those with higher body mass having more subcutaneous tissue [20]. In the current study, body mass remained a significant predictor only in the midfoot model and it did not explain the largest amount of variation relative to the other predictor variables. Contrary to commonly held beliefs, there is little data to support the role of body mass in determining barefoot dynamic plantar pressures in patients with diabetes.
Ankle joint ROM emerged as a significant predictor variable in three of the five models (all regions except lesser toes and hallux). Limited joint mobility in diabetes, also known as diabetic cheiroarthopathy, is associated with collagen glycosylation resulting in proliferation of peri-articular tissue [29]. Reduced ankle joint ROM has been reported in patients with diabetes and has been linked to increased plantar pressures [30] and plantar ulceration [31]. In further support of this association, Achilles tendon lengthening procedures have been shown to be effective in increasing ankle joint dorsiflexion, reducing forefoot plantar pressures and reducing forefoot neuropathic ulcer recurrence [32].
Hallux dorsiflexion ROM was a significant predictor in the hallux and lesser toe models. This confirms previous observations of an association between either static or dynamic hallux ROM and plantar pressure [13,16,33]. Static joint ROM measurements have been reported to have limited ability to predict dynamic joint angular movements [34]. However, a correlation has been reported between passive and active motion at the hallux in patients with diabetes, together with a positive association with peak forefoot pressures [35]. The present study additionally recorded weight-bearing dorsiflexion ROM, a commonly used clinical technique to diagnose functional hallux limitus. Only weight bearing hallux dorsiflexion remained significant in the hallux model suggesting that it should continue to be measured in clinical practice. However, it should be borne in mind that the hallux model explained only 13% of variance in plantar pressures in the studied cohort.
With regard to the lesser-toe pressures, the largest single contribution came from hammer toe deformity which is in agreement with previous research [15]. Interesting in this regard is that toe flexor muscle tendon tenotomy procedures have been successfully employed to heal and prevent apical toe ulcers in patients with diabetes, with post-surgical pressure reduction the most likely mechanism [36]. Hallux abducto valgus was not a significant predictor in the hallux or lesser toe models but this is perhaps related to the fact the deformity predominantly affects the transverse and frontal planes rather than the sagittal.
The models were capable of explaining only between 6% and 41% of variance in barefoot plantar pressures in the studied cohort. Adding factors that were not recorded in this study on clinical correlates may improve predictive value. Data on dynamic gait, such as kinematics and kinetics, is one such factor [37]. Another dynamic variable of interest is walking speed, which has been shown to mediate plantar pressure in the heel and forefoot regions [38]. A study in a non-diabetic population combined both structural and functional factors and was able to predict between 49-57% of variance in plantar pressures, even though outcomes varied considerably across foot regions and structural variables were shown to be more contributory than functional factors [33]. Finally, foot muscle strength and morphology are affected by diabetes and neuropathy and may influence plantar pressure [39]. While adding these factors may improve prediction of plantar pressure, none of these variables are measured in a standard clinical setting and may therefore have limited clinical applicability.
This study was subject to limitations: first, the study was part of a larger trial and pragmatically it was not possible to collect all potential variables that may be of interest in explaining foot pressure. The main interest was to investigate whether prediction of barefoot pressures was possible from standard clinical measures. Secondly, the presence of multiple deformities in the same foot was not controlled for and may have been a potential limitation of the study in contributing to a lower explained variance in the models. Finally, the study focused on a high-risk population, which by virtue of the inclusion criteria may have resulted in masking the importance of certain variables. For example, VPT was not a strong predictor in any of the models, likely because participants who entered the study were all neuropathic. However, the inclusion of only a high-risk sample population was also a key strength of this study due to the morbidity associated with foot ulceration and the high ulcer recurrence rates in this group. Therefore, any new insights add to our current understanding of this high-risk population. Furthermore, this sample of high-risk patients was recruited from ten academic or large community-based Dutch hospitals and therefore the results are generalizable to the high-risk diabetic population.
In conclusion, this study has demonstrated poor to moderate prediction of barefoot plantar pressures in diabetic patients who are at highest risk of developing plantar foot ulcers, those with neuropathy and a history of plantar ulceration. Local factors (such as deformity) were better predictors than global features (such as age or body mass) and deformity should therefore be adequately managed to reduce plantar pressure and ulcer risk. While it is acknowledged that no clear data yet indicate that the screening for barefoot plantar pressures can help to reduce the incidence of foot ulceration, the study results suggest that measurement of plantar pressure as a 'surrogate' of foot injury should be an integral part of foot screening as no single factor in this study has emerged as an adequate proxy measure.
|
v3-fos-license
|
2017-08-13T07:13:44.079Z
|
2017-07-31T00:00:00.000
|
43256185
|
{
"extfieldsofstudy": [
"Computer Science",
"Environmental Science"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/2072-4292/9/8/764/pdf?version=1501512737",
"pdf_hash": "ce7ad6e773db8fec4b04a08a63ae4a5ad1caa36b",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46578",
"s2fieldsofstudy": [
"Environmental Science"
],
"sha1": "d19158100d14e00236e978325f82ea34318e52b8",
"year": 2017
}
|
pes2o/s2orc
|
A Simple Normalized Difference Approach to Burnt Area Mapping Using Multi-Polarisation C-Band SAR
In fire-prone ecosystems, periodic fires are vital for ecosystem functioning. Fire managers seek to promote the optimal fire regime by managing fire season and frequency requiring detailed information on the extent and date of previous burns. This paper investigates a Normalised Difference α-Angle (NDαI) approach to burn-scar mapping using C-band data. Polarimetric decompositions are used to derive α-angles from pre-burn and post-burn scenes and NDαI is calculated to identify decreases in vegetation between the scenes. The technique was tested in an area affected by a wildfire in January 2016 in the Western Cape, South Africa. The quad-pol H-A-α decomposition was applied to RADARSAT-2 data and the dual-pol H-α decomposition was applied to Sentinel-1A data. The NDαI results were compared to a burn scar extracted from Sentinel-2A data. High overall accuracies of 97.4% (Kappa = 0.72) and 94.8% (Kappa = 0.57) were obtained for RADARSAT-2 and Sentinel-1A, respectively. However, large omission errors were found and correlated strongly with areas of high local incidence angle for both datasets. The combined use of data from different orbits will likely reduce these errors. Furthermore, commission errors were observed, most notably on Sentinel-1A results. These errors may be due to the inability of the dual-pol H-α decomposition to effectively distinguish between scattering mechanisms. Despite these errors, the results revealed that burnt areas could be extracted and were in good agreement with the results from Sentinel-2A. Therefore, the approach can be considered in areas where persistent cloud cover or smoke prevents the extraction of burnt area information using conventional multispectral approaches.
Introduction
The effects of wildfires are severe and can include damage to infrastructure and the environment [1,2] as well as contributing to land degradation and affecting global warming due to an increase in CO2 emissions [3][4][5][6][7][8]. Although fires can be disastrous events with significant impacts on infrastructure and the environment, in many ecosystems periodic fires are vital for ecosystem functioning and keeping vegetation species in a healthy condition. In these ecosystems, periodic fires stimulate species diversity, controlling age and influencing nutrient cycles [4]. In fact, without fires, many species cannot persist [9]. An example is the fynbos and renosterveld ecosystems, endemic to the Cape Floristic Region in South Africa's Western Cape Province. Without fires, the individual plant species would die and be replaced by thicket and forest species [9]. To minimise the adverse impact of fires whilst conserving biodiversity, the key ecological aspects of fire management include: (1) fire frequency; (2) fire season and intensity; and (3) fire size [6,7,9][10][11]. In terms of fire frequency, all non-sprouting plants should have a chance to produce sufficient seed before the next burn. Therefore, fires should not occur too frequently nor too infrequently, or even in the wrong season, since it could have devastating impacts on both plant and animal species in the ecosystem. For this reason, fire management practices in the region include the active management of fire recurrence intervals.
In areas where active fire management is required to conserve biodiversity, an accurate description of burnt areas over time and space is required. Specifically, the ability to derive burnt area information using a time-series of data can provide information on veld ages. Veld age maps can be used to identify vegetation stands that should be protected from fires, including immature plants that have not had the opportunity to produce seeds [9]. Additionally, stands that are due for burning can also be identified. An assessment of burnt area can also be useful for a variety of other applications and can contribute to risk management systems [2]. Burnt area assessments can assist in the investigation of trends and patterns of fire occurrence as well as in analysing the drivers of fire events [12]. This information can be used to project potential future patterns of fires with the aim of risk mitigation. Furthermore, an assessment of the burnt area can also assist disaster managers to identify affected areas for disaster recovery and disaster relief.
Due to the importance of fire management in ecologically sensitive regions as well as for risk mitigation, systems have been developed for fire prediction, active fire monitoring, and post-fire damage assessment [4]. Although field observations are considered a vital source of information for these systems, the affected areas are potentially large and frequently in remote locations. This means that field observations can, in some cases, be insufficient in terms of accuracy and coverage to provide reliable data for fire management practices [10]. To address these challenges, considerable amounts of research have been devoted to the extraction of fire-related information from remote sensing data.
Remote sensing data can provide valuable information for fire managers, including indicators of (1) vegetation status; (2) active fire detection; and (3) burnt area assessment [10,13,14]. The synoptic view provided by satellite data, as well as repeat observations, make satellite sensors the only viable way to operationally monitor large or remote areas. Conventionally, optical and multispectral data have been widely used for fire monitoring and burn scar mapping [6,9,15]. However, data acquisition is often limited by the presence of cloud or smoke in actively burning regions. To overcome this limitation, synthetic aperture radar (SAR) data has been used to complement the extraction of burnt area information from multispectral data [1,2,4,[16][17][18].
This paper further explores the feasibility of using multi-temporal multiple-polarisation C-band SAR data for extracting burnt areas in a fynbos region in the Western Cape Province of South Africa.A new, multi-temporal approach to burn scar identification using polarimetric decomposition on quad-pol and dual-pol SAR scenes is introduced.An overview of the principles of SAR data for burnt area identification is provided in Section 2 and the Normalised Difference α Index (NDαI) is formulated in Section 3.An introduction to the study area and the data acquired is provided in Section 4. The results of the burnt area mapping from SAR data are provided and discussed in Section 5. Some observations and relevant conclusions are provided in Section 6.
The Potential for SAR Data Analysis for Burn-Scar Detection
Several studies have investigated multispectral data sources for their ability to map burnt areas in various environments [3,10,14,19].Using a time-series of investigations, these products have been found useful for the compilation of date-of-burn maps to estimate fire season as well as for the estimation of fire recurrence period.Furthermore, the extent and season of the fire could be used to derive indicators of fire intensity [10].Although the maturity of algorithms to derive burnt area from multispectral data has reached a level where it is operationally used in various fire management systems [10], data acquisition is frequently affected by the presence of cloud cover or smoke plumes in actively burning areas [3,5].Furthermore, spectral overlaps between burnt areas and shadows, water bodies and unburnt canopies as well as vegetation regrowth in previously burnt areas can cause difficulties in discriminating between burnt and unburnt areas [5].Therefore, burnt area mapping algorithms are generally focused on areas with uniform topography and vegetation characteristics [5].
To overcome the limitations of relying solely on multispectral data for burnt area detection, the use of SAR data for the mapping of burnt areas has also been considered [1,2,4,17].This is because SAR data are unaffected by cloud and smoke cover at the time of data acquisition, allowing for the extraction of information in these conditions [1,3].SAR is therefore considered to be complementary to multispectral data for burnt area mapping [1].The potential of using SAR for the mapping of burnt areas lies in the sensitivity of SAR backscatter to vegetation structure and biomass.In particular, the removal of leaves and branches of vegetation due to fire would lead to a change in SAR backscatter [19].Studies on the use of SAR data for burnt area detection generally rely on the analysis of SAR backscatter at various polarisations and its variation between burnt and unburnt areas [1,2,5] using single-pass [1,5] or multi-temporal analysis [3] approaches.In general, SAR backscatter was found to be affected by many factors including moisture conditions, surface roughness and biomass [4].Under different conditions, SAR backscatter was found to exhibit either an increase or decrease associated with burnt conditions depending on the region under investigation, the incidence angle of the sensor and the surface conditions [5,19].Therefore, the identification of burnt areas using a universal backscatter-based algorithm would be complicated.In other cases, the backscatter difference between burnt and unburnt land-cover classes were insufficient to identify burnt areas with a high degree of confidence [5].Another challenge for burnt area extraction using SAR data is that SAR change detection is generally considered to be a challenging task due to SAR speckle effects, complex textures and a general heterogeneous appearance [19][20][21].To address these challenges, region-based and object-orientated approaches based on image segmentation have been found to deal with speckle effectively [19][20][21][22].In one investigation using ALOS PALSAR data, a normalised difference backscatter index approach was tested where the results were used as input into an image segmentation and object-based classification approach to discriminate between burnt and unburnt areas [19].The segmentation and classification of objects, rather than single pixels, were found to effectively deal with speckle effects.However, misclassifications remained present due to similar temporal variations in SAR backscatter being observed for unburnt, un-vegetated or low-vegetation (such as grassland and agriculture) and burnt areas [19].
With the increase in the availability of multiple-polarisation SAR data, several investigations have also considered SAR polarimetry for its potential contribution to burnt area investigations [16].The field of SAR polarimetry investigates the backscatter behaviour of surfaces using multiple-polarisation data [23].The SAR backscatter in different polarisations is sensitive to the shape, orientation and dielectric properties of scattering elements [24].This sensitivity allows for the identification and separation of scattering mechanisms by investigating the differences in polarimetric signatures [24][25][26][27].Several coherent and incoherent scattering target decomposition theorems have been developed with the objective to extract information about scattering behaviour from volumes and surfaces allowing the description of ground/volume scattering scenarios [21,28,29].The interaction of the various scattering mechanisms with different polarisations implies that polarimetric image analysis can provide information on the dominant scattering mechanisms observed in a resolution cell (i.e., surface scattering from the ground or volume scattering from a vegetation canopy, etc.).Furthermore, a time series approach can be included to extract information on the evolution of the dominant scattering mechanisms over time.
Due to the sensitivity to scattering mechanisms, polarimetric decomposition approaches have been considered for their ability to detect burnt areas and to derive indicators of burn severity [16]. It was observed that, for burnt areas, the α-angle derived from quad-pol C-band data was less than 45° due to the lower contribution of volume scatterers in burnt areas compared to unburnt areas [16]. It was noted, however, that low α-angles would also be expected in areas that were associated with low vegetation densities or areas that were bare before the burn [16].
An analysis of the polarimetric behaviours of burnt versus unburnt areas was performed at two test sites in Canada and China [17]. Both the H-A-α decomposition and the Freeman-Durden three-component decomposition were applied. The analysis revealed that, while forested areas exhibited strong volume scattering contributions, fire scars showed relatively strong surface scattering contributions together with mixed volume and double-bounce scattering. This provided the ability to extract fire scar areas with an overall accuracy of 85% compared to a reference classification provided by SPOT-5 data. The results suggested that the quad-pol RADARSAT-2 data provided complementary information to multispectral data for burnt area mapping. However, it was observed that both clear-cut and exposed land showed strong surface scattering contributions, creating potential for confusion.
Although the polarimetric decomposition approaches have been found to reduce the uncertainty associated with analysing SAR backscatter intensity alone [11,12], the presence of clear-cut or previously bare surfaces also contributed to low α-angles, creating the potential for confusion. To address these challenges, we propose a multi-temporal approach to burn scar extraction using polarimetric decompositions. The approach adopted is outlined in Section 3.
The Normalised Difference Alpha-Angle Index for Burn Scar Identification
Due to the success achieved in mapping burnt areas using polarimetric decompositions, we propose a technique for mapping burnt areas using polarimetric decomposition while minimizing the potential for identifying previously bare areas as burnt areas. The approach relies on the acquisition of two multiple-polarisation SAR scenes, with one scene acquired before the burn and one scene acquired shortly after the burn. In the case of quad-pol data, the scenes are subject to the H-A-α decomposition [28]. The resulting α-angle provides an indication of the scattering mechanism, with α-angles of lower than 40° being associated with a higher contribution by surface scatterers [16], and α-angles of between 40° and 50° being associated with volume scattering mechanisms. The modified dual-polarisation H-α decomposition [32] is used for dual-polarisation scenes.
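For readers who want to see how the α-angle layer itself can be produced, the following is a minimal sketch (not the processing chain used in this study, which relied on SNAP 5.0 with a 5 × 5 window) of the Cloude-Pottier mean α-angle computed from multi-looked 3 × 3 coherency matrices; the array shapes and function name are assumptions made for illustration.

```python
import numpy as np

def mean_alpha_from_t3(t3):
    """Mean Cloude-Pottier alpha angle (degrees) from 3x3 Hermitian coherency
    matrices of shape (rows, cols, 3, 3), assumed already multi-looked (e.g., 5x5)."""
    eigval, eigvec = np.linalg.eigh(t3)                   # eigenvalues in ascending order
    eigval = np.clip(eigval, 0.0, None)                   # guard against numerical noise
    p = eigval / np.maximum(eigval.sum(axis=-1, keepdims=True), 1e-12)
    # alpha_i = arccos(|first element of the i-th eigenvector|)
    alpha_i = np.degrees(np.arccos(np.clip(np.abs(eigvec[..., 0, :]), 0.0, 1.0)))
    # Mean alpha: low values indicate surface scattering, ~45 deg volume scattering.
    return (p * alpha_i).sum(axis=-1)
```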
To exploit the reduction in the contribution of volume scatterers associated with the removal of vegetation after a burn, a Normalised Difference Index approach, similar to the approach adopted in [19], was used with the exception that the α-angle rather than SAR backscatter was considered. The Normalised Difference α-Angle Index (NDαI) was formulated as:

NDαI = (α pre-burn − α post-burn) / (α pre-burn + α post-burn),

where α pre-burn is the α-angle associated with the image captured before the burn and α post-burn is the α-angle after the burn. The resulting NDαI is designed to exhibit values between −1 and 1, with NDαI > 0 being associated with a decrease in α-angle between the pre-burn image and the post-burn image. A simple threshold-based image segmentation approach can then be used to extract the extent of the burn scar while minimising the impact of SAR speckle on the results. Since the algorithm exploits both a pre-burn image and a post-burn image, it is believed that this approach would minimize the inclusion of previously bare areas in the burnt area assessment. Furthermore, since the α-angle extracted from the polarimetric decompositions exploits only the phase of the SAR signal, different moisture conditions at the time of data acquisition would not affect the ability to differentiate between burnt and unburnt areas. Therefore, the approach is expected to be more generally applicable than backscatter-based attempts.
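As an illustration of the index itself, the sketch below computes NDαI from two co-registered α-angle rasters; the function and variable names are assumptions made for this example, and thresholding is deliberately omitted here because the study applies thresholds to segment means, as described later.

```python
import numpy as np

def ndai(alpha_pre, alpha_post):
    """Normalised Difference alpha-angle Index from co-registered pre- and
    post-burn alpha-angle rasters (degrees). Positive values indicate a drop
    in alpha, i.e., a loss of volume scattering consistent with burnt vegetation."""
    num = alpha_pre - alpha_post
    den = alpha_pre + alpha_post
    return np.divide(num, den, out=np.zeros_like(num, dtype=float), where=den != 0)

# Example usage (alpha_pre and alpha_post assumed to be loaded, co-registered arrays):
# ndai_img = ndai(alpha_pre, alpha_post)
```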
Introduction to the Test Site
To test the ability of the NDαI to identify burnt areas and thereby contribute to operational fire monitoring systems, the algorithm was tested in an area located near Simonsberg Mountain, situated between the towns of Stellenbosch and Paarl in the Western Cape Province of South Africa (Figure 1). Simonsberg Mountain is 1399 m high at the peak with slopes averaging 20°, although slopes of up to ~72° are present. Slopes face in a north-to-north-easterly direction as well as in a south-to-south-westerly direction. The mountain is situated in an area hosting Swartland Shale Renosterveld and Mountain Fynbos vegetation types, while the surrounding areas are characterised by vineyards, plantations and agricultural land. The climate in the area has been described as typically Mediterranean with warm dry summers and cool, wet winters [34]. Average daily maximum temperatures range between 27 °C and 34 °C in summer with almost no rainfall occurring during the summer months. The hot, dry conditions are ideal for the occurrence of wild fires, which are generally experienced between the months of November and March [11]. One such fire started on the lower slopes of Simonsberg Mountain on 19 January 2016. The fire engulfed most of the mountain, destroying farmland, vineyards and natural vegetation in the process.

Due to the ecological importance of fires in the region, the area is monitored, in real-time, for the presence of fire by the Advanced Fire Information System (AFIS) developed by the Council for Scientific and Industrial Research (CSIR) Meraka Institute in South Africa. Once a fire is detected, information is sent to fire management agencies and the local Centre for Disaster Management and the appropriate response is taken. In addition to monitoring fire spread and fire location, burnt area maps are derived and disseminated to interested parties. The burnt area maps are derived using multispectral data including Sentinel-2A and Landsat 8.
Multispectral Burnt Area Mapping: Data Acquisition and Processing
For the multispectral burnt area assessment for the Simonsberg fire, Sentinel-2A data was obtained.One pre-burn scene and one post-burn scene, acquired on 17 January 2016 and 16 February 2016, respectively, were obtained.The Sentinel-2A sensor captures data in 13 spectral bands in the visible, near-infrared and shortwave infrared range.The resolution ranges between 10 and 60 m depending on the band in question.The data was provided in Level-1C processing level, representing top of atmosphere (TOA) reflectance.Atmospheric correction was performed using the radiative transfer-based SEN2COR atmospheric correction [35] to derive surface reflectance.For the extraction of burnt area, multiple burnt area indices, using Near Infrared (NIR) Band 8 (0.842 µm) and Shortwave Infrared (SWIR) Band 12 (2.190µm), were merged.Normalised Difference Vegetation Index (NDVI) values were used to mask out non-burnable pixels.Finally, a threshold value was automatically estimated using the Otsu algorithm [36] and applied to distinguish between burnt and unburnt areas.The processing workflow is summarised in Figure 2. The burn scar, presented in Figure 3, revealed a total area burnt of about 2400 ha.
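A hedged sketch of this type of multispectral workflow is given below. The exact set of burnt area indices merged in the operational workflow is not specified in full, so a single differenced Normalised Burn Ratio (computed from Bands 8 and 12) stands in for the merged indices; the NDVI cut-off of 0.2 and the use of scikit-image's Otsu implementation are assumptions made for illustration.

```python
import numpy as np
from skimage.filters import threshold_otsu

def nbr(nir, swir):
    """Normalised Burn Ratio from surface reflectance (Sentinel-2 B8 and B12)."""
    return (nir - swir) / np.maximum(nir + swir, 1e-6)

def burn_scar(pre, post, ndvi_pre, ndvi_min=0.2):
    """Differenced NBR thresholded with Otsu, with an NDVI mask for non-burnable pixels.

    pre, post : dicts holding 'B8' and 'B12' reflectance arrays (pre-/post-burn).
    ndvi_pre  : pre-burn NDVI used to exclude pixels that were not vegetated."""
    dnbr = nbr(pre["B8"], pre["B12"]) - nbr(post["B8"], post["B12"])
    burnable = ndvi_pre > ndvi_min                 # assumed NDVI cut-off
    threshold = threshold_otsu(dnbr[burnable])     # automatic threshold estimation (Otsu)
    return (dnbr > threshold) & burnable
```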
NDαI Burnt Area Mapping: Data Acquisition and Processing
To test the ability of the NDαI for burnt area extraction in the area of interest, two sources of C-band SAR data were acquired. The dataset included two RADARSAT-2 scenes captured in Fine Beam quad-pol mode with ~35° incidence angle and pixel spacing of ~4.7 m in the range direction and ~4.9 m in azimuth direction. The scenes were captured on 12 January 2016 (coinciding with pre-burn conditions) and 5 February 2016 (coinciding with post-burn conditions). The RADARSAT-2 data was subject to the quad-pol H-A-α decomposition, using a 5 × 5 window size, to derive the α-angles for the pre- and post-burn scenes. Two Sentinel-1A IW dual-polarisation (VV and VH) scenes captured on 1 January 2016 (coinciding with pre-burn conditions) and 18 February 2016 (coinciding with post-burn conditions) were obtained. The Sentinel-1A data provided a nominal pixel spacing of ~2.3 m in range and ~14 m in azimuth direction and an incidence angle range between 29° and 26°. The Sentinel-1A data was subject to the dual-pol H-α decomposition (using a 5 × 5 window size) as implemented in SNAP 5.0. Both Sentinel-1A and RADARSAT-2 α-angle results were subject to terrain correction using the Range-Doppler approach using SRTM 1 Arc-Second DEM as input. The output pixel spacing was set to 15 m for both scenes. The dual-pol α-angle output by SNAP 5.0 was inverted by applying a scale factor expressed as 90° − α to derive the α-angle in the expected scale range. The resulting datasets were used as input to derive NDαI data for the Sentinel-1A and RADARSAT-2 observations respectively.
To deal with SAR speckle effects, an object-based image analysis approach (OBIA) was adopted whereby thresholding was performed on regions characterised by similar statistics [21].These regions were identified using multiresolution image segmentation, a region-based algorithm that merges neighbouring segments in multiple resolutions, starting at pixel level [19,37].The scale parameter was determined heuristically and a scale parameter of 10 was found to be optimal.Similarly, the shape and compactness factors were set to 0.5 and 0.1, respectively.Objects representing burnt areas were then extracted by setting appropriate threshold values of NDαI.It was found that burnt segments were associated with higher mean NDαI values than non-burnt areas.Segmentation parameters and threshold values were determined through an empirical approach.In the case of RADARSAT-2, a threshold of NDαI > 0.025 was found to represent areas where vegetation cover decreased significantly due to the fire.In the case of Sentinel-1A, the threshold was adapted and NDαI > 0.050 was used.The processing workflow is presented graphically in Figure 4.
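The per-segment thresholding step can be sketched as follows, assuming a label image from the multiresolution segmentation is already available (the segmentation algorithm itself is not reproduced here); variable and function names are illustrative.

```python
import numpy as np

def classify_segments(ndai_img, segments, threshold):
    """Mark whole segments as burnt when their mean NDαI exceeds the threshold.

    ndai_img : NDαI raster.
    segments : non-negative integer label image produced by a segmentation step
               (multiresolution segmentation in this study; not reproduced here).
    threshold: e.g., 0.025 for the quad-pol case or 0.050 for the dual-pol case."""
    labels = segments.ravel()
    values = ndai_img.ravel()
    sums = np.bincount(labels, weights=values)
    counts = np.maximum(np.bincount(labels), 1)
    segment_mean = sums / counts
    return segment_mean[segments] > threshold      # per-pixel burnt/unburnt mask
```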
Results
The results of the NDαI algorithm as applied to RADARSAT-2 data and Sentinel-1A are presented in Figure 5A,B respectively.For comparison with the Sentinel-2A burn scar results, the Sentinel-2A-derived burnt area is overlain as black outlines in both Figure 5A,B.The results suggest that high NDαI values are associated with the burnt areas as would be expected.However, the relationship appears to be more prominent on quad-pol Radarsat-2-derived NDαI compared to Sentinel-1A-derived NDαI.
A comparison between the Sentinel-2A and SAR-derived burn scars is presented in Figure 6A,B, respectively. Areas identified as burn scars on both Sentinel-2A and NDαI data are presented in green. Areas identified as burn scars using the NDαI only are presented in red and areas identified as burnt on Sentinel-2A only are displayed in blue. In lieu of ground-truth information, a classification accuracy assessment using the Sentinel-2A burn scar as reference data was performed. Standard error metrics were used to express the reliability of the classifications, including overall accuracy (percent correctly classified) and the kappa statistic (how well the classification performed relative to a random assignment of classes). Also calculated were the commission error (percentage of area classified as "burnt" which was, in fact, not burnt) and the omission error (percentage of actual burnt area incorrectly classified as "not burnt"). The results reveal that, for RADARSAT-2, an overall classification accuracy of 97.4% (Kappa = 0.72) was achieved. For the "Burnt" class, a commission error of 17.7% and an omission error of 33.1% were achieved. This corresponds to a detection efficiency rate (or "producer's accuracy") of 66.9%. For Sentinel-1A, an overall accuracy of 94.8% (Kappa = 0.57) was achieved. The corresponding errors of commission and omission were 48.2% and 29.3%, respectively, with a detection efficiency rate of 70.7%.
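A compact sketch of how these agreement metrics can be computed from binary burnt/unburnt rasters is shown below; it assumes the Sentinel-2A reference burn scar and the SAR classification have been rasterised onto a common grid, and the function name is illustrative.

```python
import numpy as np

def accuracy_metrics(reference, classified):
    """Agreement metrics for a binary burnt/unburnt map against reference data.

    reference, classified: boolean arrays (True = burnt). Returns overall accuracy,
    kappa, commission and omission errors, and producer's accuracy for 'burnt'."""
    ref = reference.ravel()
    cls = classified.ravel()
    tp = np.sum(ref & cls)       # burnt in both
    fp = np.sum(~ref & cls)      # commission: mapped burnt, reference unburnt
    fn = np.sum(ref & ~cls)      # omission: reference burnt, mapped unburnt
    tn = np.sum(~ref & ~cls)
    n = tp + fp + fn + tn

    overall = (tp + tn) / n
    # Expected chance agreement for the kappa statistic.
    p_chance = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2
    kappa = (overall - p_chance) / (1 - p_chance)

    return {
        "overall_accuracy": overall,
        "kappa": kappa,
        "commission_error": fp / max(tp + fp, 1),
        "omission_error": fn / max(tp + fn, 1),
        "producers_accuracy": tp / max(tp + fn, 1),
    }
```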
When the errors of omission are considered, it is observed that a strip of the burnt area trending in a north-westerly to south-easterly direction (highlighted with a black oval in Figure 6) was identified as burnt only on the Sentinel-2A imagery and not by either of the SAR sensors. This area is associated with slopes facing away from the sensor in a north-easterly direction, similar to the line-of-sight direction of the scenes. The omission of these pixels in the burnt area classification on the SAR data suggests that the terrain distortions inherent to SAR scenes, most notably radar shadowing effects, may limit the ability to extract burn scars using this approach. When the correspondence between the reference data and the SAR classifications is considered as a function of local incidence angle (Figure 7), it becomes clear that severe errors of omission occur at high local incidence angles. In future, the combination of SAR data captured from different look directions may partially overcome this limitation.

In addition to the errors of omission, some areas were incorrectly classified as burnt (i.e., errors of commission). These are shown in red on Figure 6. Although some of these errors could be due to omissions in the Sentinel-2A burn scar classification, true errors of commission are present. It was also observed that the classification derived from Sentinel-1A had significantly higher commission errors than those from RADARSAT-2 (48.2% vs. 17.7%).
Discussion of the Results
Both RADARSAT-2 and Sentinel-1A results demonstrated the ability to extract the burnt area with overall accuracies of 97.4% and 94.8%, respectively, compared to single-date polarimetric decompositions that provided overall accuracies of 85% [17]. It should be noted, however, that these overall accuracies are somewhat inflated due to the fact that far more non-burnt areas exist than burnt areas (by a factor of 17), leading to unbalanced class sizes in the confusion matrix. The moderately high kappa values of 0.72 and 0.57 (for RADARSAT-2 and Sentinel-1A, respectively) therefore provide a more reliable metric of classification accuracy, coupled with the errors of omission and commission. The errors of omission for RADARSAT-2 and Sentinel-1A data were similar (33.1% and 29.3%, respectively), suggesting that both sources of information would be able to provide a reasonable detection of burnt areas. The results also suggest that more accurate results would be obtained in areas of relatively flat topography and low local incidence angles. In rugged terrain, reasonable accuracies will be obtained on slopes facing towards the sensor. However, in areas of steep topography where high local incidence angles are associated with slopes facing away from the SAR sensor, the classification accuracies would decrease due to an increase in errors of omission. These effects would be present irrespective of whether quad-pol or dual-pol scenes were used. In these areas, burnt area extraction from SAR scenes can be complemented by the addition of multispectral burnt area assessment techniques, assuming that cloud- and smoke-free data are available. However, where multispectral approaches fail to extract burn scars in steep topography in the presence of shadows, the NDαI approach may also fail to improve the ability to extract burnt area information, depending on the sensor geometry and local topography. The combined use of SAR scenes from different look directions is expected to overcome these limitations and is recommended for testing in future research. One potential limitation of an approach combining multiple look directions lies in the rotational invariance (yielding the same results irrespective of viewing geometry) of the α-angle used in the NDαI approach. Although the H, A and α parameters of the Cloude-Pottier algorithm are rotationally invariant [38], recent investigations have demonstrated that this rotational invariance does not hold for dual-polarisation Cloude-Pottier decompositions [26]. The effects of rotational invariance on the dual-pol decomposition will need to be tested to assess the robustness of the algorithm in a variety of data acquisition and scattering geometries.
In addition to errors of omission, significant errors of commission were observed for the dual-pol NDαI results compared to the quad-pol NDαI results (48.2% compared to 17.7%, respectively). Although the threshold values used during the segmentation of the Sentinel-1A NDαI could be adjusted to minimise the errors of commission, the result was found to be associated with a corresponding increase in errors of omission. Changing the threshold from 0.050 to 0.075, for example, lowers the commission error from 48.2% to 20.5%, but raises the omission error from 29.3% to 50.0%. Therefore, the selection of optimal threshold values should be based on a trade-off between the over- or under-estimation of burnt area. The high commission errors observed on Sentinel-1A NDαI burnt area results are likely due to the inability of the α-angle from the modified H-α decomposition to effectively separate scattering mechanisms when one like-polarised and one cross-polarised band is used [33]. In fact, the lack of co-polarisation was found to degrade the ability to extract scattering mechanisms, with medium- and low-entropy scattering mechanisms being highly confused [33]. Furthermore, high-entropy scattering mechanisms were found to be confused with medium-entropy and multiple scattering mechanisms [33]. To further investigate the commission errors on dual-polarisation data, the areas associated with high commission errors were compared to the land-cover map of the area [39]. The results suggest that high commission errors on the Sentinel-1 burnt area results were associated with areas that were sparsely vegetated or corresponded to low vegetation (urban sports fields or golf courses, cultivated fields, and young plantations). This suggests that, for unburnt, un-vegetated or low-vegetation areas (such as grassland and agriculture) and burnt areas, a similar temporal signature of the α-angle is observed when dual-pol data is used. This is similar to the observation for the change in SAR backscatter using the normalised difference index approach [19]. Therefore, the probability of false alarms is expected to be higher for the dual-polarisation NDαI approach. Future investigations will consider the inclusion of additional observables, including interferometric coherence, for the minimisation of errors of commission on dual-polarisation data.
The image segmentation and thresholding parameters used in this investigation were selected empirically. The selection of an optimal threshold value for change detection based on ratioing techniques is a well-known problem. Although "trial-and-error" approaches are commonplace, they are known to be time-consuming and prone to operator bias [40,41]. Furthermore, the robustness of the algorithm in different areas of investigation and data acquisition geometries has not been tested. If the same segmentation parameters and threshold values do not apply in different scenarios, the trial-and-error-based approach will be time-consuming to implement. For this reason, automation of the threshold and segmentation parameter selection process, by implementation of, for example, the Kittler and Illingworth minimum-error thresholding algorithm [41], is recommended for future research.
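As a sketch of how such automation might look, a minimal histogram-based implementation of the Kittler and Illingworth minimum-error criterion is given below; it is not the implementation referenced in [41], and the bin count and the choice of input (e.g., per-segment mean NDαI values) are assumptions.

```python
import numpy as np

def kittler_illingworth_threshold(values, nbins=256):
    """Minimum-error threshold (Kittler & Illingworth) for a 1-D sample,
    e.g., per-segment mean NDαI values. Returns the bin centre minimising
    the two-class classification-error criterion J(T)."""
    hist, edges = np.histogram(values, bins=nbins)
    centres = 0.5 * (edges[:-1] + edges[1:])
    p = hist.astype(float) / hist.sum()

    best_j, best_t = np.inf, centres[0]
    for t in range(1, nbins - 1):
        p1, p2 = p[:t].sum(), p[t:].sum()
        if p1 <= 0 or p2 <= 0:
            continue
        m1 = (p[:t] * centres[:t]).sum() / p1
        m2 = (p[t:] * centres[t:]).sum() / p2
        s1 = np.sqrt((p[:t] * (centres[:t] - m1) ** 2).sum() / p1)
        s2 = np.sqrt((p[t:] * (centres[t:] - m2) ** 2).sum() / p2)
        if s1 <= 0 or s2 <= 0:
            continue
        # Criterion J(T): smaller values indicate a better two-Gaussian fit.
        j = (1 + 2 * (p1 * np.log(s1) + p2 * np.log(s2))
             - 2 * (p1 * np.log(p1) + p2 * np.log(p2)))
        if j < best_j:
            best_j, best_t = j, centres[t]
    return best_t
```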
Concluding Remarks
This paper has demonstrated the potential of a thresholded, multi-temporal, α-angle-based index (NDαI) for mapping burn scars in a fire-prone region using Sentinel-1A and RADARSAT-2 data. The multi-temporal approach adopted in this investigation assumed that the confusion between burnt areas and previously bare areas would be minimised compared to single-date approaches, provided that data acquisition is limited to dates as close as possible before and after the burn. With longer intervals between data acquisitions, the probability of vegetation removal due to causes other than burns, for example harvesting or deforestation, increases and would result in the misclassification of burnt versus unburnt areas. Furthermore, rapid changes can take place after a fire, including the regrowth of vegetation. Therefore, effort should be made to acquire data at the shortest temporal baseline possible.
Although the confusion between low- or no-vegetation areas and burnt areas was minimised when quad-pol RADARSAT-2 data were used, significant errors of commission (48.2%) were present in the burnt area results derived from Sentinel-1A dual-polarisation data. The errors were associated with unburnt areas characterised by low or no vegetation. This suggests that the confusion between burnt areas and previously bare areas using the dual-polarisation NDαI would not be reduced as significantly as expected. Furthermore, omission errors on slopes facing away from the sensor in rugged terrain, and therefore associated with high local incidence angles, were present in both the RADARSAT-2 and Sentinel-1A burnt area results. This suggests that the algorithm would be more suitable in areas of flat topography unless data from multiple orbits can be combined successfully.
Further research on this technique should investigate the use of multiple look directions to compensate for the effect of local incidence angle and to minimise errors of omission. Furthermore, the reduction of errors of commission in previously bare or low-vegetation areas should be considered by incorporating additional SAR observables, including interferometric coherence. To minimise both errors of omission and errors of commission, the SAR burnt area estimates can be complemented by multispectral burnt area estimates, assuming that smoke- and cloud-free data are available.
Despite the limitations highlighted above, the results of the investigation suggest that the SAR-based NDαI algorithm for burn scar extraction can be used to complement observations by conventional multispectral approaches. This would be particularly useful in areas where persistent cloud cover or smoke in actively burning areas prevents the use of conventional techniques. The ability to complement multispectral burn scar extractions with the NDαI mapping algorithm using SAR data increases the information available for informed fire management practices. Furthermore, employing freely available imagery such as Sentinel-1 would aid in reducing the cost of deriving such information.
Figure 1. The location of the Simonsberg Mountain (black star) near Stellenbosch, South Africa.
Figure 4. The synthetic aperture radar (SAR) data processing workflow for burnt area mapping (SLC = Single Look Complex).
Figure 5. The results of calculating NDαI for Radarsat-2 (RS-2) (A) and Sentinel-1A (S1) (B) data. The extent of the burn scar extracted from Sentinel-2A (black outlines) is also shown for comparison.
Figure 6. Comparing the results of the Sentinel-2A burn scar extraction with the NDαI-derived burn scar from RADARSAT-2 (A) and Sentinel-1A (B).

Figure 7. The relationship between local incidence angle and error of omission for the "Burnt" class.
The Role of Innate Lymphoid Cells in the Regulation of Immune Homeostasis in Sepsis-Mediated Lung Inflammation
Septic shock/severe sepsis is a deregulated host immune system response to infection that leads to life-threatening organ dysfunction. Lung inflammation as a form of acute lung injury (ALI) is often induced in septic shock. Whereas macrophages and neutrophils have been implicated as the principal immune cells regulating lung inflammation, group two innate lymphoid cells (ILC2s) have recently been identified as a new player regulating immune homeostasis. ILC2 is one of the three major ILC subsets (ILC1s, ILC2s, and ILC3s) comprised of newly identified innate immune cells. These cells are characterized by their ability to rapidly produce type 2 cytokines. ILC2s are predominant resident ILCs and, thereby, have the ability to respond to signals from damaged tissues. ILC2s regulate the immune response, and ILC2-derived type 2 cytokines may exert protective roles against sepsis-induced lung injury. This focused review not only provides readers with new insights into the signaling mechanisms by which ILC2s modulate sepsis-induced lung inflammation, but also proposes ILC2 as a novel therapeutic target for sepsis-induced ALI.
Introduction
Sepsis is a life-threatening organ dysfunction caused by a dysregulated host immune system response to infection [1]. Despite advances in intensive care management and multidisciplinary treatments, the mortality rate of sepsis exceeds 20% in developed countries [2,3].
Novel types of non-T, non-B lymphocytes-termed 'innate lymphoid cells' (ILCs)-were discovered during the last decade. ILCs lack adaptive antigen receptors [4]. Initially, three major subsets (groups 1, 2, and 3 ILCs-ILC1s, ILC2s, and ILC3s, respectively) were defined as new immune cells that link innate and adaptive immunity according to the transcription factors required for their development
Innate Lymphoid Cells Protect against Pathogens and Contribute to Tissue Repair
ILCs are mainly tissue-resident cells [10] that mediate immune response in mucosal tissues. ILC1s, excluding natural killer (NK) cells, secrete interferon-γ and TNF-α and express the transcription factor T-bet [11,12]. ILC1s serve as an essential player in host immunity through the rapid secretion of IFN-γ in response to viral infections [13]. ILC2s produce type 2 cytokines and are regulated by the transcription factors GATA-3 and retinoic acid receptor-related orphan receptor alpha (RORα) [14,15]. ILC2-derived type 2 cytokines are largely responsible for the host protective immune response against helminth infection [16]. Additionally, IL-5 secreted from ILC2s in the stomach promotes bacteria-binding IgA production by plasma B cells, leading to defense against bacterial infections [17]. ILC3s and LTi cells contain the transcription factor RORγt and secrete IL-17 and IL-22 [18][19][20]. ILC3-derived IL-22 protects the lungs [21] and gut [22] following bacterial infection. ILCs contribute to the biological defense against pathogens by driving the immune response.
ILCs induce protective immunity in response to infection and promote the maintenance of tissue integrity by promoting wound healing and reducing tissue damage. ILC1s contribute to damage reduction in acute liver injury by producing IFN-γ, which aids the survival of hepatocytes through the upregulation of Bcl-xL [23]. ILC2s repair intestinal epithelial cells in colitis by the production of amphiregulin (AREG), which is the epidermal growth factor-like molecule [24], following DSS treatment [25]. ILC3s facilitate skin repair by promoting epidermal re-epithelialization via the production of IL-17 [26]. These results indicate that ILC-derived cytokines play not only a protective role against pathogens, but also a regulatory role in maintaining immune homeostasis.
A Pathogenic Role of ILC2s and Lung ILC2s
ILC2s have been extensively studied in relation to allergic diseases, including asthma, which is a typical type 2 immune-mediated airway disease. Deregulated type 2 cytokine production and metabolism in ILC2s are considered to be a part of the underlying mechanisms of asthma induction [27][28][29][30]. Given recent findings showing the importance of overactive immune response by ILC2s in asthma [31][32][33], it appears that negatively regulating ILC2s may offer as a therapeutic strategy. Indeed, several already established drugs for asthma, such as β2-adrenergic agonists [34], corticosteroids [35], and leukotriene receptor antagonists [36], have been found to suppress the ILC2 immune response. Furthermore, several studies aimed to suppressing ILC2 functionality reduced airway inflammation and/or improved asthma control [37,38]. The activation of ILC2s may worsen disease progression, and research on ILC2-mediated immune response in asthma is already leading to the establishment of therapeutic strategies.
The role played by ILC2s in the lungs has been demonstrated in studies involving typical ILC2-deficient mice, such as in Rora sg/sg bone marrow transplant (BMT) mice or Rora fl/sg Il7r Cre mice. RORα, which is a key transcriptional factor for establishing ILC2-deficient mice, is critical to ILC2 development, although it does not substantially affect CD4 + T-cell development [39]. Rora sg/sg BMT mice are reconstituted by transplanting the BM of staggerer mice (Rora sg/sg ) to sub-lethally irradiated lymphocyte-deficient Rag2 −/ − Il2rg −/ − mice [39]. Staggerer mice are naturally RORα-deficient mice and do not survive for long once past weaning. Rora fl/sg Il7r Cre mice are made by engineering conditionally targeted Rora fl/sg mice inter-crossed with the IL-7 receptor (IL-7R)-Cre [16]. Research findings from these representative engineered mutant ILC2-deficient mice have shown that ILC2s play a crucial role in orchestrating and mediating type 2 immunity in the lung, as indicated in Table 1. It remains to be elucidated how sepsis-induced lung inflammation affects these engineered mutant mice lacking ILC2. Table 1. List of previous reports that studied type 2 immunity in the lung of representative engineered mutant ILC2-deficient mice.
Rora sg/sg BMT mice:
- Papain-induced asthma model: ILC2s play a crucial role in the inflammatory response to allergens, even in the presence of Th2 cells [39].
- Papain-induced asthma model: ILC2-derived IL-13 primes naive T cells to differentiate into Th2 cells by promoting the migration of DCs to LNs [40].
- House dust mite-induced asthma model: BAL eosinophils are significantly decreased in ILC2-deficient mice [41].
- Papain-induced asthma model: ILC2s induce DCs, thereby promoting memory Th2 function [42].
- Hemorrhagic shock model: ILC2-derived IL-5 promotes IL-5 production of neutrophils [43].
- Helminth-infected mouse model: ILC2-deficiency leads to a reduction in eosinophil accumulation [44].

Rora fl/sg Il7r Cre mice:
- Papain-induced asthma model: ILC2s induce DCs, thereby promoting memory Th2 functionality (the authors used both Rora sg/sg BMT and Rora fl/sg Il7r Cre mice) [42].
- Papain-induced asthma model and helminth-infected mouse model: ILC2s promote increases in Th2 cells in mLN and IgE concentrations in the lung [45].
- Cigarette smoke exposure model: ILC2s help induce collagen deposition following cigarette exposure [46].

ILC2s: group two innate lymphoid cells, DCs: dendritic cells, LNs: lymph nodes, BAL: bronchoalveolar lavage.
ILC2s in Sepsis-Induced Lung Injury
The lung is one of the most frequently affected organs in sepsis [47], typically leading to acute lung injury (ALI). ALI secondary to sepsis is a cause of significant morbidity and mortality in septic patients (the in-hospital mortality is 60% [ALI group] vs. 14% [non-ALI group]) [48], despite efforts to improve therapy. The pathological roles of aberrantly activated lung macrophages and infiltrating neutrophils in ALI are well documented [49]; however, the mechanism of sepsis-induced lung injury remains poorly understood. ILC2s, the predominant population of ILCs in the lung [7], have garnered attention as a new player regulating type 2 immune responses. IL-5 and IL-13, representative type 2 cytokines produced by ILC2, have been shown to protect against lung injury and sepsis [50][51][52]. Additionally, ILC2-derived AREG also contributes to respiratory tissue remodeling following influenza virus-induced injury [53]. Furthermore, IL-9 is secreted as an autocrine amplifier of ILC2 function, promoting AREG production, and leading to the repair of lung tissue injuries induced by the migration of Nippostrongylus brasiliensis [54]. These results indicate that ILC2s maintain lung homeostasis by producing these crucial cytokines in both local and systemic inflammation. Meanwhile, an excessive ILC2 immune response can cause aggravating type 2 lung inflammation. For instance, high-mobility group box 1 promotes lung ILC2 proliferation and decreases ILC2 apoptosis following hemorrhagic shock, leading to the accumulation of ILC2 in the lung [55]. This leads to eosinophil infiltration and type 2 cytokine production in the lung, thereby contributing to lung injury following hemorrhagic shock [55]. Therefore, analyzing the immunological dynamics of ILC2s in sepsis, which cause a dysfunctional immune response to infection, could lead to new insights into the pathophysiology of sepsis-induced lung inflammation.
These research findings indicate that lung ILC2s are crucial regulators during sepsis. In addition, dysregulated ILC2s in the immune response may be involved in the pathogenesis of sepsis-induced lung injury or mortality. How then can we control the function of ILC2s or adjust deregulated ILC2s? The molecular mechanism that balances ILC2 regulation in the septic lung is not fully understood. We have recently reported important research findings that address this question.
ILC2 Subsets in the Septic Lung
Two ILC2 subsets have been reported in the lung: Natural ILC2 (nILC2) and inflammatory ILC2 (iILC2) (Figure 1) [59]. nILC2s are characterized by Lineage (Lin) − ST2 + IL17RB −/lo CD127 + KLRG1 hi CD90 hi [59][60][61]. nILC2 subset is comprised of tissue-resident ILC2s [10] and is activated by IL-33 [59]. On the other hand, iILC2 subset, which is characterized by Lin -ST2 − IL17RB + CD127 + KLRG1 int CD90 lo [59][60][61], is thought to move from the gut to the lung in response to either IL-25 administration or to helminth infections [60]. Interestingly, we hardly observed any significant increase in the number of iILC2s; indeed, levels of IL-25 mRNA expression over the seven days following CLP surgery in our recent study remained very low [62]. These research findings possibly indicate that iILC2s hardly migrate to the septic lungs, and that their role therein is limited. Therefore, in this review, we focused on the role of nILC2s.
The Mechanism That Drives IL-33/ST2 Signaling Stimulates ILC2s
IL-33, which stimulates ILC2s and promotes IL-5 and IL-13 production [14], is predominantly expressed by damaged epithelial cells in the lung following inflammation (i.e., recruited neutrophils release proteases and oxygen-derived free radicals [63]). ST2 (also known as IL1RL1) is a receptor of IL-33 and is also expressed on a variety of immune cells, such as Th2 cells [64], regulatory T cells [65], mast cells [66], M2 macrophages [67], and eosinophils [68]. ST2 serves as a marker of murine lung ILC2s [7]. The expression levels of ST2 on ILC2s differs based on where the latter is located [69]. It is upregulated by TGF-β signaling via the MEK-dependent pathway [70].
IL-33 binds a heterodimer formed by ST2 and the IL-1 receptor accessory protein (IL-1RAP). This signaling induces the recruitment of myeloid differentiation primary response protein 88 (MyD88), which is located in the cytoplasmic region of ST2. Subsequently, MyD88 recruits IL-1R-associated kinase 1 (IRAK1), IRAK4, and TNF receptor-associated factor 6 (TRAF6), resulting in the activation of either the NF-κB or AP-1 pathway [71]. The signal stimulates lung ILC2s and promotes IL-5 and IL-13 production. The molecular mechanism underlying IL-33/ST2 signaling in ILC2 [72,73] may share some similarities with Th2 cells [74] in mediating MyD88. However, a recent study revealed that this signaling helps to promote Foxp3 and GATA3 expression in colonic Tregs [65]. IL-33/ST2 signaling may express different functions in a variety of pathways. Its molecular mechanism is not yet fully understood.
PD-1 is expressed on lymphocytes, such as T cells and B cells. PD-1 is not expressed on resting T cells but is expressed after activation [78]. Signaling through TCR or BCR upregulates PD-1 on lymphocytes [79]. PD-1 expression in ILC2s is increased by IL-33 stimulation [80].
The inhibitory mechanism and function of PD-1/PD-L1 signaling may differ depending on cell types. For instance, the signal in T cells can inhibit T-cell functions by recruiting SHP-2, thereby dephosphorylating their downstream signaling molecules within the PI3K/AKT and MAPK/ERK signaling pathways. These pathways are triggered by both T-cell receptors (TCRs) and CD28 [76]. PD-1 of ILC2s restricts their numbers and functions through the inhibition of STAT5 signaling [81]. PD-1/PD-L1 signaling has been shown to negatively regulate both ILC2s and T cells; however, the details of this molecular mechanism are not fully understood.
The interaction of PD-1 and PD-L1 is well documented as a cause of impaired T-cell functionality. Blocking this signaling has been established as a successful therapy for several cancers by ameliorating T-cell exhaustion. Since cancer shares several immunosuppressive mechanisms with sepsis, blocking PD-1/PD-L1 signaling on T cells has been studied as a therapeutic target during sepsis [82,83]. Interestingly, our recent study demonstrated that PD-1 levels on both ILC2s and PD-L1 in the lungs are upregulated during sepsis. This prompted us to evaluate whether ILC2s are a part of the pathogenesis underlying sepsis-induced lung inflammation by PD-1-mediated inhibition of the down-regulation of immune responses, and whether blocking PD-1/PD-L1 signaling in ILC2s represents a potential treatment target.
The Functional Dynamics of Lung ILC2s during Sepsis
Both IL-33/ST2 signaling and PD-L1/PD-1 signaling are essential to fulfilling the functions of ILC2s. Indeed, they affect the balance of ILC2 regulation in the septic lung. However, there is little research on the functional dynamics of lung ILC2s across different time points, from the early to the late phase of sepsis.
We investigated the transitions of IL-33/ST2 signaling and PD-L1/PD-1 signaling in the septic lung, as well as IL-13 production in ILC2s [62]. During days 1 through 7 after CLP, we discovered that the balance of signaling strengths between the IL-33/ST2 and PD-L1/PD-1 pathways affected the levels of IL-13 production in ILC2s in the septic lung (Figure 2A). In short, IL-13 production by ILC2s in the lung was initially inhibited by sepsis, but then gradually increased. Although IL-33/ST2 signaling in ILC2s was robust, IL-13 secretion remained low during the early phase of sepsis. This might be explained by the high PD-1 expression evident on ILC2s.
We also evaluated IL-5 secretion by ILC2s [62]. IL-5 production levels did not differ significantly over our experimental period. These results suggest that IL-5 production was less affected than IL-13 production in the septic lung. There is no information on the expression levels of other ILC2-derived mediators, such as IL-9 or AREG, although the biological defense mechanism associated with ILC2s may be perturbed by PD-1 in sepsis.
IL-33 certainly affects the dynamics of ST2 and PD-1 expression on ILC2s. Our experiments evaluating both ST2 and PD-1 expression levels of ILC2s after CLP in IL-33 knockout mice showed that the changes in expression levels during days 1 through 7 after CLP were smaller than those of wild-type mice [62]. This result is consistent with the fact that IL-33 stimulation upregulates the expression levels of PD-1 on ILC2s [80]. As TGF-β upregulates ST2 expression on ILC2s [70], it could have an effect on ILC2s during sepsis. Other sepsis-induced cytokines may also change these expression levels; however, the details surrounding this phenomenon remain unclear.
PD-1 on ILC2s as a Target for PD-1 Blocking Therapy in the Septic Lung
Clinically proven pharmacological interventions to alleviate ALI are currently lacking [49]. A few non-pharmacological supportive treatments, such as the conservative fluid strategy to prevent lung edema formation and lung-protective mechanical ventilation, have shown some effectiveness for ALI [49,[84][85][86]; nevertheless, establishing a novel pharmacological therapeutic strategy is urgently needed.
In mice, IL-13 has been shown to exert a protective role during sepsis by reducing inflammation. Blocking IL-13 in a CLP-induced sepsis mouse model resulted in worse mortality and lung injury, associated with increased neutrophil-activating chemokine and proinflammatory cytokine levels in the lungs [87]. Furthermore, ILC2-derived IL-13 polarized alveolar macrophages to M2 macrophages in an in vitro experiment [88]. A recent study showed that adoptively transferring M2 macrophages into lung-injured mice significantly reduced lung inflammation and damage [89]. This indicates that a rapid shift to M2 macrophages may regulate lung inflammation and damage. In our study, sepsis simultaneously induced low levels of IL-13 production and high levels of PD-1 expression on ILC2s during the early phase of sepsis (CLP day 1). Therefore, blocking PD-1/PD-L1 signaling may lead to increased IL-13 production and the development of new therapeutic strategies for sepsis-induced acute lung injury (Figure 2B).
In fact, blocking PD-1/PD-L1 signaling has been shown to improve outcomes in CLP-induced sepsis mice [90,91]. The effectiveness of blocking PD-1/PD-L1 signaling has been demonstrated in animal studies of sepsis and is thought to result from amelioration of T-cell exhaustion [82]. Additionally, our results may indicate that blocking PD-1/PD-L1 signaling has beneficial effects, e.g., relieving the inhibition of IL-13 secretion by ILC2s. On the other hand, because the role of PD-1/PD-L1 signaling in septic lung ILC2s remains unclear, its impact on existing protective mechanisms mediated by AREG or IL-9 production is uncertain, as are the side effects of blocking PD-1/PD-L1 signaling, such as excessive IL-13 production leading to immunosuppression or inhibition of Th2 polarization.
Furthermore, the roles played by IL-13 in sepsis also remain unclear. Although the developmental roles of IL-13 in remodeling of the immune system need to be considered, IL-13-deficient mice have shown survival benefits, along with decreased tissue damage following CLP [92]. Our concept requires further investigation; nevertheless, our findings provide new insights into the role of ILC2s in the pathophysiology of sepsis-induced lung inflammation, particularly regarding the possibility that PD-1 inhibits IL-13 production in ILC2s.
Summary and Future Challenges
ILCs are increasingly recognized to play key roles in the immune response to sepsis. ILCs help to induce a protective response against infection and also promote the maintenance of tissue integrity by ameliorating and/or repairing tissue damage. In this report, we have summarized the roles and molecular mechanisms of the ST2-IL-33 axis and the PD-1-PD-L1 axis in ILC2s. Although ILC2s are not regulated by IL-33/ST2 and PD-1/PD-L1 signaling alone, these signaling mechanisms are crucial regulators of ILC2s in sepsis-induced lung inflammation. Additionally, we discussed how IL-13 production is balanced between the two signaling systems, and the potential therapeutic strategy of targeting PD-1-mediated regulation of IL-13 production in ILC2s in the septic lung. Further investigations should address how lung ILC2s are activated and controlled under different settings, how they interact with other immune cells through PD-1/PD-L1 binding, and the underlying molecular mechanisms. ILC2s may be the last piece of the puzzle for deciphering the complicated immune responses mounted against human sepsis-induced acute lung injuries, potentially leading to novel therapeutic strategies.
Acknowledgments:
The authors thank all the members of our laboratory for their helpful suggestions and critical reading of the manuscript.
Conflicts of Interest:
The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.
|
v3-fos-license
|
2021-06-24T13:21:32.351Z
|
2021-06-24T00:00:00.000
|
235614184
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fmicb.2021.679805/pdf",
"pdf_hash": "e1b42ec88c26a74a6c2e4b6074cac39ccb424bc4",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46583",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"sha1": "e1b42ec88c26a74a6c2e4b6074cac39ccb424bc4",
"year": 2021
}
|
pes2o/s2orc
|
Analysis of Antibiotic Resistance Genes, Environmental Factors, and Microbial Community From Aquaculture Farms in Five Provinces, China
The excessive use of antibiotics speeds up the dissemination and aggregation of antibiotic resistance genes (ARGs) in the environment. ARGs are regarded as contaminants that pose a serious environmental threat on a global scale. The constant increase in aquaculture production has led to extensive use of antibiotics as a means to prevent and treat bacterial infections, and there is universal concern about the environmental risk of ARGs in the aquaculture environment. In this study, a survey was conducted to evaluate the abundance and distribution of 10 ARGs, the bacterial community, and environmental factors in sediment samples from aquatic farms distributed in Anhui (AP1, AP2, and AP3), Fujian (FP1, FP2, and FP3), Guangxi (GP1, GP2, and GP3), Hainan (HP1, HP2, and HP3), and Shaanxi (SP1, SP2, and SP3) Province in China. The results showed that the relative abundance of total ARGs was higher in AP1, AP2, AP3, FP3, GP3, HP1, HP2, and HP3 than in FP1, FP2, GP1, GP2, SP1, SP2, and SP3. The sul1 and tetW genes had the highest abundance in all sediment samples. The class 1 integron (intl1) was detected in all samples, and Pearson correlation analysis showed that intl1 has a positive correlation with the sul1, sul2, sul3, blaOXA, qnrS, tetM, tetQ, and tetW genes. Correlation analysis of bacterial community diversity and environmental factors showed that the Ca2+ concentration has a negative correlation with the richness and diversity of the bacterial community in these samples. Of the identified bacterial community, Proteobacteria, Firmicutes, Chloroflexi, and Bacteroidota were the predominant phyla in these samples. Redundancy analysis showed that environmental factors (TN, TP, Cl–, and Ca2+) have a positive correlation with the bacterial community (AP1, GP1, GP2, GP3, SP1, SP2, and SP3), and the abundance of ARGs (sul1, tetW, qnrS, and intl1) has a positive correlation with the bacterial community (AP2, AP3, HP1, HP2, and HP3). Based on the network analysis, the ARGs (sul1, sul2, blaCMY, blaOXA, qnrS, tetW, tetQ, tetM, and intl1) were found to co-occur with bacterial taxa from the phyla Chloroflexi, Euryarchaeota, Firmicutes, Halobacterota, and Proteobacteria. In conclusion, this study provides an important reference for understanding the environmental risk associated with aquaculture activities in China.
INTRODUCTION
Antibiotics are extensively used to prevent and control bacterial infections in medical care, livestock husbandry, and aquaculture (Kümmerer, 2009; Luo et al., 2010). Some of them are also used as growth promoters in aquaculture (Chen H. et al., 2018). However, because aquatic animals cannot fully utilize these antibiotics, excess antibiotics and their metabolites enter the environment, where they may be adsorbed onto soil particles and eventually accumulate in sediments (Kümmerer, 2009). It is worth noting that the abundance of antibiotic resistance genes (ARGs) in soil is associated with the amount of antibiotic residues in the environment (Fahrenfeld et al., 2014), and ARGs bound to minerals and humus in the environment may persist for a long time (Dang et al., 2017; Hurst et al., 2019; Ma et al., 2019). It is well known that ARGs have unique biological characteristics: they can spread by horizontal gene transfer among bacteria of different species and self-amplify within the same species (Guo et al., 2017; Kumar et al., 2017).
Sediments are regarded as an important reservoir for the accumulation and transmission of ARGs (Marti et al., 2014). Shen et al. (2020) reported that several ARGs (the sul1, tetG, tetW, tetX, and intl1 genes) were detected in the water and sediment of aquaculture farms in Jiangsu Province, China. Chen B. et al. (2018) explored the ARGs in sediments from bullfrog farms and confirmed that the identified ARGs encode resistance to more than 10 categories of antibiotics, such as aminoglycosides, beta-lactams, chloramphenicols, fluoroquinolones, macrolides, polypeptides, sulfonamides, and tetracyclines.
There is universal concern that the presence of ARGs in sediments is a potential environmental threat. Antibiotic-resistant bacteria constitute a huge repository of ARGs in sediments (Martínez, 2008). Once these ARGs are transferred into human symbiotic microbes, they pose great risks to the ecological environment and human health (Smillie et al., 2011; Forsberg et al., 2012). A study of ARGs in the environment suggested that the existing forms of ARGs largely determine the ways in which these genes are acquired and disseminated among bacterial hosts (Mao et al., 2014). The concept of the integron was first proposed by Stokes and Hall in 1989 (Stokes and Hall, 1989). The integron is a key pathway by which bacteria acquire ARGs, and it influences the removal and transfer of ARGs in the bacterial community (Gaze et al., 2011). As one of the most important mobile genetic elements, the integron can capture, rearrange, and express mobile gene cassettes responsible for the spread of ARGs and further accelerate the prevalence and transmission of ARGs in the environment (Martinez-Freijo et al., 1998; Cambray et al., 2010).
Previous studies have suggested that nutrients also directly or indirectly promote ARG propagation (Zhao et al., 2017). Furthermore, long-term input of nitrogen and phosphorus not only changed the composition of the bacterial community but also drove the propagation of ARGs (Pan et al., 2020). Total organic carbon (TOC) and total dissolved nitrogen (TDN) are potentially important environmental factors that affect the abundance and diversity of ARGs in urban river systems. Moreover, some studies have indicated that bacterial communities shape the distribution and abundance of ARGs (Huerta et al., 2013; Xiong et al., 2015). These findings suggest that the distribution and prevalence of ARGs are not only related to the use of antibiotics but are also affected by many environmental factors. In this study, we aimed to (1) evaluate the relative abundance of 10 ARGs in sediment samples from different aquaculture farms; (2) elucidate the correlation between environmental factors, ARG abundance, and the bacterial community in different aquaculture farms; and (3) identify the co-occurrence patterns between ARGs and bacterial taxa.
Sample Collection
A total of 15 sediment samples were collected from aquaculture ponds distributed in five Chinese provinces, including Anhui (Wuwei, freshwater aquaculture farm), Fujian (Zhangzhou, mariculture farm), Guangxi (Qinzhou, mariculture farm), Hainan (Haikou, freshwater aquaculture farm), and Shaanxi (Heyang, freshwater aquaculture farm), between September and October 2019 (Supplementary Figure 1). Three ponds were selected at every aquaculture farm. These aquaculture ponds produce, on average, 4,000 kg or more of aquatic products per year. Due to the high stocking density, different antibiotics, including sulfonamides, tetracyclines, beta-lactams, and quinolones, were used for prophylactic purposes on these farms. According to our investigation, no bacterial infections occurred in the sampled ponds in the past year.
Each pond has an area of approximately 900-1,200 m² and a depth of approximately 150-200 cm. The sediments of all sampled ponds had not been cleaned for at least 1 year, to ensure that the samples met the requirements. Samples were collected from the water inlet, water outlet, and center area of each pond (the top 10 cm of the sediment) using a CN-100 bottom sampler (Ruibin, China), and the samples from each pond were completely mixed to avoid heterogeneous differences caused by single sampling. After mixing, each sample was sealed in a sterile plastic bag and transported at 4 °C to the laboratory. All the samples were divided into two parts and stored at −80 °C for further analysis.
DNA Extraction and Qualitative PCR of Antibiotic Resistance Genes
Genomic DNA was extracted from 0.25 g of lyophilized sediment sample using the TIANamp Soil DNA kit (Tiangen, China). All operations were performed according to the product instructions. DNA quality was checked using an ultramicro nucleic acid analyzer (Allsheng, China). PCR amplification was performed to test for 10 ARGs (sul1, sul2, sul3, tetM, tetQ, tetW, qnrB, qnrS, blaOXA, and blaCMY) and the class 1 integron integrase gene (intl1), based on the investigation of antibiotic use on the aquaculture farms in this study. The primers for the target genes were synthesized by Sangon Biotech (Shanghai, China), and the primer sequences are shown in Supplementary Tables 1, 2. The PCR conditions were as follows: pre-denaturation at 95 °C for 3 min, then 35 cycles of denaturation at 95 °C for 30 s and annealing at the specified temperature (Supplementary Table 1).
High-Throughput Sequencing
To further analyze the bacterial community composition in the sediment samples, high-throughput sequencing of the bacterial community was performed on an Illumina MiSeq platform at Novogene (Beijing, China). The V3-V4 regions of the bacterial 16S rRNA genes were amplified using the primer pair 341F and 806R. The sequencing data were processed using QIIME software for 16S rRNA datasets, as described previously (Caporaso et al., 2010).
Quantification of Antibiotic Resistance Genes
PCR products of the target genes were purified with the Universal DNA Purification Kit (Tiangen, China) and ligated into the pGM-T vector (Tiangen, China). Subsequently, the pGM-T vector carrying the target gene was transformed into Escherichia coli DH5α (Tiangen, China), and positive clones were obtained after PCR amplification and sequence analysis. The recombinant plasmids carrying the target genes were extracted with the TIANprep Mini Plasmid Kit (Tiangen, China), and their identity was verified against homologs using the NCBI BLAST program. The concentration of the recombinant plasmids was measured with an ultramicro nucleic acid analyzer (Allsheng, China), and standard curves were built from 10-fold serial dilutions of the recombinant plasmids. The amplification efficiency of all primers ranged from 91.07 to 106.64% with R² > 0.99 (Supplementary Table 3). The target-gene copy numbers of the sediment samples were calculated from the Ct values according to a previous study (Yuan et al., 2019).
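For readers who wish to reproduce this kind of calculation, the following is a minimal Python sketch of how copy numbers can be derived from Ct values via a standard curve fitted to a 10-fold dilution series. The dilution points and Ct values shown are hypothetical placeholders rather than data from this study, and the exact procedure used here follows Yuan et al. (2019).

```python
import numpy as np

def standard_curve(log10_copies, ct_values):
    """Fit Ct = slope * log10(copies) + intercept from a 10-fold dilution series."""
    slope, intercept = np.polyfit(log10_copies, ct_values, 1)
    # Amplification efficiency: E = 10^(-1/slope) - 1 (slope ~ -3.32 at 100%)
    efficiency = 10 ** (-1.0 / slope) - 1.0
    return slope, intercept, efficiency

def copies_from_ct(ct, slope, intercept):
    """Invert the standard curve to estimate target-gene copies in a sample."""
    return 10 ** ((ct - intercept) / slope)

# Hypothetical dilution series: 1e3 to 1e8 plasmid copies per reaction
log10_copies = np.array([3.0, 4.0, 5.0, 6.0, 7.0, 8.0])
ct_values = np.array([30.1, 26.8, 23.4, 20.1, 16.9, 13.5])

slope, intercept, eff = standard_curve(log10_copies, ct_values)
print(f"slope = {slope:.2f}, efficiency = {eff * 100:.1f}%")
print(f"copies at Ct = 25: {copies_from_ct(25.0, slope, intercept):.2e}")
```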
Real-time quantitative PCR (qPCR) was performed on a LightCycler 96 instrument (Roche Ltd., Italy) using SYBR Green Pro Taq HS Premix (AG, China) according to the manufacturer's protocol. The 10-µl qPCR reaction system contained 2× SYBR Green Pro Taq HS Premix (5 µl), 10 µM primers (0.2 µl for each primer, Sangon Biotech, China), RNase-free water (4.1 µl), and DNA sample or standard plasmid (0.5 µl). The qPCR amplification conditions were as follows: initial enzyme activation at 95 °C for 30 s, then 40 cycles of 95 °C for 5 s and 60 °C for 30 s.
Analysis of Environmental Factors
The contents of Ca2+, Mg2+, and Cl− were determined by the ethylenediaminetetraacetic acid (EDTA) volumetric method. The concentrations of total nitrogen (TN) and total phosphorus (TP) were determined by spectrophotometric methods (SEPA, 2002; Trolle et al., 2009), and the standard curves for TN and TP are shown in Supplementary Table 4.
Statistical Analysis
Pearson correlation analysis was used to analyze the correlation between environmental factors (TN, TP, Cl−, Ca2+, and Mg2+) and the relative abundance of the bacterial community and ARGs. Non-metric multidimensional scaling (NMDS) analysis was used to evaluate differences in bacterial communities between sampling sites. Redundancy analysis (RDA) was employed to assess the effects of environmental factors and ARGs on the bacterial community. Co-occurrence between the abundance of ARGs and bacterial taxa was analyzed using network analysis based on Pearson correlation (Li et al., 2015).
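As an illustration of the first and last of these steps only, the Python sketch below computes pairwise Pearson correlations (with p-values) between environmental factors and diversity indices and derives a thresholded edge list for a co-occurrence network. The tables are filled with random placeholder values, and the column names and the |r| > 0.6, p < 0.05 thresholds are assumptions for illustration, not taken from this study.

```python
import numpy as np
import pandas as pd
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
samples = [f"S{i}" for i in range(1, 16)]                   # 15 sediment samples (placeholder IDs)
env = pd.DataFrame(rng.random((15, 5)), index=samples,
                   columns=["TN", "TP", "Cl", "Ca", "Mg"])  # environmental factors
div = pd.DataFrame(rng.random((15, 3)), index=samples,
                   columns=["OTU", "ACE", "Chao1"])         # alpha-diversity indices

def correlation_table(x_df, y_df):
    """Pairwise Pearson r and p-values between the columns of two aligned tables."""
    rows = []
    for xc in x_df.columns:
        for yc in y_df.columns:
            r, p = pearsonr(x_df[xc], y_df[yc])
            rows.append({"x": xc, "y": yc, "r": round(r, 3), "p": round(p, 4)})
    return pd.DataFrame(rows)

corr = correlation_table(env, div)
print(corr[corr["p"] < 0.05])  # e.g., Ca2+ versus richness/diversity indices

def cooccurrence_edges(a_df, b_df, r_cut=0.6, p_cut=0.05):
    """Edges for a co-occurrence network: pairs passing |r| and p thresholds."""
    edges = correlation_table(a_df, b_df)
    return edges[(edges["r"].abs() > r_cut) & (edges["p"] < p_cut)]
```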
Environmental Factors in the 15 Sediment Samples
To study the effects of environmental factors on the bacterial community in the sediments, the concentrations of five factors (TN, TP, Cl−, Ca2+, and Mg2+) at the different sampling sites were measured (Supplementary Table 5). Among the sampling sites, TN concentrations ranged from 48.841 ± 0.158 to 193.679 ± 0.45 mg/kg (Figure 1A). The lowest TN concentration appeared in sample HP3 and the highest in sample GP2. The TP concentrations were relatively low at all sampling sites except GP1 (118.757 ± 0.956 mg/kg), GP3 (48.064 ± 1.373 mg/kg), and SP1 (48.938 ± 0.323 mg/kg) (Figure 1B). The Cl− concentration was closely related to the type of aquaculture environment at the sampling sites: the Cl− concentration at sites from mariculture farms was generally higher than at the other sampling sites (Figure 1C). The Ca2+ concentrations of GP3 and FP2 were the highest (1.632 ± 0.889 g/kg) and lowest (0.0800 ± 0.0346 g/kg), respectively (Figure 1D). The Mg2+ concentrations at all sampling sites were generally low, with the exception of GP1 (0.720 ± 0.128 g/kg), GP2 (1.377 ± 0.159 g/kg), and GP3 (3.334 ± 0.703 g/kg) (Figure 1E).
Diversity and Composition of Bacterial Community
From the sequencing of the 16S rRNA gene, a total of 61,871 operational taxonomic units (OTUs) were identified across the 15 sediment samples (Supplementary Table 6). The rank-abundance curves of OTUs were saturated for all samples (Supplementary Figure 3), indicating that the abundance and evenness of the bacterial communities were similar. The ACE, Chao1, Shannon, and Simpson indices describe the richness and diversity of the bacterial communities (Supplementary Table 6). NMDS analysis based on OTU abundance also indicated that there was no obvious geographic clustering of bacterial communities among the different sediment samples (Supplementary Figure 4).
At the phylum level, four predominant phyla (Proteobacteria, Firmicutes, Chloroflexi, and Bacteroidota) were detected in all sediment samples (Figure 2A). Cyanobacteria had a minor abundance, accounting for 0.30-9.39% of the total bacterial 16S rRNA sequence libraries. At the genus level, the 16S rRNA sequence libraries revealed 30 predominant bacterial genera across the 15 sediment samples. The bacterial community mainly included Methanosaeta, Sphingomonas, Sulfurovum, and Thiobacillus (Figure 2B). Within the phylum Proteobacteria, Dechloromonas, Pseudomonas, Sphingomonas, and Thiobacillus were the predominant genera of the bacterial community. In the sediments of FP1, FP2, and FP3, Sulfurovum and Sulfurimonas had a higher abundance (Supplementary Figure 5).
Effect of Environmental Factors on the Diversity of the Bacterial Community
Pearson correlation analysis was used to evaluate the effects of environmental factors on the bacterial community structure. The results indicated that there was no significant correlation between the concentrations of TN, TP, Cl−, and Mg2+ and the richness and diversity of the bacterial community. However, the Ca2+ concentration had a significantly negative correlation with the richness and diversity of the bacterial community [OTU (r = −0.61, p < 0.05), ACE (r = −0.65, p < 0.05), and Chao1 index (r = −0.62, p < 0.05)].
Abundance and Distribution of Antibiotic Resistance Genes
To analyze the ARG distribution in different aquaculture farms, the 10 tested ARGs and the 16S rRNA gene were investigated. The ARG abundance was normalized to 16S rRNA gene abundance to compare the differences in ARGs between samples. As shown in Figure 3A, the ARGs were classified into four categories (sulfonamide, tetracycline, beta-lactam, and fluoroquinolone resistance genes) plus the integron. A higher abundance of sulfonamide resistance genes was detected in all samples, whereas a higher abundance of tetracycline resistance genes was detected in the samples FP3, HP2, and HP3. The highest abundance of total ARGs was detected in the HP samples (HP1, HP2, and HP3). The beta-lactam resistance genes had the highest abundance in the FP samples (FP1, FP2, and FP3).
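Purely as an illustration of the normalization step, the short Python sketch below divides absolute ARG copy numbers by 16S rRNA gene copy numbers to obtain relative abundances; the values and sample names are hypothetical placeholders, not measurements from this study.

```python
import pandas as pd

# Hypothetical qPCR results: absolute gene copy numbers per gram of dry sediment
copies = pd.DataFrame(
    {"sul1": [2.1e6, 8.4e5], "tetW": [5.3e5, 9.9e5], "16S": [3.2e8, 2.7e8]},
    index=["sample_A", "sample_B"],
)

# Relative abundance of each ARG = ARG copies / 16S rRNA gene copies
rel_abundance = copies.drop(columns="16S").div(copies["16S"], axis=0)
print(rel_abundance)
```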
The distribution of ARGs is shown in Figure 3B. The sul1, sul2, and tetM genes showed the highest abundance in all sediment samples, while the sul3 and qnrS genes were detected only in the sediments from HP (HP1, HP2, and HP3) and SP (SP1, SP2, and SP3), respectively. Overall, the ARG levels at the different sampling sites were clearly different, which could be associated with the different types of antibiotics used on the different aquaculture farms.
Correlations Among Bacterial Community, Antibiotic Resistance Genes, and Environmental Factors
RDA was performed to further explore the correlation between the bacterial communities of the 15 sediment samples, ARG abundance, and environmental factors (Figure 4). The weights of the variables making up the canonical axes of the RDA are summarized in Supplementary Table 7. We found that TN, TP, Cl−, Ca2+, sul1, blaCMY, intl1, qnrS, and tetW had a significant correlation with the bacterial community in the 15 sediment samples (permutations = 999, p < 0.05), explaining 61.82% of the overall variation in the bacterial community. RDA1 and RDA2 explained 44.36 and 17.46% of the total variance, respectively. A positive correlation was found between the environmental factors detected in this study and the bacterial communities of AP1, GP1, GP2, GP3, SP1, SP2, and SP3, and the bacterial communities of these sampling sites showed a negative correlation with ARGs. Moreover, the abundance of ARGs (sul1, tetW, qnrS, and intl1) had a much higher correlation with the bacterial communities of AP2, AP3, HP1, HP2, and HP3, while blaCMY had a stronger correlation with the bacteria of FP (FP1, FP2, and FP3).
Furthermore, correlation analysis of the 10 ARGs showed significant correlations among multiple ARGs (Supplementary Figure 6). In particular, the intl1 gene was correlated with the sul1, sul2, sul3, blaOXA, qnrS, tetM, tetQ, and tetW genes. A Mantel test was also performed to determine whether there was a high correlation between the total ARGs and intl1. The results showed a significant correlation (permutations = 999, r = 0.8013, p < 0.01) between the total ARGs and intl1.
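For readers unfamiliar with the Mantel test, the following is a minimal Python sketch of the underlying computation: the Pearson correlation between the upper triangles of two distance matrices, with a permutation-based p-value. The distance matrices here are built from random placeholder data; the actual matrices, distance measure, and software used in this study are not specified beyond the permutation count.

```python
import numpy as np

def mantel(d1, d2, permutations=999, seed=0):
    """Mantel test: Pearson r between the upper triangles of two symmetric
    distance matrices, with a one-sided permutation p-value."""
    rng = np.random.default_rng(seed)
    n = d1.shape[0]
    iu = np.triu_indices(n, k=1)
    x, y = d1[iu], d2[iu]
    r_obs = np.corrcoef(x, y)[0, 1]
    count = 0
    for _ in range(permutations):
        perm = rng.permutation(n)
        r_perm = np.corrcoef(d1[perm][:, perm][iu], y)[0, 1]
        if r_perm >= r_obs:
            count += 1
    p = (count + 1) / (permutations + 1)
    return r_obs, p

# Toy example with two related Euclidean distance matrices (placeholder data)
rng = np.random.default_rng(1)
pts = rng.random((15, 4))
d_args = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
d_intl = d_args + rng.normal(0.0, 0.05, d_args.shape)
d_intl = (d_intl + d_intl.T) / 2.0
np.fill_diagonal(d_intl, 0.0)
print(mantel(d_args, d_intl))
```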
DISCUSSION
Aquaculture ponds are regarded as a major reservoir of antibiotic-resistant bacteria and ARGs due to the over-reliance on antibiotics (Boran et al., 2013). However, excessive antibiotics and their metabolites have been released into the environment due to the abuse and misuse of antibiotics (Bu et al., 2013; Liu and Wong, 2013). Relevant studies have indicated that antibiotic contamination can lead to the emergence of ARGs in the environment (Qiao et al., 2018; Yang et al., 2018). In this study, we explored the correlation between environmental factors, ARGs, and the bacterial community in different aquatic environments.
FIGURE 4 | Redundancy analysis (RDA) of environmental factors, antibiotic resistance genes (ARGs), and bacterial communities.
Sulfonamides and tetracyclines are widely used on aquatic farms (Luo et al., 2010), and the abundance of sul and tet genes is significantly correlated with the use of the corresponding antibiotics (Gao et al., 2012). Generally, the establishment of ARGs requires selective pressure from antibiotics over a long period. However, once the selective pressure has been established, the ARGs persist and are difficult to eliminate even if the pressure is removed (Pei et al., 2006; Xiao et al., 2016). In this study, a higher abundance of sul and tet genes was detected in the sediment samples, consistent with previous reports that sul and tet genes are the dominant ARGs in aquaculture water environments (Hoa et al., 2008; Tamminen et al., 2011). Remarkably, the abundance of sul genes was higher than that of tet genes, except for the sample from FP3. A previous study also indicated that sul genes persist longer than tet genes (McKinney et al., 2010). The intl1 gene is a mobile genetic element gene that exists widely in Gram-positive and Gram-negative bacteria (Zeng et al., 2019), and it is regarded as an important genetic marker of pollution caused by human activity (Gillings et al., 2015). The abundance of the intl1 gene serves as a proxy for anthropogenic pollution, in part because it is linked to genes conferring resistance to antibiotics, and intl1 is also closely related to multidrug resistance (MDR). Our study revealed that the abundance of ARGs (sul1, sul2, sul3, tetW, tetQ, tetM, blaOXA, and qnrS) was significantly correlated with the abundance of intl1, indicating that intl1 may play a key role in ARG proliferation and diffusion in the sediment of aquaculture farms. Moreover, a previous study reported co-occurrence patterns among many ARGs in pig farm wastewater. Correlation analysis in this study also showed a significant positive correlation among the different types of ARGs. Bacteria carrying multiple ARGs can easily acquire resistance to antibiotics (Trudel et al., 2016); therefore, the potential environmental risk of ARGs warrants attention.
In this study, significant differences were observed in the bacterial communities among the different aquatic farms. Proteobacteria and Firmicutes were the dominant phyla in all sediment samples. Similar results were found in pig farms and in the sediment of a shrimp farm (Zeng et al., 2020). Within the phylum Proteobacteria, Sphingomonas was the predominant genus. It has been reported that the genome of Sphingomonas contains multiple efflux pumps (Jia et al., 2019), suggesting that Sphingomonas may be well suited to persist in the sediments of aquatic environments. A previous study implied that the physicochemical properties of the environment may influence the bacterial community by affecting nutrient availability or physiological activity. The present study found that the bacterial communities from AP1, GP1, GP2, GP3, SP1, SP2, and SP3 were significantly correlated with environmental factors. In addition, the concentrations of Mg2+, Ca2+, and Cl− in the environment influence the composition of the bacterial community. It is worth noting that Ca2+ had a significant negative correlation with the richness and diversity of the bacterial communities. Interestingly, calcium carbonate is widely used on aquaculture farms, leading to a high accumulation of Ca2+ in the sediments. Therefore, the excessive use of calcium carbonate might decrease the diversity and richness of bacterial communities in the environment.
The aquatic environment is gradually becoming a reservoir of antibiotic-resistant bacteria because of the use and abuse of antibiotics on aquatic farms (Huang et al., 2017). Previous studies confirmed that some bacterial taxa from Firmicutes are the dominant ARG-carrying bacteria (Zhang et al., 2021). We also found that Proteiniclasticum and Soehngenia from Firmicutes might be the main potential hosts of ARGs, as they showed strong co-occurrence with the sul1, sul2, blaOXA, qnrS, tetQ, tetM, and intl1 genes. Similarly, Anaerolinea and Leptolinea from Chloroflexi showed strong co-occurrence with ARGs (sul1, sul2, blaOXA, qnrS, tetW, tetQ, tetM, and intl1). However, Sulfurovum from Campilobacterota co-occurred only with blaCMY, and Campilobacterota was the dominant phylum in the FP samples (FP1, FP2, and FP3). Furthermore, there was a stronger co-occurrence between the tetM gene and six bacterial taxa in these samples. A previous study revealed similar results in the soil of swine feedlots. It is worth noting that the tetM gene has been regarded as a detection tool to track and monitor ARG transport in agricultural systems (Cadena et al., 2018). Our research found that the ARGs have complex co-occurrence relationships with the bacterial taxa in sediment, indicating that some bacterial taxa in the sediments could be resistant to multiple antibiotics. Overall, this study indicated that the ARGs in the sediments of aquaculture farms have an impact on the environment and bacterial communities, and more attention and preventive measures are needed.
CONCLUSION
The present study indicated that sulfonamide and tetracycline resistance genes were the predominant ARGs in the sediments of the investigated aquatic farms. Some bacterial taxa from the phyla Chloroflexi, Euryarchaeota, Firmicutes, Halobacterota, and Proteobacteria might be the main potential hosts of ARGs on these aquatic farms. Moreover, excessive Ca2+ might inhibit the diversity and richness of bacterial communities.
DATA AVAILABILITY STATEMENT
The raw sequencing datasets from this study can be found in the NCBI repository (http://www.ncbi.nlm.nih.gov/bioproject/708165).
AUTHOR CONTRIBUTIONS
XC and HL played an important role in the conception of the study. CL and YS finished the part of the experiment. RZ and HX organized the original data. YL and XS performed the data analysis. XC wrote the first manuscript. HL edited the final manuscript. All authors contributed to the article and approved the submitted version.
|
v3-fos-license
|
2014-10-01T00:00:00.000Z
|
2010-03-22T00:00:00.000
|
4985802
|
{
"extfieldsofstudy": [
"Biology",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0009729&type=printable",
"pdf_hash": "6efe1543d3e4b96d7087c484b7ec6822ee44bde8",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46584",
"s2fieldsofstudy": [
"Biology",
"Environmental Science"
],
"sha1": "6efe1543d3e4b96d7087c484b7ec6822ee44bde8",
"year": 2010
}
|
pes2o/s2orc
|
The Complete Multipartite Genome Sequence of Cupriavidus necator JMP134, a Versatile Pollutant Degrader
Background Cupriavidus necator JMP134 is a Gram-negative β-proteobacterium able to grow on a variety of aromatic and chloroaromatic compounds as its sole carbon and energy source. Methodology/Principal Findings Its genome consists of four replicons (two chromosomes and two plasmids) containing a total of 6631 protein coding genes. Comparative analysis identified 1910 core genes common to the four genomes compared (C. necator JMP134, C. necator H16, C. metallidurans CH34, R. solanacearum GMI1000). Although secondary chromosomes found in the Cupriavidus, Ralstonia, and Burkholderia lineages are all derived from plasmids, analyses of the plasmid partition proteins located on those chromosomes indicate that different plasmids gave rise to the secondary chromosomes in each lineage. The C. necator JMP134 genome contains 300 genes putatively involved in the catabolism of aromatic compounds and encodes most of the central ring-cleavage pathways. This strain also shows additional metabolic capabilities towards alicyclic compounds and the potential for catabolism of almost all proteinogenic amino acids. This remarkable catabolic potential seems to be sustained by a high degree of genetic redundancy, most probably enabling this catabolically versatile bacterium with different levels of metabolic responses and alternative regulation necessary to cope with a challenging environment. From the comparison of Cupriavidus genomes, it is possible to state that a broad metabolic capability is a general trait for Cupriavidus genus, however certain specialization towards a nutritional niche (xenobiotics degradation, chemolithoautotrophy or symbiotic nitrogen fixation) seems to be shaped mostly by the acquisition of “specialized” plasmids. Conclusions/Significance The availability of the complete genome sequence for C. necator JMP134 provides the groundwork for further elucidation of the mechanisms and regulation of chloroaromatic compound biodegradation.
Introduction
Cupriavidus necator JMP134 (formerly Ralstonia eutropha JMP134) is a Gram-negative β-proteobacterium able to degrade a variety of chloroaromatic compounds and chemically related pollutants. It was originally isolated based on its ability to use 2,4-dichlorophenoxyacetic acid (2,4-D) as a sole carbon and energy source [1]. In addition to 2,4-D, this strain can also grow on a variety of aromatic substrates, such as 4-chloro-2-methylphenoxyacetate (MCPA), 3-chlorobenzoic acid (3-CB) [2], 2,4,6-trichlorophenol [3], and 4-fluorobenzoate [4]. The genes necessary for 2,4-D utilization have been identified. They are located in two clusters, tfd I and tfd II, on plasmid pJP4 [5,6,7,8]. The sequence and analysis of plasmid pJP4 were reported and a congruent model for bacterial adaptation to chloroaromatic pollutants was proposed [9]. According to this model, catabolic gene clusters assemble in a modular manner into broad-host-range plasmid backbones by means of repeated chromosomal capture events.
Cupriavidus and related Burkholderia genomes are typically multipartite, composed of two large replicons (chromosomes) accompanied by classical plasmids. Previous work with Burkholderia xenovorans LB400 revealed a differential gene distribution with core functions preferentially encoded by the larger chromosome and secondary functions by the smaller [10]. It has been proposed that the secondary chromosomes in many bacteria originated from ancestral plasmids which, in turn, had been the recipient of genes transferred earlier from ancestral primary chromosomes [11]. The existence of multiple Cupriavidus and Burkholderia genomes provides the opportunity for comparative studies that will lead to a better understanding of the evolutionary mechanisms for the formation of multipartite genomes and the relation with biodegradation abilities.
Genome sequencing and assembly
The complete genome of C. necator JMP134 was sequenced at the Joint Genome Institute using a combination of 3 kb and fosmid (40 kb) libraries. Library construction, sequencing, finishing, and automated annotation steps were performed as described at the JGI web page (http://www.jgi.doe.gov/sequencing/index.html). Gene prediction was performed using CRITICA [12] and Glimmer [13], followed by manual inspection of the automatically predicted gene models. Predicted coding sequences (CDSs) were manually analyzed and evaluated using the Integrated Microbial Genomes (IMG) annotation pipeline (http://img.jgi.doe.gov) [14]. CLUSTALW was used for sequence alignments [15]; phylogenetic trees were built using Phylip.
Genome analysis
Functional annotation and comparative analysis of C. necator with related organisms were performed using a set of tools available in IMG. Unique and orthologous C. necator genes were identified using BLASTp (reciprocal best BLASTp hits with cutoff scores of E < 10^-5 and 60% identity). Signal peptide cleavage sites were identified using SignalP 3.0 [16] and transmembrane proteins were predicted using TMHMM [17], both with their default settings. Synteny plots were made using Promer, a subroutine of MUMmer [13].
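As an illustration of the reciprocal-best-hit criterion described above, the following Python sketch filters BLASTp tabular output (the standard -outfmt 6 column order is assumed) by E-value and percent identity and keeps only gene pairs that are each other's best hit in both directions. The file names are hypothetical placeholders, and IMG's actual ortholog pipeline may differ in detail.

```python
import csv

def best_hits(blast_tab, max_evalue=1e-5, min_identity=60.0):
    """Best hit per query from BLAST tabular output (-outfmt 6: qseqid sseqid
    pident length mismatch gapopen qstart qend sstart send evalue bitscore)."""
    best = {}
    with open(blast_tab) as fh:
        for q, s, pident, *rest in csv.reader(fh, delimiter="\t"):
            evalue, bitscore = float(rest[-2]), float(rest[-1])
            if float(pident) < min_identity or evalue > max_evalue:
                continue
            if q not in best or bitscore > best[q][1]:
                best[q] = (s, bitscore)
    return {q: s for q, (s, _) in best.items()}

def reciprocal_best_hits(a_vs_b_tab, b_vs_a_tab):
    """Gene pairs that are each other's best BLASTp hit in both directions."""
    ab = best_hits(a_vs_b_tab)
    ba = best_hits(b_vs_a_tab)
    return [(qa, sb) for qa, sb in ab.items() if ba.get(sb) == qa]

# Example usage (hypothetical file names):
# orthologs = reciprocal_best_hits("JMP134_vs_H16.blastp.tab", "H16_vs_JMP134.blastp.tab")
```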
GenBank accession numbers
The sequences of the four genomic replicons described here have been deposited in GenBank (accession numbers CP000090-CP000093), and the project information has been deposited in the Genomes OnLine Database (Gc00292) [18].
General genome features
The genome of C. necator JMP134 consists of four DNA molecules: two circular chromosomes and two plasmids (Table 1 and Figure 1). The four replicons combined contain 6,631 protein coding sequences (CDSs), of which 4,898 (73.8%) could be assigned a putative function. There are 87 RNA genes including 66 tRNAs and six rRNA loci, each arranged in the order of 5S-23S-16S. Also identified were 83 pseudogenes. Analysis of the distribution of genes representing major functional categories reveals that chromosome 1 encodes most of the key functions required for transcription, translation, and DNA replication, while chromosome 2 encodes functions involved in energy production and conversion, secondary metabolism, and amino acid transport and metabolism.
Comparative genomics
Various comparisons were made between the genome of C. necator JMP134 and four other closely related β-proteobacteria that also possess multipartite genomes (Table 1). Synteny plots comparing C. necator JMP134 with other closely related Cupriavidus/Ralstonia genomes (C. necator H16, C. metallidurans CH34, and Ralstonia solanacearum GMI1000) reveal extensive conservation of chromosome 1 but a lack of synteny in chromosome 2 (Figure 2). The origin and evolutionary history of chromosome 2 probably include multiple occurrences of gene duplication and lateral gene transfer (see below). Notably, in all four species chromosome 2 contains three copies of the rRNA locus, indicating past recombination between chromosomes 1 and 2.
These four genomes were also compared by determining the numbers of genes encoded by each that are unique to one organism and the number that are shared by two, three, or all four strains (Figure 3). Protein identity was defined conservatively using reciprocal best BLASTp hits with a cutoff of 60% identity of the amino acid sequence. By that criterion, 1910 genes are found in all four strains (1713 on chromosome 1, 197 on chromosome 2).
Approximately 28.7% of the CDSs in the genome of C. necator JMP134 (1904 out of 6,631) were not found in any of the other three genomes. These 1904 unique genes are distributed among all four replicons: 552 on chromosome 1, 841 on chromosome 2, 432 on the megaplasmid, and 80 on plasmid pJP4. Of the 552 unique genes on chromosome 1, 43 (8%) have no orthologs or paralogs in the current version of IMG; 87 (15%) have a best BLASTp hit within C. necator JMP134, indicating that they arose from gene duplication; and 422 (76%) have a best BLASTp hit to other organisms within the database (Figure 4). The majority of those organisms are other β-proteobacteria, particularly Burkholderiaceae, with a minor percentage also from the Alcaligenaceae and Comamonadaceae β-proteobacterial families. A sizable minority of them (~30%) are found in other phylogenetically diverse soil bacteria. Of the 841 unique genes on chromosome 2 of C. necator, 47 (6%) have no orthologs or paralogs, 181 (22%) have a best BLASTp hit within the C. necator JMP134 genome, and 612 (73%) have a best BLASTp hit to other genomes (Figure 4). These data indicate that the evolution of these two chromosomes has involved substantial gene duplication and extensive lateral gene transfer events (preferentially with related organisms, i.e., β-proteobacteria).
To analyze the functional content of these unique genes we examined their distribution among particular COGs (Figure S1). Excluding COGs R and S (categorized as General features and Hypothetical Functions, respectively), the data indicate that the majority of the unique genes belong to COG K. COG K refers to transcription, and the majority of these unique genes are transcriptional regulators. Although the distribution of unique genes across the various COG categories differs among the four organisms, a significant number of unique genes belong to signal transduction pathways (COG T, mainly histidine kinases and response regulators), energy production and conversion (COG C, mainly dehydrogenases, oxidases and hydroxylases), amino acid transport and metabolism (COG E, mainly transporters), and lipid metabolism (COG I, mainly acyl-CoA synthetases and dehydrogenases, enoyl-CoA hydratases).
Similarly, C. eutropha H16 has 2000 genes that are not present in any of the other three strains: 784 on chromosome 1, 956 on chromosome 2, and 258 in its megaplasmid pHG1. Interestingly, orthologs for 122 genes found in megaplasmid pHG1 are present on the chromosomes of the other two Cupriavidus strains: 35 in C. necator JMP134 and 82 in C. metallidurans CH34.
Of the 2,449 genes identified on chromosome 2 of C. necator JMP134, 460 (18.8%) have orthologs on chromosome 1 of either C. eutropha H16, C. metallidurans CH34, or R. solanacearum, but only 45 of them have orthologs in more than one genome.
The prevailing hypothesis for the origin of the secondary chromosome in the multipartite genomes of Cupriavidus and Burkholderia posits that it evolved from ancestral plasmids. We sought to determine whether these putative ancestral plasmids were the same in the Cupriavidus/Ralstonia and Burkholderia lineages. Since chromosome 2 encodes homologs of ParA and ParB (proteins involved in the active partitioning of low-copy-number plasmids), we investigated the similarity and phylogenetic relationships of the ParA and ParB proteins encoded by chromosome 2 in 19 β-proteobacteria from those three genera (Figure 5). Figure 5A shows the similarity of the C. necator ParB and DnaA (present on chromosome 1) to the corresponding proteins of the other lineages. Although the identity of the DnaA proteins is preserved at around 70%, the identity of the ParB proteins is significantly lower among Cupriavidus/Ralstonia and Burkholderia species (~28%). Phylogenetic analysis (Figure 5B) also indicates that ParB proteins from the Cupriavidus and Ralstonia lineages form distinct groups. Taken together, these data suggest that two distinct plasmids (one for Cupriavidus/Ralstonia and one for Burkholderia) may have been the origin of the secondary chromosomes present in the genera Cupriavidus/Ralstonia and Burkholderia.
[Displaced fragment of the Figure 1 legend, panel 1D (plasmid pJP4): circles show, from outside to inside, COG assignments for CDSs on the plus and minus strands, RNA genes (green = tRNAs; red = rRNAs; black = other RNAs), genes not found in C. eutropha H16, C. metallidurans CH34, or R. solanacearum GMI1000 (chromosomes 1 and 2 only), %G+C, and GC skew (G-C/G+C), with colors denoting functional categories. doi:10.1371/journal.pone.0009729.g001]
Catabolism of aromatic compounds
We have reconstructed the metabolic pathways for aromatic compound degradation in C. necator JMP134, comparing the catabolic abilities found in silico with the range of compounds that support growth of this strain [19]. C. necator is able to use 60 aromatic compounds as a sole carbon and energy source. Aromatic degradation pathways have been classified as central and peripheral. Peripheral pathways transform a large variety of aromatic compounds into a few key intermediates (such as gentisate, catechol, and benzoyl-CoA) which are subsequently degraded via the central pathways. All of the central ring-cleavage pathways for aromatic compounds known in Proteobacteria, with the exception of the homoprotocatechuate pathway, are found in this strain: the β-ketoadipate pathway, with its catechol, chlorocatechol, and protocatechuate ortho ring-cleavage branches (cat, tfd, and pca genes, respectively); the 4-methylcatechol ortho ring-cleavage pathway (mml genes); the gentisate ring-cleavage pathway (mhb genes); the phenylacetyl-CoA ring-cleavage pathway (paa genes); the homogentisate ring-cleavage pathway (hmg genes); the 2,3-dihydroxyphenylpropionate meta ring-cleavage pathway (mhp genes); the catechol meta ring-cleavage pathway (phl genes); the chlorohydroxyquinol ortho ring-cleavage pathway (tcp genes); the aminohydroquinone ring-cleavage pathway (mnp genes); and the 2-aminobenzoyl-CoA ring-cleavage pathway (abm genes). The approximately 300 genes predicted to be directly involved in catabolism of aromatic compounds were found to be more or less equally distributed between chromosomes 1 and 2. Gene redundancy is predicted to play a significant role in the catabolic potential of C. necator. Redundant functions were observed in the catechol, protocatechuate, salicylate, and phenylacetyl-CoA pathways; in the degradative pathways for benzoate and chloroaromatic compounds; in some of the p-hydroxybenzoate and (methyl)phenol peripheral reactions; in the presence of several meta ring-cleavage enzymes; and in other oxygenases, maleylacetate reductases, and regulatory proteins. In total, the genome of C. necator encodes more than 70 oxygenases belonging to the main oxygenase groups that function in the catabolism of aromatic compounds. Is this extensive catabolic versatility shared by other soil bacteria? Genome-wide studies performed on P. putida KT2440 [20], B. xenovorans LB400 [10], Rhodococcus jostii RHA1 [21], and "A. aromaticum" EbN1 [22] show a significant degree of catabolic versatility, based on the high number of aromatic pathways encoded, suggesting that bacteria with such capabilities may be more common in nature than previously supposed.
Transport of aromatic compounds
A search for transporter genes in the vicinity of genes encoding aromatic degradative enzymes located ABC transporters from several families, including family 4 ABC transporters. This group, originally identified as branched-chain amino acid transporters, has more recently been found to also transport other amino acids and urea (http://www.tcdb.org). One member of this family is known to function in the transport of aromatic compounds [23]. C. necator JMP134 contains several family 4 ABC transporters that are predicted to transport aromatic compounds, most, but not all, of which are shared with other Cupriavidus strains.
One family 4 transporter (Reut_A1329-1333) shared by the three Cupriavidus strains is adjacent to genes involved in benzoate degradation. It is similar to the transporter found in the box operon in Azoarcus evansii [24], and also to an hba operon in R. palustris GCA009 that encodes hydroxybenzoate degradation [25]. Another family 4 ABC transporter (Reut_B3779-3783), adjacent to a ring-hydroxylating dioxygenase, is found only in C. necator JMP134 and C. eutropha H16. In a family 4 ABC transporter found also in C. metallidurans CH34 and R. solanacearum GMI1000, the binding protein (Reut_B4017) is separated by several genes from the permease and ATPase components (Reut_B4007-4010), which are, in turn, adjacent to a gene encoding a 4-hydroxybenzoate 3-monooxygenase. However, these transporters (Reut_B3779-3783 and Reut_B4007-4010, Reut_B4017) do not cluster with sequences related to the degradation of aromatic compounds.
Two putative aromatic compound ABC transporters that are unique to C. necator JMP134 are located on plasmids. One (Reut_C6326-6330) is found on the megaplasmid, one gene away from a putative 3-chlorobenzoate 3,4-ring-hydroxylating dioxygenase. The other (Reut_D6487-6490) is on plasmid pJP4 [9]. However, this transporter has a high similarity to a probable urea transporter in the C. necator JMP134 genome (Reut_A0986-0990) that is adjacent to urease-encoding genes.
Some ABC transporter families that were not previously known to transport aromatic compounds are found in the vicinity of aromatic degradative enzymes, including two from families 15/16 (COG0715). One full transporter (Reut_B5799-5801) and one binding protein (Reut_C6311) may be involved in aromatic compound transport. A family 2 ABC transporter (Reut_B4133-4136) may also function in aromatic compound transport, as it is directly adjacent to a dioxygenase putatively involved in ring hydroxylation. The only closely related transporter found is in Bradyrhizobium japonicum, where it is also adjacent to genes of aromatic catabolism.
C. necator JMP134 has only two members of the benzoate:proton symporter family (TC 2.A.46): Reut_A2362, which is shared with C. metallidurans CH34 and R. solanacearum GMI1000, and Reut_B5351, which is unique to strain JMP134. Also found in C. necator JMP134 are 13 members of a family of aromatic acid transporters, family 15 of the major facilitator superfamily (MFS).
In addition, C. necator JMP134 has one MFS family 27 transporter and one family 30 transporter, both likely to be involved in aromatic compound uptake.
We investigated the possible presence of permease-type aromatic transporters by searching for homologs to the following proteins: BenK from Acinetobacter baylyi ADP-1 (the only benzoate transporter with a biochemically confirmed function); VanK, MucK, and PcaK from A. baylyi ADP-1 (transporters with other biochemically confirmed transport functions); and four putative transporter proteins (BenK from Pseudomonas putida PRS2000, PcaK from Azoarcus sp. EbN1, BenK from Rhodococcus sp. RHA1, and a putative transporter from A. baylyi ADP-1). This search identified 30 possible transporters with varying degrees of similarity to described aromatic acid transporters of this type.
Additional metabolic features
In addition to the broad catabolic potential towards aromatic compounds, strain JMP134 degrades various other pollutants such as cyclohexanecarboxylate, tetrahydrofurfuryl alcohol and acetone. The pathways utilized for the degradation of the above compounds correspond to the ones described in other bacteria (Table S1) [26,27,28,29,30,31,32].
Some interesting groups of enzymes without a defined physiological role are also encoded in the genome of this bacterium. (i) Bacterial dehalogenases are important in the metabolism of diverse halogenated compounds originating from natural and anthropogenic sources [33,34], and representatives of different kinds of dehalogenases appear to be encoded in the genome of strain JMP134. They include homologs of the hydrolytic (S)-2-haloacid dehalogenase (Reut_A1952 and Reut_B5662) and a reductive dehalogenase belonging to the glutathione S-transferase (GST) superfamily (Reut_C5979), probably involved in dechlorination of 2-chloro-5-nitrophenol [19]. Additionally, two contiguous genes (Reut_A1486 and Reut_A1487), both belonging to the GST family, show high identity with ORF3 and ORF4 of the tft cluster involved in metabolism of 2,4,5-trichlorophenoxyacetate by Burkholderia cepacia AC1100 [35], suggesting a probable role as dechlorinating enzymes in the catabolism of chloroaromatic compounds. (ii) Bacterial nitroreductases are flavoenzymes that catalyze the NAD(P)H-dependent reduction of the nitro groups of nitroaromatic and nitroheterocyclic compounds. These enzymes have attracted great interest due to their potential applications in bioremediation and biocatalysis [36]. At least four nitroreductases probably involved in the metabolism of nitroaromatic or nitroheterocyclic compounds are encoded in the genome of strain JMP134: Reut_B3607, Reut_C6301, Reut_C5940, and Reut_C5984. The last three are encoded by genes located on the megaplasmid and lack close homologs in the other Cupriavidus/Ralstonia strains, suggesting that this replicon could be specialized in the catabolism of nitroaromatic compounds, in addition to 3-nitrophenol catabolism [19]. (iii) Baeyer-Villiger monooxygenases (BVMOs) are flavoproteins that hydroxylate alicyclic, aliphatic, or aryl ketones to form the corresponding esters, which can easily be hydrolyzed. These enzymes attract considerable interest for industrial applications since they are able to perform highly regio- and enantioselective oxygenations on several substrates. Strain JMP134 has four genes putatively encoding BVMOs (Reut_B5461, Reut_C6279, Reut_B4935, and Reut_B5155) that are scattered across the genome and are present in clusters with other genes coding for metabolism downstream of the monooxygenase reaction (i.e., esterases, hydrolases, and alcohol/aldehyde dehydrogenases), but this does not shed enough light on their physiological substrates. A few related homologs are also found in the other Cupriavidus genomes.
Degradation of amino acids
C. necator JMP134 is able to grow on all the proteinogenic amino acids except glycine, methionine, arginine, and lysine [37]. This pattern of amino acid utilization is identical for C. necator H16 and slightly different for C. metallidurans CH34, which is unable to use tryptophan and cysteine but grows on glycine and lysine [37]. It should be noted that glutamine and asparagine were not included in that study [37].
The inability of strain JMP134 to grow on arginine is consistent with the absence of genes coding for any of the four arginine catabolic pathways described in bacteria: the arginine deiminase, arginine decarboxylase, arginine dehydrogenase, and arginine succinyltransferase pathways [38]. These genes are also absent in Cupriavidus/Ralstonia strains H16, CH34, LMG19424, GMI1000, and 12J. Likewise, the absence of genes coding for the cadaverine, aminovalerate, and aminoadipate pathways involved in the degradation of lysine [39] is consistent with the inability of this bacterium to grow on this amino acid. These genes are also not found in the other Cupriavidus/Ralstonia strains, but the presence of a putative ornithine/lysine/arginine decarboxylase (Reut_A0689, H16_A2930, Rmet_2754, RALTA_A2412, RSc2365, Rpic_2578) in all the Cupriavidus/Ralstonia strains is intriguing, since the ability to grow on these amino acids is not a metabolic trait of these genera. An explanation for this apparent inconsistency is that the role of this putative ornithine/lysine/arginine decarboxylase in Cupriavidus/Ralstonia strains is exclusively in acid resistance and not in catabolism, since this kind of amino acid decarboxylase is acid-induced and is part of an enzymatic system in E. coli that contributes to making this organism acid-resistant [40].
The inability of JMP134 and the rest of the Cupriavidus/Ralstonia strains to use methionine as a growth substrate is consistent with the absence of L-methionine γ-lyase, a pyridoxal 5′-phosphate-dependent enzyme that catalyzes the direct conversion of L-methionine into α-ketobutyrate, methanethiol, and ammonia [41].
The presence of a putative glycine cleavage enzyme system in C. necator JMP134, encoded by the gcvTHP genes (Table S1), catalyzing the oxidative cleavage of glycine to CO2 and NH3 with transfer of a one-carbon unit to tetrahydrofolate, would appear to contradict the inability of this strain to grow on glycine. However, it should be noted that the metabolism of one-carbon compounds in C. necator JMP134 is not sufficient to support growth on these compounds as sole carbon source, and they are only used as an auxiliary energy source [37], in contrast with chemolithoautotrophic strains such as H16 and CH34 (see energy metabolism section).
Glutamine is also included among the amino acids that do not support growth of C. necator JMP134, since a glutaminase-encoding gene, enabling the transformation of glutamine to glutamate, is not found in this strain, although it is present in strains CH34 and GMI1000. A gene encoding a bifunctional proline dehydrogenase/pyrroline-5-carboxylate dehydrogenase, catalyzing the four-electron oxidation of proline to glutamate, is found in the genome of strain JMP134 (Table S1) and the rest of the Cupriavidus/Ralstonia strains, allowing the utilization of proline by these bacteria. Consistent with this trait, a glutamate dehydrogenase-encoding gene, converting glutamate to α-ketoglutarate and thus directly feeding the tricarboxylic acid cycle, is found in strain JMP134 (Table S1) and the rest of the Cupriavidus strains, but not in strains 12J and GMI1000.
The presence in strain JMP134 of an L-asparaginase-encoding gene, enabling the hydrolysis of L-asparagine to L-aspartate and ammonia (Table S1), would suggest that this strain is able to use this amino acid as sole carbon and energy source. This gene is also encoded in the genomes of the rest of the Cupriavidus strains but not in strains 12J and GMI1000. The aspartate formed can be metabolized through conversion to oxaloacetate by L-aspartate oxidase (NadB), or to fumarate by aspartate-ammonia-lyase (AspA) (Table 1). The presence of an L-aspartate oxidase-encoding gene is common to the rest of the Cupriavidus/Ralstonia strains, but the aspartate-ammonia-lyase is a peculiarity of C. necator JMP134. Alternatively, aspartate may be transformed to alanine by an aspartate 1-decarboxylase; however, a gene encoding this enzyme was not found in C. necator JMP134, in contrast with strains H16, LMG19424, 12J and GMI1000, which harbor an aspartate 1-decarboxylase-encoding gene.
The genomic analysis of strain JMP134 suggests that L-alanine can be degraded by two different pathways. L-alanine can be directly degraded to pyruvate and ammonia by a NADH-dependent L-alanine dehydrogenase or converted to D-alanine by an alanine racemase and subsequently degraded to pyruvate and ammonia via D-alanine dehydrogenase (Table S1) [42]. The D-alanine pathway seems to be shared by the rest of the Cupriavidus/Ralstonia strains, but the L-alanine dehydrogenase is only found in strains H16 and JMP134.
Serine and threonine seem to be used as carbon sources by strain JMP134 due to the presence of the respective deaminases (Table S1). Serine would be directly converted into pyruvate and ammonia by the action of serine deaminase, whose gene is also found in the genomes of the rest of the Cupriavidus/Ralstonia strains. On the other hand, threonine would be deaminated to 2-oxobutanoate by threonine deaminase, which also seems to be encoded in the genomes of the rest of the Cupriavidus/Ralstonia strains.
A complete bifurcated pathway for degradation of histidine is found in the genome of strain JMP134, consistent with its ability to grow using this amino acid as the only carbon and energy source. Histidine catabolism proceeds through four- or five-step pathways that overlap in the first three reactions, transforming this amino acid into N-formimino-L-glutamate [43]. At this point, N-formimino-L-glutamate can be converted to L-glutamate via single- or two-step reactions. Both routes are encoded in the genome of C. necator JMP134 (Table S1) and in the genomes of the rest of the Cupriavidus strains, but only the single-reaction route is encoded in the genomes of strains 12J and GMI1000.
The catabolism of branched-chain amino acids (BCAAs) starts with leucine dehydrogenase or an α-oxoglutarate-dependent aminotransferase, which converts leucine, isoleucine and valine to the corresponding α-oxoacids (α-oxoisocaproate, α-oxo-γ-methylvalerate and α-oxoisovalerate, respectively). Subsequently, the branched-chain α-oxoacid dehydrogenase complex catalyzes the decarboxylation of these α-oxoacids to the corresponding acyl-coenzyme A (CoA) derivatives [44]. Both the BCAA aminotransferase and leucine dehydrogenase seem to be encoded in the genome of strain JMP134, in addition to the common branched-chain α-oxoacid dehydrogenase complex (Table S1). The BCAA aminotransferase also seems to be encoded in the rest of the Cupriavidus strains, but only strain H16 additionally encodes leucine dehydrogenase.
Finally, L-cysteine would be degraded by two alternative pathways in C. necator JMP134, since an L-cysteine desulfhydrase, transforming L-cysteine to ammonia, hydrogen sulphide and pyruvate, and an Fe2+-dependent cysteine dioxygenase, which performs sulfoxidation to form cysteine sulfinic acid, are found in the genome of this strain. Both enzymes seem to be conserved in the genomes of the rest of the Cupriavidus/Ralstonia strains.
The pathways for the degradation of the aromatic amino acids (tryptophan, phenylalanine and tyrosine) have been analyzed in detail recently [19].
Degradation of carbohydrates
C. necator JMP134 is very limited in the degradation of sugars or sugar acids, since only fructose and gluconate can be metabolized by this strain, in contrast with other Cupriavidus strains that are able to use glucose, 2-ketogluconate and N-acetyl-glucosamine [37]. Fructose and gluconate can be initially catabolized by fructokinase and gluconate kinase, respectively, through the Entner-Doudoroff pathway, with 2-keto-3-desoxy-6-phosphogluconate (KDPG) aldolase as the key enzyme. The genes encoding this pathway are distributed between both chromosomes, and several examples of gene redundancy are found (glucose-6-phosphate isomerase, glucose-6-phosphate 1-dehydrogenase, 6-phosphogluconolactonase and phosphogluconate dehydratase) (Table S1). It should be noted that similar genes encoding gluconate kinase are found in the rest of the Cupriavidus/Ralstonia strains, but a homolog of the fructokinase gene is only found in the genome of strain H16. In addition, genes encoding a glucosaminate deaminase and 2-keto-3-deoxygluconate kinase are found in the genome of strain JMP134 and in the rest of the Cupriavidus/Ralstonia strains, putatively enabling the utilization of glucosaminate by these strains. However, the utilization of this sugar by strain JMP134 has not been evaluated [37].
Although glucose could be metabolized by strain JMP134, since a glucokinase gene is found in its genome, the absence of an uptake system for this hexose would explain why this strain does not use this sugar as a carbon source. In addition, the absence of 2-ketogluconate kinase and N-acetylglucosamine-6-phosphate deacetylase encoding genes is consistent with the inability of strain JMP134 to use these sugars as growth substrates. C. necator JMP134 has incomplete Embden-Meyerhof-Parnas and oxidative pentose phosphate pathways due to the absence of genes encoding the key enzymes phosphofructokinase and 6-phosphogluconate dehydrogenase, respectively.
Metabolism of polyhydroxyalkanoate (PHA)
Microbial polyesters such as poly-(R)-3-hydroxybutyrate (PHB), belonging to the family of polyhydroxyalkanoic acids (PHAs), occur as insoluble inclusions in the cytoplasm and serve as a storage compound for carbon and energy when cells are cultivated under imbalanced growth conditions. The metabolism of PHA has been extensively studied in C. necator H16, a model for microbial polyoxoester production [45]. Analysis of the genome sequence revealed that strain JMP134 possesses the key enzymes of PHA biosynthesis (Table S1): a type I poly(3-hydroxybutyrate) polymerase (Reut_A1347), two β-ketoacyl-CoA thiolases (Reut_A1348; Reut_A1353) and four NADPH-dependent β-ketoacyl-CoA reductases (Reut_A1349, Reut_B3865, Reut_C6018, Reut_B4127) which, together, convert acetyl-CoA into PHB. In addition to the type I PHA synthase, strain JMP134 also contains a type II PHA synthase (Reut_A2138). Type II PHA synthases utilize thioesters of at least five carbon atoms, whereas type I enzymes utilize thioesters of three to five carbon atoms. It should be noted that C. necator H16 lacks apparent type II PHA synthases. Additionally, four phasin (PHA-granule associated protein) encoding genes are found in the genome of strain JMP134. Phasins are most probably involved in providing, together with phospholipids, a layer at the surface of the PHA granules [45]. Finally, the intracellular depolymerization of PHB in C. necator H16 is performed by multiple PHB depolymerases and PHB oligomer hydrolases [45]. Similarly, the mobilization of PHB in strain JMP134 seems to be performed by two putative PHB oligomer hydrolases (Reut_A1981, Reut_A1272) and five PHB depolymerases (Reut_A1049, Reut_A0762, Reut_B4702, Reut_B3626, Reut_B5113). Genes similar to the ones involved in PHB metabolism are found in all the rest of the Cupriavidus/Ralstonia strains, indicating that this trait is widespread in these genera. It should be noted that PHB accumulation in C. necator JMP134 has been verified previously [46].
Nitrogen metabolism
Among the genes participating in nitrogen metabolism found on chromosome 1 of C. necator JMP134 are Reut_A3432, a putative ammonium monooxygenase (amoA), and an NAD glutamate dehydrogenase (NAD-gdh; 1371497-1376338 bp) putatively involved in ammonification. The NAD-gdh protein has 55% and 57% amino acid identity with the NAD-gdh protein reported in Azoarcus sp. and Pseudomonas aeruginosa, respectively [47].
Aerobic energy metabolism
Genome analysis of strain JMP134 revealed a robust energy metabolism typical of most free-living heterotrophs dwelling in an environment with fluctuating O2 levels. The presence of an extensive inventory of genes for respiratory chain components, including at least nine distinct terminal oxidases, indicates that the aerobic respiration chain adapts to varying concentrations of O2. Genes required for formation of complexes I, II and III of oxidative phosphorylation are present on the large chromosome of strain JMP134: (i) a typical proton-pumping NADH:quinone oxidoreductase encoded by a large cluster of 14 genes (Reut_A0961-Reut_A0974); (ii) a succinate dehydrogenase belonging to the four-subunit type C subgroup [52] encoded by four genes (Reut_A2322-Reut_A2325); and (iii) the cytochrome bc1 complex, coupling electron transfer from ubiquinol to periplasmic cytochromes c with proton pumping, encoded by three genes (Reut_A3091-Reut_A3093). All of these genes are highly conserved and share similarity with those of related Cupriavidus/Ralstonia strains.
In addition to using the proton-translocating NADH dehydrogenase of complex I in energy production, strain JMP134 may employ two different type II NADH dehydrogenases (Reut_A0874/Reut_B4838) to optimize the NADH/NAD+ balance under changing environmental conditions [53]. It should be noted that the second of these genes seems to be unique to strain JMP134, in contrast with the first one, which is highly conserved in the rest of the Cupriavidus/Ralstonia strains.
The respiratory chain of strain JMP134 can be fueled, besides NADH dehydrogenases, by at least three formate dehydrogenases, allowing the use of formate as an auxiliary energy source by this strain [54], but not as a growth substrate, since the product of formate oxidation, CO2, is not fixed by strain JMP134 [37]. A soluble, NAD+-reducing, molybdenum-containing formate dehydrogenase, previously characterized in strain C. necator H16 [55], is encoded by the five genes of the fds cluster located on the large chromosome and seems to be conserved in all Cupriavidus strains, but not in the Ralstonia genus (Table S1). Another soluble formate dehydrogenase may be encoded by the fdw genes on the small chromosome. The FdwA and FdwB gene products would form a dimeric tungsten-containing formate dehydrogenase that recycles NADH at the expense of formate oxidation to CO2, as proposed for C. necator H16 [56]. This soluble formate dehydrogenase is also found in C. taiwanensis LMG19424 (Table S1). An additional membrane-bound formate dehydrogenase is putatively encoded by the fdhA, fdhB and fdhC genes, which would encode a catalytic subunit, an iron-sulfur subunit, and a transmembrane cytochrome b subunit, respectively, as proposed for C. necator H16 [56]. In addition, an accessory gene, fdhD, is found in this cluster located on the large chromosome (Table S1). This kind of formate dehydrogenase seems to be encoded in the genomes of all the rest of the Cupriavidus/Ralstonia strains. A second membrane-bound formate dehydrogenase encoded by fdo genes, as described in strain H16 [56], is not found in strain JMP134.
Strain JMP134 apparently contains an unusually large number of genes for terminal oxidases catalyzing the reduction of O2 to water using cytochrome c or quinol as electron donors: (i) one operon coding for an aa3-type cytochrome oxidase, which typically operates at high oxygen concentrations; (ii) one operon coding for a cbb3-type cytochrome oxidase with high affinity for oxygen, enabling it to operate at extremely low oxygen tensions; (iii) one operon for a bb3-type cytochrome oxidase; (iv) two operons coding for bd-type quinol oxidases; and (v) three operons coding for bo3-type quinol oxidases (Table S1). All of these terminal oxidase-encoding operons are also found in strain H16, and their putative functions have been analyzed according to previous physiological and biochemical studies [56]. All the rest of the Cupriavidus/Ralstonia strains have the aa3-, cbb3- and bb3-type cytochrome oxidase-encoding operons but a lower number of quinol oxidase-encoding operons (Table S1). Finally, the presence of a putative caa3-type high-potential iron-sulfur protein (HiPIP) oxidase-encoding operon, found exclusively in the genome of strain JMP134, should be mentioned. The HiPIP is a small soluble protein functioning as the electron carrier between the cytochrome bc complex and the HiPIP terminal oxidase of the respiratory chain described in the strict aerobe and thermohalophile Rhodothermus marinus [57]. However, no homologous gene encoding a HiPIP similar to that described in R. marinus is found in the genome of strain JMP134, revealing that the identity of the putative electron donor for this terminal oxidase remains unknown in this bacterium.
Altogether, the genomic analysis of energy metabolism in strain JMP134 confirms that this bacterium is well adapted to life in habitats subject to fluctuating carbon sources and physicochemical conditions. The existence of putative ecoparalogs or isoenzymes having different kinetic properties (e.g., terminal oxidases) or metal cofactor content (e.g., formate dehydrogenases) allows this bacterium to cope with rapidly changing O2 concentrations and environments with varying metal supply.
Quorum sensing
Although several quorum-sensing systems employing N-acylhomoserine lactones (AHLs) have been identified in members of the closely related Burkholderia and Ralstonia genera [58,59], none were detected in the C. necator JMP134 genome. On the other hand, a complete phenotype conversion (Phc) regulatory system was found to be encoded by chromosome 1. This system has been studied primarily in the phytopathogen R. solanacearum GMI1000, where it forms the core of the complex network that regulates virulence and pathogenicity genes [60]. At the center of this Phc system is PhcA, a LysR-type transcriptional regulator, and the products of the phcBSRQ operon that control levels of active PhcA in response to cell density. The unique signaling molecule employed for quorum sensing is the volatile 3-hydroxy palmitic acid methyl ester (3-OH PAME) [60]. 3-OH PAME post-transcriptionally modulates the activity of PhcA by acting as the signal for an atypical two-component regulatory system. This system consists of a membrane-bound sensor-kinase, PhcS, which phosphorylates PhcR, an unusual response regulator with a C-terminal kinase domain in place of a DNA-binding domain [60]. The amino acid identity between the C. necator JMP134 and the R. solanacearum GMI1000 Phc gene products ranges from 56% to 75%. The presence of a phcA ortholog in a Cupriavidus strain capable of fully complementing R. solanacearum phcA mutants was previously reported [61]. That strain also appears to make a form of 3-OH PAME and to contain orthologs of phcB and phcS [61]. The possible physiological functions regulated by the Phc system in C. necator JMP134 pose intriguing questions that are, as yet, unanswered.
Plant-bacteria associations
Members of the genus Cupriavidus, as well as the closely related Ralstonia and Burkholderia, include a few plant pathogens and symbionts. There is substantial evidence suggesting that members of these genera are able to interact with plants and to establish diverse commensal or even mutualistic associations with these hosts [62,63,64]. Although this area has not been the focus of research in C. necator JMP134 specifically, recent experimental evidence suggests that this bacterium is able to proliferate in the rhizosphere and even within internal tissues of A. thaliana (Zúñiga, A., Ledger, Th. and B. González, unpublished data). For most of the plant-bacteria associations described so far, the bacterial genes typically involved include those encoding protein or nucleotide transport from the microorganism to the host, as well as those involved in the production of extracellular enzymes and the elicitors of the plant hypersensitive response [65,66]. C. necator JMP134 has several genes related to protein transport. On chromosome 1 are found several genes related to type IV transport systems (Reut_A0401-0404, Reut_A0784-0788, Reut_A0779, Reut_A1436, Reut_A2960-2962, and Reut_A3131-3135). Reut_A2970 encodes a protein translocase with 72% amino acid identity to the SecA of Burkholderia multivorans ATCC 17616. Chromosome 2 also harbors a number of genes encoding putative components of a type IV secretion system (Reut_B5405-5416).
Phage sequences
On chromosome 1 of C. necator JMP134 is found a large phage-like gene cluster that spans ~43 kb and includes 55 CDSs (Reut_A2365-2419). Most of these putative proteins have no homologs in other sequenced genomes of members of the Ralstonia or the Cupriavidus genera. However, homologs for many of these proteins, with amino acid sequence identities >60%, are present in various Burkholderia species, including B. vietnamiensis G4, B. cenocepacia HI2424, B. dolosa AUO158, and B. multivorans ATCC 17616. The overall sequence identity and arrangement of the CDSs clustered in this region suggest that this putative phage is related to the characterized temperate Burkholderia podophage, BcepC6B.
A few additional phage-like sequences are found scattered in chromosomes 1 and 2. These include phage-type integrases (Reut_A0577, Reut_A1625, Reut_A2191, and Reut_B5345), two DNA polymerases with similarity to the DNA polymerase of phage SPO1 (Reut_A1937 and Reut_B4396), and two hypothetical phage proteins (Reut_A0552 and Reut_A2198). Since these sequences are not accompanied by other phage-like genes and are instead adjacent to transposon-related sequences, they likely correspond to transposon fragments rather than phage remnants. One possible exception: Reut_A2191 is accompanied by genes encoding putative phage regulatory proteins (Reut_A2193 and Reut_A2195) and thus might be descended from a prophage.
The megaplasmid contains a higher density of phage-type integrase genes and transposon elements than that found on either chromosome. There are five integrase sequences (Reut_C5954, Reut_C5993, Reut_C6147, Reut_C6164 and Reut_C6343) all of which are adjacent to transposons, thus suggesting that these integrases are part of transposon elements. This conclusion is further supported by the identification of one such sequence in plasmid pJP4 next to the transposase of a Tn3 family transposon (IS1071).
A full set of che genes encoding chemotaxis functions forms a putative operon on chromosome 2 adjacent to fla genes encoding the flagellum and motor proteins. Additional copies of all except two of the che genes (cheY and cheZ) are scattered on chromosome 1.
These genes are also located on chromosome 2 in C. eutropha H16 and C. metallidurans CH34.
Conclusions
Analysis of the complete genome of C. necator JMP134 adds further insight into the evolution of multipartite genomes in β-proteobacteria and into the presence of aromatic catabolism and other metabolic functions. It has been proposed that multipartite genomes arise through intragenomic gene transfer between progenitor chromosomes and ancestral plasmids. Our analysis supports that hypothesis and further indicates that distinct plasmids served as the scaffolds for the assembly of secondary chromosomes in the Cupriavidus, Ralstonia, and Burkholderia lineages. Furthermore, both chromosomes in Cupriavidus show evidence of significant gene duplication and lateral gene transfer, with foreign DNA preferentially incorporated into the secondary chromosomes. The C. necator JMP134 genome contains nearly 300 genes potentially involved in the catabolism of aromatic compounds and encodes almost all of the central ring-cleavage pathways. Although all these genomes possess a significant number of aromatic catabolism functions, including central and peripheral pathways, the genome of strain JMP134 is by far the one that provides the most versatile degradative abilities. The availability of the complete genome sequence for C. necator JMP134 provides the groundwork for further elucidation of the mechanisms and regulation of chloroaromatic compound biodegradation, and its interplay with several other key metabolic processes analyzed here.
Supporting Information
Table S1 Functional annotation of key metabolic genes of C. necator JMP134. Found at: doi:10.1371/journal.pone.0009729.s001 (0.27 MB DOC)

Figure S1 Functional distribution of unique genes. COG categories are as follows: Information storage and processing: A, RNA processing, modification; B, chromatin structure; J, translation, ribosomal structure/biogenesis; K, transcription; L, DNA replication, recombination, repair. Cellular processes: D, cell division, chromosome partitioning; M, cell envelope biogenesis, outer membrane; N, cell motility and secretion; P, inorganic ion transport and metabolism; T, signal transduction mechanisms. Metabolism: C, energy production and conversion; G, carbohydrate transport and metabolism; E, amino acid transport and metabolism; F, nucleotide transport and metabolism; H, coenzyme metabolism; I, lipid metabolism; Q, secondary metabolites biosynthesis, transport and catabolism. Poorly characterized: R, general function prediction only; S, function unknown.
Avian malaria co-infections confound infectivity and vector competence assays of Plasmodium homopolare
Currently, there are very few studies of avian malaria that investigate relationships among the host-vector-parasite triad concomitantly. In the current study, we experimentally measured the vector competence of several Culex mosquitoes for a newly described avian malaria parasite, Plasmodium homopolare. Song sparrow (Melospiza melodia) blood infected with a low P. homopolare parasitemia was inoculated into a naïve domestic canary (Serinus canaria forma domestica). Within 5 to 10 days post infection (dpi), the canary unexpectedly developed a simultaneous high parasitemic infection of Plasmodium cathemerium (Pcat6) and a low parasitemic infection of P. homopolare, both of which were detected in blood smears. During this infection period, PCR detected Pcat6, but not P. homopolare in the canary. Between 10 and 60 dpi, Pcat6 blood stages were no longer visible and PCR no longer amplified Pcat6 parasite DNA from canary blood. However, P. homopolare blood stages remained visible, albeit still at very low parasitemias, and PCR was able to amplify P. homopolare DNA. This pattern of mixed Pcat6 and P. homopolare infection was repeated in three secondary infected canaries that were injected with blood from the first infected canary. Mosquitoes that blood-fed on the secondary infected canaries developed infections with Pcat6 as well as another P. cathemerium lineage (Pcat8); none developed PCR detectable P. homopolare infections. These observations suggest that the original P. homopolare-infected songbird also had two un-detectable P. cathemerium lineages/strains. The vector and host infectivity trials in this study demonstrated that current molecular assays may significantly underreport the extent of mixed avian malaria infections in vectors and hosts.
Introduction
In vector-borne disease systems, identifying the relative contribution of different vector and host species is a crucial step in determining the transmission rates of pathogens in a community (McCallum et al. 2001). The task of separating minor from major vectors is relatively easily accomplished in simple avian malaria systems such as in the Hawaiian archipelago, where one Plasmodium and few mosquito species co-exist (Van Riper et al. 1986;LaPointe et al. 2012;Winchester and Kapan 2013). Far more complex vector-vertebrate and parasite-host-vector interactions occur in systems of multiple host and vector species (Dietz 1980). In most locations, several vectors contribute to the transmission of multiple avian pathogens, and in the context of disease dynamics, the general dimension of vector functional diversity is an important consideration. However, Power and Flecker (2008) state that functional diversity includes many factors not necessarily related just to vector taxonomic diversity. Compatibility with both the host and the vector, along with abiotic factors (such as environmental constraints and temperature), will determine the biogeographical distribution of parasites and is a product of co-evolution between parasites, hosts, and vectors (Kawecki 1998). Additionally, all parameters of the vectorial capacity of one vector species may vary considerably in time and space due to genetic polymorphisms in different populations and ecosystems (Lambrechts et al. 2009).
We attempted to investigate avian malaria dynamics in central California, where large numbers of resident and migratory songbirds and multiple Culex, Culiseta, and Aedes mosquito species occur. Prevalence studies conducted in 2011 and 2012 showed that China Creek Park harbored a rich diversity of avian Plasmodium species and lineages in birds (Walther et al. 2016) and mosquitoes (Carlson et al. 2015). However, there was some incongruence between the prevalence of Plasmodium species in resident birds and mosquitoes. Some parasite species identified in resident birds were not found in the local mosquitoes and vice versa. One parasite species, Plasmodium homopolare (belonging to the subgenus Novyella), newly described by Walther et al. (2014), was the most common parasite in resident birds but was rarely found in mosquitoes (Walther et al. 2014;Carlson et al. 2015). This anomaly led us to hypothesize that some ornithophilic Culex mosquito species were incompatible vectors of some Plasmodium parasite species. To test this hypothesis, we attempted to infect Culex mosquito species with P. homopolare. Because we were unable to culture P. homopolare, we collected this parasite from a wild bird that was identified as positive for P. homopolare by microscopy and polymerase chain reaction (PCR). In a controlled laboratory setting, this blood extract was injected into a domestic canary (Serinus canaria forma domestica). Blood from this canary was subsequently injected into three other canaries, which were in turn used to infect mosquitoes to determine vector competence.
In nature, multiple instances and opportunities occur within the vertebrate and invertebrate hosts for complex parasite-parasite interactions that ultimately impact the heterogeneity, persistence, and transmission dynamics of Plasmodium in a community. Here, we describe the results from the vector competence studies and discuss how these data align with the prevalence data collected in the field at China Creek Park.
Infected wild bird blood collection
Infected blood from one song sparrow (Melospiza melodia) was obtained from China Creek Park, Central California, following methods described by Walther et al. (2016). Following protocols designed by Carlson et al. (2016), 100 μl of song sparrow blood was extracted by jugular venipuncture with a syringe preloaded with 0.014 cc of citrate phosphate dextrose adenine (CPDA) solution to prevent clotting. Half of the blood drawn into the syringe was discharged into a tube containing lysis buffer (10 mM Tris-HCL, pH 8.0, 100 mM EDTA, 2% SDS) and held at room temperature for later DNA extraction and molecular testing. The syringe holding the remainder of the blood was placed in a plastic bag and held on top of wet ice for transport to the laboratory (174 miles). Two thin blood smears were also made in the field from small aliquots of the song sparrow blood, air-dried, and fixed in absolute methanol. The slides were stained with Giemsa on the same day the smears were made, as described by Valkiūnas et al. (2008). The intensity of parasitemia, estimated by counting the number of parasites per 10,000 erythrocytes, was determined to be 0.0002%. Parasites were identified both by microscopy, using morphological keys described by Valkiūnas (2005), and by sequencing parasite DNA after amplification via PCR. Parasite DNA was extracted from the whole blood sample following the DNeasy blood and tissue kit protocols (Qiagen, Valencia, CA). A nested PCR described in Waldenström et al. (2004) was carried out to amplify a 478-bp sequence of the mitochondrial cytochrome b gene (cyt b). All PCR products were viewed on 1.8% agarose gels stained with ethidium bromide. The positive sample was cleaned and sent for sequencing to Elim Biopharmaceuticals Inc. (Hayward, CA). The sequences were edited using Sequencher 5.1 (Gene Codes, Ann Arbor, MI) and were then identified using an NCBI nucleotide BLAST™ search.
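Parasitemia here is simply the proportion of examined erythrocytes that carry parasites, expressed as a percentage. A minimal sketch of that calculation is shown below; the example counts are hypothetical, and only the 10,000-erythrocyte denominator comes from the protocol above.

```python
def parasitemia_percent(parasites_counted: int, erythrocytes_counted: int) -> float:
    """Parasitemia as a percentage of the erythrocytes examined on a smear."""
    return 100.0 * parasites_counted / erythrocytes_counted

# Hypothetical example: 2 parasites seen while scanning 10,000 erythrocytes.
print(parasitemia_percent(2, 10_000))  # -> 0.02 (%)
```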
Inoculation of donor blood into canaries
Microscopic examinations and PCR screening of the peripheral blood of all four domestic canaries (approximately 2-3 years of age), obtained from a California breeder (Steve Mieser, as described in our IACUC permit 17601) and used for the infectivity studies, revealed that they were not infected with malaria parasites prior to the experimentation. We confirmed with the breeder that the canaries were not exposed to mosquitoes because they were maintained in an indoor aviary. The canaries were tested upon arrival and again 15 and 30 days after arrival via blood smear inspection and PCR analysis of collected blood samples. One of the canaries (canary A) was injected with 0.04 cc of infected song sparrow whole blood into the jugular vein. Ten days post infection (dpi), a blood smear was made with whole blood of canary A extracted from the brachial vein to check for the establishment of an infection. Infected blood from canary A was injected into canaries B and C as a second passage. One last direct canary-to-canary inoculation was conducted from canary C to canary D (a third passage); refer to Fig. 1. Because the field-collected individuals of one mosquito species all died within a few days after imbibing blood from the canaries, colony mosquitoes for this species were obtained at the Sutter-Yuba Mosquito and Vector Control District. F1 adults were reared from egg rafts collected from the surface of ponds and also from eggs laid by gravid females, which were originally collected in gravid traps (Cummings 1992;Reiter 1987). Cx. pipiens complex member species were identified by PCR, following the protocol described by Smith and Fonseca (2004), as Cx. pipiens, Cx. quinquefasciatus, and hybrids of the two species. However, we use caution with these identifications because Kothera et al. (2013) were not able to find pure Cx. pipiens in California, only hybrids.
Experimental infection of mosquitoes
All adult mosquitoes were maintained in an incubator at 26°C and 70% humidity with an automatic 12 h light/dark cycle. Up to 50 mosquitoes were placed in each one-gallon bucket and were provided with four cotton balls lightly soaked in 10% sucrose that were replaced daily.
To increase the probability of a successful blood feeding, all mosquitoes were starved by removing their sugar access 24 h prior to blood feeding on the infected canaries. Mosquitoes were aspirated from their original cage into a 3.7-l bucket that contained the unrestrained infected canary. The canaries were unrestrained to reduce stress on a bird infected with malaria and to allow for a more natural feeding. The mosquitoes were allowed to feed for 1 h on each canary, from 2000 to 2100 h, to emulate the crepuscular feeding pattern. The feeding was supervised for the entire hour to ensure that no more than ~50 mosquitoes (half of the mosquitoes in the cage) fed on one canary at a time. We determined that up to 100 mosquitoes could safely feed on a canary at a time, since each mosquito can take a maximum of 3 μl of blood per feeding (Klowden and Lea 1978). A canary weighs on average 30 g, so if 100 mosquitoes take 3 μl of blood each, then a maximum of 300 μl will be taken in total for each feeding, which is 1% of the body weight of the bird. However, a maximum cut-off was not necessary because it was rare that more than 15 mosquitoes at a time would feed on the same canary.
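The safety margin just described is simple arithmetic; the short sketch below reproduces it, with the additional assumption (not stated in the text) that blood has a density of roughly 1 g/ml so that microlitres can be compared with grams of body mass.

```python
max_meal_ul = 3.0      # maximum blood meal per mosquito, in microlitres
n_mosquitoes = 100     # mosquitoes allowed to feed per session
bird_mass_g = 30.0     # average canary body mass, in grams

# Assume blood density ~1 g/ml, so 1 ul of blood weighs ~0.001 g.
blood_removed_g = n_mosquitoes * max_meal_ul * 0.001
fraction_of_body_mass = blood_removed_g / bird_mass_g

print(f"{blood_removed_g:.1f} g removed per feeding, "
      f"{fraction_of_body_mass:.1%} of body mass")
# -> 0.3 g removed per feeding, 1.0% of body mass
```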
Fully blood-fed mosquitoes were transferred into smaller half-liter cartons (no more than 25 individuals per carton) and held in the incubator. Mosquitoes that did not feed were placed back into their original 3.7-l container and were used again in subsequent feeding attempts, while partially blooded mosquitoes were discarded. Fully blood-fed mosquitoes were monitored daily, and any individuals that died prior to or between scheduled time points for dissections (15, 20, and 25 dpi) were placed into 70% ethanol until processed with the experimental mosquito samples.

Fig. 1 Schematic of Plasmodium homopolare passage from a field-caught song sparrow (Melospiza melodia) to naïve domestic canaries (Serinus canaria forma domestica). A mosquito symbol indicates the canaries upon which experimental mosquitoes took an infectious blood meal (see Table 1 for more details)
Determination of infection in experimental mosquitoes
To determine the infectious status, some females of each mosquito species were dissected on 15, 20, and 25 days post infection (dpi). A period of 15 days at 26°C was considered long enough for sporozoites to have migrated to the salivary glands based on vector competence trials of other Plasmodium species (Meyer and Bennet 1976;Kazlauskiené et al. 2013;Carlson et al. 2016). The number of mosquitoes dissected depended on the availability of mosquitoes of each species still surviving after 15 to 25 dpi. All mosquitoes were first anesthetized with triethylamine (Sigma-Aldrich, St. Louis, MO). Salivary glands were dissected from the thorax and the midgut from the abdomen, and slide preparations made. Thoraxes and salivary glands were placed in separate tubes containing 70% ethanol and were used for parasite DNA extraction and amplification in the same manner described by Carlson et al. (2015). Preparations of midguts to detect oocysts were made by removing the midgut and placing it into a drop of saline, followed by adding a drop of 0.5% solution of mercurochrome. Because no permanent preparations of midguts were made, each midgut was viewed microscopically within 20 min of preparation.
The ideal way to determine vector competence is to have infected vectors refeed on a naïve host. We attempted refeeding mosquitoes on two Plasmodium-negative canaries, but every attempt to get mosquitoes to refeed failed, despite providing an opportunity for the mosquitoes to oviposit the eggs developed after their primary (first infective) blood meal. Therefore, we were forced to use an artificial capillary tube method (Aitken 1977;Cornel et al. 1993) to test for sporozoites secreted in saliva expectorates at 15, 20, and 25 dpi. Initially, we filled each capillary with a solution containing equal parts of heat-inactivated fetal bovine serum (FBS; Sigma-Aldrich, St. Louis, MO) and 10% sucrose. However, after consistently parasite-negative results, despite using mosquitoes with known parasite-positive salivary glands, we concluded that this method had to be modified for Plasmodium studies. We then used a modified version of the Rosenberg et al. (1990) method, which did provide PCR-positive saliva samples. In this method, the fascicle sheath of the mosquito was first removed and the exposed proboscis was then inserted into capillary tubes filled with a mixture of one part mineral oil and one part 10% sucrose. The mosquito was allowed to expectorate for 15 min before being removed for further processing.
Data analysis
Salivary gland infection rates, determined by PCR as described in Carlson et al. (2015), were tested for variation between mosquito species using a logistic regression analysis. The analysis modeled the proportion of positive salivary glands against the day of infection and tested whether the overall proportion of mosquitoes with sporozoites in their salivary glands differed between mosquito species after adjustment for any time trend. All analyses were performed in R version 3.2 (R Core Team 2015).
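The authors fitted this model in R 3.2; purely as an illustrative sketch, an equivalent logistic regression could be set up as follows (the data frame, its column names, and all values are hypothetical, not the study's dataset):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical per-mosquito records: 'positive' is 1 if PCR detected parasite
# DNA in the salivary glands, 'dpi' is the day post infection at dissection,
# and 'species' is the mosquito species.
df = pd.DataFrame({
    "positive": [1, 1, 0, 1,  0, 1, 0, 0,  0, 1, 1, 0],
    "dpi":      [15, 20, 20, 25,  15, 20, 25, 25,  15, 15, 20, 25],
    "species":  ["Cx. tarsalis"] * 4 + ["Cx. pipiens"] * 4 + ["Cx. stigmatosoma"] * 4,
})

# Proportion of positive salivary glands modeled against species, adjusting
# for the time trend over dpi (analogous to glm(..., family = binomial) in R).
model = smf.logit("positive ~ C(species) + dpi", data=df).fit(disp=0)
print(model.summary())
```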
Canary infections
Canary A was believed to have been inoculated with only P. homopolare (lineage SOSP_CA3P; GenBank accession number KJ482708); however, this recipient canary was subsequently found to be co-infected with both P. homopolare and P. cathemerium. A P. cathemerium lineage SPTO_CA_ELW_6P (Pcat6; GenBank accession number KJ620779) infection with 3.6% merozoites was first detected on 10 dpi in canary A, and gametocytes were detected on 5 dpi in the two secondary infected canaries (B and C). P. cathemerium parasitemias in canaries B and C ranged from 5.5 to 6.7%, respectively. Canary D, which was infected with blood from canary C, had the highest P. cathemerium parasitemia on 5 dpi at 37%. In all canaries, P. cathemerium parasitemias declined after 5 dpi to < 1% on 7 dpi and to < 0.2% on 9 dpi. In all four canaries, only two or fewer P. homopolare gametocytes were visualized per 1000 erythrocytes (≤ 0.002%) on days 5, 7, and 9 post infection. It was possible that some trophozoites that we called P. cathemerium may have been P. homopolare trophozoites, because it is very difficult to differentiate trophozoites of the two species morphologically (Valkiūnas 2005). All canaries survived the experimental infection, and only after 2 months did they test positive for P. homopolare by PCR. Microscopically, trophozoites of P. homopolare were still visible in the blood, indicating that a chronic infection persisted. P. cathemerium was no longer detected by PCR or seen in blood smears after 2 months post infection. Infection with a third parasite lineage became apparent xenodiagnostically only when the experimentally infected mosquitoes were tested for parasite DNA. This third parasite had a cyt b sequence identical to P. cathemerium lineage HOFI_CA_ELW_8P (Pcat8; GenBank accession number KJ620781), which was previously reported by Carlson et al. (2015) as a P. cathemerium-like lineage isolated from mosquitoes and birds in China Creek Park. Pcat6 and Pcat8 lineages have a genetic distance of 0.85%.
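The 0.85% figure is consistent with a small number of substitutions across the 478-bp cyt b fragment amplified in this study. A minimal sketch of an uncorrected pairwise (p-) distance calculation is given below; the authors may well have used a model-corrected distance, and the 4-substitution example is only an illustration.

```python
def p_distance(seq1: str, seq2: str) -> float:
    """Uncorrected pairwise distance: the proportion of sites that differ
    between two aligned sequences of equal length."""
    if len(seq1) != len(seq2):
        raise ValueError("sequences must be aligned to the same length")
    differences = sum(a != b for a, b in zip(seq1, seq2))
    return differences / len(seq1)

# Illustration only: 4 substitutions over a 478-bp cyt b fragment give a
# distance close to the 0.85% reported for Pcat6 vs Pcat8.
print(f"{4 / 478:.2%}")  # -> 0.84%
```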
Mosquito infections
A total of 286 mosquitoes fully blood-fed on Plasmodium infected canaries B and D between 5 and 25 dpi. Of these blood-fed mosquitoes, 115 were dissected at 15, 20, and 25 dpi (Table 1), of which 44 were infected with Plasmodium (37%). The other 169 blood-fed mosquitoes died before or in between the time points and could not be dissected, but were preserved in 70% ethanol. The majority (153/169) of the mosquitoes that died were Cx. tarsalis that had been collected from the field. Of the 169 dead mosquitoes, 66 (39%) thoraces were positive when tested by PCR. Table 1 shows the differences between the feeding patterns on canaries B and D. Cx. stigmatosoma did not readily take a blood meal from canary B, despite having equal opportunity to feed on this canary as other mosquito species.
Plasmodium homopolare was not detected in any of the blood-fed mosquitoes. Fig. 2 shows the percentage of uninfected vs infected females for each species and the percentage infection with the two P. cathemerium lineages. P. cathemerium lineage Pcat6 was detected in salivary glands of all species except Cx. pipiens, whereas lineage Pcat8 was detected in only Cx. stigmatosoma and Cx. tarsalis. The probability of sporozoite infection by P. cathemerium parasites (based on salivary gland infections by PCR) differed significantly among mosquito species based on the logistic regression with adjustment for the trend over time postinfection (Fig. 3). For Cx. stigmatosoma, the mean probability of Pcat6 and Pcat8 salivary gland infection at 20 dpi was 39.2% (95% CI 20.4-61.9%). The probability of infection was significantly lower for Cx. pipiens at 2.7% (P = 0.0004; 95% CI 0.7-10.5%) and highest for Cx. tarsalis at 79.0% (P = 0.0063; 95% CI 58.5-91.0%) at 20 dpi.
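The species-specific probabilities and confidence intervals quoted above are obtained by back-transforming estimates made on the logit (log-odds) scale. The snippet below shows that transform; the three logit values are back-calculated here purely to illustrate how a 39.2% estimate with a 20.4-61.9% interval arises, and they are not the authors' fitted coefficients.

```python
import math

def inv_logit(x: float) -> float:
    """Back-transform a log-odds value to a probability."""
    return 1.0 / (1.0 + math.exp(-x))

# Hypothetical logit-scale estimate and 95% CI bounds for one species at 20 dpi.
lower, estimate, upper = -1.36, -0.44, 0.485
print([round(inv_logit(v), 3) for v in (lower, estimate, upper)])
# -> [0.204, 0.392, 0.619]
```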
Detection of sporozoites was attempted by collecting expectorate samples by the capillary tube method. Only three saliva samples collected by the modified Rosenberg et al. (1990) method described above tested positive by PCR. Two Cx. stigmatosoma and one Cx. tarsalis were positive for lineage Pcat8.
Oocysts were counted in the midguts of each mosquito at each of the three post-infection time points. The total number of oocysts ranged from 1 to 23 in Cx. stigmatosoma (n = 16), from 1 to 15 in Cx. tarsalis (n = 22), and from 1 to 28 (n = 6) in Cx. quinquefasciatus. The single Cx. pipiens-quinquefasciatus hybrid mosquito that tested positive had three oocysts. Because it is impossible to morphologically distinguish among Plasmodium species during the oocyst and sporozoite stages (Valkiūnas 2005), we were not able to determine whether these oocysts were from one or more species. One potential solution for this in future studies would be the use of Plasmodium species-specific molecular assays on the dissected tissues.

Table 1 Mosquito infections tested on 15, 20, and 25 dpi resulting from blood meals obtained from canary B and canary D, respectively. For each of the three time points, the infections are reported for each of the two Plasmodium cathemerium lineages Pcat6 (SPTO_CA_ELW_6P, GenBank accession number KJ620779) and Pcat8 (HOFI_CA_ELW_8P, GenBank accession number KJ620781). For each of the five mosquito species tested, the total number of positive thoraxes (T) and salivary glands (S) is reported, followed by the total sample size (N). The symbol "-" indicates samples for which a PCR was not carried out because there were not enough mosquito specimens to test at that time point
Experimental infection
A question often asked in disease ecology is as follows: what determines pathogen infection dynamics in time and space to explain trends in maintenance and spread? It is generally agreed that the main determinants of the structuring of parasite and host associations and of the heterogeneity of infection in a host population are host exposure and innate and adaptive immune responses to the pathogens (Poulin 2011). Host exposure is influenced by the capacity of the local vectors to transmit the pathogens, otherwise known as vectorial capacity (Garrett-Jones and Shidrawi 1969). In studies conducted at China Creek Park (Carlson et al. 2015;Walther et al. 2016), incongruence between host and vector Plasmodium infection prevalences led to the proposal that specific parasite-vector interactions (incompatibilities and compatibilities) were likely occurring, which could be tested by experimental vector infection assays. The experimental vector infections undertaken in this study revealed that mixed Plasmodium infections, which are common in hosts (Manwell and Herman 1935;Palinauskas et al. 2011), raise new interpretative challenges and point to parasite species interactions in vectors, which can determine relative parasite abundance in time and space. One of the obvious results in this study was that none of the mosquitoes presented with P. homopolare became infected, despite imbibing gametocytes of this species, admittedly at low levels of < 0.002%. Non-exclusive hypotheses may be suggested as follows: (1) this level of gametocytemia was too low for effective syngamy within the mosquito midguts, precluding the infection of any species of mosquito; (2) co-infection with P. cathemerium prevented P. homopolare from developing an infection in the mosquito; (3) the mosquito species in this study are not the natural vectors of P. homopolare; and/or (4) the PCR assay was not sensitive enough to detect P. homopolare when it was simultaneously present with the two P. cathemerium lineages.
1: P. homopolare was described in 2014 (Walther et al. 2014), and little is known about the life cycle and pathology of this species. Considering the extremely high prevalence but low parasitemia in the birds captured at China Creek (< 0.25% parasitemia, Walther et al. 2016) and in the canaries in this study (< 0.002%), it is possible that P. homopolare progresses to a long-lasting chronic infection in hosts and that there is only a short window of time during which the gametocytemia is high enough to infect mosquitoes. In this study, we were unable to determine the minimum gametocytemia level required for P. homopolare to infect mosquitoes, and we may have missed the infectious window.

2: Competition with congeneric parasites limits P. homopolare infections in mosquitoes. There are opportunities for Plasmodium species to interact in mosquitoes, especially when vector species overlap (Paul et al. 2002), and mixed infections in infective birds are quite common (Valkiūnas et al. 2003;Biedrzycka et al. 2015). In in vitro studies, Valkiūnas et al. (2013, 2014) noted several reproductive outcomes in blood containing gametes from several avian Haemoproteus parasite species. Outcomes ranged from complete blockage of ookinete development in some species, to increased reproductive success (increased ookinete production) in other species and, on occasion, production of hybrid ookinetes. Based on our infectivity studies, it is possible that P. cathemerium blocked syngamy and development of P. homopolare ookinetes in the mosquito species that became infected. Much knowledge is lacking concerning in vivo interactions of Plasmodium gametes and ookinete development in mixed infections in mosquitoes; such studies will be enhanced when parasite species-specific nuclear markers that can detect hybrids become available.
Moreover, when discussing parasite-parasite competition, it is also important to consider host specificity in the context of parasites and the relationship between host breadth and host-use proficiency. P. homopolare was recently described by Walther et al. (2014) as a new species, and P. cathemerium is a generalist parasite, having been detected in 9 families and 26 species, with a worldwide distribution (Valkiūnas 2005). P. homopolare infected 84 of 399 birds collected at China Creek Park, representing 9 host species from 5 families. A total of 31% of the birds collected at China Creek Park were infected with Plasmodium, and 68% of these infections were P. homopolare, which is the same as saying that roughly 1 in 5 birds at China Creek Park was infected with P. homopolare (Walther et al. 2014). P. cathemerium was found in 6 bird species and 3 families at China Creek Park. This means that both parasites are considered generalists with the ability to infect multiple distantly related host taxa and can switch between resident and migrant bird species (Waldenström et al. 2002). Little is known about the transmission cycle of P. homopolare, especially in the sense of its course of infection during the acute phase vs the chronic phase. Thus, we had no a priori knowledge of how this parasite would infect our laboratory canaries; in other words, we do not know if canaries can be naturally infected with this parasite. However, it has been reported in the human malaria field that P. vivax is suppressed during an acute infection in the presence of P. falciparum, but re-emerges as a chronic infection once P. falciparum has subsided (Boyd and Kitchen 1937;Maitland et al. 1996;Bruce et al. 2000). This could be explained by immune-mediated apparent competition in a host, where there are modifications to host susceptibility when a host's immune response to one parasite affects its ability to control a second species.
3: The vector responsible for maintenance of P. homopolare in nature was not among the species used in our study. At China Creek Park, Carlson et al. (2015) reported that only 3 out of the 76 Plasmodium-positive field-collected mosquito thoraxes were infected with P. homopolare: one Cx. tarsalis (which was the only individual with positive salivary glands), one Cx. restuans, and one Culiseta particeps. However, at the same site, Walther et al. (2014) reported that 68% of infected birds were infected with P. homopolare by PCR, but not all blood smears were checked for gametocytes. Combes (1991) described the two ecological drivers of heterogeneous distribution of parasites in a host population as the α and β filters. The encounter filter α refers to the ecological and/or behavioral obstacles that result in the exclusion of a parasite from a host species. The compatibility filter β refers to the successful metabolic and/or immunological response in an individual host to an invading pathogen that results in the exclusion of species that do not permit coexistence with the invading pathogen. Very few studies have tested the α and β filters directly in avian malaria vectors. According to several prior studies (Gager et al. 2008;Njabo et al. 2011;Medeiros et al. 2013;Valkiūnas et al. 2015), vectors do not play a role in driving avian Plasmodium parasite ranges, which are instead determined by host compatibilities. This may not be universally true, and the range of P. homopolare may be driven by the presence of compatible vectors such as Cx. restuans in the northern hemisphere. Perhaps there is specific vector compatibility of P. homopolare for Cx. restuans. Cx. restuans is widespread in the USA but also has a patchy distribution (Darsie and Ward 2005) with preferences for marshy areas. Several other studies provide evidence for a β filter by demonstrating genotype-by-genotype interactions between pathogens and their vectors, which may mean that the structuring of pathogens within populations is, in part, a result of adaptation of pathogens to local vector genotypes (Ferguson and Read 2002;Schmid-Hempel and Ebert 2003;Lambrechts et al. 2005;Joy et al. 2008;Lambrechts et al. 2009). For example, Lambrechts et al. (2009) provide experimental evidence for the potential role of vector-driven genetic structuring of dengue viruses. Clearly, for avian malaria, more studies are needed to discern the role of vectors as potential filters.
4: The apparent absence of P. homopolare in laboratory-infected mosquitoes might have resulted from deficiencies in current diagnostic tools (Bernotienė et al. 2016;Clark et al. 2016). Only after several host passages and vector infectivity studies did we discover that the original donor song sparrow was infected with three Plasmodium species and lineages. The determination of the infection status in the host and the vector can be done by microscopy, ELISA, and PCR-based methodologies, although each method presents challenges. There is limited worldwide expertise available to morphologically identify the erythrocytic stages of parasites, and this is particularly difficult when infections are at the early trophozoite stages and at low parasitemias. In mosquito slide preparations, it is impossible to morphologically distinguish between parasite species. ELISA and PCR-based methodologies can detect conspecific infections provided species-specific circumsporozoite (CS) proteins and primers are available (Coleman et al. 2002;Marchand et al. 2011). Species-specific CS proteins and primers are not available for most avian malaria parasites, and we propose that co-infections of avian Plasmodium are therefore underrepresented in host and vector prevalence studies. Additionally, the PCR primer set used in this study, which was not species-specific, has been shown to preferentially amplify one parasite species or lineage over another in the presence of co-infections (Zehtindjiev et al. 2012). These vector infectivity studies confirmed that mosquitoes can support co-infections, as the thorax and the salivary glands from individual mosquitoes were infected with different lineages of P. cathemerium simultaneously.
Unfortunately, we were unsuccessful in coaxing the infected mosquitoes to take a second blood meal on naïve canaries to test whether they were capable of transmitting multiple Plasmodium species to a recipient canary. The artificial capillary method that was used to detect the expectoration of sporozoites showed some promise. However, this method likely needs considerable refinement and testing before it can be used as a reliable surrogate for testing in vivo transmission of avian sporozoites. In addition, suitable PCR methods must be available to determine which Plasmodium species' sporozoites have been expectorated. Future avian vector competence studies, especially when performed on non-cultured Plasmodium species, should use a combination of transmission assays and multiple Plasmodium species-specific primers on various mosquito parts and extracts, especially to identify the potential for co-infections/interactions and transmissibility. Boëte and Paul (2006) stated that the current species dynamics of parasites could become disturbed by control measures and may lead to epidemiological changes, but the predictability of the changes would depend on the level of equilibrium within the system. Because there are multiple levels at which parasites can interact with one another through competition, such as resource, interference, and immune-mediated competition, it is not clear at which point these parasites reach a state of equilibrium, if ever at all (Snounou and White 2004).
Consequences of mixed infections and vector compatibility
As mentioned above, the general consensus is that hosts rather than vectors drive avian Plasmodium parasite geographical ranges (Medeiros et al. 2013). However, in our field system, it is not clear how P. homopolare was able to become so abundant in the resident avian population. The results from our vector competence assay indicate that more research is needed on how competition among parasites may shape how mosquitoes transmit them. There is support for this notion in the context of mixed infections. Paul et al. (2002) proposed that interspecific competition during transmission in the vector may have contributed to a restriction of P. gallinaceum around the world. The presence of P. juxtanucleare may reduce the R0 of P. gallinaceum and could reduce the invasion or establishment of P. gallinaceum. Further supporting evidence can be found in examples of competition for red blood cells affecting parasitemias (McQueen and Mckenzie 2006) and gametocyte production (Bousema et al. 2008), which will alter the potential for establishing infections in mosquitoes. Because mixed infections complicated the outcomes of the vector competence trials in this study, we were unable to draw firm conclusions about the susceptibilities of the mosquitoes tested for P. homopolare.
If vector compatibility differences do occur, heterogeneity in parasite ranges and temporal occurrence can be expected. Different prevalences of P. vivax phenotypes exist between Gulf of Mexico and Pacific coastlines because of differences in susceptibilities and geographic distributions between Anopheles albimanus (favoring phenotype VK210) and An. pseudopunctipennis (favoring phenotype VK247) (Rodriguez et al. 2000). Temporal fluctuations of these two P. vivax phenotypes were also noted in Thailand, and one of the explanations offered was related to seasonal abundance of susceptible vectors (Suwanabun et al. 1994). P. homopolare is likely quite widespread in the USA, based on 100% sequence matches in GenBank from various bird species (Walther et al. 2014), but on more local scales has a patchy distribution. In California, no P. homopolare parasites were seen in blood smears or detected by PCR in 200 birds in 2014 and 2015 at the Stone Lakes NWR, Elk Grove, CA (Carlson et al. unpublished data). Both China Creek and Stone Lakes share similar riverine habitat, bird species, and mosquito species except for Cx. restuans, which were not found in the latter site.
Vector competence for avian malaria parasites in the Cx. pipiens complex in California should be further studied. It is curious that, along with this study, reports by Carlson et al. (2015) and Carlson et al. (2016) identify the Cx. pipiens complex as a vector of low importance. Because Kothera et al. (2013) propose that Cx. pipiens in California are mostly hybrids of Cx. pipiens and Cx. quinquefasciatus, it is plausible that this genetic makeup makes them less permissive to avian malaria parasites than most reports on these species elsewhere in the world would suggest (Valkiūnas 2005).
|
v3-fos-license
|
2020-04-23T09:14:38.880Z
|
2020-04-20T00:00:00.000
|
216109322
|
{
"extfieldsofstudy": [
"Medicine",
"Biology"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/2308-3425/7/2/13/pdf",
"pdf_hash": "7db66358fa4f550cf9b8ea1cb4c5b6e6b37f2837",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46589",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"sha1": "1a726d86f4b5a1e0069841bf25cafe89712490f9",
"year": 2020
}
|
pes2o/s2orc
|
Mis-Expression of a Cranial Neural Crest Cell-Specific Gene Program in Cardiac Neural Crest Cells Modulates HAND Factor Expression, Causing Cardiac Outflow Tract Phenotypes
Congenital heart defects (CHDs) occur with such a frequency that they constitute a significant cause of morbidity and mortality in both children and adults. A significant portion of CHDs can be attributed to aberrant development of the cardiac outflow tract (OFT), and of one of its cellular progenitors known as the cardiac neural crest cells (NCCs). The gene regulatory networks that identify cardiac NCCs as a distinct NCC population are not completely understood. Heart and neural crest derivatives (HAND) bHLH transcription factors play essential roles in NCC morphogenesis. The Hand1PA/OFT enhancer is dependent upon bone morphogenic protein (BMP) signaling in both cranial and cardiac NCCs. The Hand1PA/OFT enhancer is directly repressed by the endothelin-induced transcription factors DLX5 and DLX6 in cranial but not cardiac NCCs. This transcriptional distinction offers the unique opportunity to interrogate NCC specification, and to understand why, despite similarities, cranial NCC fate determination is so diverse. We generated a conditionally active transgene that can ectopically express DLX5 within the developing mouse embryo in a Cre-recombinase-dependent manner. Ectopic DLX5 expression represses cranial NCC Hand1PA/OFT-lacZ reporter expression more effectively than cardiac NCC reporter expression. Ectopic DLX5 expression induces broad domains of NCC cell death within the cranial pharyngeal arches, but minimal cell death in cardiac NCC populations. This study shows that transcription control of NCC gene regulatory programs is influenced by their initial specification at the dorsal neural tube.
Introduction
Congenital heart defects (CHDs) afflict roughly 1% of newborns and ultimately affect the quality of life of more than 1 million adults in the United States [1]. Many CHDs affect the cardiac outflow tract (OFT) [2]. A significant portion of CHDs can, therefore, be attributed to developmental dysfunction of one of the main developmental progenitors of the OFT, known as the neural crest cells (NCCs). NCCs migrate from the dorsal neural tube throughout the developing embryo [2]. Different NCC subpopulations differentiate into distinct tissue types. The cardiac NCCs differentiate into smooth muscle and connective tissue to form portions of the aorta, pulmonary artery, and nascent ventricular septum. Although NCCs have been well studied, the gene regulatory networks that drive NCCs toward distinct fates are not completely understood.
Lysotracker and TUNEL
Cell death analysis on control and mutant embryos was performed as described [18,19]. Lysotracker (Life Technologies) was incubated with embryos as per the manufacturer's instructions. Embryos were imaged in a well slide on a Leica DM5000 B compound florescent microscope. TUNEL analyses were performed upon sectioned embryos using the ApopTag Plus Fluorescein in situ Apoptosis detection kit (S7111 Chemicon International) as per the manufacturer's instructions.
Immunohistochemistry
Immunohistochemistry was performed as previously described [17] using an antibody against TUBULIN β3 (β-TUBB3, Abcam). Images were collected on a Leica DM5000 B microscope and Leica Application Suite software.
Quantitative RT-PCR
Total RNA was isolated from E11.5 mandibular pharyngeal arches using the High Pure RNA Isolation Kit (Roche). This RNA was then used to synthesize cDNA using the High-Capacity cDNA Reverse Transcription Kit (Applied Biosystems). For qRT-PCR, cDNA was amplified using TaqMan Probe-Based Gene Expression Assays (Applied Biosystems) and the QuantStudio 3 Real-Time PCR System (ThermoFisher). Normalization to Glyceraldehyde 3-phosphate dehydrogenase (GAPDH) was used to determine relative gene expression, with statistical analysis automatically applied by the instrument software. Significance of qRT-PCR results was determined by a two-tailed Student's t-test followed by post hoc Benjamini-Hochberg FDR correction, as automatically calculated by the QuantStudio 3 qRT-PCR thermal cycler software analysis package.
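For readers who prefer to see the arithmetic behind this analysis, the sketch below reproduces the GAPDH-normalized ΔΔCt fold-change calculation, the two-tailed t-test, and the Benjamini-Hochberg correction described above. It is a minimal illustration in Python: the Ct values are hypothetical placeholders, not data from this study, and the instrument software's exact implementation may differ.

```python
import numpy as np
from scipy import stats

def delta_ct(ct_gene, ct_gapdh):
    """Per-sample ΔCt = Ct(target) - Ct(GAPDH)."""
    return np.asarray(ct_gene) - np.asarray(ct_gapdh)

def bh_adjust(pvals):
    """Benjamini-Hochberg adjusted p-values."""
    p = np.asarray(pvals, dtype=float)
    n = p.size
    order = np.argsort(p)
    adjusted, running = np.empty(n), 1.0
    for rank, idx in enumerate(order[::-1]):          # walk from the largest p down
        running = min(running, p[idx] * n / (n - rank))
        adjusted[idx] = running
    return adjusted

# Hypothetical Ct values for one target gene, three control vs three mutant arches.
ctrl = delta_ct([24.1, 24.3, 24.0], [18.0, 18.1, 17.9])
mut = delta_ct([25.6, 25.9, 25.7], [18.1, 18.0, 18.2])

fold_change = 2.0 ** -(mut.mean() - ctrl.mean())      # 2^-ΔΔCt relative to control
t_stat, p_value = stats.ttest_ind(mut, ctrl)          # two-tailed by default
print(f"fold change = {fold_change:.2f}, p = {p_value:.4f}")
print("BH-adjusted p-values:", bh_adjust([p_value, 0.030, 0.200]).round(3))
```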
NCC Expression of CAG-CAT-Dlx5 Results in Midface Clefting
To test the hypothesis that mis-expression of Dlx5 within NCCs would result in craniofacial and cardiac NCC phenotypes, we generated a conditional Dlx5-gain-of-function mouse line (CAG-CAT-Dlx5, Figure 1A). We employed an NCC-specific Cre driver (Wnt1-Cre; [22]) to ectopically drive Dlx5 expression within all NCCs (Wnt1-Cre;CAG-CAT-Dlx5; Dlx5 NCC oe). In order to determine if persistent NCC expression of Dlx5 causes an NCC phenotype, we first looked at craniofacial structures at E16.5 (Figure 1). Compared with Cre-negative littermates (Figure 1B), the upper jaw (uj) of Dlx5 NCC oe embryos is underdeveloped and split along the midline, which results in a severe midfacial cleft that allows visualization of the tongue (t, Figure 1C). The mandible (md) is also hypoplastic and misshapen (Figure 1D,E). This phenotype is similar to what is observed in HAND1 dimer mutant mice [9] but is far more severe than that observed in embryos recently reported in which a Dlx5 cDNA was inserted into the ROSA26 locus and then activated in NCCs (NCC Dlx5) [23].

Figure 1. Anterior and ventral whole mount views, respectively, of E16.5 heads from Control (CAG-CAT-Dlx5) (B) and Wnt1-Cre; CAG-CAT-Dlx5 (Dlx5 NCC oe) (C) embryos. Compared to control embryos, the midface of Dlx5 NCC oe embryos contains a large cleft separating the upper jaw (uj) into left and right sides and exposing the tongue (t). The lower jaw (lj) also appears misshapen and a coloboma (arrow) is present. (D-I) E18.5 control (B,D,F,H) and Dlx5 NCC oe (C,E,G,I) embryos stained with Alizarin Red and Alcian Blue to visualize bone and cartilage, respectively. (D,E) Lateral views of skulls. In control embryos, the parietal (p) and frontal (f) are observed, which abut the squamosal (sq) bone (D).
In the jaw region, the nasal (n), premaxilla (pmx), and maxilla (m) bones are observed, with the zygomatic process of the maxilla abutting the jugal (j) bone of the zygomatic arch. The tympanic ring (ty) bone is also observed. In Dlx5 NCC oe mice, the parietal, frontal bones are hypoplastic, as is the tympanic ring and squamosal bones (E). The premaxilla, maxilla, and jugal bones are dysmorphic, though the mandible appears relatively normal. (F,G) Ventral view of skulls with the mandible removed. In contrast to control embryos (F), most skull base bones in Dlx5 NCC oe embryos are hypoplastic, including the basisphenoid (bs), pterygoids (pt), and alisphenoids (al). The palatine (p) bones fuse along the midline but are also hypoplastic, as are the tympanic rings (G). The dysmorphology of the maxilla along the midline is also apparent, along with the large midline cleft that now exists (*). (H,I) Lateral (intra-oral) view of the left mandible (md). The coronoid (cp), condylar (cdp), and angular (ap) processes of the mandible are present in both control (H) and Dlx5 NCC oe (I) embryos, as are the incisors (i). bo, basisoccipital bone; e, eye; hy, hyoid bone.
To determine how these changes were reflected in near term embryos, E18.5 Dlx5 NCC oe and Control (Cre-negative CAG-CAT-Dlx5 littermates) embryos were stained with Alizarin Red and Alcian Blue to visualize bone and cartilage, respectively. Compared to control embryos (Figure 1C,E), the NCC-derived bones of the skull vault (frontal, f, and nasal, n) and skull base (basisphenoid (bs), alisphenoid (al), and palatine (p) bones) are hypoplastic or absent in Dlx5 NCC oe embryos (Figure 1D,F). In addition, there is a large midfacial cleft in which the two halves of the premaxilla (pmx) fail to meet at the midline. This cleft extends back through the palatal processes of the maxilla (mx) but does not affect fusion of the palatine bones. The maxilla bones are quite dysmorphic, which makes it impossible to determine if they represent a homeotic transformation into a more mandible-like shape as observed in the NCC Dlx5 mice [23]. However, these changes are similar to those observed when an Endothelin-1 (Edn1) cDNA is driven in NCCs using Wnt1-Cre mice [24,25]. In one Chicken b-actin (CBA)-Edn1;Wnt1-Cre line, embryos have a classic homeotic transformation of the maxilla into a mandible-like structure while embryos from another line have a large midfacial cleft. Since there is not a significant difference in mature EDN1 by Enzyme-Linked ImmunoSorbent Assay (ELISA) between the two lines, the differences likely reflect insertion site dynamics, with the varying phenotypic outcome reflecting a range of EDN1 action in NCCs. Overall, the phenotype of Dlx5 NCC oe embryos indicates that DLX5 has several unrecognized functions during early NCC patterning. A more detailed analysis of this is being reported elsewhere.
NCC Expression of CAG-CAT-Dlx5 Downregulates Hand1 Ventral Cap Expression
DLX5 regulates gene expression in post-migration cranial NCCs and, although expressed in cardiac NCCs, is not required for OFT morphogenesis, as heart defects are not reported in Dlx5;Dlx6 double knockout mice [12,[23][24][25][26][27]. DLX5 negatively regulates Hand1 expression [8] and positively regulates Hand2 expression [6,28] in cranial NCCs. To ensure that the Cre-activated transgene is functional, we first intercrossed CAG-CAT-Dlx5 mice with the Hand2-Cre driver ( Figure 2) [3]. E10.5 whole mount in situ hybridization of Dlx5 shows normal robust dorsal expression where the yellow line demarks its ventral most expression and the white line shows the ventral most cap of the arch of control embryos ( Figure 2A). Cre-activation of the CAG-CAT-Dlx5 allele reveals a noticeable expansion of Dlx5 expression ventrally such that the space between the yellow and white lined marked boundaries is noticeably reduced, which indicates the efficacy of the Cre-inducible CAG-CAT-Dlx5 allele ( Figure 2B). We next intercrossed CAG-CAT-Dlx5 mice with our Hand1 PA/OFT -lacZ reporter line, which we have previously shown is sensitive to DLX5 negative regulation [8]. At E11.5, control embryos show expected cranial and cardiac NCC expression as well as the second heart field derived myocardium of the myocardial cuff ( Figure 2C). Most notably, the ventral most NCC of the mandibular arch strongly expresses the Hand1 PA/OFT -lacZ reporter (1, arrowhead, Figure 2C, n = 4). When Hand1 PA/OFT -lacZ reporter expression is combined with the Wnt1-Cre; CAG-CAT-Dlx5 alleles, significantly reduced β-galactosidase staining is observed within the ventral cap domain of arch 1 ( Figure 2D arrowhead, n = 4). Transverse sections through the cardiac OFT reveal robust β-galactosidase staining within the NCC mesenchyme and OFT myocardium ( Figure 2E arrow). In Dlx5 NCC oe; Hand1 PA/OFT -lacZ mice, the β-galactosidase staining intensity is unchanged. However, the ventral boundary of the β-galactosidase positive NCCs is more dorsal (Figure 2F
Dlx5 NCC oe OFTs Present Persistent Truncus Arteriosus (PTA)
Disruption of Hand factor expression causes cardiac defects within the OFT [4,28]. To determine whether persistent Dlx5 expression within cardiac NCCs alters OFT morphogenesis and to assay Hand1 OFT expression, we looked at E16.5 hearts (Figure 3). In control (CAG-CAT-Dlx5; Hand1 PA/OFT -lacZ) E16.5 hearts, OFT formation appears normal, wherein the pulmonary artery (PT) directly connects with the right ventricle (RV) and the aorta (Ao) directly connects with the left ventricle (LV, Figures 3A-C). Hand1 PA/OFT -lacZ expression, visualized by β-galactosidase staining, is detectable within the smooth muscle wall of the aorta and within myocardial cuff cardiomyocytes. In contrast, Dlx5 NCC oe mutants present with either a single OFT vessel, which is a condition termed persistent truncus arteriosus (PTA, Figure 3D and E, n = 6/10), or with a double outlet right ventricle (DORV), wherein the Ao connects directly with both right ventricle (RV) and left ventricle (LV) ( Figure 3F-H, n = 3/10). Hand1 PA/OFT -lacZ expression is observed within the aortic smooth muscle and cuff myocardium. Summary of the encountered phenotypes is presented in Table 1.
NCC Migration into the OFT Is Limited in Dlx5 NCC oe Mice
The PTA and DORV observed in Dlx5 NCC oe mutants could be due to altered NCC morphogenesis, increased NCC cell death, or lack of normal NCC migration. To determine what mechanisms are involved, we first performed a Wnt1-Cre lineage analysis between E9.5 and E11.5 to look for migration of Wnt1-Cre lineage cells within the OFT (Figure 4). In both E9.5 (28 and 29 somite) and E10.5 (36 somite) embryos, both the first and second pharyngeal arches are robustly positive for Wnt1-Cre marked NCCs (Figure 4A,C, arrows). Cardiac NCCs are also robustly present dorsal to the OFT (bracket) and can be seen entering the OFT (arrowhead, Figure 4A,C). In Dlx5 NCC oe mutants, Wnt1-Cre marked NCCs are also observed. However, β-galactosidase staining is less robust (Figure 4B,D). The first and second pharyngeal arches are smaller (Figure 4B,D, arrows). The population of cardiac NCCs dorsal to the OFT is smaller (Figure 4B,D, bracket) and the Wnt1-Cre marked NCCs visible within the OFT are clearly diminished (Figure 4B,D, arrowheads).
Dlx5 NCC oe Embryos Exhibit NCC Cell Death Within the Neural Tube and Pharyngeal Arches but Minimally Within the OFT
To determine whether Wnt1-Cre marked NCC reduction within the pharyngeal arches reflects solely migration defects and not NCC cell death, we performed lysotracker staining to assess cell death in E9.5 and E10.5 Dlx5 NCC oe embryos. In E9.5 controls, developmentally normal cell death is observed within regions of the head as well as tissues located dorsally to the OFT (Figure 5A, bracket). Cell death within the pharyngeal arches is minimal (arrow, Figure 5A, n = 4). In contrast, in Dlx5 NCC oe littermates, significant cell death is observed at the dorsal neural tube (arrowheads) and within the first and second pharyngeal arches (arrow, robust positive staining within the arches, Figure 5B, n = 4). Cell death in the caudal pharyngeal arches was not appreciably affected in E9.5 Dlx5 NCC oe embryos (Figure 5A,B, brackets, n = 4). No significant cell death is observed within the heart. At E10.5, control (CAG-CAT-Dlx5) embryos displayed domains of lysotracker-positive cells within the proximal rostral pharyngeal arches (Figure 5C, white arrows). Although the pharyngeal arches are now hypoplastic at this stage, the lysotracker-positive cells of the proximal rostral pharyngeal arches of Dlx5 NCC oe littermates are not observed (Figure 5D, white arrows). Continued cell death along the dorsal neural tube is still evident (arrowheads, Figure 5D). To characterize this aberrant cell death in closer detail, we performed TUNEL analyses upon embryonic sections at E10.5 (Figure 5E-H). Compared to control embryos, extensive cell death in the dorsal neural tube is visible in the Dlx5 NCC oe embryos at E10.5 (compare Figure 5E,F, arrowheads, n = 5). The cranial pharyngeal arches are also clearly hypoplastic in E10.5 Dlx5 NCC oe embryos. No significant changes in cell death are observed in the E10.5 OFT of Dlx5 NCC oe embryos when compared to control embryos. Together, these data show that the decreased NCC contribution to the pharyngeal arches and OFTs of Dlx5 NCC oe embryos is mechanistically likely the result of the increased NCC death observed during NCC migration that initiates at their point of origin within the dorsal neural tube.
To determine whether gene expression is altered in the Dlx5 NCC oe NCCs that contribute to the OFT, we looked at the expression of the cardiac NCC and OFT myocardium marker Hand2 [28,29] and the NCC marker Sox9 [30,31] (Figure 6). Expression of Hand2 is modest within the ventricular myocardium of the RV and LV and more robust within the endocardium, the myocardial cuff myocardium, and cardiac NCC (arrow) of control E10.5 hearts ( Figure 6A). In Dlx5 NCC oe hearts, expression within the endocardium, ventricles, and myocardial cuff is similar to that of control hearts. However, expression within the cardiac NCC is diminished (arrow, Figure 6B). Robust Sox9 expression is observed within control OFT NCCs (arrow, Figure 6C), whereas Sox9 expression with Dlx5 NCC oe OFT NCCs is significantly reduced, which results from decreased expression and / or less Sox9-expressing NCCs (arrow, Figure 6D).
Persistent Dlx5 Expression Does Not Induce NCCs to Adopt a Neuronal Cell Fate
Loss of TWIST1 function induces NCCs to differentiate along a neuronal cell fate [13]. Within the OFT, Twist1-null cardiac NCCs organize into ganglia-like structures and express a number of neuronal genes [13]. Additionally, these trans-differentiating cardiac NCCs express Hand1 [13]. Crossing the Hand1 PA/OFT -lacZ reporter onto a Twist1 -/fx , Wnt1-Cre(+) background (Twist1 NCC CKO) reveals that these NCCs are specifically marked by Hand1 PA/OFT enhancer activity ( Figure 7B, arrowheads). Intriguingly, these ganglia-like structures also express Dlx5 ( Figure 7D, arrowhead, n = 3), which is not detectable in the cardiac NCCs of control OFTs ( Figure 7C). To determine whether persistent Dlx5 expression within the NCCs promotes cardiac NCC populations to differentiate along a neuronal path, we looked at the expression of the pan-neuronal marker Class III β-TUBULIN (TUBB3) [32] and the receptor tyrosine kinase Ret, which marks NCC-derived neurons [33]. TUBB3 immunostaining of E11.5 embryos on the Hand1 PA/OFT -lacZ reporter background revealed no detectable TUBB3 protein within OFT NCCs of either control or Dlx5 NCC oe embryos ( Figure 7E,F, n = 2). Ret in situ hybridization of E10.5 embryos yielded similar findings ( Figure 7G,H, n = 4). These results show that, although Dlx5 expression is induced in ectopic neurons within Twist1-null cardiac NCCs, ectopic Dlx5 expression within the cardiac NCCs is not sufficient to induce neurogenesis.
Persistent Dlx5 Expression within NCCs Downregulates Dlx6 Expression
Signaling from the Endothelin receptor type A (EDNRA) induces Dlx5 expression in the pharyngeal arches [34]. In order to see whether DLX5 overexpression had regulatory effects upon other Endothelin 1-induced genes, we looked at expression of Hand2 and the related homeobox transcription factor Dlx6 in the pharyngeal arches of the E10.5 control and Dlx5 NCC oe embryos (Figure 8). In situ hybridization of Hand2 mRNA revealed that Hand2 expression appears unchanged within the first pharyngeal arch (1) in Dlx5 NCC oe embryos ( Figure 8A,B, n = 3). This is confirmed quantitatively by qRT-PCR ( Figure 8E, n = 6). In contrast, Dlx6 expression is robust within the first pharyngeal arch of control embryos but is significantly reduced in Dlx5 NCC oe embryos ( Figure 8C,D,E). Hand1, which is known to be negatively regulated by DLX5 [8], is also significantly downregulated ( Figure 8E).
Discussion
NCCs are a dynamic and multipotent cell population that, during embryogenesis, migrate ventrally from the neural tube to contribute to organ morphogenesis [35]. Major components of the cardiac OFT and nearly the entire vertebrate facial complex are NCC-derived. Dysregulation of these NCC populations results in the majority of congenital abnormalities encountered in humans. In this study, we set out to interrogate how NCC gene regulatory networks that include HAND transcription factors facilitate NCC specialization into specific tissue fates. Hand1 and Hand2 mark both cranial and cardiac NCC populations [36,37], exhibit genetic interactions that, when disrupted, result in a phenotype [4,19,21,38], and set up tissue boundaries that are essential for normal tissue morphogenesis within the post-migration NCCs occupying the pharyngeal arches [6,8].
The cranial transcriptional enhancers that drive Hand1 and Hand2 within NCC as well as the cardiac NCC transcriptional enhancer for Hand1 are established [2,5,39]. Analysis of these enhancers has revealed transcriptional inputs from both Endothelin Receptor A EDNRA (through DLX5 and DLX6) and BMP signaling (through SMADs 1/5/8) as well as direct and required regulation of Hand1 by HAND2 and DLX5/6 [8,10,39]. Given the spatial changes in Hand1 expression that result from BMP gain-of-function and HAND2 loss-of-function, ectopic expression of DLX5 within all NCC was performed to look at the effects on cranio-facial and OFT formation.
The first observation that is noted is that Dlx5 NCC oe mice exhibit severe midface clefting (Figure 1). We speculate that mechanistically this is likely the result of increased cranial NCC apoptosis ( Figure 4). Hand1 phospho mutant conditional knock-in mice exhibit a similar craniofacial phenotype [9]. Although Hand1 NCC loss-of-function mice exhibit no observable phenotypes [4], it is clear that Hand1 is down regulated within Dlx5 NCC oe mice (Figures 2 and 8). If other potential HAND1 bHLH partners are also transcriptionally regulated, the combination of these changes to the bHLH gene regulatory networks via alteration of the bHLH dimer pool available to form transcriptional dimer complexes could account for the similar phenotypes. Of note, a similar study employing a Rosa locus Dlx5 knockin was recently reported [12,[23][24][25][26] and showed short snout, open eyelids, misaligned vibrissae, and a cleft palate with no clear signs of palatine rugae. In our study, using a traditional transgenic insertion increases severity and is likely the result of increased Dlx5 expression in our model.
The Hand1 PA/OFT enhancer drives expression in both cranial and cardiac NCCs, and DLX5 and DLX6 directly repress Hand1 PA/OFT enhancer activity [8]. However, Dlx5 and Dlx6 are not robustly expressed within cardiac NCC populations and Dlx5/Dlx6 loss-of-function or gain-of-function mutants display no observable cardiac phenotypes [12,[23][24][25][26]. Moreover, cardiac NCC expression of Dlx5 was observed in chicken [12,[23][24][25][26] where differences between mammals were observed [35]. When DLX5 is expressed within cardiac NCC, although significant OFT abnormalities like PTA and DORV are observed, the expression of the Hand1 PA/OFT enhancer is not clearly downregulated. This suggests that, although DLX5 is a dominant repressor in cranial NCC populations, its presence in cardiac NCCs does not impact Hand1 PA/OFT enhancer activity; thus, DLX5 may not be sufficient for Hand1 transcriptional repression and additional factors are required. Along these lines, it is important to note that HAND2 is necessary for Hand1 expression within cranial but not cardiac NCC. This also suggests that HAND2 must act with additional factors to regulate Hand1 within the cardiac NCCs. GATA transcription factors are also required for Hand1 PA/OFT transcriptional activity [8]. In E10.5 reporter embryos in which GATA cis-regulatory elements in the Hand1 PA/OFT enhancer have been mutagenized, cranial Hand1 PA/OFT expression is ablated whereas cardiac NCC expression, although slightly reduced, persists [8]. Thus, factors required for enhancer activation in one NCC population (DLX5, SMAD, HAND2, GATA) do not significantly alter expression within another NCC population. Cranial NCC subpopulations share prepatterned chromatin states that are poised to respond to distinct local signaling cues depending upon where in the head they ultimately reside [40]. We propose that distinctions between cranial and cardiac NCC chromatin states enable these populations to respond to identical transcriptional inputs within the same cis-regulatory element in unique manners.
Given that there are observed OFT phenotypes, it is clear that ectopic DLX5 activity alters cardiac NCC gene expression. There is clear reduction in Hand2 expression within cardiac NCCs at E10.5 as well as a reduction of Sox9 expressing cells ( Figure 6). Changes in the cardiac NCC gene regulatory network combined with reductions in NCC numbers are the likely causes of the PTA and DORV observed in Dlx5 NCC oe embryos. To assess potential NCC trans-differentiation, we looked at the neuronal markers TUBB3 and Ret, and found that, even though Dlx5 expression is highly upregulated in Twist1-null cardiac NCCs, upregulation of DLX5 alone is insufficient to cause trans-differentiation ( Figure 7).
Lastly, it is clear that Dlx gene dosage is modulated in Dlx5 NCC oe mice. The highly related Dlx6, which is co-expressed with Dlx5 in cranial NCC, is significantly downregulated within the first and second pharyngeal arches of Dlx5 NCC oe embryos (Figure 8). A precise balance of specific transcription factors within subpopulations of NCCs appears necessary for these cells to migrate and differentiate to the correct tissue type and structures. NCC specification is thought to be governed from the rostral-caudal origin of delaminating NCCs from the neural tube. However, post migratory trans-differentiation is possible [13], which reflects the necessity for both positional and gene expression modulation. The data from this study reflects that altering the gene regulatory networks by transcription factor gain-of-function analysis can be used to reveal sensitive and insensitive actions of a single factor on a single enhancer in two separately fated populations of NCCs.
|
v3-fos-license
|
2018-04-03T00:10:51.671Z
|
2017-10-02T00:00:00.000
|
3271537
|
{
"extfieldsofstudy": [
"Biology",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.nature.com/articles/s41598-017-12679-8.pdf",
"pdf_hash": "082ec9fdd5230c20fa8a16057bed8f59a05f4821",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46590",
"s2fieldsofstudy": [
"Biology"
],
"sha1": "082ec9fdd5230c20fa8a16057bed8f59a05f4821",
"year": 2017
}
|
pes2o/s2orc
|
Detection of known and novel ALK fusion transcripts in lung cancer patients using next-generation sequencing approaches
Rearrangements of the anaplastic lymphoma kinase (ALK) gene in non-small cell lung cancer (NSCLC) represent a novel molecular target in a small subset of tumors. Although ALK rearrangements are usually assessed by immunohistochemistry (IHC) and fluorescence in situ hybridization (FISH), molecular approaches have recently emerged as relevant alternatives in routine laboratories. Here, we evaluated the use of two different amplicon-based next-generation sequencing (NGS) methods (AmpliSeq and Archer®FusionPlex®) to detect ALK rearrangements, and compared these with IHC and FISH. A total of 1128 NSCLC specimens were screened using conventional analyses, and a subset of 37 (15 ALK-positive, and 22 ALK-negative) samples were selected for NGS assays. Although AmpliSeq correctly detected 25/37 (67.6%) samples, 1/37 (2.7%) and 11/37 (29.7%) specimens were discordant and uncertain, respectively, requiring further validation. In contrast, Archer®FusionPlex® accurately classified all samples and allowed the correct identification of one rare DCTN1-ALK fusion, one novel CLIP1-ALK fusion, and one novel GCC2-ALK transcript. Of particular interest, two out of three patients harboring these singular rearrangements were treated with and sensitive to crizotinib. These data show that Archer®FusionPlex® may provide an effective and accurate alternative to FISH testing for the detection of known and novel ALK rearrangements in clinical diagnostic settings.
inhibitors have been identified and successfully validated, first in preclinical models in vitro and in vivo, and then in clinical studies. The US Food and Drugs Administration (FDA) has therefore approved the use of some small molecules in advanced ALK-rearranged NSCLC patients. Crizotinib, a well-tolerated first generation ALK inhibitor 3,6 , has been shown to be superior to standard chemotherapy both as a first-and second-line treatment 1,7 , while second generation ALK inhibitors, such as alectinib and ceritinib, are effective not only in crizotinib-naïve patients, but also in patients with acquired resistance to crizotinib 1,[8][9][10] .
Fluorescence in situ hybridization (FISH) is currently acknowledged as the "gold standard" for detection of ALK rearrangements. The Vysis LSI ALK Break Apart FISH Probe Kit has been approved by the FDA as a companion diagnostic test for administration of ALK inhibitors in lung cancer patients. The immunohistochemical (IHC) method, which can detect ALK protein expression independently of the underlying mechanism mediating its overexpression, is used as a pre-screening test, alongside FISH, to determine ALK status in formalin-fixed paraffin embedded (FFPE) tissue specimens. However, even though IHC is widely implemented in pathology laboratories, easy-to-use, and automatically performed, its interpretation remains difficult to standardize and time-consuming. In addition, FISH is expensive, labor intensive, requires expert pathology assessment, and is not amenable to multiplexing.
It has been recognized that the development of molecular approaches strengthens the accuracy of ALK fusion diagnosis, by resolving discordant or borderline cases [11][12][13]. Several RNA-based methods, including the nCounter assay (NanoString Technologies), reverse transcription-polymerase chain reaction (RT-PCR), multiplex RT-PCR followed by capillary electrophoresis, and RT-quantitative PCR (RT-qPCR), have demonstrated their ability to detect ALK fusions [14][15][16][17][18][19][20][21]. However, some limitations prevent their full implementation in the clinical setting. They easily highlight already known fusions, but may misdiagnose new variants and fusion partners due to the low precision of the 3′/5′ imbalance value. In addition, the multiplex capabilities of some of the techniques are limited. In this context, next-generation sequencing (NGS) amplicon-based approaches have been assessed for the detection of ALK fusions in NSCLC patients [22][23][24][25][26]. Two main molecular amplicon-based NGS approaches have emerged, but have not been compared to date.
Here, we evaluated two different amplicon-based NGS methods (Ampliseq and Archer ® FusionPlex ® ) for the detection of ALK fusions in order to determine the most relevant approach available for routine clinical practice in pathology laboratories. Among a set of 1128 well-characterized FFPE NSCLC specimens, 10 and 13 samples with or without ALK fusion, respectively, were selected for NGS testing and results were compared to IHC and FISH. Interestingly, both amplicon-based assays gave relevant results; however, only one allowed us to detect and to correctly identify the presence of two new and one rare ALK rearrangements.
Results
Specimen characteristics. A total of 1128 NSCLC specimens submitted to the University Hospitals of Montpellier or Toulouse (France) for detection of ALK translocations were firstly screened using IHC. The ALK IHC-positive samples (69, 6.1%) were further explored using FISH. Among them, we randomly selected 15 samples positive for ALK rearrangement determined by both IHC and FISH. Twenty-two ALK-negative samples were also selected as negative controls. We then performed two amplicon-based NGS assays: the Ion AmpliSeq RNA Lung Cancer Research Fusion Panel and the Archer ® FusionPlex ® ALK, RET, ROS1 v2 kit.
Fusion gene detection using the AmpliSeq kit. The Ion AmpliSeq RNA Lung Cancer Research Fusion Panel is based on an amplicon target enrichment approach that allows amplification and detection of 70 known fusion transcripts for the ALK, RET, ROS1, and NTRK1 genes using a pair of primers specific to each fusion (Fig. 1). If no common fusion transcripts are detected, a 3′/5′ ratio is calculated for the four genes included in the panel and, according to the value obtained, samples are classified into three categories: no evidence, uncertain evidence, or strong evidence of the presence of a fusion. Thus, an imbalanced ratio may reflect the presence of a novel or uncommon fusion transcript in the sample that requires further exploration using a complementary molecular technique.
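To make the classification step concrete, the short Python sketch below mimics the 3′/5′ imbalance logic described above: read counts from 5′ amplicons (upstream of the common breakpoint) are compared with counts from 3′ amplicons (covering the kinase domain), and the ratio is binned into the three evidence categories. The read counts and, importantly, the cutoff values are illustrative assumptions only; they are not the thresholds used by the Ion Reporter analysis software.

```python
def classify_imbalance(reads_3p: int, reads_5p: int,
                       uncertain_cutoff: float = 2.0,
                       strong_cutoff: float = 10.0) -> str:
    """Bin a 3'/5' read-count ratio into the three evidence categories."""
    ratio = reads_3p / max(reads_5p, 1)          # guard against division by zero
    if ratio >= strong_cutoff:
        label = "strong evidence of a fusion"
    elif ratio >= uncertain_cutoff:
        label = "uncertain evidence of a fusion"
    else:
        label = "no evidence of a fusion"
    return f"{label} (3'/5' ratio = {ratio:.1f})"

# Hypothetical (5' reads, 3' reads) per sample.
samples = {"S01": (480, 510), "S02": (150, 620), "S03": (40, 900)}
for sample, (r5, r3) in samples.items():
    print(sample, classify_imbalance(r3, r5))
```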
To assess the reliability of the panel and the inter-run reproducibility, we analyzed RNA extracted from two well-characterized control samples included in FFPE blocks in three independent experiments. These control samples corresponded to an engineered standard sample which harbored two well-characterized fusion transcripts, EML4-ALK (E6:A20) and CCDC6-RET (C1:R12), and the human lung cancer cell line H2228 (EML4-ALK; E6:A20). As expected, using the AmpliSeq panel for library preparations in combination with the corresponding bioinformatics analysis, the variant fusion transcripts harbored by the two control samples were correctly identified in the three independent experiments. We next retrospectively analyzed the 37 selected tumor samples for which the ALK fusion status had been previously determined using conventional techniques (IHC and FISH). An ALK rearrangement was clearly detected in 12 cases: five EML4-ALK (E6:A20), six EML4-ALK (E13:A20), and one EML4-ALK (E13:A19) ( Table 1). For the 25 remaining samples, as no known fusion transcript was highlighted by the analysis, the 3′/5′ imbalance values were interpreted to determine the presence or not of a potential fusion transcript (Supplementary Table S1). Among these, 13 samples did not display evidence of a fusion and were considered negative. However, for 11 other samples, uncertain evidence of an ALK and RET fusion was reported in 10 samples and one sample, respectively (Tables 1 and S1).
When comparing the results obtained with the AmpliSeq RNA Fusion kit to those from the conventional techniques, concordant diagnoses were reported for 25 samples (67%) (Tables 1 and 2). Since a clear conclusion could not be made for 11 samples (30%), further experiments are required to deliver a molecular diagnosis (Tables 1 and 2). Finally, a discordant result occurred using this NGS kit for one sample (3%); an ALK fusion gene was detected using IHC/FISH but not using the AmpliSeq method (S37, Tables 1 and 2).

Fusion Gene Detection in Tumor Specimens using the Archer ® FusionPlex ® . The Archer ® FusionPlex ® kits are based on a different targeted enrichment method known as AMP (Fig. 1). This open-ended technique has the advantage of being able to sequence fused partners of the targeted genes without a priori. This permits the detection of well-described fusion events as well as previously unknown partners, with identification of the detected novel fusion transcript.
To investigate the specificity of the kit, we analyzed the RNA extracted from the two well-characterized control samples in three independent experiments. The Archer ® FusionPlex ® ALK, RET, ROS1 v2 allowed the expected detection of the fusion breakpoints present in the samples: EML4-ALK (E6:A20) and CCDC6-RET (C1:R12) for the commercialized sample, and EML4-ALK (E6:A20) for the human lung cancer cell line H2228. Using the AMP target enrichment technique on the same 37 tumor samples, we detected the presence of ALK fusion transcripts in 15 specimens (Tables 1 and S2). Among them, 12 harbored a common EML4-ALK rearrangement (E6:A20 and E13:A20). Interestingly, one novel and two rare ALK fusion transcripts were also identified: GCC2-ALK (S35), DCTN1-ALK (S36), and CLIP1-ALK (S37) (Tables 1 and S2). For 22 cases, no rearrangements were highlighted for the ALK, RET, and ROS1 genes.
Results obtained using the Archer ® FusionPlex ® kit correlated perfectly with those from the 'gold standard' conventional methods. Indeed, all the specimens reported by IHC and FISH as negative cases (n = 22) or positive (n = 15) for ALK translocations were correctly classified using this molecular approach ( Table 2). More importantly, unlike the IHC and the FISH methods, this technique allowed identification of the fusion partners without a priori, revealing the presence of uncommon fusion transcripts in three tumor samples in our study.
Validation of the uncommon ALK fusion partners and patient clinical outcome. Among the 10 ALK-positive IHC/FISH samples, three exhibited a singular gene fusion detected by the Archer ® FusionPlex ® kit: GCC2-ALK (S35), DCTN1-ALK (S36), and CLIP1-ALK (S37) rearrangements. Primers flanking the specific fusion regions were designed and used for validation. After reverse transcription and PCR, the presence of gene fusions was further analyzed using Sanger sequencing. Of particular note, our results validated the presence of three uncommon DCTN1-ALK (D26:A20), CLIP1-ALK (C22:A20), and GCC2-ALK (G19:A20) fusion genes (Fig. 2). All three uncommon fusion transcripts detected included the first 1065, 1294, and 1482 amino acids (aa) of DCTN1, CLIP1, and GCC2, respectively, fused to the last 562 aa of the ALK protein, and retained the intact kinase domain of ALK, which is located from 1116 to 1392 aa (Fig. 2a,d and f).
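As a quick sanity check on the coordinates quoted above, the following sketch verifies that the last 562 aa of ALK retained in each chimera still span the 1116-1392 aa kinase domain and reports the approximate length of each predicted fusion protein. It assumes a full-length human ALK of 1620 aa; the partner lengths are taken from the text.

```python
ALK_LENGTH = 1620                 # full-length human ALK, aa (assumption stated above)
ALK_RETAINED = 562                # C-terminal ALK residues retained in each fusion
KINASE_DOMAIN = (1116, 1392)      # ALK kinase domain coordinates quoted in the text

alk_start = ALK_LENGTH - ALK_RETAINED + 1          # first retained ALK residue
kinase_retained = alk_start <= KINASE_DOMAIN[0] and KINASE_DOMAIN[1] <= ALK_LENGTH
print(f"retained ALK segment: aa {alk_start}-{ALK_LENGTH}; kinase domain retained: {kinase_retained}")

# N-terminal partner residues retained, as reported for each validated fusion.
fusions = {"DCTN1-ALK (D26:A20)": 1065,
           "CLIP1-ALK (C22:A20)": 1294,
           "GCC2-ALK (G19:A20)": 1482}
for name, partner_aa in fusions.items():
    print(f"{name}: predicted chimera of ~{partner_aa + ALK_RETAINED} aa")
```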
As IHC and FISH revealed an ALK-positive status (Fig. 2b), patient S36 received crizotinib orally at a dose of 250 mg twice daily in September 2015, which resulted in a significant symptomatic improvement and computed tomography (CT) response after three months of therapy (Fig. 2c). The patient remains on treatment with crizotinib and a recent CT scan demonstrated a significant shrinkage of all tumor sites outside the central nervous system. According to the ALK-positive status of patient S37 (Fig. 2e), crizotinib treatment was started in February 2016. Successive images showed a continuation of response to therapy, with stabilization of the skeletal metastases but evidence of local extension of brain metastases. Finally, despite positive IHC/FISH results (Fig. 2g), ALK-inhibitor efficiency could not be assessed in patient S35, as this patient is still in remission after surgery.
Discussion
ALK gene rearrangements are usually detectable using IHC or FISH, and guide patient selection for therapy. Currently, expert consensus proposes the use of ALK IHC assays as a screening tool in two-step testing, with FISH evaluation used to validate positive or equivocal IHC samples 10,[27][28][29]. However, several studies have reported ALK fusions in samples that had tested negative using IHC, demonstrating that protein expression is not automatically linked to gene rearrangements. This highlights the risk of denying ALK inhibitor therapy based only on IHC results 13,[30][31][32]. In addition, the interpretation of ALK rearrangement by FISH strongly relies on expert experience that requires long periods of training and can be compromised by technical pitfalls 33. Moreover, neither FISH nor IHC allow for identification of fusion partners and exact breakpoints.
Molecular diagnosis could overcome the limits of both these conventional analyses. However, to date, no technical consensus has emerged. In this study, we used two commercial amplicon-based NGS assays to determine the presence of clinically actionable ALK fusion transcripts. To maximize efficiency, we used 37 true NSCLC patient-derived oncology specimens previously tested for ALK by IHC and FISH. Twelve common EML4-ALK E13:A20 (S14-S18) and E6:A20 (S19-S25) variants were detected by both NGS approaches. The two assays displayed different results for three samples (S35, S36, and S37). The AmpliSeq amplicon-based method delivered an "uncertain" result, whereas the Archer ® AMP-based approach detected an ALK-positive fusion for all of them.
Through combined RT-PCR and Sanger sequencing analysis, we validated the presence of a rearrangement in each sample. The AmpliSeq and AMP Archer ® FusionPlex ® methods identified an EML4(13)-ALK (19) fusion and an EML4(13)-ALK (20) fusion, respectively, in one FISH-positive sample (S24). This fusion was validated as EML4 (13)-ALK (20) using RT-PCR and Sanger sequencing (data not shown). Importantly, among the 22 ALK fusion-negative samples initially detected by conventional approaches, the AMP Archer ® FusionPlex ® assay correctly established a negative result for all specimens, whereas the AmpliSeq assay was "uncertain" for nine of them, rendering further investigations necessary before ALK fusion status could be concluded 22. Altogether, these results demonstrate the clear advantage of AMP Archer ® FusionPlex ® over AmpliSeq amplicon-based methodology in terms of giving clinically relevant, highly accurate results in a timely manner.
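For readers who want to reproduce the comparison logic, the sketch below shows one way to tally per-sample agreement between IHC/FISH and an NGS assay into the concordant/uncertain/discordant categories used in Table 2. The sample calls listed are hypothetical placeholders rather than the study's per-sample data; only the handling of the "uncertain" category reflects the text.

```python
from collections import Counter

def compare(ihc_fish: str, ngs: str) -> str:
    """Classify one sample's NGS call against the IHC/FISH reference call."""
    if ngs == "uncertain":
        return "uncertain"
    return "concordant" if ngs == ihc_fish else "discordant"

# Hypothetical per-sample calls ('positive'/'negative'/'uncertain').
ihc_fish = {"S14": "positive", "S20": "positive", "S30": "negative", "S37": "positive"}
ampliseq = {"S14": "positive", "S20": "positive", "S30": "uncertain", "S37": "negative"}
archer   = {"S14": "positive", "S20": "positive", "S30": "negative",  "S37": "positive"}

for name, assay in [("AmpliSeq", ampliseq), ("Archer FusionPlex", archer)]:
    tally = Counter(compare(ihc_fish[s], assay[s]) for s in ihc_fish)
    total = sum(tally.values())
    summary = ", ".join(f"{k}: {v}/{total}" for k, v in tally.items())
    print(f"{name}: {summary}")
```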
The Archer ® FusionPlex ® results suggest that this assay could be routinely used for the molecular diagnosis of NSCLC rearrangements. It is an easy-to-use laboratory test with kits developed for both PGM sequencer (Thermo Fisher Scientific) and MiSeq sequencer (Illumina) technologies. The workflow design provides a result in five working days. Furthermore, the accuracy of the test observed in our cohort demonstrates that confirmation of the result using another molecular approach is not required, as has previously been suggested 34. There was no screening failure in our study, even though, in some cases, the RNA analyzed was extracted from samples that contained less than 20% tumor cells. In this respect, as cytological samples are the only source of material for a significant number of patients, we are planning to examine this approach in this setting. Finally, there have been concerns that the bioinformatics aspect of NGS may be challenging for regional/county hospitals. However, using Archer analysis software, we could detect and validate all known and novel rearrangements despite the absence of strong bioinformatical infrastructure in our unit, and without specific pipeline development. Different techniques based on high-throughput molecular approaches have been improved recently, and used to detect the presence of fusion transcripts in NSCLC samples. Thus, NanoString Technologies developed a technique based on the dual hybridization of a capture probe and a molecularly barcoded reporter probe complementary to a contiguous target sequence, allowing an accurate count of molecules, even where the RNA is of poor quality. The nCounter Vantage ™ Lung Fusion Panel included junction probes specific to the fusion breakpoint, and probes upstream and downstream of a potential fusion junction for detection of gene-expression imbalance. Comparisons of NanoString performance with IHC and FISH have clearly shown a high degree of concordance with these gold standard techniques 17,20,35,36. Amplicon-based NGS fusion panels have also been developed by different suppliers, and two main methods are available: the target enrichment-based (e.g. Thermo Fisher) and the AMP-based approaches (e.g. ArcherDx, Qiagen or MolecularMD). As demonstrated in this study, both molecular methods are highly sensitive, easy to perform, and give comparable results to conventional techniques. The main difference is that, unlike target enrichment-based methods, AMP-based approaches allow identification and correct naming of rare and new fusion transcripts. Indeed, although the imbalance detection used in the target enrichment-based approach is a good method to detect samples harboring new fusion genes, further experiments must then be performed to confirm the identity of the fusion partner 22,37. Very recently, Rogers et al. also evaluated a new technology developed by Agena Bioscience, based on cDNA synthesis, amplification, labeling, and detection using mass spectrometry, in combination with the Agena LungFusion panel 35. In this study, the authors compared the three transcriptome-based approaches (nCounter Vantage ™ Lung Fusion Panel from NanoString Technologies, AmpliSeq RNA Lung fusion panel from Thermo Fisher, and Agena LungFusion panel) to FISH, and showed an overall agreement ranging from 86-96%, depending on the technique. Interestingly, both the Agena panel and AmpliSeq fusion panel reported fusions that were not detectable by FISH.
In the present study, we identified one rare DCTN1-ALK fusion transcript (S36), one new CLIP1-ALK fusion (S37), and one new GCC2-ALK rearrangement (S35). DCTN1, for dynactin subunit 1, encodes the largest subunit of dynactin, a macromolecular complex that binds to both microtubules and cytoplasmic dynein. DCTN1-ALK fusions have been rarely reported: four inflammatory myofibroblastic tumors (IMT) [38][39][40], six Spitz tumors 41,42, and one pancreatic tumor 43. It has also been observed in two specimens with NSCLC 44,45. Fusion of ALK with DCTN1 induces the constitutive activation of ALK, which can be inhibited in vitro by treatment of the cells with crizotinib 42. However, patients' responses to crizotinib were poorly described in these studies and, at present, only one patient with an IMT that responded to ALK inhibitor has been reported 39. Interestingly, the patient harboring this fusion rearrangement in our study showed sensitivity to crizotinib (S36), consistent with the results of the IMT crizotinib-treated patient. Moreover, we highlighted, for the first time, the presence of a CLIP1-ALK fusion in an NSCLC sample. Although one case of a Spitz tumor harboring a CLIP1-ALK fusion has previously been reported, the breakpoint described differs 46. Yeh and colleagues identified a breakpoint located between CLIP1 exon 13 and ALK exon 20, whereas in our study it is between exon 22 of CLIP1 and exon 20 of ALK. CLIP1 protein is a member of the cytoskeleton-associated protein family with a conserved glycine-rich domain. It binds to microtubules and thereby plays an important role in intracellular vesicle trafficking. As observed for patient S36, patient S37 responded to crizotinib therapy and continues crizotinib monotherapy with no evidence of major disease progression.

Table 2. Concordance between diagnoses delivered using conventional techniques (IHC and/or FISH) and NGS-based molecular approaches. a The 3′/5′ imbalance value obtained did not allow clear determination of the presence or absence of a fusion transcript in these samples. Another technique must be performed to deliver a diagnosis.
Finally, we also identified, for the first time to our knowledge, a new ALK fusion partner: GCC2. The breakpoint is located between GCC2 exon 19 and ALK exon 20. GCC2, for GRIP and coiled-coil domain containing 2, encodes for Golgi proteins involved in the tethering of transport vesicles to the trans-Golgi network. As previously reported for other ALK fusion partners, the large coiled-coil domain harbored by GCC2, when fused with ALK, may facilitate dimerization and induce the constitutive activation of ALK. The patient harboring this fusion is still in remission after surgery, rendering it impossible to determine the ALK-inhibitor efficiency in this case (S35).
How the response of cell lines or patients to ALK inhibitors depends on the ALK fusion variant expressed has been insufficiently explored to date 47-49. Of greatest interest is the observation that patients with EML4-ALK variant 1 (E13:A20) exhibit better outcomes with crizotinib treatment than patients without this variant 49, suggesting that ALK variants might influence the duration of response to crizotinib in ALK-positive NSCLC. Moreover, Heuckmann and colleagues demonstrated that the cellular localization of the EML4-ALK fusion protein depends on the variant expressed, which may affect the oncogenic activity of the fusion protein 47. At present, the specific ALK variant status and the fusion partner involved are not routinely considered when determining a prognosis or a therapeutic stratification for patients. Further comprehensive studies are now required to monitor patient outcomes according to the specific ALK variant status. AMP-based assays, which allow the precise determination of the fusion partner and breakpoint, are a simple tool for acquiring this information. This paves the way for the development of large cohort studies to determine the impact of this information on the healthcare of lung cancer patients.
With new targetable driver genes identified, and with therapeutic options evolving, a new composite decisional algorithm must be defined. As ALK-positive lung cancer patients benefit from tyrosine kinase inhibitor therapy in the first-line setting, ALK must be tested at the time of diagnosis. Our results suggest that an amplicon-based NGS assay could be performed initially. However, for laboratories that would prefer to continue using IHC as a screening test, Archer® FusionPlex® could be performed as a second step, to replace FISH. Moreover, since current guidelines recommend routine ALK testing as well as EGFR testing, it is important to point out that all of these actionable driver genes should be tested as part of a single multiplex NGS panel, with DNA and RNA extracted from the same FFPE sample. ROS1 and RET fusions, as well as a broader spectrum of genes (i.e. KRAS, BRAF, or ERBB2), could also be included in such routine tests. Finally, since the optimal amount of RNA recommended for Archer® FusionPlex® analysis is 200 ng (range: 20 to 250 ng), this parameter could represent a limitation for very small biopsies, even though in our study all specimens were successfully analyzed. This point has been addressed recently by Evangelista et al., who implemented a NanoString panel for ALK fusion detection and demonstrated its applicability in a series of 43 lung cancer biopsies using up to 100 ng of RNA, with a sample failure rate of only 7% 36.
In summary, our study investigated ALK fusion detection based on two different commercially available NGS-based approaches in FFPE-derived cancer specimens. In contrast to the AmpliSeq amplicon-based approach, which was unable to detect several variants, the Archer® AMP-based technique successfully identified all ALK fusion-positive samples, rendering this method highly applicable for routine ALK fusion detection and variant identification. In addition, in contrast to the conventional IHC and FISH techniques, this AMP-based NGS approach has the distinct advantage of requiring knowledge of only one partner in the fusion. This allows the identification of novel gene rearrangements with previously unknown partners, which could clinically impact patient management.
Materials and Methods
Tumor samples. This study was performed with approval from the Institutional Review Boards of both hospitals (Toulouse and Montpellier) and in accordance with regulatory guidelines regarding clinical assay validation. For this non-interventional study, an approved informed consent statement was obtained for all patients. FFPE tissue samples from NSCLC patients, submitted in 2014 to the University Hospital of Montpellier or Toulouse (France) for detection of ALK translocations, were included in this study (n = 1128). Among them, 37 samples with previously determined ALK rearrangement status were randomly selected. Table 3 lists the characteristics of the patients and the corresponding specimens enrolled in the NGS assay. All lesions were submitted for pathological examination using standard techniques. The percentage of tumor cells in the specimens ranged from 10% to 90%. For each sample, ALK fusions were explored using IHC, dual-color break-apart FISH, and NGS approaches using two different assays.
ALK FISH. Where IHC analysis was positive, FISH was performed on 3 μm FFPE tissue sections using the ALK FISH DNA break-apart Probe, Split Signal (Dako) according to the manufacturer's recommendations. Slides were pretreated at 98 °C in solution for 10 min and digested with pepsin for 3 min at 37 °C using the Histology FISH Accessory Kit (Dako). Slides were incubated for 18 h at 45 °C with ALK probes diluted 1:10 that had previously been denatured for 5 min at 85 °C. Slides were then washed and dehydrated before counterstaining and application of mounting medium. Slides were analyzed with a Zeiss AxioImager Z1 fluorescence microscope (Labexchange, Burladingen, Germany), independently by two pathologists. A minimum of 100 nuclei were scored, and cases were considered positive when more than 15% of cells displayed split signals.
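For illustration, the scoring rule above can be written as a small decision function; this is a minimal sketch only, and the function name, default thresholds, and example counts are ours rather than part of any vendor protocol.

```python
# Minimal sketch of the break-apart FISH scoring rule described above:
# score at least 100 nuclei and call the case positive when more than 15%
# of them show split signals. Names and example values are illustrative.

def score_fish(split_nuclei: int, total_nuclei: int,
               min_nuclei: int = 100, cutoff: float = 0.15) -> str:
    """Return 'positive', 'negative', or 'insufficient' for one slide."""
    if total_nuclei < min_nuclei:
        return "insufficient"                      # too few nuclei scored
    fraction_split = split_nuclei / total_nuclei
    return "positive" if fraction_split > cutoff else "negative"

# Example: 22 split nuclei out of 120 scored -> 18.3% -> positive
print(score_fish(22, 120))
```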
Total RNA extraction. RNA extraction was performed on the same FFPE blocks as the IHC and/or FISH exploration. RNA was extracted from 10 µm-thick paraffin sections using the RecoverAll ™ Total Nucleic Acid Isolation Kit (Thermo Fisher Scientific, Wilmington, USA) according to the manufacturer's recommendations.
RNA from control samples was extracted using the same kit. Extracted RNA was quantified using the Qubit ® RNA HS Assay kit in combination with a Qubit ® 2.0 fluorometer (Thermo Fisher Scientific) and qualified using the RNA 6000 Nano kit in combination with the BioAnalyzer 2100 ™ (Agilent Technologies, Palo Alto, CA, USA).
Molecular testing by the two NGS-based approaches was performed on the same RNA samples.
Ion AmpliSeq RNA Lung Cancer Research Fusion Panel experiment. The AmpliSeq RNA Lung Cancer Research Fusion Panel is based on an amplicon sequencing approach (Table 4). The panel is composed of 83 pairs of unique primers in a single pool that includes: (i) primers that allow the amplification and detection of 70 known ALK, RET, ROS1, and NTRK1 fusion transcripts; (ii) primers located in the 5′ and 3′ regions of the ALK, RET, ROS1, and NTRK1 mRNAs; (iii) primers that target five housekeeping genes to serve as internal controls of the experiment.
For library preparation, 10 ng of total RNA was used according to the manufacturer's recommendations.
Briefly, RNA was reverse transcribed using the SuperScript® VILO™ cDNA Synthesis Kit (Thermo Fisher Scientific). Target cDNA was amplified using the AmpliSeq primer pool (Fig. 1). Primer sequences were then partially digested using FuPa reagent, and adapters and barcodes were ligated using DNA ligase. Libraries were then subjected to emulsion PCR, chip loading, and sequencing on the Ion Torrent platform.
For samples in which the software did not detect a known fusion transcript, the 3′/5′ imbalance value given by the software was used to determine the presence or absence of novel or uncommon fusion transcripts 25. For each gene present in the panel, a specific threshold has been determined by the supplier to classify samples into three categories: no evidence, uncertain evidence, or strong evidence of the presence of a fusion involving the corresponding gene.
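To make the imbalance logic concrete, the sketch below scores a 3′/5′ expression imbalance from normalized amplicon read counts for a kinase gene such as ALK. The thresholds actually applied are supplier-defined and are not given in the text, so the cutoffs and counts used here are placeholders for illustration only.

```python
# Illustrative 3'/5' imbalance check: in a fusion-positive sample the 3'
# (kinase-domain) amplicons are over-represented relative to the 5' amplicons.
# Thresholds are placeholders; the supplier-defined values are not given here.
import math

def imbalance_call(counts_3p, counts_5p, strong=2.0, uncertain=1.0):
    """Classify fusion evidence from normalized 3' and 5' amplicon counts."""
    mean_3p = sum(counts_3p) / len(counts_3p)
    mean_5p = sum(counts_5p) / len(counts_5p)
    log2_ratio = math.log2((mean_3p + 1) / (mean_5p + 1))  # +1 avoids zero division
    if log2_ratio >= strong:
        return log2_ratio, "strong evidence"
    if log2_ratio >= uncertain:
        return log2_ratio, "uncertain evidence"
    return log2_ratio, "no evidence"

# Example with made-up normalized counts
print(imbalance_call([850, 910, 780], [55, 60, 48]))
```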
Archer® FusionPlex® ALK, RET, ROS1 v2 Kit experiment. The Archer® FusionPlex® ALK, RET, ROS1 v2 kit is based on a targeted enrichment method called anchored multiplex PCR (AMP), derived from the rapid amplification of cDNA ends (RACE) method (Table 4) 50. After reverse transcription, double-stranded cDNA undergoes end repair, adenylation, and ligation with a half-functional universal adapter (Fig. 1). The resulting cDNA is then amplified by two rounds of nested low-cycle PCR using nested gene-specific primers (GSP1 and GSP2) in combination with the first half-functional universal adapter. GSP2 primers are also 5′-tagged with a common sequencing adapter to allow the clonal amplification necessary for the sequencing step.
The panel used is composed of: (i) 29 GSPs that allow the simultaneous detection of gene fusion events involving ALK, RET, and ROS1, as well as ALK- and RET-specific point mutations; (ii) GSPs specific for five housekeeping genes.
For this kit, 200 ng of total RNA was used as input for library generation using the Archer Universal RNA Reagent Kit v2, Archer Molecular Barcode (MBC) Adapters for Ion Torrent, and the Archer FusionPlex ALK, RET, ROS v2 Panel GSPs v2 (ArcherDX, Boulder, CO, USA) according to the manufacturer's instructions. Briefly, RNA was reverse transcribed using random primers, first-strand cDNA was synthesized, and RNA quality was assessed using the Archer PreSeq RNA QC assay (ArcherDX). After second-strand cDNA synthesis, end repair and A-tailing steps were performed, cDNA was purified using Agencourt® AMPure® XP beads (Beckman Coulter), and MBC adapters were ligated. Purified cDNA was first amplified using the GSP1 pool, then purified using Agencourt® AMPure® XP beads, and amplified again using the GSP2 pool. After another purification step, libraries were quantified using D1000 ScreenTapes in combination with a 4200 TapeStation instrument (Agilent Technologies, Santa Clara, CA, USA) and pooled to equimolar concentration. Emulsion PCR, chip loading, and sequencing were performed as described above, and results were analyzed using the Archer Analysis v3.3 software. A sample was considered positive when the fusion breakpoint was supported by at least two unique reads.
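The read-support rule stated above amounts to a simple filter over candidate fusion calls; the sketch below illustrates it with invented records.

```python
# Sketch of the reporting rule above: a candidate fusion is retained when its
# breakpoint is supported by at least two unique reads. Records are invented.
MIN_UNIQUE_READS = 2

candidates = [
    {"fusion": "EML4-ALK (E13:A20)", "unique_reads": 57},
    {"fusion": "GCC2-ALK (G19:A20)", "unique_reads": 11},
    {"fusion": "possible artifact", "unique_reads": 1},
]

for c in candidates:
    if c["unique_reads"] >= MIN_UNIQUE_READS:
        print(f"reported: {c['fusion']} ({c['unique_reads']} unique reads)")
```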
RT-PCR and Sanger sequencing. 200 ng of RNA was reverse transcribed with random hexamers using the SuperScript ® III First-Strand Synthesis System (Thermo Fisher). Primers specific for the detected fusion events were designed (Supplementary Table 3) and direct Sanger sequencing was performed as previously described 51 .
|
v3-fos-license
|
2021-03-29T05:15:02.596Z
|
2021-03-01T00:00:00.000
|
232385163
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/2072-6651/13/3/209/pdf",
"pdf_hash": "160f74f33f69b743d37aa5532c99e896c3441a16",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46591",
"s2fieldsofstudy": [
"Environmental Science",
"Medicine"
],
"sha1": "160f74f33f69b743d37aa5532c99e896c3441a16",
"year": 2021
}
|
pes2o/s2orc
|
Saccharomyces cerevisiae Cell Wall-Based Adsorbent Reduces Aflatoxin B1 Absorption in Rats
Mycotoxins are naturally occurring toxins that can affect livestock health and performance upon consumption of contaminated feedstuffs. To mitigate the negative effects of mycotoxins, sequestering agents, adsorbents, or binders can be added to feed to interact with the toxins, aiding their passage through the gastrointestinal (GI) tract and reducing their bioavailability. The parietal cell wall components of Saccharomyces cerevisiae have been found to interact in vitro with mycotoxins, such as, but not limited to, aflatoxin B1 (AFB1), and to improve animal performance when added to contaminated diets in vivo. The present study aimed to examine the pharmacokinetics of the absorption of radiolabeled AFB1 in rats in the presence of a yeast cell wall-based adsorbent (YCW) compared with that in the presence of the clay-based binder hydrated sodium calcium aluminosilicate (HSCAS). The results of the initial pharmacokinetic analysis showed that the absorption process across the GI tract was relatively slow, occurring over a matter of hours rather than minutes. The inclusion of mycotoxin binders increased the recovery of radiolabeled AFB1 in the small intestine, cecum, and colon at 5 and 10 h, revealing that they prevented AFB1 absorption compared with a control diet. Additionally, the accumulation of radiolabeled AFB1 was greater in the blood plasma, kidney, and liver of animals fed the control diet, again showing the ability of the binders to reduce the assimilation of AFB1 into the body. The results showed the potential of YCW in reducing the absorption of AFB1 in vivo, and in protecting against the damaging effects of AFB1 contamination.
Introduction
Mycotoxins are major natural contaminants present in food and feed materials, such as grains or forages [1,2]. The spores of mycotoxin-producing fungi are ubiquitous in the environment, hence, they inevitably contaminate grains and other plant-based feed materials [3]. Under high humidity, moderate temperature, and aerobic conditions, spores can germinate and grow. Under specific biotic and abiotic stress conditions, some can release mycotoxins as secondary metabolites directly to plants or stored ingredients [4]. Moreover, environmental challenges, such as meteorological events, the plant health using intestinal tissues [24,25]. Multiple in vivo studies of various animal species have shown the mitigation capabilities of binders following the administration of synthetic or natural mycotoxins via diet, whereas the binder's efficacy has been appraised based on an observed improvement in animal performance [26][27][28]. Other animal studies have primarily focused on indirect, often non-specific/shared biomarkers of exposure as an outcome in the evaluation of mitigation strategies, such as measuring changes in intestinal health using histopathological assessment [29], modification of blood chemistry [30][31][32][33], changes in immunological titers [34][35][36][37][38], changes in microbiota [32,39], genomic and antioxidant markers [40][41][42][43], and changes in organ morphology [44,45]. However, only a few in vivo studies have measured toxin partitioning in the animal body and revealed the pharmacokinetics of toxin accumulation in different tissues and digesta [46][47][48].
Therefore, in the present study, we aimed to assess the efficiency of YCW as a binder for AFB1 compared with that of hydrated sodium calcium aluminosilicate (HSCAS), which has been previously shown to have high affinity specifically for AFB1 [18,49]. For this purpose, after evaluating the characteristics of both YCW and HSCAS adsorbents toward AFB1 in vitro, we assessed the effect of YCW on AFB1 absorption in vivo in a rat model. Prior to the main animal study, a preliminary study was conducted to reveal the kinetics of AFB1 absorption with a specific diet and to optimize the sampling time points that would be further used. In the main study, the distribution of radiolabeled AFB1 in digesta (the stomach, small intestine, cecum, and colon) and systemic tissues (the plasma, liver, and kidney) was measured in the presence and absence of a commercial source of YCW, Mycosorb ® .
In Vitro Preliminary Study of the Adsorption Capacity of the Tested Adsorbents toward AFB1
The percentage of AFB1 bound on an individual basis at each tested concentration (Table 1) ranged from 81% to 94% for YCW and was 100% for HSCAS. The average adsorbed percentage for YCW was approximately 89% of the AFB1 present in the medium when tested at pH 3.0, which differed significantly from that for HSCAS (p < 0.0001), with 100% AFB1 adsorption. The coefficients of variation were <5% for YCW and 0.01% for HSCAS. Regression analyses were performed on the data for the three batches of YCW and the single batch of HSCAS using three models recommended by FEFANA for testing the adsorption properties of adsorbents [50], here applied to sub-ppm levels of AFB1 ranging from 0.05 to 1.00 ng/mL (Figure 1). All models fitted the data points with a regression coefficient above 0.9760. Hill's model with n sites was the best-fitting model for all the YCW materials tested (0.9853); however, the models were difficult to differentiate over the tested concentration range. Using the Freundlich equation, we determined that the average adsorption capacity K F values of YCW and HSCAS were 3.06 and 1.03, respectively. Using Hill's model, in which the cooperativity of the interaction can be evaluated, there was little difference between adsorbents, as the n value averaged 0.94 ± 0.25 across sorbents, indicating near-linear behavior of the model over the tested concentration bracket.
Table 1. Measure of the adsorption rate (%) of aflatoxin B1 (AFB1) at each mycotoxin concentration point, evaluated in three replicates, and evaluation of the average individual adsorption rate (%) of three batches of yeast cell wall-based adsorbent (YCW) and one batch of hydrated sodium calcium aluminosilicate (HSCAS).
Figure 1. Concentrations of bound vs. free aflatoxin B1 (AFB1) evaluated at pH 3.0 using three independent batches of yeast cell wall-based adsorbent (YCW) or one hydrated sodium calcium aluminosilicate (HSCAS). (a) The concentration of free AFB1 at equilibrium is expressed as µg/mL, and the corresponding bound concentration of AFB1 is expressed as mg/g for each adsorbent used: YCW (open blue squares, circles, triangles) and HSCAS (red triangles) at pH 3.0; (b, subfigure window) the concentration of the initially added AFB1 is expressed as µg/mL, and the corresponding bound concentration of AFB1 at equilibrium is expressed as mg/g for each adsorbent used: YCW (open blue squares, circles, triangles) and HSCAS (red triangles) at pH 3.0. All replicate values (three replicates per concentration tested for each individual product) are displayed in the graphic. Adsorption curves were fitted using the Freundlich equation.
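For readers who wish to reproduce this type of isotherm analysis, the following is a minimal sketch assuming the usual forms of the Freundlich and Hill (n sites) models named above. The data points are invented placeholders rather than the study's measurements, and SciPy's curve_fit stands in for the Datafit software used by the authors.

```python
# Minimal isotherm-fitting sketch:
#   Freundlich  q = Kf * C**(1/nf)
#   Hill        q = qmax * C**n / (Kd**n + C**n)
# where C is the free AFB1 concentration at equilibrium and q the bound amount.
# Data points are placeholders, not the measurements reported in this study.
import numpy as np
from scipy.optimize import curve_fit

def freundlich(c, kf, nf):
    return kf * c ** (1.0 / nf)

def hill(c, qmax, kd, n):
    return qmax * c ** n / (kd ** n + c ** n)

c_free = np.array([0.01, 0.03, 0.08, 0.20, 0.45])   # free AFB1 at equilibrium
q_bound = np.array([0.06, 0.17, 0.41, 0.86, 1.42])  # bound AFB1 (illustrative)

(kf, nf), _ = curve_fit(freundlich, c_free, q_bound, p0=[1.0, 1.0],
                        bounds=(0, np.inf))
(qmax, kd, n), _ = curve_fit(hill, c_free, q_bound, p0=[2.0, 1.0, 1.0],
                             bounds=(0, np.inf), maxfev=10000)

print(f"Freundlich: Kf = {kf:.2f}, 1/nf = {1.0 / nf:.2f}")
print(f"Hill: qmax = {qmax:.2f}, Kd = {kd:.2f}, n = {n:.2f}")
```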
In Vivo Experiment on the Total Recovery of AFB1 in Rat Fed AFB1-Contaminated Diet
The preliminary study results showed that 3 H-AFB1 levels increased after 2-3 h in all samples, except for those in the duodenal/jejunal samples and, to a lesser extent, in the ileal digesta, where the levels were constant or reached a plateau. In most of the tissues analyzed, the peak absorption level was not reached even at the final time point of 7 h. Based on the preliminary study results, the time points of 5 and 10 h were selected for the final study. Before 5 h, the accumulation of AFB1 in systemic tissues (the plasma, liver, and kidney) was expected to be low. Therefore, the probability of achieving significant effects was estimated to be poor. A 10-h collection point was chosen to obtain information on the distribution of AFB1 between digesta and tissues at a later time point. A considerable portion of the tested toxin was anticipated to be excreted via urine and feces. As a separate analysis of duodenal/jejunal and ileal digesta samples did not yield any additional information, they were combined in the final study as 'small intestine'. We decided to enlarge the study's scope to encompass the stomach and cecum samples to expand the recovery profile of AFB1.
In the main study, at 5 and 10 h post-feeding, the rats were sacrificed, and the radioactivity measured in the different tissues sampled. The distribution of the radiolabel signal in the digesta of the various sections of the gastrointestinal tract, plasma, liver, and kidney showed an average recovery at 5 and 10 h of 72% and 55%, respectively, of the total 3 H-AFB1 administered via diet ( Figure 2).
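As a rough sketch of how such recovery percentages are obtained, the calculation below divides the summed scintillation counts by the administered activity of 0.63 µCi per rat (1 µCi = 2.22 × 10⁶ DPM). The per-compartment DPM values are invented for illustration and are not the study's data.

```python
# Percent-recovery sketch: recovered DPM summed over compartments, divided by
# the administered 3H activity (0.63 uCi per rat; 1 uCi = 2.22e6 DPM).
# The per-compartment values below are invented for illustration.
ADMINISTERED_DPM = 0.63 * 2.22e6

compartment_dpm = {
    "stomach": 2.6e5, "small_intestine": 1.4e5, "cecum": 1.7e5, "colon": 0.5e5,
    "plasma": 1.2e5, "liver": 2.5e5, "kidney": 0.1e5,
}

total_recovered = sum(compartment_dpm.values())
print(f"Total recovery: {100 * total_recovered / ADMINISTERED_DPM:.1f}% of the dose")

for tissue, dpm in compartment_dpm.items():
    share = 100 * dpm / total_recovered
    print(f"  {tissue:16s} {share:5.1f}% of the recovered label")
```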
Figure 2. Total recovery of the 3H-label from 3H-aflatoxin B1 (3H-AFB1), expressed as the percentage of the initial dose administered, in all samples analyzed after the oral administration of AFB1-contaminated diet to rats in the presence or absence of yeast cell wall-based adsorbent (YCW) or hydrated sodium calcium aluminosilicate (HSCAS) at different concentrations. All replicate (open circles/squares) and average values (cross) are displayed in the graphic: (1) box and whisker chart, with median (horizontal line), average (cross), and quartile calculations (box); and (2) the regression curve of the average values, showing the magnitude of the recovery. Bars (in black) in boxes correspond to the standard errors of the mean of the replicate rats. The study was performed initially on n = 64 rats, or 16 rats per treatment. At 5 h (in blue), n = 9 rats for the 10 g/kg YCW treatment and n = 8 for the other treatments were collected for analysis; at 10 h (in red), the remaining rats (4 rats were excluded due to morbidity/mortality issues before the start of the main experimental study period) were collected for analysis, n = 6 in the control group and n = 7 in each of the adsorbent-treated groups.
Evaluation of the Absorption Kinetics of AFB1 in Rat Fed AFB1-Contaminated Diet
The kinetics of AFB1 absorption was assessed by measuring toxin distribution in selected tissues and intestinal digesta. As shown in Figure 3, 3H-AFB1 was found in high abundance in the stomach (~26%) and small intestine (~13%) at 5 h post-feeding but was observed in low abundance (~4%) at 10 h post-feeding. In contrast, the amount of 3H-AFB1 in the cecum and colon increased at 10 h, even though significant absorption into tissues had occurred (Figure 3).
This finding reflected the overall evolution of the 3H-AFB1 digesta transit from the proximal to the distal compartments of the gastrointestinal tract. At the 5 h time point, 35% of the recovered label was found in the systemic tissues comprising the plasma, liver, and kidney, whereas the proportion increased to 55% in the same tissues at 10 h after AFB1 feeding. The results indicated that the bulk of the aflatoxin present was absorbed in the gastrointestinal tract.
Figure 3. Recovery of 3H-AFB1 in gastrointestinal digesta and systemic tissues of control rats at 5 and 10 h post-feeding. Bars (in black) in boxes correspond to standard errors of the mean of the replicate rats. The control treatment initially comprised 16 rats. The entirety of each gastrointestinal compartment was collected from n = 8 rats at 5 h and from the remaining n = 6 rats at 10 h (two rats were excluded from this analysis due to morbidity/mortality issues before the start of the main experimental study period).
Effect of Mycotoxin Adsorbents on AFB1 Retention in the Gastrointestinal Tract
Evaluation of the binder strategy's effect involved comparing the adsorbents with a control diet supplemented only with AFB1. Figure 4a-d show the sequential evolution of the recovery rate of 3 H-AFB1 in the digesta collected from the stomach, small intestine, cecum, and colon. At 5 h, more than 20% of the recovered radiolabeled AFB1 was found in the stomach (Figure 4a). No differences in recovery were observed between the respective dietary treatments, suggesting that the stomach was not a significant place of AFB1 absorption. Hence, any portion of toxin present in this compartment would remain in the digesta. At the 10 h timepoint, the stomach compartment was empty, and no detectable levels of 3 H-AFB1 were found in the samples from any treatment.
Figure 4. The effect of mycotoxin binders on the residual level of the 3H-label from 3H-aflatoxin B1 (3H-AFB1) in digesta at 5 (in blue) and 10 h (in red) after toxin administration, with or without the addition of yeast cell wall-based adsorbent (YCW) at two concentrations or hydrated sodium calcium aluminosilicate (HSCAS). Panels (a-e) show the percentage of recovered 3H-AFB1 found in the (a) stomach, (b) small intestine, (c) cecum, (d) colon, and (e) total digesta. Bars in the columns correspond to standard errors of the mean of the replicate rats. Significant differences between the control and amended feeds are indicated by asterisks as follows: * 0.01 ≤ p < 0.05; ** 0.001 ≤ p < 0.01; *** 0.0001 ≤ p < 0.001; **** p < 0.0001 (Dunnett's post-hoc test). In addition, pairwise comparisons were tested by Tukey's post-hoc test; different letters indicate significant differences between treatments within a sampling time point. The study was performed initially on n = 64 rats, or 16 rats per treatment. At 5 h, n = 9 rats for the 10 g/kg YCW treatment and n = 8 for the other treatments were collected for analysis; at 10 h, the remaining rats (four rats were excluded due to morbidity/mortality issues before the start of the main experimental study period) were collected for analysis, n = 6 in the control group and n = 7 in each of the adsorbent-treated groups. The entirety of each digestive compartment and systemic tissue was collected for each rat.
In the small intestine, the apparent recovery rate of 3H-AFB1 tended to increase numerically with the addition of an adsorbent at 5 h, rising from ~12% in the control to 15% in rats fed 2 g/kg of YCW and to ~20% in rats fed 10 g/kg of YCW or HSCAS (Figure 4b). A similar trend was observed at 10 h post-feeding, but the level of AFB1 recovered fluctuated between ~3% and ~8% for the control and 10 g/kg YCW groups, respectively. The effects were not significant at the risk levels used in the Dunnett's and Tukey's post-hoc tests. However, the multiple linear regression (MLR) model showed a significant dose-dependent effect of YCW at both time points (Tables 2 and 3).
Table 2. Significance of the effect and percentage of changes observed for two mycotoxin adsorbents, yeast cell wall-based adsorbent (YCW) and hydrated sodium calcium aluminosilicate (HSCAS), on the distribution of 3H-labeled aflatoxin B1 (3H-AFB1) in the gastrointestinal digesta and in the tested organs and biological fluids of rats at 5 h post-feeding, as evaluated using three post-hoc statistical tests.
The radioactive recoveries in the cecal digesta showed a similar effect to that observed in the small intestine, with higher recoveries obtained with the diets containing the mycotoxin binders than with the control diet: at 5 h post-feeding, recoveries increased from ~16% in the control group up to ~28% in the 10 g/kg YCW group, and from ~21% in the control group up to 39% in the HSCAS group. However, the results showed higher significance than those in the small intestine (Figure 4c, Tables 2 and 3). The effect of HSCAS and YCW supplementation at 10 g/kg was almost identical in both the small intestine and the cecum at the 5-h time point. In contrast to the small intestine, and as described previously, the toxin concentration in the cecum was higher at the 10-h time point. This indicated that at 10 h post-feeding the small intestine had started to empty, whereas the digesta content of the cecum and colon remained high. In the cecum, the increase in AFB1 content was more significant with YCW at 5 h (p < 0.01) than with HSCAS (p < 0.05), suggesting a potentially higher adsorption affinity for YCW. At 10 h, the increase in AFB1 content was more significant with HSCAS treatment (p < 0.001) than with YCW treatment (p < 0.01), suggesting a potentially higher adsorption capacity for HSCAS.
In the colon, toxin retention tended to increase with adsorbent use, but this increase was not significant. HSCAS at 10 h showed a significant increase in toxin retention compared with the control, but YCW did not (Figure 4d). There was no significant difference in toxin retention at 10 h post-feeding in the colon between the YCW and control groups.
The total levels of recovered 3H-AFB1 in the different digesta of the gastrointestinal tract highlighted a dose-dependent toxin-binding effect of YCW and HSCAS. Treatment with the binders at 10 g/kg led to a significant increase in AFB1 detected in the total digesta (p < 0.001). The overall effect of both products tested was highly significant at both time points (Figure 4e, Tables 2 and 3).
Table 3. Significance of the effect and percentage of changes observed for two mycotoxin adsorbents, yeast cell wall-based adsorbent (YCW) and hydrated sodium calcium aluminosilicate (HSCAS), on the distribution of 3H-labeled aflatoxin B1 (3H-AFB1) in the gastrointestinal digesta and the tested organs and biological fluids of rats at 10 h post-feeding, as evaluated using three post-hoc statistical tests.
Effect of Mycotoxin Binders on AFB1 Absorption into Animal Tissues
In the present study, AFB1 absorption was analyzed in a fixed volume of blood and then scaled to estimate the aflatoxin absorption in the entire blood volume of the rats [51]. Analysis of blood plasma samples showed that the diets containing a mycotoxin binder significantly reduced the concentration of recovered 3H-AFB1 in a dose-dependent manner (Figure 5a). At the 5-h time point, the diets containing YCW and HSCAS at 10 g/kg reduced the toxin concentration by ~50% (p < 0.001) and ~65% (p < 0.0001), respectively, compared with the control diet. These two treatments did not differ significantly from each other but differed from the control. At the 10-h time point, 30% of the labeled aflatoxin was found in the plasma of rats fed the control diet. The diets containing 2.0 g/kg (p < 0.01) and 10 g/kg (p < 0.0001) YCW, as well as the diet containing HSCAS (p < 0.0001), showed respective reductions in plasma AFB1 of ~20%, ~40%, and ~65%. At this time point, the responses of all four feeds differed significantly from each other, and the MLR model (Table 3) for the YCW dose response was also highly significant (p < 0.0001).
Figure 5. The effect of mycotoxin binders on the residual level of the 3H-label from 3H-aflatoxin B1 (3H-AFB1) in digesta at 5 (in blue) and 10 h (in red) after toxin administration, with or without the addition of yeast cell wall-based adsorbent (YCW) at two concentrations or hydrated sodium calcium aluminosilicate (HSCAS). Panels (a-d) show the percentage of recovered radioactivity found in the (a) plasma, (b) liver, (c) kidney, and (d) total systemic tissues. Bars in columns correspond to standard errors of the mean of the replicate rats. Significant differences between the control and amended feeds are indicated by asterisks as follows: * 0.01 ≤ p < 0.05; ** 0.001 ≤ p < 0.01; *** 0.0001 ≤ p < 0.001; **** p < 0.0001 (Dunnett's post-hoc test). In addition, pairwise comparisons were tested by Tukey's post-hoc test; different letters indicate significant differences between treatments within a sampling time point. The study was performed initially on n = 64 rats, or 16 rats per treatment. At 5 h, n = 9 rats for the 10 g/kg YCW treatment and n = 8 for the other treatments were collected for analysis; at 10 h, the remaining rats (four rats were excluded due to morbidity/mortality issues before the start of the main experimental study period) were collected for analysis, n = 6 in the control group and n = 7 in each of the adsorbent-treated groups. The entirety of each digestive compartment and systemic tissue was collected for each rat.
YCW and HSCAS at 10 g/kg significantly reduced the toxin concentration in the liver (p < 0.0001), by ~40% and ~60%, respectively, at both time points (Figure 5b, Tables 2 and 3). The toxin concentration in the 2.0 g/kg YCW group was not significantly reduced compared with that in the control group.
At the 5- and 10-h time points, only 0.7% and 1% of 3H-AFB1, respectively, were found in the kidneys of the control rats. Even though the total radioactivity in the kidneys represented only a small proportion of the total radioactivity, the effects of the two tested products were similar to those observed in the plasma and liver, with a decrease in the accumulated levels. Again, HSCAS (p < 0.001) and YCW (p < 0.05) significantly reduced the level of radiolabeled aflatoxin at both time points (Figure 5c). However, when administered at 10 g/kg, YCW and HSCAS did not differ significantly from one another at either post-feeding time.
Overall, both adsorbents significantly reduced the total systemic accumulation of AFB1, from ~47% in the control down to ~20% and ~15% after 5 h of exposure, and from 55% down to ~30% and ~20% after 10 h of exposure, following dietary treatment with YCW and HSCAS, respectively (Figure 5d).
When both digesta and systemic accumulation were evaluated in combination at the 5-h time point, ~60% and ~40% of the labeled aflatoxin were found in the intestinal digesta and the systemic samples, respectively, of the animals fed the diet containing no mycotoxin binder (Figure 6). The two mycotoxin adsorbents significantly changed this distribution, with 80% of AFB1 recovered in digesta and less than 20% in the tissue samples when HSCAS was included in the diet. Similarly, YCW at 10 g/kg reduced the proportion of absorbed AFB1 from 40% to 20%. At 10 h post-feeding, as much as 55% of AFB1 was recovered in the tissues of animals fed the control diet. HSCAS also reduced the level of absorbed aflatoxin to 20% at the 10-h time point. YCW also significantly reduced toxin absorption, by 40%, thereby exerting a protective effect.
Figure 6. Distribution of recovered 3H-AFB1 between intestinal digesta and systemic samples at 5 and 10 h post-feeding for each dietary treatment. Error bars indicate standard errors of the mean. This study was performed initially on n = 64 rats, or 16 rats per treatment. At 5 h, n = 9 rats for the 10 g/kg YCW treatment and n = 8 for the other treatments were collected for analysis; at 10 h, the remaining rats (four rats were excluded due to morbidity/mortality issues before the start of the main experimental study period) were collected for analysis, n = 6 in the control group and n = 7 in each of the adsorbent-treated groups. The entirety of each digestive compartment and systemic tissue was collected for each rat.
When evaluating the effect of the 0, 2, and 10 g/kg YCW dose response (Figure 7), we observed a linear increase in the 3H-AFB1 label in the digesta content and, conversely, a decrease of the label in the systemic tissues investigated. This representation confirmed the statistical results obtained with the MLR model, establishing a significant dose-dependent effect of YCW (Tables 2 and 3).
Figure 7. Dose-response evaluation measured from the disintegrations per minute of the 3H-label from 3H-aflatoxin B1 (3H-AFB1), normalized per gram of digesta or tissue collected (×1000 DPM/g) or per milliliter of plasma (×1000 DPM/mL), in rats at (a-1, a-2) 5 h (blue) and (b-1, b-2) 10 h (red) after toxin administration with 0, 2, and 10 g/kg doses of yeast cell wall-based adsorbent (YCW). All data points measured are reported on: (1) a box and whisker chart, with median (horizontal line), average (cross), and quartile calculations (box); and (2) the regression curve on the average values, evaluating the direction and magnitude of the effect relative to the YCW dose. This study was performed initially on 16 rats per treatment. At 5 h, n = 9 rats for the 10 g/kg YCW treatment and n = 8 for the other treatments were collected for analysis; at 10 h, the remaining rats (three rats were excluded from this analysis due to morbidity/mortality issues before the start of the main experimental study period) were collected for analysis, n = 6 in the control group and n = 7 in each of the YCW-treated groups. The entirety of each digestive compartment and systemic tissue was collected for each rat. All replicate (open circles/squares) and average values (cross) are displayed in the graphic.
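The dose-dependent trend summarized in Figure 7 and tested with the MLR model can be illustrated with a simple one-predictor least-squares regression of the normalized label against the YCW inclusion rate; the response values below are invented for illustration, not the study's measurements.

```python
# Dose-response regression sketch behind Figure 7: regress the normalized
# 3H-AFB1 label (x1000 DPM/g of digesta) on the YCW inclusion rate (0, 2, 10 g/kg).
# The response values are invented for illustration only.
import numpy as np

dose = np.array([0, 0, 0, 2, 2, 2, 10, 10, 10], dtype=float)          # g/kg YCW
dpm_per_g = np.array([42, 39, 45, 47, 51, 49, 66, 70, 63], dtype=float)

X = np.column_stack([np.ones_like(dose), dose])          # intercept + dose
(intercept, slope), *_ = np.linalg.lstsq(X, dpm_per_g, rcond=None)

pred = intercept + slope * dose
r2 = 1 - np.sum((dpm_per_g - pred) ** 2) / np.sum((dpm_per_g - dpm_per_g.mean()) ** 2)

print(f"slope = {slope:.2f} (x1000 DPM/g per g/kg YCW), intercept = {intercept:.1f}")
print(f"R^2 = {r2:.3f}; a positive slope indicates more label retained in digesta")
```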
Discussion
This study's primary aim was to investigate the digestive and systemic distribution of AFB1 in the rat, in order to elucidate the bioavailability and the dispersal pattern of this mycotoxin. Based on a literature search, this is the first report describing the pharmacokinetics of AFB1 in different digestive compartments and organs. Several advantages were apparent through the application of tritium-labelled AFB1 in this study. It allowed the overall aflatoxin distribution (including AFB1 and any metabolites thereof) to be mapped without the need to develop complex analytical methodologies or to account for subsequent recovery, separation, and detection variables. However, using this strategy, limitations arose from our inability to discriminate those species and to define the different AFB1 metabolite profiles within the animal compartments studied herein, and how they could be influenced by the other dietary treatments evaluated.
In this study, we also assessed the efficiency of YCW as a binder for AFB1 compared to that of HSCAS. The in vitro evaluation of the adsorption properties of three batches of YCW and HSCAS, tested at pH 3.0 and 37 °C for 90 min, highlighted a very high interaction affinity, above 89% for YCW and 100% for HSCAS, at the tested concentrations. This in vitro experiment differed from previous experimental methods, as it focused on field levels of AFB1 concentrations in the sub-parts-per-million range. We confirmed the capacity of both materials to interact with AFB1 effectively, and that the affinity of interaction in the domain of definition of the tested concentrations was almost linear, as defined by the slope of the curve using the Freundlich model, the model previously identified as most suited for comparing adsorbents of different natures [24,25]. This model generally describes adsorption events occurring on heterogeneous surfaces, making it more appropriate for a study of both YCW and HSCAS than the Langmuir equation, which is intended for gas-solid interaction [52]. Hill's equation with n sites has been defined as a suitable equation to characterize cooperative phenomena, as it considers the dynamic changes influencing the number of binding sites on an adsorbent, which is particularly suitable for the study of YCW, as shown previously for different types of mycotoxins [25,53]. In the present study, little difference was observed among the various models owing to the linear behavior of the relationship between free and bound AFB1 at equilibrium for the tested concentrations.
In the current animal experiment, the observation that a smaller proportion of 3 H-AFB1 was recovered after 10 h suggested a significant portion of the label was lost. This deficit may be due to the biotransformation of the AFB1 molecule through microsomal activities, and the consequent loss of tritium, which could occur during the epoxidation of AFB1 into 8,9-epoxy-AFB1 (representing a loss of two hydrogens out of 14 per one AFB1 molecule, or a 15% tritium deficit) or biodetoxification following demethylation of AFB1 into AFP1 (three hydrogens lost or a 21% tritium deficit). Unfortunately, the rate of AFB1 biotransformation into those metabolites has not been adequately described in the literature, but it is known to vary according to the microsomal CYP polymorphism, with dominant bioactivation from CYP1A2 found in rodents and studied in human liver microsomes or genetically-modified yeast models [54,55]. The decrease in the recovery of radiolabeled AFB1 could be due to its excretion from the body via the urine or feces or the further partitioning into other tissues, e.g., muscle tissues, which were not analyzed in this study. The total radioactivity recovery rate was not affected by the rats' diets, treatment type, or adsorbent concentration, as shown in Figure 2.
The kinetics of AFB1 absorption was assessed by measuring toxin distribution in selected tissues and intestinal digesta. The results showed a logical evolution of the 3H-AFB1 digesta transit from the proximal to the distal parts of the gastrointestinal tract. Surprisingly, the absorption process was relatively slow compared with protein digestion and absorption, which generally occur within 2 h in the gut's proximal region. A previous study has shown that, when perfused, AFB1 tends to be absorbed in the duodenum. However, when investigated in vitro, on an everted intestine, jejunal absorption was slightly better than duodenal absorption, pointing to differences in the transfer of AFB1 from epithelial cells to the blood circulation, which depends on the intestinal tract section and on the animal's growth and reproductive stage [56]. Previous studies on aflatoxin absorption toxicokinetics showed that aflatoxin uptake from the proximal intestine was very rapid. Following intraperitoneal application at a high dose of 20 mg/kg, AFB1 showed peak absorption in blood after 15 min, which was delayed to 30 min after oral ingestion in pregnant mice [57]. However, recirculation through bile was also very rapid, with an absorption rate of 5.0 µg/min and elimination of 3.0 µg/min, which could explain the slow transit progression observed in our study.
Mycotoxin binders are intended to protect animals against the toxic effects of mycotoxins by adsorbing the toxin molecules. Bound toxins have reduced intestinal absorption, provided that the interaction between the binder and the toxin is sufficiently strong to remain unaffected by the physiological changes encountered in the digestive tract. If the mitigation is efficient, mycotoxins are retained in the digesta and eventually removed from the body when excreted via feces [58]. In the present study, the animal diet contained YCW or HSCAS as an adsorbent and AFB1 as a toxin; YCW was used at two different concentrations, namely 2.0 and 10 g/kg of feed, whereas HSCAS was administered at a single dose of 10 g/kg of feed. We evaluated the effect of the two mycotoxin adsorbents in retaining AFB1 in the gastrointestinal tract. Our results revealed that the two adsorbents exhibited a highly significant propensity for maintaining higher toxin concentrations in the digestive compartment at both tested time points. This finding confirmed the ability of the adsorbents to limit the intestinal bioavailability of AFB1, leading to a decrease in the absorption of 3H-AFB1 through the intestine, which further confirmed the previously studied direct [25,46] and indirect mitigation effects observed in various animal species [31,32,44].
When mycotoxins are absorbed in livestock, the first systemic biological compartment in which the toxin can be quantified is the blood [59], which therefore becomes an interesting biological marker of AFB1 exposure in an animal organism. In our study, we were able to highlight the capacity of both binders to significantly decrease the plasma concentration of AFB1 in rats subjected to dietary AFB1 exposure (Figure 5a). We can draw a parallel between this finding and recent findings obtained employing a bicameral Ussing chamber system in an ex vivo setup, in which a reduction in the transfer of AFB1 through rat intestinal explants led to a decrease in the concentration of AFB1 in the serosal compartment following the use of both YCW and HSCAS [25]. Interestingly, when comparing the 5- and 10-h post-feeding time points of the present study, further accumulation of AFB1 could be observed over time, which was effectively prevented by both YCW and HSCAS. This finding also confirmed some of the results previously obtained in other animal species [48].
The liver is a vital organ when evaluating mycotoxicosis as it accumulates and metabolizes toxic compounds [60]. As such, it was expected that the radiolabeled aflatoxin would be detected at an appreciable concentration in the liver. Analysis of the accumulation of 3 H-AFB1 in the liver yielded similar results to those observed in blood plasma (Figure 5b).
In our study, only a low proportion of the total radiolabeled AFB1 was found in the kidney. As expected, AFB1 only marginally accumulates in the kidney. Still, it has been implicated in indirect renal effects stemming from the activation of oxidative stress through modulation of L-proline levels [61], or from an increase in the urinary excretion of sodium, potassium, and gamma-glutamyl transferase and a decrease in glomerular filtration, glucose reabsorption, or p-aminohippurate transport [62].
As summarized in Figure 6, the two tested materials significantly decreased the absorption of 3H-AFB1 from the intestinal digesta to the systemic tissues in rats, based on the recovered quantities. The total amount of AFB1 in digesta and systemic samples, including plasma, liver, and kidney samples, showed a gradual decrease in the transfer of AFB1 via intestinal absorption with dietary adsorbent inclusion. In contrast, an increase in the recovery of AFB1 was observed in the digesta in the presence of dietary adsorbent, demonstrating effective compartmentalization of AFB1 and the concomitant decrease in its bioavailability, and ultimately its sequestration. The results observed were in line with those described by Firmin and coworkers [46], who analyzed radiolabeled AFB1 activity in feces, urine, and blood plasma following the oral administration of AFB1 to rats fed diets containing or not containing YCW at two different doses. The results of that study showed that the proportion of radiolabeled AFB1 in feces increased significantly, by 55%, compared with that in the control group, with a concomitant decrease in urine, suggesting that AFB1 intestinal absorption was significantly reduced in rats fed a diet containing YCW. Interestingly, no dose-response relationship was observed in the elimination of AFB1 in the test groups, potentially because of a lack of response in the animals due to the low levels of AFB1 tested.
Conclusions
In this study, we evaluated the effects of an organic (YCW) and an inorganic (HSCAS) adsorbent added to rats' diets. We observed that, at 5 and 10 h post-feeding, there was a significant effect on the pharmacokinetics of AFB1. The results accumulated throughout the study showed a consistent distribution of AFB1 across all digesta and tissue samples analyzed according to treatment, with a significant decrease following treatment with YCW and HSCAS at 10 g/kg of feed and, to a lesser extent, following YCW treatment at 2.0 g/kg of feed. Taken together, the previous and current findings presented herein revealed the ability of YCW, to the same extent as HSCAS, to efficiently adsorb AFB1 in vivo, thus decreasing the amount of toxin transferred across the digestive barrier into the systemic circulation of animals and contributing to the mitigation of the harmful effects of exposure to the AFB1 present in feed. The present study contributes to our understanding of the pharmacokinetics of AFB1 in an animal model; follow-up studies should therefore focus on more deterministic approaches, such as adapted analytical methodologies (i.e., targeted metabolomics), to further elucidate the AFB1 metabolite profiles in the different animal compartments and to measure the animals' inherent metabolic efficiency at detoxifying AFB1, in the presence or absence of a dietary mitigation aid.
In Vitro Main Study Assessing AFB1 Sequestration
A stock solution of 1.0 mg/mL AFB1 (Sigma Chemical Co., St. Louis, MO, USA) was prepared in acetonitrile. The exact concentration of this stock solution was determined by spectrophotometry (λmax = 362 nm; ε = 21,865). The adsorption efficacy of the two tested binders was determined in vitro with a focus on sub-parts-per-million levels of AFB1; five concentration points were evaluated: 0.05, 0.10, 0.25, 0.50, and 1.00 ng/mL. Three production batches of the YCW (Mycosorb®; Alltech Inc., Nicholasville, KY, USA) and one batch of HSCAS (bentonite T-150 containing >70% smectite (dioctahedral montmorillonite), Tolsa, Madrid, Spain) were tested at an adsorbent concentration of 1 mg/mL.
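As an illustration of the spectrophotometric check mentioned above, the Beer-Lambert calculation below converts an absorbance reading into a concentration. The 1-cm path length and the AFB1 molar mass of 312.27 g/mol are assumptions not stated in the text, and the absorbance value is invented.

```python
# Beer-Lambert sketch: c (mol/L) = A / (epsilon * path). Path length and the
# AFB1 molar mass are assumed values; the absorbance reading is invented.
EPSILON = 21_865      # L mol^-1 cm^-1 at 362 nm (from the text)
PATH_CM = 1.0         # assumed cuvette path length
MW_AFB1 = 312.27      # g/mol, assumed molar mass of aflatoxin B1

absorbance = 0.070    # example reading from a diluted aliquot of the stock
molarity = absorbance / (EPSILON * PATH_CM)       # mol/L
grams_per_liter = molarity * MW_AFB1              # g/L
ug_per_ml = grams_per_liter * 1000                # 1 g/L = 1000 ug/mL
print(f"{molarity * 1e6:.2f} uM  ->  {ug_per_ml:.2f} ug/mL")
```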
Test concentrations for the main in vitro study were prepared by diluting the stock solution in 10 mM citrate buffer adjusted to pH 3.0 to match the physiological conditions of the proximal area of the digestive tract. Analysis was performed using a Waters Corp. (Milford, MA, USA) system comprising an Acquity H-Class ultra-performance liquid chromatograph fitted with a reverse-phase C18 BEH analytical column (100 mm × 2.1 mm, 1.7 µm particle size) maintained at 30 °C and equipped with a fluorescence detector set at excitation/emission wavelengths of 360/440 nm. The mobile phase consisted of ultrapure analytical-grade water and acetonitrile, both acidified with 0.1% formic acid (MS grade; Sigma-Aldrich, St. Louis, MO, USA), run in a gradient from 10% to 90% water over a period of 8 min with re-equilibration to the initial conditions for 1 min, at a flow rate of 0.6 mL/min and an injection volume of 5 µL. A 250-mL amber, silanized glass reaction bottle was used to allow precise weighing and correct mixing of the tested materials. After suspension of 100 mg of adsorbent in a volume of 100 mL (1 mg/mL) with each respective tested concentration of AFB1, samples were incubated for 90 min under orbital agitation (150 rpm) in an incubator maintained at 37 °C (I-series; New Brunswick™, Edison, NJ, USA).
In Vitro Kinetics of Adsorption
The responses were measured by integrating the chromatographic peak area and calculating the corresponding concentration using a linear calibration curve forced through zero over the range of tested AFB1 concentrations. To measure the adsorption by the tested products, the amount of toxin bound was plotted against the amount of toxin initially added to the reaction media. The individual adsorption rate (ratio of toxin bound to the initial amount of toxin added) was calculated for every toxin concentration point. The average adsorption rate for each product batch and the coefficient of variation were calculated. The kinetics of interaction were also evaluated using three equations previously considered suitable for sorbent evaluation [50], namely the Freundlich equation, the Langmuir equation, and Hill's model with n sites, as in previous studies [25,53]; adsorption equilibrium, distribution constants, capacity, and intensity were established by regressing the amount of toxin bound against the amount of free toxin at equilibrium in the reaction media using Datafit software (version 8.1.69, Oakdale Engineering, Oakdale, PA, USA).
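The study used Datafit for these regressions; as a stand-in, the sketch below fits illustrative bound-versus-free data to common forms of the Freundlich, Langmuir, and Hill equations. Both the parameterizations and the data points are assumptions for demonstration, not the study's actual values.

```python
# Illustrative fits of adsorption isotherms (Freundlich, Langmuir, Hill-type)
# to bound-vs-free toxin data at equilibrium. Equation forms and data points
# are assumptions for demonstration, not the study's actual values.
import numpy as np
from scipy.optimize import curve_fit

def freundlich(c, kf, n):          # q = Kf * C^(1/n)
    return kf * np.power(c, 1.0 / n)

def langmuir(c, qmax, kl):         # q = qmax * Kl * C / (1 + Kl * C)
    return qmax * kl * c / (1.0 + kl * c)

def hill(c, qmax, k, n):           # q = qmax * C^n / (K^n + C^n)
    return qmax * c**n / (k**n + c**n)

# free toxin at equilibrium (ng/mL) and toxin bound to the sorbent (ng/mg) -- invented
c_free = np.array([0.01, 0.03, 0.08, 0.20, 0.45])
q_bound = np.array([0.04, 0.07, 0.14, 0.26, 0.41])

for name, fn, p0 in [("Freundlich", freundlich, (1.0, 1.0)),
                     ("Langmuir", langmuir, (1.0, 5.0)),
                     ("Hill", hill, (1.0, 0.3, 1.0))]:
    popt, _ = curve_fit(fn, c_free, q_bound, p0=p0, maxfev=10_000)
    rss = float(np.sum((q_bound - fn(c_free, *popt)) ** 2))
    print(name, np.round(popt, 3), "RSS =", round(rss, 5))
```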
Animals and Diets
Sprague-Dawley rats for the present study were obtained from Harlan Laboratories (Horst, The Netherlands). The animals were seven weeks old at the time of arrival and approximately nine weeks of age at the initiation of the study. The animal room was environmentally controlled, with a 12-h light/dark cycle and constant room temperature and humidity. Water was provided ad libitum, except during feeding time. Feeding took place in the mornings at 08.00-09.00 and in the evenings at 16.00-17.00. The feed base used was the Harlan special diet Teklad 2016 Global Rodent (Harlan, now Envigo, Madison, WI, USA; the feed was produced in the Netherlands), containing by weight 16.4% protein, 48.5% carbohydrate, and 4.0% fat for a total energy density of 3.0 kcal/g. Crude fiber and neutral detergent fiber contributed 3.3% and 15.2%, respectively. The Harlan Teklad 2016 diet was amended with YCW and HSCAS material by the diet manufacturer at the tested inclusion rates. The target AFB1 dose was 0.2 µg per rat (25 µg/kg feed), including 0.63 µCi of [3H]-labeled AFB1 (Moravek Biochemicals Inc., Brea, CA, USA). The exact amount of labeled toxin was individually added to each 8-g dose of the experimental diet in a small amount of ethanol, which was evaporated prior to feeding.
Animal Pre-Experimental Study
A preliminary study was conducted to determine the absorption kinetics and distribution patterns of AFB1 in rats. Two rats were sacrificed at each of the following five time points: 1, 2, 3, 5, and 7 h post-feeding, and the 3H-AFB1 content in the jejunum, ileum, colon, plasma, liver, and kidney was measured quantitatively based on 3H radioactivity counting using a liquid scintillation counter.
Animal Principal Experimental Study
The trial was conducted in the research facility of Alimetrics, Ltd. (Espoo, Finland) in accordance with EU Directive 2010/63/EU. Following the standard operating procedures of Alimetrics Ltd., ethical approval or animal trial permit was not required because the substance under investigation is an approved feed ingredient in the EU, and the level of aflatoxin B1 included in the diets was below the EU regulatory levels. Animals were weighed and randomized into four groups of 16 animals and then identified with ear markings. The groups were divided in each cage into four separate parts using metal partitioning. The rats were conditioned for 9 days to eat 8-g diet portions immediately after the feed was provided. This was to ensure that all pellets containing the radiolabeled AFB1 were ingested within a short period of time. The rats were fasted between morning and evening feeding times. On day 10, after the administration of the radiolabeled feed, the cage was cleaned, the partitioning was removed, and water was provided ad libitum.
The study was performed on 16 rats per treatment. At 5 h, n = 9 rats for the 10 g/kg YCW treatment and n = 8 rats for each of the other treatments were collected for analysis. At 10 h, the remaining rats were collected for analysis, with n = 6 in the control group and n = 7 in each of the adsorbent-treated groups; four rats had been excluded due to morbidity/mortality issues before the start of the main experimental study period, leaving n = 60 overall.
Sample Collection
At the 5 and 10 h sampling points, the rats were euthanized by CO 2 inhalation, and blood was removed via cardiac puncture. Blood samples were drawn into heparinized syringes and transferred to heparinized test tubes for plasma separation via 5 min of centrifugation at 9000× g. The rats were then dissected, and their livers and kidneys were removed and rinsed with 0.9% NaCl. The gastrointestinal tract was separated into its constituent parts: the stomach, small intestine, cecum, and colon, and their contents were removed quantitatively for radioactivity counting. All samples were stored frozen at −20 • C until processed.
Radioactivity Determination
Frozen tissues were weighed and homogenized using a pestle and mortar. The average weights of the livers and kidneys collected were 11.2 and 2.3 g, respectively. An amount of 100 mg of homogenized tissue sample was placed into glass vials and dissolved with 2 mL of SOLVABLE™ aqueous-based tissue solubilizer (Perkin Elmer, Beaconsfield, UK) for 2 h at 60 °C. After cooling and solubilization, 300 µL of 30% hydrogen peroxide was added for color elimination and incubated for 30 min at 60 °C. After cooling, 15 mL of Ultima Gold™ scintillation liquid was added to each sample replicate for radioactive counting (Perkin Elmer, Waltham, MA, USA). The exact weight of each subsample and the total weight of the tissue sample collected were used in the mass balance calculations.
Collected digesta were weighed and homogenized. Amounts of 100 mg of digesta and 1 mL of plasma were processed via direct sample addition: a volume of 15 mL of Ultima Gold™ scintillation liquid was added directly to each sample replicate. For the digesta compartments, sample size depended on the collection time point and varied from 0.02 to 4.92 g. For the calculation of total AFB1 in blood, an assumed blood volume of 20 mL was used.
The [ 3 H] label was measured in duplicate using a scintillation counter (Microbeta 1450, Perkin Elmer, Waltham, MA, USA) using direct counting functionality of disintegrations per minute (DPM) accounting also for counting efficiency and any reduction in chemiluminescence.
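A simple sketch of the mass-balance scaling implied above, assuming efficiency-corrected DPM and using the dosed specific activity (0.63 µCi of [3H]-AFB1 per 0.2 µg of AFB1); the subsample weight and count rate below are invented.

```python
# Sketch of the mass-balance scaling from measured DPM back to total AFB1 in a
# tissue or digesta compartment. Assumes efficiency-corrected DPM and a dosed
# specific activity of 0.63 uCi of [3H]-AFB1 per 0.2 ug AFB1 (values from the diet).
DPM_PER_UCI = 2.22e6
DOSE_UCI, DOSE_UG = 0.63, 0.2
DPM_PER_NG = DPM_PER_UCI * DOSE_UCI / (DOSE_UG * 1000.0)   # ~7.0e3 DPM per ng AFB1

def total_afb1_ng(dpm_subsample: float, subsample_g: float, total_g: float) -> float:
    """Total AFB1 (ng) in the whole compartment, scaled up from one counted subsample."""
    ng_in_subsample = dpm_subsample / DPM_PER_NG
    return ng_in_subsample * (total_g / subsample_g)

if __name__ == "__main__":
    # e.g., a 0.1-g liver subsample out of 11.2 g total reading 350 DPM (invented numbers)
    print(f"{total_afb1_ng(350.0, 0.1, 11.2):.1f} ng AFB1 in the whole liver")
```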
Statistical Analysis
One-way analysis of variance (ANOVA) followed by post-hoc tests was performed for all measured parameters to test for differences between the treatments and to assess which treatments differed from each other. Dunnett's post-hoc test was used to test the differences between the treatments and the negative control, and Tukey's post-hoc test was used to compare all treatments pairwise. Multiple linear regression models were run for the doses of YCW, using the slope of each curve for each digesta and tissue sample and based on 0, 20, and 100% of the YCW dose (corresponding to the control and the 2 and 10 g/kg inclusion rates, respectively), to gain an understanding of how AFB1 levels in each tissue were affected by the respective doses used. All tests were performed in the SPSS statistical software package (IBM, version 22, Armonk, NY, USA) at a risk level of α = 0.05.
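A sketch of this workflow in Python (rather than SPSS) is given below; the data frame, group labels, and values are hypothetical, and Dunnett's test as implemented in SciPy (version 1.11 or later) stands in for the SPSS procedure.

```python
# Sketch of the statistical workflow: one-way ANOVA, Dunnett-type comparison
# against the control, Tukey HSD across all groups, and a linear regression of
# tissue AFB1 on YCW dose (0, 20, 100 % of the full dose). All data are invented.
import numpy as np
import pandas as pd
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "group": np.repeat(["control", "YCW2", "YCW10", "HSCAS10"], 7),
    "liver_afb1": np.concatenate([rng.normal(m, 0.3, 7) for m in (3.0, 2.4, 1.8, 1.7)]),
})

samples = {g: d["liver_afb1"].to_numpy() for g, d in df.groupby("group")}
print("ANOVA:", stats.f_oneway(*samples.values()))

# Dunnett's test against the negative control (available in SciPy >= 1.11)
treatments = [samples[g] for g in ("YCW2", "YCW10", "HSCAS10")]
print("Dunnett:", stats.dunnett(*treatments, control=samples["control"]))

print(pairwise_tukeyhsd(df["liver_afb1"], df["group"], alpha=0.05))

# Dose-response regression for the YCW arms (0, 20, 100 % of the full dose)
dose_map = {"control": 0.0, "YCW2": 20.0, "YCW10": 100.0}
ycw = df[df["group"].isin(dose_map)].assign(dose=lambda d: d["group"].map(dose_map))
slope, intercept, r, p, se = stats.linregress(ycw["dose"], ycw["liver_afb1"])
print(f"slope = {slope:.4f} per % of dose, p = {p:.3f}")
```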
Surface and Column-Integrated Aerosol Properties of Heavy Haze Events in January 2013 over the North China Plain
Heavy haze events were recorded over the North China Plain (NCP) during January 2013. The meteorological conditions, in situ measurements, and ground-based remote sensing of aerosol size distributions and aerosol optical properties were analyzed to study the meteorological effects on surface and column-integrated aerosol loading. Besides the special terrain, analysis of meteorological parameters showed that such a long-standing pollution event was attributable to stagnant weather with high humidity, frequent inversions, and low wind speed. The monthly average mass concentrations of particulate matter smaller than 1.0 μm (PM1), 2.5 μm (PM2.5), and 10 μm (PM10) were 169, 190, and 233 μg/m³, respectively. The high mass fractions of PM1 (73%) and PM2.5 (82%) in PM10 indicated the domination of fine mode particles. The increase in the fraction of PM1–2.5 during haze events was attributed to the increase of secondary aerosol under high humidity. Two polluted aerosol types (A1, A3) and one background aerosol (A2) were classified based on aerosol optical depth at 440 nm (AOD440) and column-integrated size distributions. The AOD440 of cloud/fog-processed aerosol (1.43) was about two and seven times larger than that of A1 and A2, respectively. The single scattering albedo at 675 nm (SSA675) of A3 was ~0.93, which was larger than that of A1 (0.85) and A2 (0.80) due to hygroscopic growth in the humid environment.
INTRODUCTION
Aerosols can alter the radiation directly by absorbing and scattering incident light.Furthermore, aerosols serve as cloud condensation nuclei to influence the formation, lifetime of clouds and their radiation budgets.It is widely accepted that aerosols are important agents to influence global and regional climate change (IPCC, 2007).In particular, climate changes in China during the past half century such as weaker East Asian monsoon since the 1970s, cooling in the Yangtze Delta region and Sichuan Basin, flooding in South China and drought in North China since the 1970s, and widespread decrease in surface solar radiation and decreasing cloud coverage since the 1960s are suggested to be closely related to an increase in aerosol loading (Li et al., 2007;Li et al., 2011 and references therein), although quantitative assessment of the role of aerosols in regional climate changes requires further study.In addition, aerosols pose a threat to respiratory morbidity and cardiopulmonary health (Cohen and Pope, 1995).The particulate matter (PM) with aerodynamic diameters smaller than 2.5 µm (PM 2.5 ) can be suspended in the atmosphere for lengthy periods and can be inhaled into the respiratory system (Cao et al., 2013).Furthermore, particles with aerodynamic diameters smaller than 1 µm play an important role in visibility degradation and radiative interaction (IPCC, 2007).
China has undergone very rapid economic growth since the economic reforms began in the end of 1970s.The country's economic growth has resulted in an increase in energy consumption and thereby air pollution and associated health effects, particularly in megacities (Chan and Yao, 2008).The rapid urban growth and economic development in Beijing during the past three decades, in addition to the significant increase in the number of vehicles in operation, have led to an increasing number of air pollution episodes and low visibility days.A series of laws, regulations, standards, and measures has been implemented to reduce air pollutant emissions and to improve the air quality in Beijing.For example, the municipal government of Beijing launched the "Defending the Blue Sky" project in 1998, when the number of days with clear skies, i.e., days with grade I or II air quality, was only 100.Since 1998, 12 phases of air pollution control measures were adopted and dozens of measures were implemented in planning for the Beijing 2008 Olympic Games (Zhang et al., 2009).For example, high emissions plants were relocated out of Beijing, cleaner production techniques were utilized, and a total industrial emissions control measure was implemented.Significant progress has been made in reducing air pollution as a result of these effective control measures.For example, SO 2 emissions have been successfully controlled, and NO 2 and CO concentrations have not increased even though the number of vehicles has increased by approximately 10% per year in Beijing.It has been estimated that the total emissions of soot particles and non-combustion industrial dust emissions decreased by 60% from 1999 to 2005 (Hao and Wang, 2005).In addition, a slight decreasing trend although not significant was identified for aerosol optical depth (AOD) in Beijing (Xia et al., 2013).However, during January 2013, heavy haze and fog events occurred over east of China, especially the North China Plain (NCP), as a consequence of the combination of anthropogenic emissions, stable weather, and specific terrain.The maximum area enveloped by haze and fog was as high as 1.4 million square kilometers, and about 800 million people were influenced (http://www.nhfpc.gov.cn/).Fig. 1 shows the Moderate Resolution Imaging Spectroradiometer (MODIS) true color images captured from January 6 to January 29.Extensive haze, fog, and low clouds are clearly visible over the southeast region of Yanshan-Taihang Mountain.In order to further understanding of the causes for this heavy air pollution episode and its impact on aerosol optical properties, the meteorological conditions over Beijing and Xianghe were firstly analyzed in detail.Moreover, in situ measurements of aerosol concentration and column-integrated optical properties recorded during January 2013 were studied extensively to reveal the manner in which ground and column-integrated aerosol optical and physical properties varied between haze and no-haze days.
Site
Xianghe is located between two megacities, Beijing and Tianjin, about 50 km southeast of Beijing and 70 km northwest of Tianjin. The two megacities are expanding fast with economic growth and suffer from heavy anthropogenic emissions. Most measurements in this study were conducted in Xianghe (39.754°N, 116.962°E, 8 m above sea level). In addition, aerosol optical properties and radiosonde measurements of meteorological conditions obtained in Beijing were used. It has been shown that the AOD at Xianghe correlates significantly with that at Beijing, and the difference in AOD between the two sites is negligible (Xia et al., 2005). This result indicates that the aerosol pollution over the NCP is regional in nature.
Aerosol Optical Properties
Beijing and Xianghe belong to the Aerosol Robotic Network (AERONET), which is a globally distributed network that provides ground-based remote sensing observations of aerosol optical properties. The CIMEL sunphotometer is the standard instrument used to measure direct and sky radiance at wavelengths ranging from ultraviolet (UV) to near infrared, which are used to retrieve column-integrated parameters such as AOD, refractive index, size distribution, single scattering albedo (SSA), asymmetry factor, and phase function (Holben et al., 2001). Details on the aerosol retrievals have been discussed by Dubovik et al. (2000). The AERONET data during January 2013 at Xianghe were very limited due to a malfunction of the instrument; therefore, the Level 1.5 AERONET products in Beijing (39.977°N, 116.381°E, 92 m above sea level), cloud screened using the Smirnov et al. (2000) method, were used in the present study. Only 14 days of data were available at the Beijing AERONET site from January 6 to 28, excluding the dates 13, 15, 16, and 19-23 due to cloud contamination.
Aerosol Size Distribution
The size distribution of aerosols at Xianghe during January 2013 was measured by a Scanning Mobility Particle Spectrometer (SMPS, Model 3936, TSI, USA) in combination with an Aerodynamic Particle Sizer (APS, Model 3321, TSI, USA), and the mass concentration of aerosols was derived from the size distribution by assuming an aerosol density of ~1.7 g/cm³ (DeCarlo et al., 2004; Chow and Watson, 2007). A silicone diffusion drier was installed downstream of the aerosol inlet to eliminate the influence of relative humidity (RH) on particle size. The SMPS measures the number and size distribution of particles ranging from 10 nm to 700 nm, whereas the APS measures those with aerodynamic diameters of 0.5-20 µm. The combination of SMPS and APS provides the number and size distribution of particles in the size range of 10 nm-15 µm at 5-min intervals. PM1, PM2.5, and PM10 mass concentrations for dry aerosols were calculated on the basis of the aerosol size distribution. It should be noted that measurements were not available from January 2 to 4 due to a malfunction of the APS.
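As a rough illustration of this conversion, the sketch below integrates a synthetic merged number size distribution into PM1, PM2.5, and PM10 mass concentrations under the stated assumptions of spherical particles and a density of 1.7 g/cm³; the spectrum and the binning details are illustrative, not the instruments' actual output.

```python
# Sketch of deriving PM1 / PM2.5 / PM10 mass concentrations from a merged
# SMPS+APS number size distribution, assuming spherical particles with a
# density of 1.7 g/cm^3 as stated in the text. The spectrum below is invented.
import numpy as np

RHO_UG_CM3 = 1.7e6   # particle density expressed in ug per cm^3 (1.7 g/cm^3)

def pm_mass(diam_um, dndlogdp, cutoff_um):
    """Mass concentration (ug/m^3) of particles with diameter <= cutoff_um.

    diam_um  : bin mid-point diameters (um)
    dndlogdp : dN/dlogDp per bin (particles per cm^3)
    """
    dlogdp = np.gradient(np.log10(diam_um))                        # bin widths in log10 space
    number_cm3 = dndlogdp * dlogdp                                 # particles per cm^3 per bin
    vol_cm3 = number_cm3 * (np.pi / 6.0) * (diam_um * 1e-4) ** 3   # cm^3 aerosol per cm^3 air
    mass_ug_m3 = vol_cm3 * RHO_UG_CM3 * 1e6                        # per m^3 of air
    return float(mass_ug_m3[diam_um <= cutoff_um].sum())

# toy lognormal-like spectrum over 0.01-15 um
d = np.logspace(np.log10(0.01), np.log10(15.0), 60)
dn = 8e3 * np.exp(-0.5 * ((np.log10(d) - np.log10(0.15)) / 0.35) ** 2)

for cut in (1.0, 2.5, 10.0):
    print(f"PM{cut}: {pm_mass(d, dn, cut):.1f} ug/m^3")
```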
Temperature and Humidity Profiles
A 14-channel microwave radiometer (MWR, RPG-HATPRO, Germany) was used to retrieve the temperature and humidity profiles in the boundary layer during January 2013 at Xianghe. The RPG-HATPRO has 14 receivers that detect brightness temperatures at 14 frequencies ranging from 22.2 to 58.0 GHz. The accuracy of the system is within 0.5 K, and more details are given in Rose et al. (2005). The temperature profile was retrieved from RPG-HATPRO measurements for 25 levels below 2 km, with the vertical resolution decreasing from 10 to 200 m from the surface to 2 km. An inversion layer was identified when the temperature gradient of the layer was positive. The altitudes of the inversion bottom and top were determined as the minimum and maximum heights, respectively, at which the temperature gradient was positive. The temperature gradient per 100 m within the inversion (∆°C/100 m) was used to represent the intensity of the inversion layer. Surface temperature, humidity, and wind were measured by the automatic weather station (AWS) on the MWR, which provided important initial values for the temperature and water vapor retrievals.
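A minimal sketch of the inversion-layer criterion just described (positive vertical temperature gradient; bottom/top taken as the lowest/highest levels with a positive gradient; intensity in °C per 100 m), applied to an invented retrieved profile:

```python
# Sketch of the inversion-layer identification described above: levels with a
# positive vertical temperature gradient are flagged; the bottom/top are the
# lowest/highest such levels and the intensity is expressed in deg C per 100 m.
# The retrieved profile below is invented.
import numpy as np

def find_inversion(z_m, t_c):
    """Return (bottom_m, top_m, intensity_c_per_100m) or None if no inversion."""
    grad = np.diff(t_c) / np.diff(z_m)        # deg C per m between adjacent levels
    idx = np.where(grad > 0)[0]               # layers with a positive gradient
    if idx.size == 0:
        return None
    bottom, top = float(z_m[idx[0]]), float(z_m[idx[-1] + 1])
    intensity = float(np.mean(grad[idx]) * 100.0)
    return bottom, top, intensity

z = np.array([0, 50, 100, 200, 400, 700, 1000, 1500, 2000])             # height (m)
t = np.array([-4.0, -3.2, -2.5, -2.0, -3.5, -5.0, -7.0, -10.5, -14.0])  # temperature (deg C)
print(find_inversion(z, t))   # e.g., (0.0, 200.0, ~1.2 deg C per 100 m)
```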
Surface Meteorological Data in January during 2000-2013
To enable a comprehensive understanding of the meteorological conditions in January 2013 over the NCP, we compared them with the historical meteorological records of Beijing in January from 2001 to 2012. The radiosonde data at the Beijing station, recorded twice a day (08:00 and 20:00 LST), were used to calculate the inversion height and occurrence probability (http://weather.uwyo.edu/upperair/sounding.html). Unless otherwise specified, all of the parameters analyzed in this paper are daily averaged values.
Temporal Variation of Particulate Matter
Fig. 2 shows the temporal variation of PM mass concentration in various size ranges during January 2013. According to the grade II criterion of the National Ambient Air Quality Standard (NAAQS) of China (GB3095-2012) released in 2012, the atmosphere is considered polluted and severely polluted when the daily mean mass concentration of PM2.5 is larger than 75 and 250 µg/m³, respectively. As Fig. 2 shows, there were four long-duration haze episodes characterized by daily mean PM2.5 > 75 µg/m³ lasting at least two days during the measurement period, namely January 6-8, 10-18, 20-23, and 25-31. About 60.7% of the measured days in January were lightly polluted and 25% were severely polluted.
The maximum daily average PM2.5 concentration reached 426.6 µg/m³ on January 12. This was larger than values previously recorded over the NCP, such as 200 µg/m³ by Duan et al. (2006) and 357 µg/m³ by He et al. (2001). The minimum daily PM2.5 concentration, 44.5 µg/m³, occurred on January 24. The monthly averaged mass concentrations of PM1, PM2.5, and PM10 were 169, 190, and 233 µg/m³, respectively. The high mass ratios of PM1 and PM2.5 in PM10 (PM1/PM10 ~0.73; PM2.5/PM10 ~0.82) indicated the domination of fine mode particles over the NCP. Analysis of PM2.5/PM10 showed a higher value, 0.81, on haze days than on no-haze days (0.76). In contrast, PM1/PM2.5 was 0.90 on haze days, lower than the 0.93 on no-haze days.
The high concentration of PM in January 2013 was outstanding even compared with the historical data. The proportion of days on which pollution was dominated by PM was 90% and 84% in 2006 and 2013, respectively. The lower value in 2013 was attributed to the clear days of January 1-3 before the haze occurred. Extreme events were more frequent in 2013, and the maximum API reached 406 on January 12, 2013, which is the highest value of the past 13 years.
Meteorological Conditions
Analysis of the weather maps of NCP during January 2013 showed that the weather condition was featured by the strong zonal circulation at 500 hPa, weak pressure gradient and low wind speed near surface.Fig. 4 shows typical weather maps of severely polluted day on January 12 and a relatively long-standing episode during January 17 and 18.On January 12, the isobars were sparse over NCP indicating the stagnant weather system here.High speed of zonal wind was observed (26 m/s) at 500 hPa (Fig. 4(a)) that favors for the formation and maintenance of stable weather (Wang et al., 2014), while the surface wind speed (Fig. 5(b)) was smaller than 0.2 m/s at 20:00 LST.At 08:00 LST on January 17, the meridional wind was prevalent at 500 hPa with northern wind speed around 20 m/s at upper level.Surface wind speed reached 1.8 m/s and relatively clean air mass transported by northerly winds diluted the pollutants in the boundary layer.The instantaneous concentration of PM 2.5 was only 27.2 µg/m 3 .The meridional wind decreased to 10 m/s at 20:00 LST on January 17 and the surface was dominated by uniform pressure field.The surface wind speed decreased to 1.0 m/s.The instantaneous concentration of PM 2.5 was 225.3 µg/m 3 .After that, the weakening meridional wind was replaced by strong zonal wind (22 m/s) at 08:00 LST on January 18 at 500 hPa.The surface was still dominated by uniform pressure field and the surface wind speed was smaller than 0.1 m/s, the instantaneous PM 2.5 concentration was as high as 692.7 µg/m 3 .It can be seen that persistent upper air zonal wind, weak pressure gradient and low surface wind speed contributed to the enhancement of air pollution.
The temporal variation of meteorological variables including temperature, RH, wind speed and direction near surface over Xianghe was shown in Fig. 5.It was shown that polluted days were always characterized by higher RH, lower temperature and wind speed, especially on severely polluted days.It can be speculated that low temperature, wind speed and high RH favored aerosol accumulation over NCP.Fig. 6 shows the surface wind speed and direction dependence of PM 2.5 mass concentration in Xianghe during January 2013.PM 2.5 concentrations were mostly higher than 100 µg/m 3 when the wind speed was lower than 2 m/s, indicating strong local emissions.Moreover, the concentration enhanced significantly when the wind came from southern, especially from SSW direction.Therefore, the long-range transport of anthropogenic aerosols by southerly winds also favor for the formation of heavy haze in Xianghe and Beijing.Temperature inversion is an important factor for air pollution enhancement.It traps pollutants near ground by reducing turbulence and mixing with air aloft (Silva et al., 2007).Fig. 7 shows the temporal variation of inversion heights, temperature and RH profiles over Xianghe within boundary layer during January 2013.As Fig. 7(a) shows, temperature inversions occurred frequently during the measurement period, especially on haze days.For example, on January 12, a severely polluted day, the inversion layer was below 1 km and lasted all day long with mean temperature gradient of 1.0 °C/100 m.High PM 2.5 mass concentration (326.7 µg/m 3 ) was observed on January 18 when the inversion with a weaker temperature gradient (0.44 °C/100 m) occurred all day long.On the contrary, on January 24, a clean day, there was no inversion layer at all.Except temperature inversion, abundant water vapor in boundary layer also contributed to the heavy haze.As Fig. 7(b) shows, the values of RH during haze days within boundary layer were mostly greater than 75%.
Variation of Aerosol Size Distribution during Haze Events The Influence of Meteorological Conditions on Aerosol Size
The mass concentration ratios PM1/PM2.5 and PM2.5/PM10 varied between haze and no-haze days. The daily minimum of PM2.5/PM10 (0.68) occurred on January 19, when relatively low RH (62%) and high wind speed (~1.24 m/s) were measured. The maximum PM2.5/PM10 value (0.92) occurred on January 22, when high RH (97%) and low wind speed (0.67 m/s) were recorded. The monthly mean ratio of PM1 to PM2.5 was 0.90 (± 0.06), with the daily minimum, 0.75, occurring on January 23 (RH ~97%; wind speed 1.05 m/s) and the daily maximum, 0.95, occurring on January 6 (RH ~42%; wind speed ~0.80 m/s). It can be seen that the dry aerosols were dominated by fine mode particles. The correlation coefficients between PM1/PM2.5, PM2.5/PM10, and meteorological parameters including wind speed and RH were calculated. The results showed that wind speed had little influence on the mass concentration ratio between PM1 and PM2.5 (R ~0.04) but a negative correlation with the mass concentration ratio between PM2.5 and PM10 (R ~-0.41). It is likely that strong wind suspended more coarse particles in the atmosphere and enhanced their proportion in PM10, but only diluted the mass concentration of fine particles without significant variation of the mass ratio between PM1 and PM2.5. The mass concentration ratios PM1/PM2.5 and PM2.5/PM10 showed opposite correlations with RH (R ~-0.44 for PM1/PM2.5; R ~0.59 for PM2.5/PM10). These results likely indicate that the growth of PM10 with RH elevation was mainly attributed to the growth of PM2.5, and that the diameters of secondary aerosols formed under high RH mainly ranged from 1 to 2.5 µm. Besides variation in emission sources, the size growth of dry particles was likely attributable to the increasing dissolution of soluble gases and to adhesion or coagulation between particles.
Fig. 9 shows the particle number size distribution (PNSD) normalized by aerosol number concentration and the volume concentration of normalized PNSD of ground-based dry aerosols at different ambient RH ranges.It can be seen that the proportion of large fine particles increased with ambient RH especially at RH lower than 80%.As pointed by Wang et al. (2013), the quick transformation mechanism from primary to secondary aerosols, and heterogeneous reaction on the surface of fine mode particles enhanced the hygroscopic growth of particles and thereby the surface area of particles for aqueous reaction during haze and fog that resulted in larger solid particles after drying.Moreover, it has been observed that the concentration of inorganic salts in the RH range from 70% to 80% increased by more than 3 times higher than those in low RH values (Moon et al. 2013).The diminishment of coarse particles at high RH condition may be attributable to the sedimentation of coarse mode particles.
Size Distribution of Aerosols Modified by Cloud/Fog Process
The aerosols were classified into three types (A1, A2, and A3) according to their size distributions and loadings.A1 and A3 represented polluted aerosols with AOD at 440 nm (AOD 440 ) > 0.4 while A3 was characterized by larger fine particle size as compared to A1. A2 represented background aerosols with AOD 440 < 0.4.Except on January 24, the Angstrom exponents calculated from AOD 440 and 870 nm (AE 440-870 ) were larger than 0.8, indicating the domination of fine mode particles over coarse particles during this episode.The averaged AE 440-870 was 1.4, 1.1 and 1.0 for A1, A2, and A3 respectively.Fig. 10(c) shows the daily mean size distribution of A3 on January 11, 12, 14 and 28.The outstanding feature is that the peak radii of the fine mode particles were 0.38, 0.38, 0.44 and 0.23µm, respectively, while it was 0.12 ± 0.01 µm for A1 and A2.Large fine mode dominated aerosols (submicron particles) or residual submicron fine mode aerosols retrieved from AERONET have been observed after fog and low-level cloud dissipation at many sites, which indicates the aerosols are modified by fog/cloud process (Dall'Osto et al., 2009;Eck et al., 2012).As shown in Fig. 1, all MODIS satellite images showed low-level cloud or fog either over or near the site for all these days.The smaller daily peak radius of column-integrated aerosol on January 28 was attributable to decay of cloud-processed aerosol, and in turn a larger contribution of smaller-radius aerosol owing to drying of humidified aerosol or fresh aerosol emission (not shown here).
The corresponding daily volume size distribution of dry aerosol measured in Xianghe also showed a larger fine mode peak radius on January 11 and 12 (Fig. 10(f)).The peak radius of dry aerosols was 0.30 and 0.36 µm on January 11 and 12 respectively while it was 0.19 ± 0.01 µm for the other days.Different with January 11 and 12, the peak radius on January 14 and 28 was only 0.20 and 0.21 µm respectively.This was likely attributable to spatial variation of aerosols between Beijing and Xianghe or vertical variation of aerosols.
Considering the size growth of dry particles, it can be seen that physical interaction, aqueous reaction also played an important role in aerosol growth during the haze or fog event besides hygroscopic growth of particles.
Aerosol Optical Properties
Optical properties including AOD, SSA, and aerosol absorption optical depth (AAOD) of A1, A2, and A3 are shown in Fig. 11.Significant differences can be found between these aerosol types.AODs were decreased with wavelength and the averaged AOD 440 was 0.65 (± 0.13), 0.20 (± 0.03), and 1.43 (± 0.36) for A1, A2, and A3 respectively.Higher scattering efficiency at 675 nm was observed for A1 and A3, and the SSA at 675 nm was 0.85 (± 0.01) and 0.93 (± 0.01) respectively.The high scattering efficiency of A3 could be attributed to the high water content and inorganic salts, especially sulfate, enhancement during haze and fog (Sun et al., 2013).In contrast, high absorbing efficiency (SSA 675 ~0.80 ± 0.02) with decreasing spectral dependence within the measured wavelength range were observed for A2 indicating the domination of carbonaceous aerosol over Beijing.The AAOD at 440 nm for A1, A2, and A3 was 0.12 (± 0.02), 0.04 (± 0.01), and 0.18 (± 0.04) respectively.Similar with AOD, the relationship between AAOD and wavelength can be described by a power law equation and thereby absorption angstrom exponent (AAE) is derived.The AAE between 440 and 870 nm (AAE 440-870 ) of A1, A2, and A3 was 1.7, 0.9 and 1.8 respectively.As pointed by Sokolik and Toon (1999), dust particles aggregated with clay, quartz and hematite exhibit strong absorption in the blue wavelength but lower absorption in visible and infrared spectral range and thereby the value of AAE is generally larger than 2.0 for dust aerosols (Bergstrom et al., 2007;Russell et al., 2010).However, the value of AAE of urban pollution is usually around or slightly larger than 1.0 (Bergstrom et al., 2007).This indicated that both A1 and A3 were mixed aerosol with dust and pollution.Low AAE 440-870 value and strong
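For reference, the AAE values quoted above follow from the power-law relation between AAOD and wavelength described in the text; the sketch below computes AAE440-870 from AAOD at the two wavelengths, with the 870-nm values invented only so as to be consistent with the reported exponents.

```python
# Sketch of deriving the absorption Angstrom exponent (AAE) between 440 and
# 870 nm under the power-law relation AAOD ~ lambda^(-AAE) described above.
# The 870-nm AAOD values are invented, chosen only to be consistent with the
# reported AAE of ~1.7, ~0.9, and ~1.8 for A1, A2, and A3.
import math

def angstrom_exponent(tau1, tau2, lam1_nm=440.0, lam2_nm=870.0):
    return -math.log(tau1 / tau2) / math.log(lam1_nm / lam2_nm)

for label, aaod440, aaod870 in [("A1", 0.12, 0.038), ("A2", 0.04, 0.022), ("A3", 0.18, 0.053)]:
    print(label, round(angstrom_exponent(aaod440, aaod870), 2))
```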
SUMMARIES AND CONCLUSIONS
Heavy pollution episodes were observed during January 2013 over the NCP.Due to the special terrain, most pollutants were trapped in the southeast region of Yanshan-Taihang Mountain.Prevailing westerly upper zonal wind, weak surface pressure gradient, low surface wind speed, high RH, and frequent inversion provided favorable environment for aerosol accumulation and growth.The monthly averaged RH was as high as 67% (± 24%), and the surface wind speed was only 1.19 ± 1.11 m/s in Xianghe.Compared with the meteorological parameters of Beijing in January from 2001 to 2012, the occurrence frequency of temperature inversion and RH reached the maximum levels in 2013, whereas the surface wind speed and temperature reached the minimum, which contributed to such persistent pollution.Analysis of PM 2.5 dependence on surface wind speed and direction showed a high PM 2.5 value under weak wind (< 2 m/s) and southerly wind, indicating the contribution of strong local emission and long-range transport to air pollution over the NCP.
Measurements of aerosol concentrations showed high values at the Xianghe site. Except for those measured on January 5, 9, 19, and 24, the daily mass concentrations of PM2.5 were all greater than 75 µg/m³ and some even greater than 250 µg/m³, indicating heavy pollution over the NCP. The maximum daily averaged PM2.5 concentration, 426.6 µg/m³, occurred on January 12 and was higher than the historical records. The monthly averaged PM1, PM2.5, and PM10 were 169, 190, and 233 µg/m³, respectively. High mass concentration ratios (PM1/PM10 ~0.73; PM2.5/PM10 ~0.82) demonstrated the domination of fine mode particles during the haze period.
The mass ratio of PM 2.5 /PM 10 showed a positive correlation with RH while that of PM 1 /PM 2.5 was opposite (R ~0.59 for PM 2.5 /PM 10 ; R ~-0.44 for PM 1 /PM 2.5 ), indicating the secondary formation of particles in range of 1-2.5 µm under high RH.The peak radius of volume size distribution showed an increase with RH elevation.Accordingly, high PM 2.5 values recorded in January 2013 are partly attributed to aerosol growth under favorable weather conditions such as high ambient RH.
Fig. 2. Temporal variation of particulate matter measured in Xianghe during January 2013.The haze days with daily PM 2.5 mass concentration larger than 75 µg/m 3 are labeled with gray background.
Fig. 3 presents the Air Pollution Index (API) values for January from 2001 to 2013 (downloaded from http://datacenter.mep.gov.cn/). Their calculation is based on the levels of five atmospheric pollutants, including SO2, NO2, PM10, CO, and O3. The final API score is the highest value among these five pollutants. As shown in Fig. 3, higher monthly means of the API in January were observed in 2006 (128) and in 2013 (119).
Fig. 3. Box plot of Air Pollution Index (API) and the probability of pollution dominated by particulate matter (PDP) in January from 2001 to 2013 at Beijing.The monthly arithmetic mean value of API is indicated by black diamond and that of PDP is indicated by blue circle.In each box, the red central bar is the median, and the lower and upper limits are the first and third quartiles, respectively.The lines extending vertically from the box represent the spread of the distribution with the length being 1.5 times of the difference between the first and the third quartiles.Observations falling beyond the limits of those lines are indicated by plus symbols.
Fig. 5. Temporal variation of surface meteorological parameters including temperature (blue line) and relative humidity (green; the red dotted line indicates RH = 80%) (a), wind speed and wind direction (wind direction is represented by the color of data point and the corresponding color bar is shown right side) (b) in Xianghe during January 2013.
Fig. 6.PM 2.5 dependence on wind speed and direction in Xianghe during January 2013.PM 2.5 mass concentration is represented by color for varying wind speeds (radial direction) and wind direction (transverse direction).
Fig. 8. Box plot of January temperature (a), inversion height and probability of inversion (b), wind speed (c) and relative humidity (d) from 2001 to 2013 over Beijing.The monthly arithmetic mean value is represented by black diamond.In each box, the red central bar is the median, and the lower and upper limits are the first and third quartiles, respectively.The lines extending vertically from the box represent the spread of the distribution with the length being 1.5 times of the difference between the first and the third quartiles.Observations falling beyond the limits of those lines are indicated by plus symbols.
Fig. 10.Daily averaged size distribution of column-integrated aerosols retrieved by AERONET (top panel: a-c) in Beijing and the corresponding size distribution of dry aerosols measured by SMPS and APS (bottom panel: d-f) in Xianghe during January 2013.
An Innovative Time-Cost-Quality Tradeoff Modeling of Building Construction Project Based on Resource Allocation
Time, quality, and cost are three important but contradictory objectives in a building construction project. It is a tough challenge for project managers to optimize them, since they are parameters of different kinds. This paper presents a time-cost-quality optimization model that enables managers to optimize multiple objectives. The model is derived from the project breakdown structure method, in which the task resources of a construction project are divided into a series of activities and further into construction labor, materials, equipment, and administration. The resources utilized in a construction activity eventually determine its construction time, cost, and quality, and a complex time-cost-quality trade-off model is finally generated based on the correlations between construction activities. A genetic algorithm tool is applied in the model to solve the comprehensive nonlinear time-cost-quality problem. The building of a three-storey house is used as an example to illustrate the implementation of the model, demonstrate its advantages in optimizing the trade-off of construction time, cost, and quality, and help make winning decisions in construction practice. The computational time-cost-quality curves presented as visual graphics in the case study support the traditional cost-time assumptions and demonstrate the capability of this time-cost-quality trade-off model.
Introduction
The time, quality, and cost are usually three contradictive objectives which are often traded off in project practices by managers randomly if they lack efficient tools. The time, quality, and cost are interdependent parameters in a building project.
When the construction time is shortened, the project cost should be added. It is a tough challenge to balance those objectives in practice. The cost is usually the most important determinant of selecting a contractor in current construction industry. A contractor is undergoing fewer profit margins now than ever when current construction industry is more competitive. He might lose all profit or even go bankrupt if he fails to implement one or two projects properly in right quality, time, and cost. In order to reduce cost, some contractors risk using inferior construction materials and incapable labor which frequently results in poor quality and thus compromise safety standards. Local government offices have to monitor and regulate construction quality to secure the minimum standards. Otherwise any contractor might be punished or ousted by governments if he fails to obey regulations of construction quality and safety. Besides, the time is such a top visible parameter that contractors would deliver the construction project within their promised schedule based on agreements and contracts. Obviously the time is sensitive in contracting projects and controlling cost. A construction manager should deliberately balance the cost, time, and quality, as well as construction resources, in the early planning phase.
Since cost and time are two of the most important objectives and are easily quantified in a construction project, the time-cost tradeoff problem has been researched for a long time [1]. Basic common assumptions for cost-time tradeoff are deterministic durations and linear time-cost functions, where discrete construction resources such as labor and machines are crashed up to a continuous extent. In order to reach a global optimum in a large-scale time-cost tradeoff model, numerous trials are usually needed. There are more than 23 optimization techniques thought to be the most effective for time-cost tradeoff problems [2]. Evolutionary algorithms are more efficient at avoiding local optima. A practical method to solve construction time-cost tradeoff problems is the genetic algorithm (GA) [3,4]. Recently, the ant colony optimization (ACO) algorithm and the particle swarm optimization algorithm have been applied to obtain global optimization solutions [5,6]. Geem [7] employed the harmony search algorithm to perform time-cost biobjective tradeoff, where a network of up to 18 nodes was tested successfully. These new-paradigm algorithms were able to obtain optimal solutions for cost-time tradeoff models with only moderate computational effort.
Quality is an important parameter correlating highly with time and cost parameters. But it is not a quantitative parameter in nature, practical time-cost-quality tradeoff models are seldom developed from previous research works of the literature. Babu and Suresh [8] proposed a framework to study the tradeoff among time, cost, and quality using three interrelated linear programming models. Then Khang and Myint [9] applied the linear programming models in an actual cement factory construction project, which was depicted by a 52-activity CPM incorporated with their time, cost, and quality individually, and quality parameter in every activity varied from 0.85 to 1. Tareghian and Taheri [10] assumed the duration and quality of project activities to be discrete and developed a three interrelated integer programming model but simplified the optimization algorithm with solving one of the given entities by assigning desired bounds on the other two. El-Rayes and Kandil [11] presented a multiobjective model to search for an optimal resource utilization plan that minimizes construction cost and time while maximizing its quality, applied genetic algorithm to provide the capability of quantifying and considering quality, and visualized optimal tradeoffs among construction time, cost, and quality by an application example.
Although the objectives of cost and time might be mentioned frequently by natural numbers, the objective of quality is seldom described in quantities, which worsens numerical tradeoff among project time, cost, and quality. This paper will present a new solution for solving time-cost-quality tradeoff problem based on project breakdown structure method and task resource allocation.
Problem Definition
The project time-cost-quality tradeoff problem (PTCQTP) can be defined as follows: a project is represented by an activity-on-node network with n activities that is an acyclic digraph G = (V, E), where V = {0, . . . , n + 1} is the set of nodes (construction activities). In the network, both node (0) and node (n + 1) are dummy activities. P is the set of all paths in the activity-on-node network, starting from activity (0) and ending at activity (n + 1), and P_l is the set of activities contained in path l ∈ P.
Each activity (i) ∈ V is associated with its time, cost, and quality. The earliest/latest starting times (EST_i/LST_i) for each activity are easily calculated using forward-backward passes. Each activity can be decomposed into four resources: construction labor, construction material and machine, construction equipment, and construction administration. Construction labor is associated with labor productivity LP_i, labor cost LC_i, labor amount LA_i, and labor quality LQ_i. Construction material is associated with material and machine cost MC_i and material quality MQ_i. Construction equipment is associated with equipment productivity EP_i, equipment cost EC_i, equipment amount EA_i, and equipment quality EQ_i. Construction administration is associated with administration cost AC_i and administration quality AQ_i.
In order to guarantee public safety and interest, local governments would supervise and secure all construction projects to be above a minimum quality level ( min ) [12]. If any part of a construction project fails to conform with the minimum construction quality standards, the project could not be delivered properly, and the unqualified parts (LQ , MQ , EQ ) should be replaced or reworked until this quality conforms to the minimum requirements such as the minimum labor quality (LQ min ), the minimum material quality (MQ min ), the minimum equipment quality (EQ min ), and the minimum administration quality (AQ min ). The reworks or replacement of construction parts obviously increase cost and delay schedule if the parts of inferior quality are detected by supervisors according to regulations and codes [12]. Research on rework or replacement is so complicated that it would mislead this paper into game method rather than optimization analysis, so this model assumes that any part of a construction part could not be below its minimum standard.
Since the project delivery time defined clearly in construction agreements is a crucial factor for project owner and contractors, contractors should complete and deliver the project to the owner in time. Otherwise the contractors will pay a certain penalty because of delay delivery [13]. This kind of contract conditions will encourage the contractor to set up a project time plan in advance. Contractors are naturally interested in controlling cost actively and minimize the project total cost ( ).
As discussed previously in aspects of local government's regulations in securing construction quality, project owner's stimulus to shorten construction time, and contractor's intrinsic motivation to reduce construction cost, PTCQTP can be formally stated as follows: given a network with a lot of nodes, that is, activities by their sequences, durations, costs, and qualities, a general status is determined by each activity according to at least one of the following objectives: minimize the project duration, maximize the requirements of construction quality codes and standards, and minimize budget.
Decision Variables and Assumptions
The project time-cost-quality performance is essentially formed from each activity's time, cost, and quality. The work breakdown structure (WBS) is a method to decompose a project into a number of construction activities and further into construction resources such as labor, materials, equipment, and administration, whose utilization determines each activity's time, cost, and quality parameters; the project's overall time/cost/quality performance is finally formed from these, as shown in Figure 1.
Figure 1: Breakdown and formation structure of time-cost-quality in a construction project.
The essential relationships among the time, cost, and quality parameters in construction activities are assumed and discussed as follows.
Relationship between Labor Productivity and Labor Quality.
Assign a construction team to excavate earth, erect formwork, mix concrete, or do other jobs; their working quality will decline if they intend to increase productivity [14], and an approximately linear relationship between labor productivity and labor quality is observed here. A more complex relationship function between labor productivity and labor quality is also considered in the later case study: where LQ(i) = actual quality level of construction labor working in activity (i), LQ(i) ∈ (LQ_min, LQ_max); LQ_max = maximum quality level of construction labor working in activity (i); LQK = (LPRD_max − LPRD_min)/(LQ_max − LQ_min); LPRD_min = minimum productivity level of construction labor working in activity (i); LPRD_max = maximum productivity level of construction labor working in activity (i); LPRD(i) = actual productivity level of construction labor working in activity (i), LPRD(i) ∈ (LPRD_min, LPRD_max).
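The exact form of the linear relation is not reproduced above; a minimal sketch consistent with the stated definition of LQK, and with quality declining as productivity increases, could look like the following (all parameter values invented).

```python
# Hedged sketch of the assumed linear labor productivity-quality trade-off:
# quality falls linearly from LQ_max toward LQ_min as productivity rises from
# LPRD_min toward LPRD_max, with the slope implied by the definition of LQK.
# This form and the numbers are assumptions, not the paper's exact equation.
def labor_quality(lprd, lprd_min, lprd_max, lq_min, lq_max):
    lqk = (lprd_max - lprd_min) / (lq_max - lq_min)
    return lq_max - (lprd - lprd_min) / lqk

# e.g., a crew pushed to 28 units/day between bounds of 20-40 units/day
print(labor_quality(lprd=28.0, lprd_min=20.0, lprd_max=40.0, lq_min=0.7, lq_max=0.95))  # ~0.85
```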
Relationship between Material Quality and Material Cost.
The time of a construction activity is mainly determined by its job quantities and productivities rather than its material or machine quality, and the material can hardly interfere with the activity's time, either [15]. Thereafter an approximate linear relationship between material quality and material cost is determined.
The relationship between quality and quality cost for a manufacturing company
Relationship among Equipment Productivity, Quality, and Cost.
Construction equipment is a crucial factor of construction techniques for increasing construction quality, reducing cost, and shortening time. In order to calculate the construction time variation caused by equipment, a modified factor to labor productivity caused by equipment, DEK(i), is introduced [14]: where PRD(i) is the actual productivity in activity (i); DEK(i) is a modified factor to the labor productivity in activity (i) due to changes in construction equipment parameters; LPRD(i) is the labor productivity in activity (i).
A better equipment quality performance will improve construction productivity [15], so the modified factor DEK(i) can be derived from the equipment quality EQ(i). The relationship between construction equipment quality and equipment cost [15] is also assumed to be an approximately linear function, just like that for construction material: where EQ(i) = actual quality level of construction equipment in activity (i), EQ(i) ∈ (EQ_min, EQ_max); EQ_min = minimum quality level of construction equipment in activity (i); EQ_max = maximum quality level of construction equipment in activity (i); EQK = (EC_max − EC_min)/(EQ_max − EQ_min); EC_min = minimum cost of construction equipment in activity (i); EC_max = maximum cost of construction equipment in activity (i); EC(i) = actual cost of construction equipment in activity (i), EC(i) ∈ (EC_min, EC_max). Work overtime usually decreases construction productivity and increases the hourly cost rate [16]. The construction equipment cost EC(i) will then be modified by a factor equal to 1 + (DPK(i) − 1) × EOK, the equipment cost modification factor during overtime due to extra or additional construction equipment, where EOK = productivity decrease rate during overtime per unit time (e.g., hour), normally 20%.
Relationship between Construction Administration Quality and Administration Cost.
A construction team consisting of sufficient crew members could improve construction quality and consume a reasonable cost [17], but the construction team hardly impacts on construction productivities. Therefore it is assumed that administration cost and administration quality are an approximate linear function: where AQ ( ) = actual quality level of construction administration ( ) in activity ( ), AQ ( ) ∈ (AQ min , AQ max ); AQ min = minimum quality level of construction administration ( ) in activity ( ); AQ max = maximum quality level of construction administration ( ) in activity ( ); AQK = (AC max − AC min )/(AQ max − AQ min ); AC min = minimum cost of construction administration ( ) in activity ( ); AC max = maximum cost of construction administration ( ) in activity ( ); AC ( ) = actual cost of construction administration ( ) in activity, AC ( ) ∈ (AC min , AC max ).
Since work overtime might increase administration cost, the construction administration cost will be modified by factor : where = administration cost modification factor during work overtime because of extra or additional construction equipment, = ACRK + (1 − ACRK )/DPK ( ) ; ACRK = administration hourly cost rate factors in activity ( ) when overtime working is applicable, usually 2.0.
Calculating Labor Cost and Activity Time.
When labors are working together as a construction crew in a construction activity ( ), their working duration is the time of the construction activity, which could be estimated by the construction quantities (QNT) and the actual productivity, overtime factors: Labor cost in construction activity ( ) would be determined by standard daily cost and overtime work cost [18]. It is assumed that work overtime is paid in an hourly cost rate comparing with standard labor cost: where LC ( ) = labor cost in construction activity ( ); LCD = standard labor cost per unit time (e.g., day) in construction activity ( ); LCRK = labor cost rate factors in activity ( ) when overtime work is applicable, usually 2.0.
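One plausible reading of these relationships is sketched below, with DPK interpreted as daily working hours divided by the 8-h shift, overtime hours paid at LCRK times the standard hourly rate, and productivity reduced by EOK during overtime; these interpretations and all numbers are assumptions, not the paper's exact formulas.

```python
# Hedged sketch of estimating an activity's duration and labor cost from the
# job quantity, crew productivity, and an overtime factor. Interpretations:
# DPK = daily working hours / 8-h shift, EOK = productivity loss during
# overtime, LCRK = overtime pay multiplier. These readings and all numbers
# are assumptions, not the paper's exact formulas.
def activity_time_days(qnt, prd_per_day, dpk, eok=0.2):
    # effective daily output: normal shift plus overtime hours at reduced productivity
    effective = prd_per_day * (1.0 + (dpk - 1.0) * (1.0 - eok))
    return qnt / effective

def labor_cost(duration_days, lcd_per_day, dpk, lcrk=2.0):
    # standard daily pay plus overtime hours paid at lcrk times the hourly rate
    overtime_hours = (dpk - 1.0) * 8.0
    return duration_days * (lcd_per_day + overtime_hours * (lcd_per_day / 8.0) * lcrk)

t = activity_time_days(qnt=240.0, prd_per_day=30.0, dpk=1.5)   # e.g., 240 m3 of earthwork
print(round(t, 1), "days, cost", round(labor_cost(t, lcd_per_day=400.0, dpk=1.5)))
```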
Formulating PTCQTP Model
The purpose of formulation of a PTCQTP model is to optimize comprehensive construction time-cost-quality problem and to provide construction managers with a deliberate tool to balance critical construction resources in competitive construction industry. The project quality, time, and cost are quantified as follows.
Calculating Overall Quality.
Since a construction project comprises various resources such as materials, machines, method, labors, and even management, the overall quality of a construction project is calculated by each activity's quality AQP ( ) and its quality weight WT : where = the general quality of a construction project; = number of activities in a construction project; WT = quality weight indicator of each construction activity ( ), and ∑ =1 WT = 1.0; AQP ( ) = quality performance of construction activity ( ) calculated by its labor quality, material quality, construction equipment quality, and administration quality, AQP ( ) = LWT × LQ ( ) + MWT × MQ ( ) + EWT × EQ ( ) + AWT × AQ ( ) ; LWT , MWT , EWT , AWT = weight indicators of construction labor, material, equipment, and administration in activity ( ) respectfully; LQ ( ) , MQ ( ) , EQ ( ) , AQ ( ) = quality of construction labor, material, equipment, and administration in activity ( ) respectfully.
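A small sketch of this two-level aggregation, with invented weights and quality values, is given below.

```python
# Sketch of the two-level quality aggregation described above: each activity's
# quality AQP(i) is a weighted sum of its labor, material, equipment, and
# administration qualities, and the project quality is the WT-weighted sum over
# activities. Weights and quality values here are illustrative.
def activity_quality(lq, mq, eq, aq, lwt, mwt, ewt, awt):
    return lwt * lq + mwt * mq + ewt * eq + awt * aq

def project_quality(activity_qualities, weights):
    assert abs(sum(weights) - 1.0) < 1e-9, "activity quality weights must sum to 1"
    return sum(w * q for w, q in zip(weights, activity_qualities))

aqps = [activity_quality(0.85, 0.90, 0.80, 0.75, 0.3, 0.4, 0.2, 0.1),
        activity_quality(0.80, 0.95, 0.85, 0.80, 0.3, 0.4, 0.2, 0.1)]
print(round(project_quality(aqps, [0.6, 0.4]), 3))   # -> 0.858
```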
Calculating Overall Cost.
The overall cost of a construction project is added up with each construction activity's cost and its administration cost AC ( ) . Thereby the overall cost is calculated as follows: where = the overall cost of a construction project; LC ( ) , MC ( ) , EC ( ) , AC ( ) = the cost of labor, materials, equipment, and administration in construction activity ( ) respectfully; = number of all construction activities.
Calculating Overall Time in Construction Project.
The overall time of a construction project can be easily calculated over the nodes (activities) of the acyclic digraph network, where EST(i) = max_{h=1,...,i−1}(EST(h) + Dur(h)) is the earliest starting time of activity (i) derived from its predecessors, and the first EST(1) = 0.
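A compact sketch of this forward pass on a toy activity-on-node network (activity names and durations invented):

```python
# Sketch of the forward pass over an activity-on-node network: the earliest
# start of an activity is the maximum of (earliest start + duration) over its
# predecessors, and the project time is taken at the last finishing activity.
# The small network below is illustrative.
def project_time(durations, predecessors):
    """durations: {activity: days}; predecessors: {activity: [preceding activities]}"""
    est = {}
    def visit(a):
        if a not in est:
            preds = predecessors.get(a, [])
            est[a] = max((visit(p) + durations[p] for p in preds), default=0.0)
        return est[a]
    return max(visit(a) + durations[a] for a in durations)

durs = {"excavate": 4, "foundation": 6, "frame": 10, "roof": 3}
preds = {"foundation": ["excavate"], "frame": ["foundation"], "roof": ["frame"]}
print(project_time(durs, preds), "days")   # -> 23
```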
Implementation of the PTCQTP Model
The PTCQTP model is to minimize the project overall time while conforming to the requirements of construction quality standards within a specified budget, which can be stated as follows: the project time is minimized subject to an upper bound on the overall cost (the specified maximum budget) and a lower bound on the overall quality (the minimum quality requirement), over X = [DPK(1), . . .], a vector of all decision variables in the PTCQTP model. In order to solve this comprehensive time-cost-quality tradeoff problem, a global optimization algorithm is necessary. The genetic algorithm is widely applied in optimization, and pattern search is a direct search method; both can, in principle, address global optimization problems. Tests of different algorithms developed for this model reveal that the pattern search algorithm cannot solve this complex nonlinear programming problem, because direct search often falls into local optima, whereas the genetic algorithm can find a global optimum whose solutions are not precise but acceptable.
The genetic algorithm has been widely applied in the previous literature; therefore, the development process of the genetic algorithm for the PTCQTP model is not detailed here.
Example Illustration.
A typical three-story brick-concrete house with a concrete raft slab foundation serves as the study example to illustrate application of the PTCQTP model, as shown in Figure 2. The construction lot is nearly 300 square meters (m²), the depth of the shallow foundation is one-half meter, and the earthwork volume to be moved is 240 m³. The first floor is 120 m², the second floor is about 105 m², and the third floor is 90 m².
The building consists of 20 construction activities, and the construction procedure is shown in an activity-on-node network in Figure 3. Each activity has a number of possible resource options and work options that can be used to construct the activity, as shown in Table 1. Construction resources in each activity are labor, materials, equipment, and administration. The normal working shift is eight hours a day, and the maximum overtime is an additional four hours a day. During overtime work, construction laborers and administrators are paid double (LCRK = 2.0 and ACRK = 2.0).
Since construction materials weigh heavily on the overall quality of a construction project, the quality weight indicators of construction materials are larger than the other indicators. In this case, the quality weight indicators of administration are the least important compared with the other indicators.
After deliberating on all construction activities, the 20 activities are grouped into 7 works: earth work, foundation work, 1st story work, 2nd story work, 3rd story work, and roof work. The quality weight indicators of the group works (GWT) are suggested first, and then the quality weight indicators of all 20 construction activities (AWT) are assigned from the quality weight indicators of their groups. All quality weight indicators applied in this building case are shown in Table 2.
If the overall quality objective is 0.8 or above and the overall cost objective is $350,000 or below, the optimal overall time suggested by the PTCQTP model is 64 days.
All optimized construction arrangements and resource utilizations suggested by this model are available; for example, the optimal working hours of each activity are shown in Figure 4. The optimal working arrangement reveals that nearly all activities are carried out with overtime working hours since the construction time is limited. Although overtime work demands a higher labor cost, it shortens construction time while securing construction quality.
The optimized material quality performances in the 20 construction activities vary considerably, as shown in Figure 5. For example, materials in activity (6) "Refill foundation earth" are required to maintain the highest quality level (0.95), while activities (10) "Build block in 1st story" and (12) "Install reinforcing for 2nd story" only need the lowest quality level (0.7). Obviously, the lower quality requirement saves cost.
A part of the optimized results (shown in Table 3) is excerpted from numerous calculations in which construction cost and quality are confined within the potential solution boundaries. A visual tradeoff among construction time, cost, and quality is presented in Figure 6.
When a moderate quality performance, for example, 0.86, is set and the overall cost is $320,000, the overall time is about 90 days by the tradeoff model. The overall cost will climb to $390,000 if the overall time is shortened to 54 days and the quality objective is unchanged.
Tradeoff curves between the overall cost and time under different quality objectives are shown in Figure 7, which suggests that the traditional linear assumptions between project cost and time are broadly reasonable.
An application example of a three-story house construction is introduced to illustrate the implementation of the PTCQTP model and demonstrate its advantages in optimizing the tradeoff of construction time, cost, and quality. The example provides useful three-dimensional and two-dimensional visual relationships among project time, cost, and quality, together with resource utilization planning, which enable construction managers and engineers to make winning decisions in fierce construction competition. The computational time-cost-quality curves in visual three-dimensional graphics from the case study support the traditional cost-time assumptions and demonstrate the sophistication of this time-cost-quality tradeoff model.
Future studies will assume more sophisticated relationships among the cost, time, and quality of different project resources, test more project cases, and seek other, more efficient optimization algorithms.
|
v3-fos-license
|
2024-06-08T05:17:42.171Z
|
2024-06-07T00:00:00.000
|
270310656
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": null,
"oa_url": null,
"pdf_hash": "386df6e9495171489142c2dc8ee74850e01f017e",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46595",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"sha1": "386df6e9495171489142c2dc8ee74850e01f017e",
"year": 2024
}
|
pes2o/s2orc
|
The potential mechanism of HIF-1α and CD147 in the development of triple-negative breast cancer
Background: Triple-negative breast cancer (TNBC) is a subtype of breast cancer with a poor prognosis, and the outcomes of common therapy were not favorable. Methods: The samples of 84 patients with TNBC and 40 patients with breast fibroadenoma were collected in the pathology department specimen library of our hospital. The prognosis of patients was obtained through outpatient follow-up information, telephone and WeChat contacts, and medical records. The mRNA expression was analyzed using bioinformation and quantitative real-time polymerase chain reaction (qPCR). The protein expression was determined by hematoxylin-eosin staining and immunohistochemical staining. The results of survival analysis were visualized using Kaplan–Meier curves. Results: The immunohistochemical staining showed that hypoxia-inducible factor-1alpha (HIF-1α) was mainly distributed in the nucleus and cytoplasm, while CD147 is mainly distributed in cell membrane and cytoplasm. The qPCR results exhibited that the expression level of HIF-1α and CD147 in TNBC tissue was significantly higher than that in breast fibroadenoma tissue. The expression of HIF-1α was related to the histological grade and lymph node metastasis in TNBC, and the expression of CD147 was related to Ki-67, histological grade and lymph node metastasis. There was a positive relationship between the expression of CD147 and HIF-1α. The upregulated expression of CD147 was closely related to the poor prognosis of OS in TNBC. Conclusion: CD147 could be a biomarker for the prognosis of TNBC and closely related to the expression of HIF-1α.
Introduction
According to the GLOBOCAN database developed by the International Agency for Research on Cancer, the number of cases of breast cancer has surpassed that of lung cancer, making it the cancer type with the highest number of cases, and the incidence rate of breast cancer also ranks first in this list. [1] At present, the molecular typing, including Luminal A type, Luminal B type, human epidermal growth factor receptor 2 over-expression, and triple-negative breast cancer (TNBC), is closely related to the prognosis of BC. [2,3] As previously reported, TNBC is one of the highly aggressive types and accounts for 15% to 20% of BC with a poor prognosis. [4,5] Chemotherapy is now the main treatment for TNBC patients, but a higher risk of chemotherapy resistance and more serious side effects were observed in the treatment. [6] Therefore, it is urgent to explore the mechanism of the development of TNBC and find potential therapeutic targets.
Hypoxia-inducible factor-1alpha (HIF-1α) is a transcription factor with an essential role in the response to hypoxia, [7] and it has been reported that the intra-tumoral hypoxia has a more negative role in TNBC than other subtypes of BC. [8] At present, some studies identified that HIF-1α was involved in tumor angiogenesis and metabolic reprogramming, as well as multiple steps in the process of breast cancer invasion and metastasis. [9] Moreover, Fourie et al found that the drug resistance of BC patients can be improved by regulating the expression of HIF-1α, [10] which indicated that HIF-1α may be a promising target for the treatment of TNBC.
CD147, also known as an extracellular matrix metalloproteinase inducer, is a cell-surface glycoprotein and is widely expressed in various cancers. [11] The abnormal expression of CD147 was observed in various cancers including BC. [12] Besides, it promotes tumor invasion, growth, and metastasis by stimulating matrix metalloproteinase synthesis in neighboring fibroblasts and inhibiting cell apoptosis. [13] Furthermore, it could interact with some oncogenic proteins, resulting in the deterioration of tumor malignancy and drug resistance. [14] However, the function of CD147 in TNBC was not clear.
This work was supported by the Wu Jieping Medical Foundation (No. 320.6750.2021-10-32) and the Xiaogan Municipal Science and Technology Commission (XGKJ2021010047). The authors have no conflicts of interest to disclose. The datasets generated during and/or analyzed during the current study are not publicly available but are available from the corresponding author on reasonable request.
The study was performed in line with the principles of the
Thus, in this study, we tried to identify the expression of HIF-1α and CD147 in TNBC and analyzed their prognostic value.
Data collection and processing
We collected the TNBC patient-related case data of the target experimental wax blocks in the medical record system and searched for the target wax blocks with complete tissue structure in the pathology department specimen library. The wax block embedding boxes were grouped and coded. The wax on the surface of the wax block tissue was removed by a slicer, followed by slicing, with the thickness of each paraffin section being about 3 to 4 μm. The paraffin sections were dispersed and placed in the floating pool of the spreading machine at 40°C to 45°C. Then the paraffin sections were fished out and adhered to anti-falling glass slides. After the sections were air-dried, they were placed in a constant-temperature blast drying oven for 20 minutes at 60°C to 65°C.
In this study, the patients in the TNBC group were followed up after the operation, and the follow-up time was until June 2022. The follow-up information, including disease free survival (DFS) and overall survival (OS), was recorded through outpatient follow-up information, telephone and WeChat contacts, and medical records.
Hematoxylin-eosin staining
The sections were dewaxed in xylene for 10 minutes. Then, these sections were subjected to gradient alcohol hydration (100% for 2 min, 95% for 2 min, 80% for 2 min) and washed for 2 minutes with water. Next, the sections were stained with hematoxylin for 5 minutes and washed for 10 minutes with water. The 0.5% hydrochloric acid alcohol was used to differentiate for 5 seconds. In order to dye hematoxylin blue, the sections were soaked in saturated lithium carbonate for several seconds followed by a water wash for 15 seconds. Then, after the sections were subjected to gradient alcohol dehydration (80% for 2 min, 95% for 2 min, 100% for 2 min), they were soaked in xylene for 2 minutes. Finally, neutral gum was used for sealing the sections.
Immunohistochemical staining
The sections were put into xylene for 5 minutes. Then, the samples were subjected to gradient alcohol hydration (100% for 10 min, 90% for 10 min, 70% for 10 min). After the samples were washed 3 times with PBS for 3 minutes, they were flicked dry and put into a wet box. Then, hydrogen peroxide (3%) was added to the sections for incubation for 15 minutes at room temperature. After the sections were washed 3 times with PBS for 3 minutes and flicked dry, they were incubated with 100 μL primary antibodies against HIF-1α or CD147 for 15 minutes in the dark at room temperature. Next, the sections were incubated with 100 μL secondary antibodies for 45 minutes at room temperature after they were washed 3 times with PBS for 3 minutes and flicked dry. The sections were color developed with 100 μL DAB working solution. Sections were counterstained with hematoxylin, differentiated with alcohol hydrochloride (0.1%), and turned back to blue using ammonia (0.5%). The sections were dehydrated by graded ethanol (80% for 2 min, 95% for 2 min, 100% for 2 min), cleared 3 times for 3 minutes with xylene, and coverslipped before being visualized with an Olympus microscope.
Quantitative real-time polymerase chain reaction
Total RNA Extractor was applied to extract total RNA from normal and CRC cells and tissues. After the A260/A230 and A260/A280 ratios were measured, reverse transcription of the RNA samples and quantitative real-time polymerase chain reaction were performed with SwsScript All-in-One First-strand-cDNA-synthesis SuperMix for quantitative real-time polymerase chain reaction (qPCR) (One step gDNA Remover) in the 2720 Thermal Cycler. The expressions of CD147 and HIF-1α were determined using the 2^(−ΔΔCt) method, with glyceraldehyde-3-phosphate dehydrogenase chosen as the reference control.
PCR reaction conditions were set in accordance with the manufacturer's instructions: 60 seconds at 95°C, then 20 seconds at 95°C, 20 seconds at 55°C, and 30 seconds at 72°C for 40 cycles. The primer sequences are shown in Table 1.
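For illustration only, a minimal Python sketch of the 2^(−ΔΔCt) calculation described above is given below, assuming GAPDH as the reference gene; the Ct values are invented placeholders, not measurements from this study.

def fold_change(ct_target_sample, ct_ref_sample, ct_target_control, ct_ref_control):
    # ΔCt = Ct(target) - Ct(reference) in each group; ΔΔCt = ΔCt(sample) - ΔCt(control)
    ddct = (ct_target_sample - ct_ref_sample) - (ct_target_control - ct_ref_control)
    return 2 ** (-ddct)

# Example: CD147 vs GAPDH in a TNBC sample and a fibroadenoma control
print(round(fold_change(24.1, 18.0, 26.5, 18.2), 2))   # about 4.6-fold upregulation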
Statistical analysis
All experimental data in this study were analyzed with SPSS 26.0 software. The expression of HIF-1α and CD147 in TNBC and breast fibroadenoma, and the relationship between the expression of the 2 indexes and the clinicopathological factors of TNBC patients, were analyzed by χ2 tests (the Fisher exact test was used when the theoretical value was less than 5). The correlation between the 2 indexes was analyzed by the Spearman correlation test. The relationship between the expression of the 2 indexes and the prognosis of patients was analyzed by the Kaplan-Meier method, and the Log-rank test of survival analysis was carried out. Using the database of breast cancer gene-expression miner v4.9 (BC-genexminer v4.9), 1988 TNBCs with complete HIF-1α information were screened from 10,923 cases of breast cancer and 2022 TNBCs with complete CD147 information were screened from 10,923 cases of breast cancer, and the relevant survival analysis curves (DFS, OS) were obtained through Kaplan-Meier analysis. Cox multivariate analysis was used to analyze the expression of the 2 indexes, the clinicopathological risk factors, and the prognosis of TNBC patients. All P values were from two-sided statistical tests, and P < .05 indicated that the difference was statistically significant.
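As a rough illustration of this survival workflow (Kaplan-Meier curves plus a log-rank test), the Python sketch below uses the lifelines package instead of SPSS; the follow-up times and event flags are invented placeholders, not patient data from this study.

import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# OS in months and death indicator (1 = event) for two hypothetical groups
t_pos = np.array([12, 20, 26, 30, 35, 40, 48]); e_pos = np.array([1, 1, 1, 0, 1, 0, 1])
t_neg = np.array([30, 38, 45, 50, 55, 60, 66]); e_neg = np.array([0, 1, 0, 0, 1, 0, 0])

kmf = KaplanMeierFitter()
kmf.fit(t_pos, event_observed=e_pos, label="CD147 positive")
ax = kmf.plot_survival_function()
kmf.fit(t_neg, event_observed=e_neg, label="CD147 negative")
kmf.plot_survival_function(ax=ax)

result = logrank_test(t_pos, t_neg, event_observed_A=e_pos, event_observed_B=e_neg)
print(result.p_value)   # P < .05 would indicate a significant OS difference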
The characteristics of the participants
In this study, a total of 84 TNBC patients were enrolled.The baseline characteristics of patients with TNBC were shown in
The prognosis of hypoxia-inducible factor-1alpha and CD147 in triple-negative breast cancer using bioinformation
Using the database of breast cancer gene-expression miner v4.9 (BC-genexminer v4.9), 1988 TNBCs with complete HIF-1α information and 2022 TNBCs with complete CD147 information were screened from 10,923 cases of BC. The Kaplan-Meier curves exhibited that there was no significant correlation between the expression of HIF-1α and the prognosis of TNBC (DFS: P = .290 and OS: P = .125) (Fig. 1A-B). The expression of CD147 was significantly correlated with the prognosis of TNBC (DFS: P < .001 and OS: P < .001) (Fig. 1C-D).
The prognosis factors in triple-negative breast cancer
Then, we tried to explore the function of HIF-1α, CD147, and clinical characteristics in TNBC using multivariate Cox analysis. The results showed that only CD147 could independently predict the prognosis of TNBC (P < .05) (Table 3).
The expression of hypoxia-inducible factor-1alpha in breast cancer
Next, we identified the expression of HIF-1α in different types of breast cancer. From Table 4, it was obvious that 63 patients had positive expression of HIF-1α among 84 TNBC patients, while 11 patients had positive HIF-1α expression among 40 patients with fibroadenoma of the breast. The further experiments showed that HIF-1α was expressed in the nucleus and cytoplasm, which exhibited pale yellow and brownish yellow granules, and a few cells were strongly stained with dark brownish yellow (Fig. 2A-B). Besides, the qPCR results identified that the expression of HIF-1α was significantly upregulated in TNBC samples compared with normal samples (Fig. 3, P < .01).
The correlation between hypoxia-inducible factor-1alpha expression and clinical characteristics in triplenegative breast cancer
As shown in Table 5, it can be seen that HIF-1α was positively expressed in 43 (49.4%) patients with grade I + II and 20 (90.1%) patients with grade III. Besides, there were 13 (50.0%) patients with positive HIF-1α expression and 50 (86.2%) patients with positive HIF-1α expression.
The expression of CD147 in breast cancer
Additionally, we also identified the expression of CD147 in different types of breast cancer. As shown in Table 6, there was a significant difference in CD147 expression between TNBC and fibroadenoma of the breast (67 [79.8%] vs 10 [25.0%], P < .001).
The immunohistochemical results showed that CD147 was usually expressed in the cell membrane and cytoplasm, appearing as brownish yellow particles (Fig. 4A-B). Furthermore, the results of qPCR showed that CD147 expression was upregulated in TNBC in comparison with normal samples (Fig. 5, P < .05).
The correlation between CD147 expression and clinical characteristics in triple-negative breast cancer
Furthermore, we determined the correlation between CD147 expression and clinical characteristics (Table 7). The results exhibited that the number of patients with positive CD147 expression in grade I + II was greater than that in grade III, with a higher positive rate in grade III (46 [74.2%] vs 21 [95.5%], P < .05). The patients with a Ki-67 value ≥ 20% had a higher ratio of positive CD147 expression than the patients with Ki-67 < 20%. Besides, CD147 was positively expressed in 48 (82.8%) patients with lymph node metastasis and 19 (73.1%) patients without lymph node metastasis (P < .05).
The correlation between hypoxia-inducible factor-1alpha and CD147 in triple-negative breast cancer
The further analysis showed that there was a significant relationship between the expression of HIF-1α and CD147 (Table 8, P < .001). Moreover, the survival analysis showed that HIF-1α expression was not related to the prognosis of TNBC (Fig. 6A-B), while CD147 was only significantly related to the OS in TNBC patients (Fig. 6D, P < .05 and Fig. 6C, P > .05).
Discussion
TNBC is a highly invasive malignant tumor, and its prognosis is worse than that of other subtypes of breast cancer. [15] Although TNBC is sensitive to chemotherapy, patients have limited benefit from it. [16] The main reason is that the tumor has strong drug resistance, the side effects of chemotherapy are obvious, and it is easy to relapse and metastasize. [17,18] At present, chemotherapy is still its main adjuvant treatment. How to improve the prognosis of TNBC patients, reduce their recurrence and metastasis rate, and transform the disease into a chronic condition is the focus of current research. Therefore, finding new effective therapeutic targets or molecular markers is the most important step to improve the prognosis of TNBC patients.
Yang et al found that the positive expression rate of HIF-1α in TNBC tissue was 45.3%. The expression of HIF-1α was significantly correlated with the patient's age, histological grade, and lymph node status, but not with the expression of Ki-67. The multivariate Cox regression analysis showed that HIF-1α expression, histological grade, and lymph node status were independent risk factors for the postoperative survival of TNBC patients. [19] Besides, Ge et al claimed that the positive expression rate of HIF-1α in TNBC tissue was 86.7%, which was significantly higher than that in its adjacent tissues (15.0%, 6/40). The expression of HIF-1α was related to the histological grade of TNBC and whether lymph node metastasis occurred or not, but not related to the patient's age, tumor size, and menopause. [20] Zhang et al demonstrated that the positive expression rate of HIF-1α in 103 patients with TNBC was 54.4%, and the expression of HIF-1α was significantly correlated with patients' age, tumor size, histological grade, lymph node metastasis, and tumor stage. Kaplan-Meier survival curves and Log-rank analysis showed that HIF-1α can affect the 3-year disease-free survival of TNBC patients. The multivariate Cox regression analysis showed that HIF-1α expression was related to the 3-year disease-free survival of TNBC patients, but its value as an independent risk factor for prognosis was not found. [21] In this study, the expression of HIF-1α in 84 cases of TNBC and 40 cases of breast fibroadenoma was detected by the immunohistochemical technique.
The positive rate of HIF-1α in the TNBC group was 75%, while that in the control group was 27.5%; the expression of HIF-1α in TNBC was significantly higher than that in breast fibroadenoma. In exploring the relationship between the expression of HIF-1α in TNBC tissue and the different clinicopathological factors of patients, we found that the expression of HIF-1α was related to the histological grade and lymph node metastasis of TNBC, but had no significant correlation with the patient's age, tumor size, vascular tumor thrombus, and Ki-67 expression, which was basically consistent with previous studies. In the survival analysis, we found that the DFS and OS times of the HIF-1α negative group were longer than those of the HIF-1α positive group, but there was no statistical difference between the 2 groups. The multivariate Cox regression analysis showed that the expression of HIF-1α could not be used as an independent prognostic factor for TNBC patients. The survival analysis curves (OS, DFS) based on the bioinformation also suggested that the expression of HIF-1α was not correlated with the prognosis of TNBC.
In the research of Wang et al, the positive expression rate of CD147 in TNBC tissues was 81.82% and was significantly correlated with high histological grade, high expression of Ki-67, and positive expression of p53. The results of the survival analysis showed that the expression intensity of CD147 in TNBC tissue was related to the OS and DFS times. Multivariate Cox regression analysis showed that CD147 and Ki-67 were risk factors for the prognosis of TNBC: the higher the positive expression rate, the shorter the survival time of patients. [22] Zhao et al found that expression of CD147 was observed in all 147 TNBC samples and was significantly correlated with histological grade, tumor size, Ki-67, and lymph node metastasis in TNBC. The study also found that CD147 was closely related to the PFS and OS of TNBC. [23] In this study, CD147 was positively expressed in 79.8% of TNBC tissues, while the positive rate of CD147 in the control group was 25.0%, which indicated that the expression of CD147 in the TNBC group was significantly higher than that in breast fibroadenoma tissue, consistent with previous studies. In exploring the relationship between the expression of CD147 in TNBC tissue and the different clinicopathological factors of patients, we found that the expression of CD147 was related to histological grade, Ki-67, and lymph node metastasis, but not to the patient's age, tumor size, and vascular tumor thrombus, which was basically consistent with previous studies. In the survival analysis, we found that the OS of the CD147 negative group was significantly better than that of the CD147 positive group.
In addition, we also found that there was a close relationship between HIF-1α and CD147 expression. A previous study revealed that the expression of HIF-1α was downregulated by CD147, and Wang et al demonstrated that CD147 could induce angiogenesis through regulating HIF-1α expression. [24] Thus, we suggest that CD147 could regulate the expression of HIF-1α to promote TNBC development through inducing angiogenesis. Although there were indeed some limitations of the study, such as the limited sample size, single center, and retrospective design, we have probed into the role of HIF-1α and CD147 in the prognosis of patients with TNBC, which might initiate novel thoughts for subsequent mechanistic research. In conclusion, we found that HIF-1α and CD147 were upregulated in TNBC, and CD147 was closely related to the OS of TNBC patients. Moreover, we found a close relationship between CD147 and HIF-1α, which suggests a potential mechanism of the two in the development of TNBC.
Figure 1. The correlation between the expression of HIF-1α and CD147 and the prognosis of TNBC. (A) The correlation between the expression of HIF-1α and DFS. (B) The correlation between the expression of HIF-1α and OS. (C) The correlation between the expression of CD147 and DFS. (D) The correlation between the expression of CD147 and OS. DFS = disease free survival, OS = overall survival, TNBC = triple-negative breast cancer.
Figure 2. The positive expression characteristics of HIF-1α in TNBC and fibroadenoma of the breast. (A) The expression of HIF-1α in TNBC. (B) The expression of HIF-1α in fibroadenoma of the breast. HIF-1α = hypoxia-inducible factor-1alpha, TNBC = triple-negative breast cancer.
Figure 4. The positive expression characteristics of CD147 in TNBC and fibroadenoma of the breast. (A) The expression of CD147 in TNBC. (B) The expression of CD147 in fibroadenoma of the breast. TNBC = triple-negative breast cancer.
Figure 6. The prognostic value of HIF-1α and CD147 in TNBC. (A) K-M curve of HIF-1α for DFS of TNBC patients. (B) K-M curve of HIF-1α for OS of TNBC patients. (C) K-M curve of CD147 for DFS of TNBC patients. (D) K-M curve of CD147 for OS of TNBC patients. DFS = disease free survival, OS = overall survival, TNBC = triple-negative breast cancer.
Table 1. All primers used in the qPCR experiments in this study.
Table 2. The clinical characteristics of patients. Histological grading: all sections were graded histologically by 2 senior pathologists according to the latest Nottingham grading standard; Ki-67 ≥ 20% was considered high expression.
Table 3. Multivariate Cox analysis of prognostic factors in 84 patients with TNBC.
Table 4. The positive rate of HIF-1α expression in TNBC and fibroadenoma of the breast.
Table 5. The correlation between HIF-1α expression and clinical characteristics.
Table 6. The expression of CD147 in TNBC and fibroadenoma of the breast.
Table 7. The correlation between CD147 expression and clinical characteristics.
|
v3-fos-license
|
2023-09-01T15:14:59.483Z
|
2023-08-29T00:00:00.000
|
261407440
|
{
"extfieldsofstudy": [
"Computer Science",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/1424-8220/23/17/7508/pdf?version=1693355607",
"pdf_hash": "c83310f558843f1a3a2a93470f9afd478c8aafaa",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46596",
"s2fieldsofstudy": [
"Agricultural and Food Sciences",
"Engineering"
],
"sha1": "5af2f17cc1ef5f518b4965bfd07bf2ff5dfa4541",
"year": 2023
}
|
pes2o/s2orc
|
An Enhanced Food Digestion Algorithm for Mobile Sensor Localization
Mobile sensors can extend the range of monitoring and overcome static sensors' limitations, and they are increasingly used in real-life applications. Since there can be significant errors in mobile sensor localization using Monte Carlo Localization (MCL), this paper improves the food digestion algorithm (FDA) and applies the improved algorithm to the mobile sensor localization problem to reduce localization errors and improve localization accuracy. First, this paper proposes three inter-group communication strategies, based on the topology between groups, to speed up the convergence of the algorithm, and combines them with a compact strategy. Finally, the improved algorithm is applied to the mobile sensor localization problem, reducing the localization error and achieving good localization results.
Although metaheuristics are excellent at solving problems with real-world applications, they are not a panacea, and, as mentioned in the No Free Lunch Theorem [18], each optimization algorithm may be good at solving different problems. Therefore, researchers are constantly exploring new optimization algorithms. For example, Holland proposed the Genetic Algorithm (GA) in 1975 based on Darwinian evolutionary theory [19]. Dorigo et al. proposed the Ant Colony Optimization (ACO) in 1992 [20]. Storn et al. proposed Differential Evolution (DE) in 1995 [21]. Kennedy and Eberhart proposed the Particle Swarm Optimization (PSO) algorithm in 1995 [22]. Karaboga et al. proposed the Artificial Bee Colony algorithm (ABC) in 2005 [23]. Yang et al. proposed the Cuckoo Search (CS) in 2009 [24]. Rashedi et al. proposed the Gravitational Search Algorithm in 2009 [25]. Yang et al. proposed the Bat Algorithm (BA) in 2010 [26]. Mirjalili et al. proposed the Grey Wolf Optimizer (GWO) in 2014 [27]. Mirjalili et al. proposed the Sine Cosine Algorithm (SCA) in 2016 [28]. Abualigah et al. proposed the Aquila Optimizer (AO) in 2021 [29]. Song et al. proposed the Phasmatodea Population Evolution algorithm (PPE) in 2021 [30]. Pan et al. proposed the Gannet Optimization Algorithm (GOA) in 2022 [31].
Numerous researchers have dedicated their efforts to enhancing the performance of metaheuristic algorithms.Among the various approaches, parallel and compact strategies have gained significant attention due to their simplicity and effectiveness.The parallel strategy emphasizes the grouping of populations, facilitating the exchange of information between groups to accelerate the algorithm's convergence and enhance its ability to discover optimal solutions accurately.On the other hand, the compact strategy involves mapping the population onto a probabilistic model and performing operations on the entire population through manipulations of this model.This approach offers notable benefits such as reduced computational time and memory usage.In this study, we propose a novel approach that combines both parallel and compact strategies to enhance the performance of the food digestion algorithm.We expect that this integrated methodology will effectively enhance the algorithm's ability to seek optimal solutions in the optimization process, leading to improved outcomes.
Numerous researchers have combined these two strategies to improve metaheuristic algorithms.In reference [32], the authors combine the parallel and compact strategies to enhance DE and utilize the enhanced algorithm for image segmentation, yielding superior outcomes.In reference [33], the authors initially introduce six enhancements to the compact strategy CS, subsequently selecting the algorithm with the most favorable results and incorporating the parallel strategy.Ultimately, the authors apply the improved algorithm to underwater robot path planning, which yields promising results.
Wireless sensor networks (WSNs) are self-organized communication systems consisting of multiple nodes that enable the monitoring of specific areas through multi-hop communication.In a static WSN, the nodes are randomly distributed and their locations remain fixed once determined.However, in practical environments, mobile sensor nodes are in greater demand.For instance, in target tracking applications, real-time positioning of moving targets is essential [34,35].The mobility of sensor nodes allows for an extended monitoring range, overcoming coverage gaps that may occur due to the failure of static nodes.Furthermore, the movement of nodes enables the network to discover and observe events more effectively, while also enhancing the communication quality among the sensor nodes [36].Despite the importance of mobile node localization, there is a relative scarcity of research in this area.Most localization methods developed for static sensor nodes are unsuitable for the mobile sensor localization problem, making the study of mobile sensor localization a current research focal point [37].Additionally, the study of outdoor mobile sensors holds particular significance due to the complex and ever-changing nature of the outdoor environment.
Based on the above reasons, this paper uses parallel and compact strategies to improve the food digestion algorithm and apply it to the outdoor mobile sensor localization problem.Section 2 mainly introduces the food digestion algorithm and mobile sensor localization techniques.Section 3 mainly introduces the implementation of the Parallel Compact Food Digestion Algorithm (PCFDA).Section 4 tests the performance of PCFDA.Section 5 uses PCFDA to optimize the error in mobile sensor localization.Section 6 gives the conclusion of this paper.
Related Works
This section mainly introduces the food digestion algorithm and the mobile sensor localization problem.
Food Digestion Algorithm
The food digestion algorithm mainly covers the process of food digestion in the mouth, stomach, and small intestine.This section describes the modeling processes in these three sites in detail.
Digestion in the Oral Cavity
The digestion of food in the mouth involves both physical and chemical digestion. The process of physical digestion mainly consists of the action of forces, which are represented as follows: F1 denotes the force on the food in the mouth, iter denotes the current number of iterations, Max_iter denotes the maximum number of iterations, and a is used to adjust the size of F1 and has a value of 1.5574. F1_d denotes forces with different sizes and directions, where rand is a random value in the range [0, 1].
The chemical digestion of food in the mouth is dominated by the digestion of starch by salivary amylase, and, considering the effect of substrate concentration on the enzymatic reaction, the modeling process is as follows: Em denotes the enzyme in the oral cavity, randomly setting half of the dimension values to 0 and the other half to 1, and r = randperm(D) denotes that the values of the D dimensions are randomly scrambled. Equation (4) is the Michaelis-Menten equation, which reflects the relationship between substrate concentration and reaction rate [38]. V represents the rate of the enzymatic reaction; V_max represents the maximum reaction rate, and its value is 2. S represents the substrate concentration, expressed as a sine function in Equation (5), where rand is a number in the range [0, 1] and π represents the mathematical constant pi. K_m is a characteristic constant of the enzyme, and in the oral cavity the value of K_m is 0.8. Therefore, the particle update equation in the oral cavity is as follows: Food_i^(t+1) denotes the ith particle at generation t + 1, Food_k^t denotes the kth particle at generation t, and k is a randomly selected particle from among the N particles. Food_R^t denotes the Rth particle of the tth generation, and R is chosen as shown in Equation (8). b is a constant with a value of 1.5. Food_i^t denotes the ith particle of the tth generation. Best_p represents the global optimal value. C1 and C2 are two random numbers that change with the number of iterations. ceil denotes rounding toward positive infinity, and randi is a random integer function.
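A minimal Python sketch of this chemical-digestion rate term is given below; it assumes the Michaelis-Menten relation V = V_max·S/(K_m + S) with V_max = 2 and K_m = 0.8 as stated above, and a sine-based substrate concentration that only approximates the paper's Equation (5).

import math, random

def oral_reaction_rate(v_max=2.0, k_m=0.8):
    # Substrate concentration S modeled as a sine of a random phase (assumed form)
    s = abs(math.sin(random.random() * math.pi))
    return v_max * s / (k_m + s)

print(oral_reaction_rate())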
Digestion in the Stomach
The digestion of food in the stomach also involves two processes: physical and chemical digestion. Physical digestion is primarily governed by the forces generated by the contraction and diastole of the stomach as well as peristalsis. The forces are expressed as follows: F2 represents the force on the food in the stomach, and F2_d represents a directed force, which takes values in the range [−2, 2]. The chemical digestion modeling process in the stomach is similar to that in the oral cavity; the difference is that different enzymes Em and different characteristic constants K_m are selected for each iteration, and in the stomach the value of K_m is 0.9. Therefore, the particle update equation in the stomach is as follows: Food_m^(t+1) is selected according to Equation (13): if the optimal fitness value of the first one-third of the updated particles is less than the global optimum, that particle is selected; otherwise, the globally optimal particle is perturbed and the perturbed particle is selected. Mean is calculated according to Equation (12).
Digestion in the Small Intestine
The digestion of food in the small intestine also involves two processes: physical and chemical digestion. Physical digestion is primarily governed by forces generated by peristalsis of the small intestine, which is expressed as follows: F3 represents the force on the food in the small intestine, a is a constant with a value of 1.5574, a1 is used to regulate the magnitude of the force and has a value of 1, and F3_d represents a directed force, which is a random value in the range [−2, 2]. Thus, the equation for particles updated in the small intestine is given in Equation (16).
The judgment condition for Food^(t+1) is calculated from Equation (17). Levy(D) denotes Lévy flight, which is calculated as follows: µ and δ are random numbers in the range [0, 1], and β is a constant whose value is 1.5. The food digestion algorithm simulates the process of food digestion in the three main digestive sites of the human body to construct the particle optimization process. In the oral cavity, particles always follow a random particle to update their positions, which promotes the diversity of particles. As the number of iterations increases, particles gradually select particles with better fitness values to update their positions. This selection enhances population diversity in the early iterations and facilitates rapid convergence in later stages.
In the stomach, particles follow the optimal particles from the previous site, or particles after perturbation, to update their positions. This accelerates the convergence process. Additionally, particles follow the average particles to update their positions, promoting particle diversity and preventing them from getting trapped in local optima.
In the small intestine, particles update their positions after the global optimum, enabling quick convergence. Furthermore, particles update their positions using the Lévy flight strategy, which helps avoid falling into local optima.
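For reference, a Mantegna-style Lévy step with beta = 1.5 is sketched below in Python; the paper's Equations (17)-(19) draw their random numbers from [0, 1], so this Gaussian-based variant is only an assumed stand-in used to illustrate the heavy-tailed jumps that help escape local optima.

import numpy as np
from math import gamma, sin, pi

def levy_step(dim, beta=1.5):
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = np.random.normal(0, sigma, dim)   # heavy-tailed numerator
    v = np.random.normal(0, 1, dim)
    return u / np.abs(v) ** (1 / beta)

print(levy_step(5))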
Algorithm 1 provides a detailed description of the FDA. Its main steps are: back up the initialized populations and their fitness values; calculate the values of F1, F2, and F3 according to Equations (1) and (9) and the force equation of the small intestine; calculate the value of R according to Equation (8); calculate the values of C1 and C2; calculate the values of Em and S according to Equations (3) and (5); calculate the values of F1_d and V according to Equations (2) and (4); update each particle according to Equation (6) and calculate its fitness value; when i == N/3, find the minimum fitness value fitness_m in the oral cavity and update the particle according to Equation (13); if the historical optimal fitness value of a particle is smaller than the updated particle's fitness value, replace the updated particle's position and fitness value with the particle's optimal historical position and fitness value; back up each particle's historical optimal position and its fitness value; update the optimal global position and the optimal global value; and set iter = iter + 1, repeating until the maximum number of iterations is reached.
Mobile Sensor Localization Problem
This section introduces a localization method called Monte Carlo Localization (MCL) for mobile sensor networks, as described in references [39,40].In wireless sensor networks, Monte Carlo localization methods typically involve fixed anchor nodes.These anchor nodes serve as reference points in the localization algorithm, and their positions are known in advance and remain unchanged over time.During the localization process, anchor nodes send signals to the mobile node and receive signals back from it, aiding in determining the mobile node's position.
The Monte Carlo localization method is a probabilistic and statistical-based algorithm used to estimate the location of a mobile node through multiple random simulations.It calculates the position of the mobile node using measurements such as received signal strength, arrival time, or other relevant data.The algorithm relies on important parameters, among which the pre-known position of the anchor node plays a crucial role.
In Monte Carlo localization methods, the use of multiple fixed anchor nodes enables the provision of additional measurements for estimating the position of the mobile node.This, in turn, improves the accuracy of the localization process.The fixed positions of the anchor nodes, along with reliable measurement data, form the foundation for the effectiveness of the Monte Carlo localization method in achieving accurate localization.
The MCL (Monte Carlo Localization) method consists of three main phases: initialization, prediction, and filtering [41].In the initialization phase, each node is assigned motion regions and maximum motion speeds.During the prediction phase, a preliminary estimate of the mobile node's location is calculated.This estimate corresponds to a circular region, where the last known position of the node serves as the center, and the product of the velocity and positioning interval time determines the radius.Figure 1 illustrates the execution flow of the MCL algorithm.
The filtering phase plays a crucial role in MCL.Initially, MCL calculates the set of single-hop beacon nodes, denoted as S1, and the set of two-hop beacon nodes, denoted as S2, based on their distances to other nodes.Subsequently, MCL randomly selects points within the feasible region and checks if they belong to the set of unknown nodes by verifying if they fall within the range of either single-hop or two-hop beacon nodes.Specifically, a selected point is classified as an unknown node if its nearest anchor is within the range of S1, or if both its closest and next closest anchors fall within the range of S2.
Points that fail to satisfy these criteria are filtered out. The filtering condition is expressed in Equation (20). As shown in Figure 2, the unknown node L senses the information of the surrounding anchor nodes at moment t, where S1 is its one-hop anchor node and S2 is its two-hop anchor node. An estimated coordinate sample of the unknown node L is valid only if it satisfies the filter condition that its distance from S1 is less than R and its distance from S2 is between R and 2R; Lt in the figure meets the filter condition and is retained as a reasonable sample particle. After the filtering phase, numerous sample particle coordinates are eliminated, resulting in an insufficient sample set. Hence, the prediction and filtering phases are iteratively executed until an adequately high number of samples remain in the sample set. Eventually, the arithmetic mean of the sample coordinates is calculated, serving as an estimation of the final node coordinates, thereby concluding the localization at the current moment. Equation (21) is employed to estimate the locations of the unknown nodes based on the filtered reference points.
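The filtering test and the final averaging step can be summarized by the short Python sketch below; the anchor coordinates, candidate samples, and radius R are illustrative assumptions, and the check mirrors the condition described above (one-hop anchors within R, two-hop anchors between R and 2R).

import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def valid_sample(sample, one_hop_anchors, two_hop_anchors, r):
    ok_one = all(dist(sample, a) <= r for a in one_hop_anchors)
    ok_two = all(r < dist(sample, a) <= 2 * r for a in two_hop_anchors)
    return ok_one and ok_two

def estimate_position(samples):
    # Arithmetic mean of the retained samples, as in Equation (21)
    xs, ys = zip(*samples)
    return sum(xs) / len(xs), sum(ys) / len(ys)

s1, s2, r = [(10, 10)], [(80, 80)], 50
candidates = [(20, 15), (90, 90), (25, 40)]
kept = [p for p in candidates if valid_sample(p, s1, s2, r)]
print(kept, estimate_position(kept) if kept else None)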
Enhanced Food Digestion Algorithm
This section introduces three intergroup communication strategies and proposes a concise approach to enhance the food digestion algorithm.
Design of Parallel Strategies
This section proposes three parallel strategies to speed up the convergence of the algorithm and to improve the algorithm's optimization accuracy. These three parallel strategies use different topologies, which are shown in Figure 3. The first parallelization strategy uses a star topology. First, we choose one group as the central group and the others as subgroups. Particles in the central group exchange information with particles in the subgroups, and there is no communication between subgroups. The pseudo-code for the algorithm is shown in Algorithm 2.
The second parallel strategy uses a unidirectional ring topology. The structure allows each subgroup to communicate only with its neighbor on one side, and the side that each group chooses to communicate with is in the same direction around the ring structure. Algorithm 3 shows the details of the communication strategy.
Algorithm 2 Parallel strategy for star topology.
1: Calculate the average position of the optimal particles of the first three groups and its fitness value;
2: if the fitness value of the average position < the fitness value of the optimal particle in the central group, then replace the position of the central group's optimal particle and its fitness value with the average position and its fitness value; end if
3: Perturb the optimal particle of the central group and calculate its fitness value;
4: for each of the first, second, and third groups: if the fitness value of the perturbed particle < the fitness value of that group's optimal particle, then replace the position of that group's optimal particle and its fitness value with the position of the perturbed particle and its fitness value; end if

Algorithm 3 Parallel strategy for unidirectional ring topology.
1: for g = 1 : 4 do
2: Use g + 1 to find the remainder modulo 4 and record the remainder as sg;
3: if the fitness value of the optimal particle in group g > the fitness value of the optimal particle in group sg, then replace the optimal particle position and its fitness value of group g with the optimal particle position and its fitness value of group sg; end if
4: Perturb the optimal particle of group g and calculate its fitness value;
5: if the fitness value of the perturbed particle < the optimal particle fitness value of group g, then use the perturbed particle position and its fitness value to replace the optimal particle position and its fitness value of group g; end if
6: end for
The third parallel strategy uses a bi-directional ring topology. The structure allows subgroups to exchange information with their neighboring groups, and in a ring structure, subgroups exchange information in a specific direction. Implementation details are given in Algorithm 4.
Algorithm 4 Parallel strategy for bi-directional ring topology.
1: for g = 1 : 4 do
2: Use g + 1 to find the remainder modulo 4 and record the remainder as sg; determine the other neighboring group and record it as vg;
3: Calculate the average position of the optimal particles in group sg and group vg and its fitness value;
4: if the fitness value of the average position < the fitness value of the optimal particle in group g, then replace the position of the optimal particle in group g and its fitness value with the average position and its fitness value; end if
5: end for
Design of Compact Strategy
This section describes the principles of the compact mechanism and the detailed process for improving the food digestion algorithm using the compact mechanism.
Principles Of The Compact Mechanism
The Distribution Estimation Algorithm (EDA) is a method based on probabilistic models [42]. It maps the population into a probability model and realizes the operation of the population by operating on the probability model [43]. Compact algorithms are a type of EDA. They dramatically reduce the use of memory space and speed up the algorithm's operation by using a probabilistic model to characterize the distribution of the entire population. A compact algorithm uses a virtual population instead of the actual population. This virtual population is encoded in a PV vector, which is an N × 2 matrix in compact differential evolution (CDE) [43] and real-valued compact genetic algorithms (RCGAs) [44].
µ and δ denote the mean and standard deviation of the PV, respectively, and t denotes the current number of iterations. Each pair of mean and standard deviation in the PV corresponds to a Probability Density Function (PDF), which is truncated at [−1, 1] with its amplitude area normalized to 1 [45]. The calculation of the PDF is given by Equation (23).
erf is the error function. By constructing Chebyshev polynomials, the PDF can correspond to a Cumulative Distribution Function (CDF) with values ranging from 0 to 1 [46,47]. The CDF is calculated as shown in Equation (24): in Equation (24), x takes values in the range [−1, 1]. The function CDF can be expressed as Equation (25): the CDF returns values in the range [0, 1]. The process of sampling the design variable X_i from the PV vector is to first generate a random number R from a uniform distribution and then calculate the corresponding inverse of the CDF to obtain a new value. This newly generated value is compared with another value; the one with the better fitness value is the winner and the one with the worse fitness value is the loser, and both are retained for updating the PV vector. The updated equations of the mean and standard deviation are shown in Equations (26) and (27).
N_p denotes the size of the virtual population, which is a typical parameter of compact algorithms; the size of this parameter is usually several times the size of the actual population [44].
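To make the PV machinery concrete, the Python sketch below samples a design variable from the truncated Gaussian model and applies the winner/loser update; the update rules shown are the standard rcGA/cDE ones and the clipping-based sampling is a simplification, so both may differ in detail from Equations (23)-(27).

import numpy as np

def sample_from_pv(mu, delta):
    # Simplified sampling: draw from N(mu, delta) and truncate to [-1, 1]
    return np.clip(np.random.normal(mu, delta), -1.0, 1.0)

def update_pv(mu, delta, winner, loser, n_p):
    mu_new = mu + (winner - loser) / n_p
    var_new = delta**2 + mu**2 - mu_new**2 + (winner**2 - loser**2) / n_p
    return mu_new, np.sqrt(np.maximum(var_new, 1e-12))

dim, n_p = 30, 100
mu, delta = np.zeros(dim), 10.0 * np.ones(dim)
x, y = sample_from_pv(mu, delta), sample_from_pv(mu, delta)
winner, loser = x, y          # assume x has the better fitness here
mu, delta = update_pv(mu, delta, winner, loser, n_p)
print(mu[:3], delta[:3])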
Compact Food Digestion Algorithm
Compact algorithms reduce memory space usage and speed up algorithms, but they reduce population diversity and tend to fall into local optima. To address this problem, a solution is generated by sampling from the probabilistic model during each iteration. Then three solutions are generated using the sampled solution in conjunction with the characteristics of the FDA algorithm; these three solutions are generated using the particle update formulae of the oral cavity, stomach, and small intestine. Since the extent of the sampling space is not the same as the actual space, it is essential to map the generated solution Food_1^t to the actual computational space once it has been sampled from the probabilistic model, and we use Equation (28) to complete this process.
ub and lb are the maximum and minimum bounds of the actual space, respectively. The update equations for the three solutions are given by Equations (29)-(31).
Food_2^t is the particle generated using the particle update equation of the oral cavity, where Food_1^t is the particle generated by sampling from the probabilistic model, Best_p is the global optimal particle, and group(g).Best_p is the optimal particle of the gth group. Food_3^t is the particle generated using the particle update equation of the stomach, and Mean is the particle obtained by averaging Food_1^t and Food_2^t. Food_4^t is the particle generated using the particle update equation of the small intestine. The meaning of the other variables in these three equations is the same as in the FDA in Section 2. The pseudo-code of the FDA algorithm with the parallel compact strategy is shown in Algorithm 5. Its key steps in each iteration are: find the particles with the best and worst fitness values among the four particles, denoted as winner and loser; use the winner and loser to update the PV; find the global optimal solution Best_p and its fitness value Best_f; and set iter = iter + 1, repeating until the maximum number of iterations is reached.
Numerical Experimental Results and Analysis
This section not only compares the PCFDA with the original FDA but also compares it with the PCSCA [28].In reference [28], the authors propose three strategies for parallel communication, which apply to solving single-peak, multi-peak, and mixed-function problems.This section verifies the effectiveness of PCFDA by comparing it with them.
Parameter Settings
In this section, experiments are conducted using a Lenovo computer manufactured in Shanghai, China, equipped with an Intel(R) Core(TM) i3-8100 CPU at 3.60 GHz, 24 GB of RAM, a 64-bit Windows 10 operating system, and MATLAB2018a.
This section uses the CEC2013 test set for the test experiments. The test set consists of 28 test functions, including five unimodal, fifteen multimodal, and eight mixed functions. Unimodal functions have only one global optimal solution and are used to test the exploitation ability of the algorithm. Multimodal functions have multiple local optimal solutions and are mainly used to test the ability of the algorithm to escape from local optimal solutions. Mixed functions are extremely complex; they have the characteristics of both unimodal and multimodal functions and can test both the exploitation ability of the algorithm and its ability to escape from local optimal solutions, so they best reflect the ability of the algorithm to solve complex problems. Using these three types of functions to test a metaheuristic algorithm can effectively assess the performance and reliability of the algorithm and improve its practical application value.
To ensure the fairness of the experiments and reduce the effect of algorithmic instability, we let all algorithms run ten times on the 28 test functions for 1000 iterations each. Finally, the mean and standard deviation of their runs on each function are compared. The dimension of each particle is set to 30, and the particle search range is [−100, 100]. The number of groups in the algorithm is set to 4, and the initial mean and standard deviation values are set to 0 and 10. The number of particles in the FDA is set to 20. K_m has three different values that denote the three characteristic constants of the algorithm in the oral cavity, stomach, and small intestine, with values of 0.8, 0.9, and 1, respectively. The parameter settings of PCSCA follow the original paper, and its three algorithms are denoted by PCSCAS1, PCSCAS2, and PCSCAS3, respectively. For the experiments in this section, we use PCFDA1, PCFDA2, and PCFDA3 to represent the FDA enhanced using Algorithms 2-4.
Comparison with the Original FDA
In this section, we use PCFDA to compare with the original FDA, mainly comparing the mean and standard deviation of their runs on each function as well as the time cost and memory usage of their runs to determine the performance of PCFDA.The mean and standard deviation comparison results are shown in Tables 1 and 2.
In Tables 1 and 2, the data in the last row indicate the number of PCFDAs that are better than the FDA. On the unimodal functions f1-f5, PCFDA1 has a better search ability on the first three functions and is more stable on f2 and f3. On the multimodal functions f6-f20, all the algorithms have good search ability and stability on f8 and f20. PCFDA3's search ability is poor on the multimodal functions; the FDA, PCFDA2, and PCFDA3 each outperform on different multimodal functions, with comparable overall performance. On the mixed functions f21-f28, PCFDA2 has better search ability and stability on four functions, while PCFDA3 only performs better on f26. Overall, PCFDA1 and PCFDA2 are comparable to the original FDA in the ability to find optima but are more stable than the FDA. PCFDA3 has improved performance on a few functions, but its overall performance is not as good as the FDA's. In order to statistically verify the effectiveness of the improved algorithm, this paper uses the Wilcoxon rank sum test to verify the significant difference between the improved algorithm and the original algorithm. The significance level alpha is set to 0.05. Table 3 displays the p-values for the comparison results. The data with p-values less than 0.05 are highlighted in red. From the data in the table, it can be observed that the improved algorithm holds a significant advantage.
When improving algorithms with compact strategies, the main concerns are the time cost and the memory footprint. Table 4 shows the time cost and memory usage of each algorithm.
In Table 4, the average running time indicates the average time to run each algorithm 10 times on 28 functions, the memory usage indicates the memory space occupied by each particle in each algorithm, the * is used as a multiplication sign, and D indicates the particle dimension.(20 + 1) * D denotes the memory occupied by the 20 particles in the FDA and one globally optimal particle.In the last three columns of Table 4, (2 * 4) * D denotes the memory occupied by µ and δ in the four groups.The following two 4s represent the memory occupied by the four particles obtained from each update (including one sampled particle and three generated particles) and the optimal particle in the four groups, respectively.The last 1 denotes the memory occupied by a temporary particle needed in the communication strategy.Combining the results of each algorithm in Tables 1 and 2 leads to the conclusion that the improved algorithms are improved in terms of both time cost and memory space.
Comparison with PCSCA
This section compares the improved FDA with PCSCA. Both algorithms use parallel and compact strategies for improvement, so we only compare their search ability and stability here. Tables 5 and 6 show the mean and standard deviation comparison results.
The red font in Tables 5 and 6 marks the best mean and standard deviation obtained on each function. As seen from the tables, on f20, all algorithms show good search ability and stability. On f8, all algorithms have the same search ability, but PCSCAS3 is more stable. On the other functions, PCFDA outperforms PCSCA in terms of search ability.
In this section, the Wilcoxon rank-sum test was also used for the significance analysis of the proposed algorithms against the parallel compact SCA algorithm. Tables 7-9 display the comparison results, with red font indicating p-values greater than 0.05. From the data in the tables, it can be observed that the proposed algorithms outperform the parallel compact SCA algorithm on most functions.
Convergence Analysis
This section evaluates the performance of the algorithms by comparing the convergence curves of the PCFDA and PCSCA algorithms on the three classes of functions. Figures 4-6 show the corresponding experimental results. From the convergence curves of the three types of functions, on the unimodal functions, the convergence speed of the algorithms does not differ much; only on f1 do PCFDA1 and PCFDA2 converge faster in the early stage. On the multimodal functions f8 and f20, although the convergence speeds of the algorithms differ considerably, they have similar optimization capabilities according to the data in Tables 1 and 5. On f6, f7, f10, and f19, the convergence speed of each algorithm is similar. Due to the instability of each algorithm's search on the other multimodal functions, the convergence speed and accuracy differ. On the mixed functions f23, f24, f25, and f27, PCFDA2 converges faster and has the best optimization accuracy. On f22, the FDA has better convergence speed and accuracy than the other algorithms.
Application of PCFDA in Mobile Sensor Localization Problem
This section discusses the PCFDA algorithm for mobile sensor localization and compares it with the original MCL algorithm under different numbers of anchor nodes and communication radii. Locations with large errors are first obtained by the MCL localization technique, and then the PCFDA algorithm is applied for further optimization around the obtained locations to reduce the localization error. The error function is defined in Equation (32), where Z represents the total number of unknown nodes, N represents the total number of anchor nodes, (x_l, y_l) denotes the estimated location of unknown node l, (x_k, y_k) denotes the location of anchor node k, and D_lk represents the measured distance between unknown node l and anchor node k. This section assumes that anchor node k can obtain its distance to unknown node l from the signal strength received from node l. The smaller the error value, the higher the positioning accuracy.
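Since the exact form of Equation (32) is not reproduced here, the following Python sketch shows only a plausible residual-style error of the kind described, built from the quantities defined above (estimated node positions, anchor positions, measured distances); the toy deployment at the end is an assumption for illustration.

```python
# Hypothetical sketch of a localization error of the type discussed above.
import numpy as np

def localization_error(est_xy, anchors_xy, D):
    """est_xy: (Z, 2) estimated unknown-node positions
    anchors_xy: (N, 2) anchor positions
    D: (Z, N) measured distances between unknown nodes and anchors
    Returns the mean absolute mismatch between estimated and measured distances."""
    diff = est_xy[:, None, :] - anchors_xy[None, :, :]   # (Z, N, 2)
    est_dist = np.linalg.norm(diff, axis=2)              # (Z, N)
    return np.mean(np.abs(est_dist - D))

# toy usage with randomly generated data
rng = np.random.default_rng(0)
anchors = rng.uniform(0, 300, size=(50, 2))
true_nodes = rng.uniform(0, 300, size=(250, 2))
D = np.linalg.norm(true_nodes[:, None, :] - anchors[None, :, :], axis=2)
estimate = true_nodes + rng.normal(0, 5, size=true_nodes.shape)   # noisy estimate
print("error =", localization_error(estimate, anchors, D))
```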
Experimental Analysis of Different Numbers of Anchor Nodes
In this section, the total number of nodes is set to 300, randomly distributed within a 300 × 300 area. The number of anchor nodes is set to 10, 20, 30, 40, and 50, and the communication radius is set to 50. Experiments were performed using the MCL localization algorithm, the FDA, and PCFDA. To avoid randomness, each algorithm is run 10 times and the average of the 10 runs is taken as the final result. The experimental results are shown in Table 10, in which Ave and Std represent the mean and standard deviation of the run results, respectively. It can be seen from Table 10 that, for a fixed communication radius, the more anchor nodes there are, the smaller the positioning error and the more accurate the positioning. Compared with the MCL positioning algorithm, the positioning accuracy of the FDA improves considerably, but the FDA is extremely unstable. The cAPSO [48] algorithm has localization accuracy comparable to the FDA but is more stable. Under the same experimental conditions, the performance of PCFDA is remarkable: both its positioning accuracy and its stability are better than those of the FDA, and its positioning accuracy is much better than that of the MCL algorithm.
Experimental Analysis of Different Communication Radius
This section also uses 300 nodes for the experiments, distributed in a 300 × 300 area. The number of anchor nodes is set to 50, and the communication radius is set to 20, 40, 60, and 80, respectively. Each algorithm is run 10 times, and the mean and standard deviation of the 10 runs are used for the experimental analysis. The experimental results are shown in Table 11. Table 11 shows that when the number of anchor nodes is fixed, the larger the communication radius, the smaller the positioning error and the more accurate the positioning. The positioning accuracy of the FDA is better than that of the MCL positioning algorithm, but its stability is poor. The cAPSO algorithm is comparable to the FDA in terms of localization accuracy, but with better stability. The performance improvement of PCFDA is more significant, with good results in both positioning accuracy and operational stability.
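To illustrate why the communication radius matters, the brief Python sketch below counts, for a random deployment mirroring the setup above, how many anchors each unknown node can reach at different radii; the deployment itself is an illustrative assumption.

```python
# Hypothetical sketch: effect of the communication radius on anchor connectivity
# for a random 300 x 300 deployment with 50 anchors and 250 unknown nodes.
import numpy as np

rng = np.random.default_rng(1)
anchors = rng.uniform(0, 300, size=(50, 2))
nodes = rng.uniform(0, 300, size=(250, 2))

dist = np.linalg.norm(nodes[:, None, :] - anchors[None, :, :], axis=2)  # (250, 50)
for radius in (20, 40, 60, 80):
    reachable = (dist <= radius).sum(axis=1)   # anchors heard by each unknown node
    print(f"R = {radius}: mean reachable anchors = {reachable.mean():.1f}")
```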
Conclusions
This paper proposes three intergroup communication strategies to improve the food digestion algorithm. These three strategies use different topologies, which improve the efficiency of particle communication and speed up the algorithm's convergence. This paper also uses a compact strategy to improve the food digestion algorithm, reducing the algorithm's running time and saving memory space. The PCFDA algorithm was then tested on the CEC2013 test set and achieved good results. Finally, the improved algorithm was applied to the mobile sensor localization problem, where it reduces the localization error and improves the positioning accuracy.
In the future, other inter-group communication strategies can be used to further improve the FDA's search accuracy. We will also consider applying the improved algorithm to other localization problems in wireless sensor networks. The design of the algorithm does not take into account issues such as communication obstacles for mobile sensors in real environments, so these factors can be addressed in future research.
Figure 1. Flowchart of the MCL algorithm.
Figure 4. Convergence curves on the unimodal functions.
Pseudocode listings of the Food Digestion Algorithm (Algorithm 1) and the Compact Food Digestion Algorithm.
Table 1. The average of the running results of the improved FDA and the original FDA.
Table 2. The standard deviation of the running results of the improved FDA and the original FDA.
Table 4. The average running time and memory usage of each algorithm.
Table 5. The running results of the average value of each algorithm.
Table 6. The running results of the standard deviation of each algorithm.
Table 7. The comparison results between PCFDA1 and the three improved SCA algorithms.
Table 8. The comparison results between PCFDA2 and the three improved SCA algorithms.
Table 10. Experimental results of the localization error for different numbers of anchor nodes.
Table 11. Experimental results of the localization error for different communication radii.
Antimicrobial Activities of Saponaria cypria Boiss. Root Extracts, and the Identification of Nine Saponins and Six Phenolic Compounds
The purpose of this study was to identify the chemical components in root extracts of Saponaria cypria, an endemic species of Cyprus. Subsequently, the synergistic bioactivity of its root extracts through different extraction procedures was also investigated for the first time. A total of nine saponins, along with six phenolic compounds, were identified and quantified using the UHPLC/Q-TOF-MS method. Additionally, S. cypria root extracts demonstrated antibacterial potential against Escherichia coli, Staphylococcus aureus, Enterococcus faecalis and Salmonella enteritidis. S. aureus presented the highest susceptibility among all bacteria tested. These findings provide the first phytochemical data regarding the saponin, phenolic content and antimicrobial activity of S. cypria extracts, indicating that the Cyprus saponaria species is a rich natural source for bioactive compounds with a potentially wider bioactivity spectrum.
Introduction
Saponaria plants, also known as soapworts, belong to the family Caryophyllaceae. Their genus name is derived from the Latin word "sapo" which means soap, since the roots of some species are rich in active molecules called saponins [1]. Saponins are glycosylated molecules of an amphiphilic nature which form stable, soap-like foams in aqueous solutions [1,2]. They are composed of two main parts: a water-soluble glycosidic chain and a liposoluble structure. The non-sugar and sugar components are called aglycone and glycone portions, respectively. The aglycone portion is composed of a triterpenoid or a steroid backbone. The sugar moiety is linked to the aglycone through an ester or ether glycosidic linkage at one or more glycosylation sites [1,2].
In the past, soapwort extracts were used as household detergents and cosmetics, mainly due to the emulsifying, cleansing and foaming properties of its saponin components. Today, one of the major applications of the common species Saponaria officinalis L., is its use as a natural emulsifier in the production of halva, a popular confectionery. Besides food and cosmetics, the saponin-rich extracts demonstrate strong biological activity and may potentially be used as alternative medications for disorders such as heart disease, chronic inflammatory disease and cancer [2,3]. Saponins isolated from the roots of S. officinalis, have been previously characterized in terms of their chemical composition [4][5][6][7] and antibacterial activity [8][9][10]. Moreover, extracts from S. officinalis aerial parts have been reported to possess antioxidant properties, due to their rich content of phenolic compounds [8,11]. Besides their antioxidant activity, polyphenols found in many plant species, are also
Determination of Total Saponin Content
The total saponin content (TSC) of S. cypria root extracts was determined using three different solvents (methanol, ethanol and acetone), based on previously described methods [19]. A standard curve of oleanolic acid was constructed and the results were expressed as mg oleanolic acid equivalents per gram of dry crude extract (mg OAE/g crude extract). According to the results, as shown in Table 1, acetone gave the highest TSC yield (169.000 mg OAE/g crude extract), significantly higher than the ethanol and methanol yields (106.210 and 64.331 mg OAE/g crude extract respectively, p < 0.01).
Identification and Quantification of Saponins in S. cypria
The saponin compounds identified in the acetone S. cypria root extracts, using ultra-high performance liquid chromatography coupled to quadrupole time-of-flight mass spectrometry (UHPLC-QTOF-MS), are presented in Table 2, and the total ion chromatogram is documented as supplementary material (Figure S1). The MS/MS fragmentation patterns and chromatograms of each compound are presented in Figure S2. According to the obtained results, a total of nine major saponins were identified, all belonging to the triterpene saponins. These are glycosylated derivatives of a triterpene sapogenin, the aglycone moiety of each compound. The mass spectrometry analysis of the saponin compounds allowed their identification by direct comparison to previously published data on S. officinalis saponin fragmentation [4,5,7,20,21]. Saponarioside A (compound 5) and other saponins derived from Quillaic acid (compounds 4, 6), Medicagenic acid (compounds 1, 2, 3) and Gypsogenin (compounds 7, 8, 9) were identified (Table 2). Structures of the backbone of these saponins are shown in Figure 1.
Although the three Medicagenic acid derived saponins (compounds 1, 2, 3; Table 2) have been previously reported [22], this is the first time that these compounds were found in a Saponaria species. In these cases, the fragmentation patterns revealed the ion at m/z 501.3185, which is characteristic of Medicagenic acid as the aglycone moiety. The additional fragment ions at m/z 485.147 and 439.3183 were assigned according to Peeters et al. [22].
Compounds 4 and 6, with retention times of 10.06 and 11.57 min and m/z of 1729.7330 2− and 1657.6978 2− , respectively, were also identified. Compound 4 revealed fragment ions at m/z 955.4468, characteristic of the Quillaic acid backbone, with a loss of a pentose (m/z 132/150), three desoxyhexoses (m/z 146), one hexose (m/z 162/180) and one acetyl unit (m/z 42/60). The fragment ion observed at m/z 113.0231 was considered to be produced from hexoses. Compound 6 also revealed fragment ions at m/z 955.4468 and 113.0253 and an additional ion at m/z 485.3222, with a loss of a pentose (m/z 132/150), three desoxyhexoses (m/z 146), one hexose (m/z 162/180), and uronic acid (m/z 176) [23]. Based on the molecular weights and fragmentation patterns of these two compounds, which were compared to the values of signature fragment ions of Quillaic acid octosaccharide and Quillaic acid heptasaccharide previously described in the literature [7], compounds 4 and 6 were proposed to be Quillaic acid octosaccharide and Quillaic acid heptasaccharide, respectively (Table 2), as previously reported in the literature [7].
Compounds 7 and 8 with the molecular formula of C 73 H 120 O 43 were detected at retention times 14.00 and 14.43 min, respectively ( Table 2). These compounds demonstrated the same fragmentation pattern with produced ions at m/z 1551.6802, 939.4517 and 469.3272, which are characteristic of the Gypsogenin backbone, as documented in the literature [7].
Finally, compound 9, another Gypsogenin derivative, was identified at retention time 15.82 min and m/z of 1447.6217 2− (C 64 H 104 O 36 ). This was regarded as Gypsogenin hexasaccharide, based on the MS/MS data, which provided fragment ions at m/z 939.4448, 469.3299 and 113.0223. According to the literature this compound has been previously identified in S. officinalis extracts [7].
Quantification analysis revealed that the major saponin components of the extract are the Medicagenic acid derived saponin (compound 3, m/z 1293.5673) and the Gypsogenin derivative (compound 9) at 2.588 % and 2.447 %, respectively ( Table 2).
Determination of Total Phenolic Content
The total phenolic content (TPC) of methanol, ethanol and acetone root extracts of S. cypria was detected by using the Folin-Ciocalteu method [24]. A standard curve of gallic acid was constructed and the results were expressed as mg gallic acid equivalents per gram of crude extract (mg GAE/g). According to the data presented in Table 3, the S. cypria acetone extract demonstrated the highest TPC result (21.016 mg GAE/g crude extract), a yield significantly higher than the methanol and ethanol extracts (p < 0.01).
Identification and Quantification of Phenolic Compounds in S. cypria
The phenolic compounds identified in the acetone S. cypria root extract, using UHPLC-QTOF-MS/MS, are presented in Table 4 and the total ion chromatogram is documented as supplementary material (Figure S3). The MS/MS fragmentation pattern and chromatograms of all identified compounds are also provided as supplementary material (Figure S4). Six phenolic compounds were identified, including Rutin, Quercetin glucosides, Syringic acid and 4,5-di-O-Caffeoylquinic acid. The structural identification of these compounds was based on a comparison of their MS/MS data with those reported in the literature [25,26]. By comparison with the reported data [26], compound 3 was identified as Rutin.
Compound 2, with a generated formula C 33 H 40 O 20 , retention time of 6.07 min and m/z 755.2028, gave no fragmentation pattern. By comparing these results to previously reported data [25], this compound was identified as Quercetin 3-O-(2,6-di-O-rhamnosylglucoside). Compound 4, with a generated formula C 21 H 20 O 11 , retention time of 7.50 min and m/z 447.0922, gave three characteristic product ions (Table 4). A comparison of these data to the literature [25] suggests that this compound is Quercetin 3-O-rhamnoside (quercitrin). In a similar manner, the molecular formula and fragmentation pattern of compound 6 indicated that this compound seemed to be Quercetin 3-O-galactoside [25]. Compound 5 (Table 4) was identified as Caffeoylquinic acid [25]. Quantification analysis revealed that, among the phenolic compounds identified, Caffeoylquinic acid (compound 5) was the major constituent, detected at 1.855 % (Table 4).
Antimicrobial Activity of Extracts
The minimum inhibitory concentration (MIC) and minimum bactericidal concentration (MBC) of methanol, ethanol and acetone root extracts of S. cypria were evaluated against gram-negative (E. coli, S. enteritidis) and gram-positive bacteria (S. aureus, E. faecalis). According to the results shown in Table 5, all S. cypria extracts demonstrated bacterial inhibition, with MIC values ranging from 0.195-1.563 mg/mL for S. aureus and 0.391-3.125 mg/mL for E. faecalis, while the inhibition activities against E. coli and S. enteritidis were the weakest (3.125 mg/mL). S. cypria acetone extract exhibited the highest bacterial inhibition against S. aureus (MIC, 0.195 mg/mL) and E. faecalis (MIC, 0.391 mg/mL). The antimicrobial efficacy was also studied by determining MBC, which is defined as the lowest concentration of the extract that is bactericidal. Therefore, the lower the MBC value, the less extract is needed to kill the bacteria. S. cypria exhibited low MBC values (more bactericidal) ranging from 0.195-1.563 mg/mL for S. aureus and 0.391-3.125 mg/mL for E. faecalis, whereas the bactericidal effects on E. coli and S. enteritidis were weaker (6.250 mg/mL for E. coli and values ranging from 6.250 to 12.500 mg/mL for S. enteritidis). S. cypria acetone extract exhibited the lowest MBC value against S. aureus (0.195 mg/mL).
Discussion
The present study is the first attempt that documents data regarding the saponin and phenolic chemical profiles of S. cypria root extracts. Further to the molecules detected, our results also provide valuable evidence for antibacterial activity.
Although the endemic species was the main focus of this study, it is important to note that two other species, namely Saponaria mesogitana Boiss. and Saponaria orientalis L., are also encountered on the island. S. cypria can be identified and distinguished from S. mesogitana and S. orientalis based on morphological characteristics [17,27]. The main difference between S. cypria and S. orientalis is that the endemic taxon is perennial with woody rootstock, while S. orientalis is annual [17]. Apart from that, there is an obvious difference regarding the diameter of the flowers since the endemic taxon has much larger flowers; similarly, there is a distinct difference concerning the size of the calyx [17]. S. mesogitana is an annual plant and has two short coronal scales, a characteristic that is not found in S. orientalis [27].
Regarding the extraction procedure implemented in the current study, three different solvents were used, namely methanol, ethanol and acetone. Thus, a comparison of the total saponin yield extracted with each solvent revealed that acetone exhibited the highest saponin yield, a finding which was in agreement with previously reported data regarding the saponin content of safed musli extracts [28]. According to Barve et al., this may be attributed to the polar and non-polar properties of acetone which may justify a higher extraction yield of saponins compared to ethanol or methanol [28].
Although there are no reported data regarding the saponin content of the S. cypria species, previously published results documented the saponin content of the root of S. officinalis with a value of 82.4 mg/g crude root extract [7]. Our results may suggest that S. cypria root extract is richer in saponins than S. officinalis; however, different extraction and quantification methods were implemented by Budan et al. compared to our study, which may have contributed to the different TSC values observed. Moreover, apart from dealing with different Saponaria species, the total saponin yield may also be affected by environmental factors, such as the micro-climate, temperature, cultivation period, geographical location, and growth conditions [29,30]. According to the literature, Saponaria species are considered a good source of saponins, with a content close to 10%. Although most reported species seem to have a lower content, for instance, Soybean (0.22-0.47%), Chickpea (0.23%), Alfalfa (0.14-1.71%) or Quinoa (0.14-2.3%), there are several other species which seem to have considerable amounts, such as Quillaja bark (10%) and Yucca (10%) [30][31][32]. Interestingly, Licorice root and American Ginseng have been reported to be rich sources of saponins (22.2-32.3%) [30][31][32].
In relation to our results, nine major saponins were identified in the S. cypria root extracts. These included Saponarioside A and saponins derived from Quillaic acid, Medicagenic acid and Gypsogenin. Among the saponin molecules identified in the current study, relatively higher quantities of Medicagenic acid and Gypsogenin conjugates (compound 3 and 9 respectively) were observed. Saponins derived from Medicagenic acid have not been previously reported in other saponaria species. However, all structurally known saponins have been previously identified in various plant species and most of them have been studied for their biological roles. For instance, Quillaic acid saponins, also known as Quillaja saponins, have been reported to possess anti-inflammatory, antibacterial and antiviral activity [33]. Furthermore, Quillaic acid and Gypsogenin isolated from S. officinalis roots, have been reported to have antiproliferative properties by inhibiting the growth of tumorigenic human breast cancer and prostate cancer cells [3]. Medicagenic acid detected in other plant species, demonstrated antibacterial, as well as antifungal properties [34,35]. Interestingly, among the saponins identified in our study, the components derived from Medicagenic acid had the highest quantity in the extract of S. cypria.
Concerning the antibacterial properties, the present study demonstrated the bacterial inhibition of S. cypria root extracts against all four strains tested, namely E. coli, S. aureus, E. faecalis and S. enteritidis. Although this is the first time S. cypria species was tested for its antibacterial potential, the antimicrobial properties of other Saponaria species have been reported in the literature. More specifically, methanol extracts of S. officinalis were reported to demonstrate antibacterial activity against S. aureus and E. faecalis [8][9][10]. Similar to our study, Saponaria prostrata Willd. extracts demonstrated the highest antibacterial activity against S. aureus among various gram-positive and gram-negative bacteria tested and, unlike S. cypria, it did not express antimicrobial activity against E. coli [36]. Other studies reported that the Sapindus saponaria L. hydromethanolic extract is effective against various fungal and bacterial strains, with best activity against Bacillus cereus and S. aureus [37], while the ethanolic extract of Sapindus saponaria Vahl also seemed effective against all tested bacterial pathogens including S. aureus [38].
A great deal of attention has also been given to natural antioxidants and their health benefits during the past few years. Knowing that polyphenols are the most abundant antioxidant molecules in nature, this study also aimed at investigating the presence of phenolic compounds in S. cypria root extracts. The results confirmed that S. cypria root is also a source of phenolic compounds. A total of six phenolic compounds were identified in the S. cypria plant. The extract presented high amounts of Caffeoylquinic acid, a plant metabolite which has been described as an antibacterial agent against gram-positive Bacillus cereus and S. aureus in the past [39], as well as a free radical scavenger in a study using Coffee silver skin extracts [40]. Other phenolic compounds detected at lower concentrations included Quercetin glucosides, Rutin and Syringic acid. Quercetin, a well-known flavonoid is found in many plants. In fact, Quercetin O-glycoside derivatives are well known for their antioxidant properties [41]. Additionally, the 3-O-rutinoside derivative of Quercetin, named Rutin, is found in several species of the Caryophyllaceae and it has been reported to have a wide range of biological properties [42][43][44]. Syringic acid, another phenolic compound identified in this study, has also been reported to demonstrate a wide range of health-related properties, such as prevention of oxidative stress [45,46] and antimicrobial activities against several gram-positive and gram-negative bacteria [47]. Although the extracts of S. officinalis have been previously reported to contain phenolic compounds and, particularly, flavonoids, this is the first study to provide data on the phenolic content of S. cypria species and to demonstrate that the root of the plant is a good source of antioxidant and antimicrobial agents. Apart from the obvious synergistic effects, the presence of phenolic Caffeoylquinic acid and saponin Medicagenic acid could, at this stage, explain the significant antimicrobial activities of S. cypria.
In conclusion, the above results contribute towards the phytochemical and pharmacological knowledge regarding S. cypria, as well as its promising synergistic actions that may in the future be of great use as alternative medicine and nutritional supplements. Further studies, which are currently underway, will help elucidate the total content of bioactive compounds of S. cypria. Furthermore, tests using in vitro biological assays, e.g., cell lines, or in vivo assessments, are required to help determine the antioxidant activity of isolated phenolics. Overall, this study provides valuable data for the exploitation of S. cypria by the pharmaceutical and cosmetic industries.
Plant Material
Sampling was carried out with the coordinates of the central point of the surface being as follows: x: 485,399; y: 3,866,330; z: 1358 (in UTM system 36S). S. cypria plants were identified and distinguished from other Saponaria species based on morphological characteristics, as previously described [17,27]. Roots were collected from five randomly selected mature S. cypria plants (total dry root mass = 500 g), cultivated at the nurseries of the Department of Forests in Troodos, Cyprus. Cultivated plants came from seeds germinated at the Nature Conservation Unit at Frederick Research Center. Seeds of S. cypria came from two seed banks in Cyprus (the Agricultural Research Institute Gene-bank (Nicosia, Cyprus) and the Nature Conservation Unit Seedbank (Nicosia, Cyprus)).
Preparation of Extracts
S. cypria roots were washed, air-dried at room temperature for 3-4 days and crushed into a fine powder. Three different solvents were used: 100% methanol (Merck, Gillingham, UK), 100% ethanol (Merck, Gillingham, UK), and 100% acetone (Merck, Gillingham, UK). Powder (10 g each time) was added to 150 mL of solvent and macerated continuously at room temperature for 24 h. Thereafter, the extracts were centrifuged at 4 °C, 4000 rpm for 10 min and filtered. The solvent in each extract was fully evaporated using a rotary evaporator (Stuart RE300, Keison, Chelmsford, UK) at 60 °C under vacuum of <1 mmHg. The remaining solids were redissolved in methanol. The crude extracts were stored at 4 °C until further analysis.
Total Saponin Content
The total saponin content (TSC) of S. cypria root extracts was measured as previously described [19]. In a glass tube, 250 µL of each extract was added along with 1 mL of a reagent mix containing glacial acetic acid (Merck, Gillingham, UK) and sulfuric acid (1:1, v/v, Sigma Aldrich, Hamburg, Germany). The contents of the tube were vortexed vigorously and heated at 60 °C for 30 min, during which a purple color developed. Following incubation, the tubes were rapidly cooled to room temperature in an iced water bath. The absorbance of all samples was measured at 527 nm (UV-1280, Shimadzu Europa GmbH, Duisburg, Germany). A standard oleanolic acid (Sigma Aldrich, Hamburg, Germany) curve (0.1-1 mg/mL) was constructed. The TSC of all extracts was expressed as mg of oleanolic acid equivalents per gram of crude extract (mg OAE/g crude extract) using the linear regression equation of the oleanolic acid standard curve. All experiments were performed in triplicate and the results were expressed as the mean value ± standard deviation (SD).
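For illustration, a minimal Python sketch of the standard-curve calculation implied above is given below; the absorbance readings and the extract concentration in the assay are placeholder values, not measurements from this study.

```python
# Hypothetical sketch: converting absorbance readings to total saponin content
# (mg OAE/g crude extract) via a linear oleanolic acid standard curve.
# All numerical values below are placeholders for illustration only.
import numpy as np

# standard curve: oleanolic acid concentrations (mg/mL) vs. absorbance at 527 nm
std_conc = np.array([0.1, 0.25, 0.5, 0.75, 1.0])
std_abs  = np.array([0.12, 0.29, 0.55, 0.81, 1.05])
slope, intercept = np.polyfit(std_conc, std_abs, 1)   # A = slope * c + intercept

def tsc_mg_oae_per_g(sample_abs: float, extract_conc_mg_per_ml: float) -> float:
    """Absorbance of the assayed extract and its concentration in the assay (mg/mL)
    -> total saponin content in mg OAE per g of crude extract."""
    oae_conc = (sample_abs - intercept) / slope          # mg OAE per mL of assay solution
    return 1000.0 * oae_conc / extract_conc_mg_per_ml    # normalize to 1 g of extract

print(round(tsc_mg_oae_per_g(sample_abs=0.48, extract_conc_mg_per_ml=2.5), 1), "mg OAE/g")
```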
UHPLC-QTOF-MS Analysis
The identification of the saponin components and the phenolic compounds was performed by UHPLC-QTOF-MS, (Agilent Technologies, Santa Clara, CA, USA). The gradient elution steps were: 98% A, water (Water for LCMS, Carlo Erba, Italy), (0-0.5 min), 98% to 2% A (19 min), 2% A (24 min), 2% to 0% A (26 min), 100% B, acetonitrile (Carlo Erba, Italy) (29 min), 100% to 2% B (30 min) and 98% A (35 min), modifier 0.1% formic acid, (Carlo Erba, Italy) in both, the injection volume was 10 µL and the flow rate was 0.3 mL/min. The liquid chromatography was performed with an Agilent 1290 Infinity LC system (Agilent Technologies, Santa Clara, CA, USA) and the separation of the saponins was achieved using a Waters Sunfire column, 150 mm × 2.1 mm, 3.5 µm, at 40 • C, (Waters Corporation, Milford, MA, USA). The MS experiments were performed on an Agilent 6550 iFunnel high resolution quadrupole time of flight mass spectrometer operating in the negative mode using default settings, (Agilent Technologies, Santa Clara, CA, USA). All chromatographic data were acquired in MS and AutoMS/MS mode using collision energies at 10, 20, 40 and 60 volts. The MS/MS data were processed with the MassHunter Workstation Software known as Qualitative Analysis Version B.06.00. The molecular formula assignment was carried out for each identified compound by comparing the experimental m/z to theoretical values, allowing a mass error of less than 5 ppm. The mass error of all fragment ions was also less than 5 ppm. The molecular weight values and the fragmentation pattern of the compounds were compared to previously reported values of signature ion fragments of known saponins [7,22] and phenolics [25,26]. The structures of saponin backbones were prepared using the ChemSketch program (ACD/Labs Toronto, Canada). Relative quantification was based on calculated peak areas of the nine saponins using the linear regression response curve of reference Quillaic acid (Sigma Aldrich, Germany). Similarly, the linear regression response curve of reference Quercetin (Sigma Aldrich, Germany) was used for the quantification of the six phenolic compounds. The standard concentration range used for quantification was 5, 10, 50, 100, 200, 400, 600 and 800 ng/injection for Quillaic acid and 5, 10, 50, 100, 200 and 400 ng/injection for Quercetin. The data was presented as the mean % (g of compound per 100 g of crude extract) ± the estimated standard deviation (SD) of three independent experiments.
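As a simple illustration of the mass-accuracy criterion mentioned above, the short Python sketch below computes the ppm error between an experimental and a theoretical m/z; the numerical values are generic placeholders, not data from this study.

```python
# Hypothetical sketch: ppm mass error used to accept a molecular formula assignment.
def ppm_error(mz_experimental: float, mz_theoretical: float) -> float:
    return (mz_experimental - mz_theoretical) / mz_theoretical * 1e6

# placeholder values for illustration only
mz_exp, mz_theo = 500.1234, 500.1250
err = ppm_error(mz_exp, mz_theo)
print(f"mass error = {err:.2f} ppm, accepted = {abs(err) < 5}")
```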
Total Phenolic Content
The total phenolic content (TPC) of S. cypria root extracts was determined using the Folin-Ciocalteu method, as previously described [24]. A standard gallic acid (Sigma Aldrich, Hamburg, Germany) curve was constructed by preparing dilutions of 0.05-0.4 mg/mL in methanol (Merck, Gillingham, UK). In a glass tube, 100 µL of each of these dilutions were mixed with 500 µL water and then 100 µL of Folin-Ciocalteu reagent (Sigma Aldrich, Hamburg, Germany). Each reaction mixture was allowed to stand for 6 min, followed by the addition of 1 mL of 7% sodium carbonate (Sigma Aldrich, Hamburg, Germany) and then 500 µL of distilled water. The absorbance was recorded after 90 min spectrophotometrically at 760 nm (UV-1280, Shimadzu Europa GmbH, Duisburg, Germany). The same procedure was repeated with S. cypria extracts. The TPC of all samples was expressed as mg of gallic acid equivalents per gram of crude extract (mg GAE/g crude extract) using the linear regression equation of the gallic acid standard curve. All experiments were performed in triplicate and the results were expressed as the mean value ± standard deviation (SD).
Minimum Inhibitory Concentration
The broth microdilution method was used for the determination of the MIC of the S. cypria root extracts. Saponaria extracts as a 50 mg/mL starting solution were subjected to 2-fold serial dilutions. Specifically, 200 µL of each extract (50 mg/mL) were added as a starting solution and 2-fold serial dilutions with Tryptic Soy broth (TSB, Liofilchem, Italy) were prepared. Isolated cultures of E. coli (NCTC 9001, Sigma Aldrich, Hamburg, Germany), S. aureus (NCTC 6571, Sigma Aldrich, Germany), E. faecalis (NCTC775, Sigma Aldrich, Hamburg, Germany) and S. enteritidis (WDCM 00030, Sigma Aldrich, Hamburg, Germany) were prepared in TSB at a concentration of approximately 1 × 10 6 cfu/mL. One hundred microliters (100 µL) of each bacterial inoculum were added to each well, containing either extract or controls. Blank samples of each extract (containing no bacteria) were subjected to 2-fold serial dilution with TSB (blank control). Control samples included bacteria (100 µL) but no extract and were used as growth controls. A sterility control was used with TSB, no bacteria and no extract. Wells with bacteria and Ampicillin (0.516 mg/mL, Sigma Aldrich, Hamburg, Germany) or Gentamycin (0.064 mg/mL, Molekula, Darlington, UK) were used as positive controls. The MIC of each sample was detected after 18 h of incubation at 37 °C, followed by the addition of 30 µL (0.2 mg/mL) of p-iodonitrotetrazolium chloride (INT, Sigma Aldrich, Gillingham, UK) and incubation at 37 °C for 30 min. The absorbance at 492 nm was measured with a microplate reader (Sunrise, Tecan Trading Ltd., Mannedorf, Switzerland). The MIC of each extract was defined as the minimum sample concentration that prevented the color change of the medium, thus exhibiting complete inhibition of bacterial growth as compared with that of the blank control.
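A minimal Python sketch of the two-fold dilution arithmetic behind such MIC readouts is given below; the number of dilution steps and the example growth flags are assumptions for illustration, not measured data.

```python
# Hypothetical sketch: concentrations produced by a 2-fold serial dilution of a
# 50 mg/mL extract, and picking the MIC as the lowest concentration with no growth.
start_mg_per_ml = 50.0
n_dilutions = 10  # assumed number of wells, for illustration
concentrations = [start_mg_per_ml / 2**i for i in range(1, n_dilutions + 1)]
# i.e. 25.0, 12.5, 6.25, 3.125, 1.563, 0.781, 0.391, 0.195, ...

# growth flags per well after incubation (True = visible growth); placeholder data
growth = [False, False, False, False, False, False, False, False, True, True]

mic = min(c for c, g in zip(concentrations, growth) if not g)
print("dilution series (mg/mL):", [round(c, 3) for c in concentrations])
print("MIC =", round(mic, 3), "mg/mL")
```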
Minimum Bactericidal Concentration
The MBC of S. cypria extracts was determined by sub-culturing 2 µL aliquots of the preparations from the MIC assay in 100 µL TSB and incubating for 24 h at 37 °C. The MBC was defined as the lowest concentration of each sample not exhibiting a color change after the addition of INT, as described above.
Statistical Analysis
All experiments were performed in triplicates and the results were expressed as the mean value ± the estimated SD. Significance between the means was determined by student's t test (p < 0.01).
Supplementary Materials:
The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/molecules27185812/s1, Figure S1: UHPLC-QTOF-MS Extracted Ion Chromatogram of saponins of S. cypria root extract. Only peaks that represent saponins are indicated with numbers 1-9. Other peaks did not provide any evidence that they are saponins; Figure S2: MS/MS spectra data of saponins with precursor and product ions in negative mode; Figure S3: UHPLC-QTOF-MS Extracted Ion Chromatogram of phenolic compounds of S. cypria root extract. Only peaks that represent phenolic compounds are indicated with numbers 1-6. Other peaks did not provide any evidence that they are phenolic compounds; Figure S4: MS/MS spectra data of phenolic compounds with precursor and product ions in negative mode. Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement: Data reported in this study are contained within the article. The underlying raw data are available on request from the corresponding author.
Silica Supraparticles with Self‐Oscillatory Vertical Propulsion: Mechanism & Theoretical Description
A novel type of mm‐sized silica‐based self‐propelling supraparticles displaying buoyancy‐driven homogeneous vertical oscillatory motion using aqueous hydrogen peroxide (H2O2) as chemical fuel is presented. The supraparticles are prepared via a robust droplet templating technique by drying colloidal suspension droplets containing silica microspheres and catalytic Fe3O4@Pt decorated nanoparticles on a superhydrophobic Cu–Ag surface. Oxygen gas originating from Pt catalyzed decomposition of H2O2 is released and gathered onto the hydrophobic supraparticle surface. This causes buoyancy and uplift of the particle to the surface, where the oxygen bubble is released and the particle descents again, leading to an oscillating process in a very regular fashion. The mechanism of this process is characterized and analyzed here quantitatively by a balance of the gravitational and buoyant forces. The theoretical model of particle movement describes how the particle oscillation period depends on the H2O2 concentration. This novel type of self‐propelling particles could find potential applications in mixing and catalysis, especially due to the high regularity of their periodic movement.
Introduction
The fabrication of colloidal supraparticles by use of the suspension droplet templating technique on solid superhydrophobic surfaces [1] has proven to provide a powerful tool for the creation of many new types of functional materials. [2] This functionality can for example be of structural, optical, or chemical nature. Sperling et al. have shown that the use of fumed silica as colloidal building block assembled inside aqueous droplets residing on superhydrophobic surfaces results in the formation of anisometric supraparticles possessing boatlike shape, [3] where those particles can also enclose additional colloids. [4] Besides shape, internal structuring of such particles can be achieved by forming patchy assemblies by the use of polystyrene microspheres and highly light-diffracting gold nanoparticles [1] or magnetic particles. [5] The latter furthermore allows for defined patch positioning in supraparticles with anisometric ellipsoidal shape by using bent superhydrophobic surfaces. [6] The technique of droplet templating on superhydrophobic surfaces [7] thus allows for efficient formation of isotropic and anisotropic supraparticles, which in turn offer high potential for applications in the field of self-propelling devices. [8][9][10] Self-propelling particles harvest energy from their media to generate a force for movement by consumption of a chemical fuel via a usually anisometrically distributed catalyst. [11] The most popular chemical "fuel" is hydrogen peroxide (H 2 O 2 ), when using metals such as Pt or Pd as a catalyst. [12] Many particles of this kind published in the literature are of spherical shape, [13,14] although rod-like particles have also been extensively studied and several modes of movement have been proposed. In general, such particles mostly consist of two or three metals such as Au and Pt, Ni, or Pd as a catalyst [15] and are asymmetric in architecture, i.e., are of Janus type. This type of particles driven by H 2 O 2 decomposition are for instance used for targeted delivery of species, and the rate of movement can be controlled by the H 2 O 2 concentration. [16] Furthermore, the particle velocity can be controlled by the solution pH, [17] and as the decomposition reaction proceeds at the interface, it can be modified by the presence of surfactants. [18] In addition to metal catalysts, one may also employ enzymes or catalysts mimicking them, like synthetic manganese catalase. [19] During H 2 O 2 decomposition, oxygen bubbles are formed on the particle surface, where buoyancy can lead to vertical motion, [20] while the typically mainly observed horizontal motion is likely to be explained by the detachment mechanism of the oxygen bubbles. [14] A recent investigation has shown that in addition to the overall shape of the particles also the nanometric details of the catalyst play an important role. [21]
We provide a full analysis of the movement mechanism and give a simple physical model, that accounts for the observations at different fuel, here H 2 O 2 , concentrations. By correlating to the reaction kinetics, we apply our model to the experimentally observed oscillation time dependence of the particle movement depending on H 2 O 2 concentration. A full explanation of our concept is given and the model strengths and limitations in terms of prediction capability are discussed.
Experimental Observation
We recently reported a new type of self-propelling supraparticles performing oscillatory vertical motion similar to that of an elevator. [39] This type of motion is promoted by catalytic oxygen production by Pt catalyzed decomposition of H 2 O 2 , described in Equation (1):

2 H 2 O 2 → 2 H 2 O + O 2 (1)

The produced oxygen gathers into bubbles attached to the supraparticle surface, due to the surface hydrophobicity tailored during particle preparation (see Experimental section), as shown in Figure 1a. While the particle surface is covered with several small bubbles, one big bubble mainly responsible for buoyancy is formed, as shown in Figure 1b. Oxygen is continuously generated catalytically, and the bubble grows until it creates enough buoyancy to begin lifting the particle to the top of the container (Figure 1c). Here the bubble gets released into the air layer between the hydrophobic Teflon surface and the liquid interface (Figure 1d). This is followed by the particle descending by gravity towards the bottom, and then a new movement cycle starts because of the continued production of O 2 . This oscillating movement will then be repeated in a very regular fashion for extended periods of time (up to several days by our observation). We analyze here the origins of this movement and the means to control it.
Figure 1. Mechanism of particle movement. a) Particle located on the Teflon bottom generating oxygen gas forming a bubble attached at the particle surface; gravity (F g ) is still larger than buoyancy (F b ), F g > F b . b) F b equals F g , tilting the particle, F g ≈ F b . c) Particle is lifted to the top due to further increasing F b , F g < F b . d) Bubble is detached at the Teflon top and the particle descends to the bottom.
The oscillation time (or cycle frequency) depends on the initial concentration of H 2 O 2 in the aqueous phase. In our experiments, this effect was analyzed by tracking the particle position over time. From the video files, we extracted the height function, h(t), and derived from that the velocity, v(t), and acceleration, a(t); for details see Supporting Information. Some example curves for h(t), v(t), and a(t) are given in Figure 2: at low concentrations the particles drop to the bottom (Δh = -27 mm), while for high concentrations of 15-20 wt% the particles already revert their motion before hitting the bottom, thereby performing a fully oscillatory movement. In general, one observes a very regular movement and one that is well reproduced, as shown here for 3 different particle trackings.
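As an aside, a minimal Python sketch of how velocity and acceleration can be derived from a tracked height signal is shown below; the sampling rate, smoothing choice, and synthetic trajectory are assumptions for illustration, not the processing used for the actual videos.

```python
# Hypothetical sketch: deriving v(t) and a(t) from a tracked height signal h(t)
# by finite differences after mild smoothing. Synthetic data for illustration only.
import numpy as np

fs = 30.0                                  # assumed video frame rate in Hz
t = np.arange(0, 20, 1 / fs)               # 20 s of tracking
h = 13.5 * np.sin(2 * np.pi * t / 10.0)    # toy oscillation, amplitude 13.5 mm, period 10 s

# moving-average smoothing to suppress tracking noise before differentiation
kernel = np.ones(5) / 5
h_smooth = np.convolve(h, kernel, mode="same")

v = np.gradient(h_smooth, t)               # velocity in mm/s
a = np.gradient(v, t)                      # acceleration in mm/s^2

print("max |v| =", round(np.max(np.abs(v)), 2), "mm/s")
print("max |a| =", round(np.max(np.abs(a)), 2), "mm/s^2")
```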
Theoretical Description
The description of the elevator particle movement, in terms of its velocity v el , can be done by Newton's second law (Equation (2)), by balancing the forces acting on it. For simplification, the mass of the elevator particle, m el , was taken as constant, as the increase of mass arising from the growing gas bubble is small. The relevant forces F i associated with this process are gravity (F g ), buoyancy (F b ), and friction (F fr ), and all of them change with time. With Equation (2), we only consider movement in the z-direction, as any forces operating in other directions should be negligible; the small extent of lateral movement was also validated experimentally, as shown in Figure S1 (Supporting Information). The sign in front of (F g − F b ) will depend on the direction of movement, being positive for upward motion and negative for downward motion:

m el dv el (t)/dt = ±(F g (t) − F b (t)) − F fr (t) (2)

F g (t) and F b (t) are given as Equations S1 and S2 (Supporting Information), with a comprehensive description of the relevant parameters in the detailed theoretical description (Supporting Information). Also, F fr (t) in a solvent of viscosity η was simply approximated by the Stokes equation (Equation S3, Supporting Information). This is a valid assumption as the particles are moving under laminar flow conditions, since the dimensionless Reynolds number,

Re = 2 ρ sol r el v el / η (3)

where ρ sol is the density of the solvent, r el is the radius and v el is the velocity of the particle, will be less than 40, as even for the highest H 2 O 2 concentration of 20 wt% the maximum velocity (see Figure 2 bottom) is below 40 mm s -1 (for further details of the Re calculation see the Supporting Information). Tumbling of the particles could be another issue of concern for describing the particle motion. For homogeneous spheres (the density distribution in our supraparticles may well be approximated as homogeneous), the dimensionless moment of inertia (which is relevant for rotational motion) would be about one; accordingly, the moment of inertia of the particles could be relevant. [41] However, experimentally we did not observe that tumbling of the particles had any significant effect on the particle movement, in contrast to rising spheres of varying density, which may exhibit complex motion. [42] In our case the particle movement is dominated by motion in the z-direction, and lateral motion, which could for instance arise from convection, can obviously be neglected (Figure S1, Supporting Information). In summary, the translational inertia, together with the laminar flow conditions, apparently stabilizes the linear motion of the particles.
For our case of a gas produced and attached as a bubble onto the particle, a dynamic force equilibrium builds up, leading to oscillatory motion. This motion, depending on the height of the vessel, may take place in two basic modes: (1) the particle reverses the direction of its movement during the fall (full oscillation), or (2) it drops all the way to the bottom of the vessel and rests there until enough oxygen has been produced to start the up-lift again (semi oscillation), as shown in Figure 3. It is generally possible to identify the trajectory minimum with v el = 0 for fully oscillating motion. For the case of semi oscillating motion, that point corresponds to the force balance on the right-hand side of Equation (2) being 0, and at this point in time, Δt rise,s , the elevating motion will begin. The simple mechanical models for each mode of particle motion are described in the detailed theoretical description (Supporting Information), based on the well-known first-order reaction of H 2 O 2 decomposition. [43]
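To make the force balance concrete, the following Python sketch numerically integrates a simplified version of Equation (2) with a linearly growing bubble volume and a bubble-release rule at the top; all parameter values and the constant oxygen production rate are illustrative assumptions and not the model parameters of this work.

```python
# Hypothetical sketch: Euler integration of a simplified force balance for the
# "elevator" particle. Parameters and the constant O2 production rate are assumed.
import numpy as np

g, eta, rho_sol = 9.81, 1.0e-3, 1000.0         # gravity, water viscosity, solvent density (SI)
r_el = 0.4e-3                                   # assumed particle radius, m
rho_el = 1200.0                                 # assumed effective particle density, kg/m^3
V_el = 4.0 / 3.0 * np.pi * r_el**3
m_el = rho_el * V_el
Q_O2 = 2.0e-11                                  # assumed O2 volume production rate, m^3/s
H = 27e-3                                       # liquid height, m
dt, T = 1e-3, 60.0

z, v, V_b = 0.0, 0.0, 0.0                       # start at rest on the bottom, no bubble
release_times = []
for step in range(int(T / dt)):
    V_b += Q_O2 * dt                            # bubble grows from catalytic O2 production
    F_g = m_el * g
    F_b = rho_sol * g * (V_el + V_b)            # buoyancy of particle plus attached bubble
    F_fr = 6.0 * np.pi * eta * r_el * v         # Stokes drag (opposes the velocity)
    a = (F_b - F_g - F_fr) / m_el
    v += a * dt
    z += v * dt
    if z >= H:                                  # bubble released at the top interface
        release_times.append(step * dt)
        z, v, V_b = H, 0.0, 0.0
    if z <= 0.0 and v < 0.0:                    # particle rests on the vessel bottom
        z, v = 0.0, 0.0

periods = np.diff(release_times)
print("releases:", len(release_times), "| mean period ≈",
      round(float(np.mean(periods)), 2) if len(periods) else None, "s")
```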
Model Application to Experimental Data
In the following, we want to compare our experimental data to the model just described. In order to do so, an appropriate expression for ρ sol needs to be found. According to Easton et al., the solvent density of aqueous solutions of H 2 O 2 at 25 °C, for c 0 in [wt%] and ρ sol in [kg m −3 ], is given by the empirical relation of Equation (4). [44] Experimentally, we then determined a for different concentrations c 0 and, using the density given by Equation (4), determined the rate constant of H 2 O 2 decomposition. As χ also depends on ρ sol (Equation S6b, Supporting Information), it is convenient to rewrite the dependence in the form of Equation (5). Equation (5b) relates k ′ to k , which can be combined with Equation S19 (Supporting Information). With this, the experimental data for a are well approximated, as shown in Figure 4, from which one can deduce a value of (1.63 ± 0.05) × 10 −5 m s −3 for k . This rate constant describing the decomposition of H 2 O 2 can then also be used to calculate the overall expression for the experimentally observed oscillation period, Δt el (shown in Figure 3 and given by Equations S4, S5, S7, S10, and S12, Supporting Information). The required radius r el of the elevator supraparticles was calculated from the spherical random-close-packed geometry with a packing factor of ξ = 0.64.
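As an illustration of the packing-based radius estimate mentioned above, a short Python sketch is given below; the dry colloid volume used is a placeholder, and the relation shown is the standard random-close-packing geometry rather than the exact expression of this work.

```python
# Hypothetical sketch: supraparticle radius from random close packing of the dry
# colloidal building blocks (packing factor xi = 0.64). V_dry is a placeholder.
import numpy as np

def rcp_radius(V_dry_m3: float, xi: float = 0.64) -> float:
    # total colloid volume = xi * (4/3) * pi * r_el^3  ->  solve for r_el
    return (3.0 * V_dry_m3 / (4.0 * np.pi * xi)) ** (1.0 / 3.0)

V_dry = 1.5e-10            # assumed dry colloid volume per supraparticle, m^3
r_el = rcp_radius(V_dry)
print(f"r_el ≈ {r_el * 1e3:.2f} mm")
```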
It is clearly visible that the model describes the oscillation-time dependency well up to 15 wt% of H2O2. Above this concentration, however, the model predicts a slightly lower oscillation time than observed in the experiments reported above. This may be related to a transition in the reaction kinetics: once the Pt becomes saturated, the reaction converts to zero-order kinetics, which leads to constant particle frequencies, and Equation S16 (Supporting Information) is no longer valid. This saturation point is determined not only by the surface area of the Pt catalyst; wettability also plays an important role because of the hydrophobic supraparticle surface. In addition, the amount of oxygen lost from the particle surface may increase significantly at such high H2O2 concentrations. This again results in pseudo-zero-order kinetics, as the ability of the hydrophobic surface to retain the produced oxygen is limited, which is not accounted for by our model. We have also added the hypothetical oscillation time Δt_hyp (= Δt_rise,f + Δt_up) as a black dashed curve in Figure 5, as it would be observed for an infinitely deep vessel, i.e., one in which only full-oscillation motion occurs even if the H2O2 concentration is not high enough. As a result, the expected elevation time increases compared to the experimental values, because of the additional travel time of the elevator particle arising from the larger amplitude. The corresponding hypothetical amplitude Δh_hyp, calculated from Δt_hyp, is given in the inset of Figure 5. From it, one can see that at ≈6.25 wt% of H2O2 the expected amplitude reaches 27 mm, which equals the experimental liquid/vessel height. This coincides well with the experimentally observed transition from semi to full oscillation mode when approaching higher concentrations (see Figure 2). However, the expected values of Δt_hyp and Δh_hyp approach 0 when exceeding ≈14 wt%. This corresponds to the oxygen production, at full retention by the supraparticle, being high enough to prevent any sinking and therefore any oscillation. The rationale behind this conclusion is again the lowered retention of oxygen, as increasing H2O2 concentrations cause more violent bubble production.

Figure 5: The β values (Equation S1b, Supporting Information) indicate an approximately constant collected amount of oxygen equal to 95-97% of the total ingredient volume. The black dashed curve shows the hypothetical oscillation time for free particle oscillation without any limitation due to the vessel geometry. The inset shows the expected amplitudes for the hypothetical oscillation between 5.75 and 14.75 wt% of H2O2.
Discussion and Conclusions
We present a full mechanical description of the dynamics of a new type of self-propelling supraparticle performing vertical elevating motion through the decomposition of H2O2 in aqueous media as chemical fuel. Upward movement arises from the binding of a formed oxygen gas bubble to the hydrophobic supraparticle surface. This oxygen is released when the particle reaches the air/water interface, where the bubble breaks through and the supraparticle loses its buoyancy, whereupon it begins a downward motion that is eventually stopped by the re-formation of an oxygen bubble. The resulting oscillating motion is characterized by a very regular oscillation interval, which depends strongly on the H2O2 concentration. A simple model describing this dependency was derived by balancing buoyancy against gravity, while taking into account solvent density, reaction kinetics, friction, and the amount of oxygen collected on the supraparticle structure. The volumetric ratio of oxygen generated was evaluated to be around 80-85% compared to the colloidal volume. This constant adsorption of oxygen onto the supraparticle can be explained by the hydrophobicity of the particle surface and has been validated by optical microscopy. The simple model applies very well below 15 wt% of H2O2, while at higher concentrations deviations are seen due to a change in the reaction kinetics and limitations of oxygen adsorption on the particle surface. The cycle frequency can be controlled precisely by the concentration of the solution and the overall mass of the particle. The results show how the elevating particle movement can be calculated precisely by a model based on simple physical principles, which can therefore provide useful predictions of particle frequency depending on the applied chemical fuel and the amount of colloidal ingredients used during particle synthesis.
Supraparticles of this type are promising for applications where a chemical system needs mixing, catalytic processing, capturing of dispersed components, or other types of convective transport operations. This is made possible by the simple and robust means of making and actuating these supraparticles. The theoretical expressions for the dynamics of such supraparticles may enable the future fabrication of new smart materials with potential use in catalytic or separation applications, where the very regular and controlled motion of these particles can intensify and enhance the process.
Experimental Section
Preparation of Superhydrophobic Surfaces: Superhydrophobic Cu-Ag surfaces were prepared according to a procedure reported by Gu et al. using an electrochemical deposition process. [45] Polished Cu surfaces were immersed into an aqueous solution containing 0.01 m AgNO 3 under moderate stirring for 25 min at room temperature. The surface was then dried at ambient conditions and immersed in 0.001 m 1-dodecanethiol dissolved in ethanol for 20 h without stirring. The as-prepared surfaces showed a water contact angle higher than 150° and were operating consistently for several weeks.
Catalytic Particles-Magnetite Synthesis: Magnetic Fe3O4 nanoparticles with a core of 8-15 nm were synthesized according to Kang et al. using a simple coprecipitation method at high pH. [46] In a typical synthesis, 2.1 mL of a solution containing 0.617 m FeCl2 and 1.234 m FeCl3 (molar ratio = 2:1) in 0.4 m HCl was added drop-wise to 20.3 mL of 1.5 m NaOH under vigorous stirring at room temperature. After the addition, the particles were collected by means of an external magnetic field and washed three times with MilliQ water using centrifugal sedimentation at 2300 g. In a last washing step, the resulting precipitate was redispersed in 0.01 m HCl solution and concentrated to 0.66 wt%, as measured with inductively coupled plasma-optical emission spectrometry (ICP-OES) after dissolving the particles in aqua regia.
Catalytic Particles-Pt Decoration: The resulting magnetite core particles were decorated with Pt in a second step using H2PtCl6 as precursor and NaBH4 as reducing agent. A mass ratio of 1:10 Pt to Fe3O4 was maintained during synthesis. For the reaction in aqueous media, a solution of 6 mg H2PtCl6 × 6H2O in 0.2 µL was added quickly to a suspension of 97 mL volume containing 20 mg Fe3O4, under vigorous stirring with an Ultra-Turrax at 24 000 rpm. After 2 min of stirring, 1.5 mL of an ice-chilled solution of 0.6 mg NaBH4 was added drop-wise at a rate of 1 mL min−1. The mixture was allowed to homogenize for another 2 min, and then a 1 mL solution of 127 mg Na-citrate dihydrate was added. After an additional 3 min of stirring, the resulting particles were cleaned in analogy to the magnetite preparation described above, except for the last HCl washing step. The final suspension had a concentration of 0.46 wt%, measured by ICP-OES after microwave-aided decomposition of the particles in aqua regia at 160 °C and 18 bar pressure. The particles showed sizes of about 8-15 nm in diameter based on transmission electron microscopy measurements. [39]

Synthesis of Self-Propelling Elevator Supraparticles: The "elevator" supraparticles were prepared by the droplet templating method, [3,39] i.e., by deposition of droplets of an aqueous colloidal suspension under controlled conditions onto a superhydrophobic Cu-Ag surface. For a typical preparation, 3 µL of a suspension containing 10% vol/vol silica (Bangs Laboratories Inc., 780 nm in diameter) and 0.11% vol/vol of the catalyst particles described above (0.0034% vol/vol Pt) were applied onto the superhydrophobic surface inside a chamber in which the humidity was set to 5-20% using a silica gel desiccant. The droplets were dried with a magnet placed above them to direct the magnetic catalyst particles to form a patch on the supraparticles. After drying, the resulting supraparticles were hydrophobized by silylation with MeSiCl3 via chemical vapor deposition for 3 min at room temperature in a closed chamber. After the reaction, the supraparticles were annealed at 120 °C for 30 min to enhance the stability of the assembled colloidal structure by complete removal of any moisture. Prior to the elevation experiments, the as-prepared particles were immersed in water for several hours. An illustration of the drying process during particle synthesis is given in Figure 6.
Self-Propulsion Experiments and Video Analysis: The particle motion was observed using an OLYMPUS SZ61 microscope at ×4 magnification coupled with a digital camera (SONY Cybershot DSC-V1) recording movies at 26 frames s−1. The videos were subjected to image analysis using ImageJ/Fiji. [47,48] The particle position over time was recorded and analyzed at different concentrations of aqueous H2O2 and for different amounts of silica in the particles. For the experiments, plastic rectangular cuvettes (Malvern) with an average inner diameter of 10.1 mm were equipped with Teflon tape at the bottom and the top to provide a constant interface. During an observation time of 8 min, the particle movement was tracked (e.g., hundreds of oscillations for the higher concentrations) and subsequently averaged. One particle tracking result in a 0.5 wt% H2O2 aqueous solution is shown in Figure S2 (Supporting Information). Every oscillation was isolated by identifying the initial decay, the bottom time and the rising time. Furthermore, three particles were tested in each H2O2 aqueous solution to ensure reproducibility. The extraction of velocity and acceleration from the height data at 0.5 wt%, 5 wt%, and 20 wt% H2O2 is shown in Figure 2.
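The following illustrative Python sketch shows one way the post-processing described above could be carried out on a tracked height-vs-time trace (e.g., exported from ImageJ/Fiji): velocities are estimated by finite differences and oscillation periods from successive arrivals at the surface. The column layout, file name and surface-detection threshold are assumptions, not the authors' analysis pipeline.

```python
import numpy as np

def analyse_trace(t, z, surface_height, tol=0.05):
    """t, z: 1-D arrays (s, mm); returns velocities and oscillation periods."""
    t, z = np.asarray(t, float), np.asarray(z, float)
    v = np.gradient(z, t)                             # velocity in mm s^-1
    at_surface = z > (1.0 - tol) * surface_height     # particle near the interface
    # rising edges = moments the particle (re)reaches the surface
    arrivals = t[1:][at_surface[1:] & ~at_surface[:-1]]
    periods = np.diff(arrivals)
    return v, periods

# hypothetical usage with an exported CSV containing "time" and "height" columns
# data = np.loadtxt("track.csv", delimiter=",", skiprows=1)
# v, periods = analyse_trace(data[:, 0], data[:, 1], surface_height=27.0)
# print(periods.mean(), periods.std())
```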
Supporting Information
Supporting Information is available from the Wiley Online Library or from the author.

Figure 6, bottom: Optical microscopy image of a typical dried particle prepared from a 3 µL droplet containing initially 10% vol/vol silica and 0.11% vol/vol catalyst particles; scale bar: 500 µm.
Gut microbiota changes in the extreme decades of human life: a focus on centenarians
The gut microbiota (GM) is a complex, evolutionarily molded ecological system, which contributes to a variety of physiological functions. The GM is highly dynamic, being sensitive to environmental stimuli, and its composition changes over the host’s entire lifespan. However, the basic question of how much these changes may be ascribed to variables such as population, diet, genetics and gender, and/or to the aging process per se is still largely unanswered. We argue that comparison among studies on centenarians—the best model of healthy aging and longevity—recruited from different geographical areas/populations (different genetics and dietary habits) can help to disentangle the contribution of aging and non-aging-related variables to GM remodeling with age. The current review focuses on the role of population, gender and host genetics as possible drivers of GM modification along the human aging process. The feedback impact of age-associated GM variation on the GM–brain axis and GM metabolomics is also discussed. We likewise address the role of GM in neurodegenerative diseases such as Parkinson’s and Alzheimer’s, and its possible therapeutic use, taking advantage of the fact that centenarians are characterized by an extreme (healthy) phenotype versus patients suffering from age-related pathologies. Finally, it is argued that longitudinal studies combining metagenomics sequencing and in-depth phylogenetic analysis with a comprehensive phenotypic characterization of centenarians and patients using up-to-date omics (metabolomics, transcriptomics and meta-transcriptomics) are urgently needed.
The GM continuously adapts its composition and functionality to the varying conditions in which the human host lives, to meet the changing demands of host metabolism [2]. Thus, a healthy adult GM structure is properly defined as a set of many possible configurations which, even when differing in composition, share a comparable degree of diversity and evenness (meaning the number of species with an equal distribution in the ecosystem), and the ability to preserve the homeostasis of the human host [3]. In this elaborate scenario, the most informative approach for understanding the role of the GM in its lifelong maintenance of host homeostasis would clearly be longitudinal studies monitoring individuals over time (years and decades) to identify and follow the specific trajectories of their age-related GM modifications. To date, this kind of analysis has not been possible, because attention towards the GM is quite a recent development, while the most reliable and robust longitudinal studies have not collected stool samples across the full life span of individuals. Hopefully, new life-long longitudinal studies or continuations of existing ones will cover this gap.
At present, the best way of grasping the adaptive pattern of the human GM as humans age is represented by cross-sectional studies embracing a wide age range in well-defined populations that are relatively homogeneous in genetics and lifestyle. Inclusion of "extreme phenotypes", i.e., individuals who are at the extreme ends of a trait distribution (healthy subjects versus patients suffering from diseases), can help in identifying specific signatures within overall age-related trajectories regarding genetics, epigenetics, metabolomics and metagenomics, among other things [4-8]. Such is the case of centenarians, who represent a clearly defined and highly informative "super-control" group since, unlike younger controls, most of them achieved their remarkable age by avoiding or perhaps postponing major age-related diseases. The strategy of focusing on individuals from well-defined populations and including "extreme phenotypes" such as centenarians increases one's power to identify physiological age trajectories, including the last 20-30 years of human life, which are usually neglected [9]. Comparison between data sets obtained from different populations will allow us to disentangle changes related to specific genetic or lifestyle habits, including diet, from changes related to the aging process itself.
The model of centenarians
Centenarians represent the best model of "successful" aging showing a lower incidence of chronic illness, a reduction of morbidity and an extension of health span in comparison to octogenarians and nonagenarians from the same cohort [10,11]. Thus, the study of the GM of exceptionally long-lived individuals is providing insights into how the GM successfully adapts in an extremely long lifespan to the progressive age-related environmental (lifestyle, diet, etc.) and endogenous changes, contributing to the maintenance of metabolic and immunological homeostasis and promoting survival [1,8].
Human longevity has a strong familial and genetic component [12,13]. Data from different populations have shown that relatives (parents, siblings and offspring) of long-lived subjects have a significant survival advantage, a higher probability of being or becoming long-lived and a lower risk of undergoing major age-related diseases [14][15][16][17]. Family genealogy data from Sardinian centenarian women have confirmed that maternal longevity is associated with lower infant mortality in offspring [18] suggesting that parents/ mothers who will later become centenarians very likely adopt healthier lifestyles for their children. Considering that the study of centenarians has some obvious limitations (rarity, lack of an age-matched control group and frailty related to extreme age), centenarians' offspring, representative of the elderly age bracket whose lifestyle can still be modified to attain better health, may provide a useful model to study both genetic and environmental/lifestyle determinants of healthy aging [14].
Starting from observation of the profound changes in immune responses with age (immunosenescence, i.e., the overall age-related remodeling of the immune system [19]) and taking into account the increasing amount of experimental data on genetics, proteomics, epigenetics, metabolomics, glycomics, etc. [20], one may conceptualize the aging process as a continuous lifelong remodeling of the whole human organism [21]. The exceptional phenotype of centenarians has been revealed as unexpectedly complex and very dynamic, being a unique mixture of adaptive robustness and accumulating frailty [21][22][23][24][25][26] resulting from the ability of the centenarian's organism to respond/adapt to damaging stimuli.
According to the dynamics of world population aging, a lifelong approach including the last decades of life is extremely important if we are to understand the basis of the longevity process, considering that the oldest-old are the fastest growing segment of the population in many countries. It is also interesting to note that the birth cohort is crucial in the health outcome of long-lived people. A comparison of two Danish cohorts born 10 years apart (1905 and 1915) showed that the younger cohort had longer survival and scored significantly better on both cognitive tests and the activities of daily living scale than the cohort born in 1905, despite being 2 years older at the time of assessment. This finding suggests that more people are living to older ages with better overall functioning [27].
Demographic projections suggest that there will be 3.7 million centenarians across the globe in 2050. In particular, China is expected to have the largest centenarian population, followed by Japan, the United States, Italy and India. In this scenario, the global number of persons aged 80 or over is projected to increase from 125 million in 2015 to 434 million in 2050 with a dramatic hike of the resources needed to care for them [28].
Gut microbiota from birth to 100 years and beyond
The programming of immune response and metabolic pathways is heavily influenced by the interaction between the human organism and its GM starting from infancy. This bidirectional relationship in early life has a profound impact on health and disease in later life.
A very recent paper has proposed that the progressive process of microbial colonization of the human ecosystems may be initiated in utero by the microbial populations of the maternal placenta and amniotic fluid which share some features with the microbiota detected in infant meconium [29]. Moreover, during vaginal delivery, a considerable inoculum of maternal intrauterine microbes is received by the neonate and, after birth, neonatal gut colonization is continued by microbes present in maternal milk and feces, with human milk factors (e.g., complex polysaccharides and antibodies) selectively promoting the growth of mutualistic microbial partners. Thus, antibiotic exposure during pregnancy, cesarean section delivery, postnatal antibiotic administration, and formula feeding may all alter the early intestinal microecology, and these factors have been associated with the risk of disease in later life [30][31][32][33][34]. These findings reveal that the aging process could also depend on early stimuli and experiences that may exert long-term effects. To the best of our knowledge, no studies have correlated the physiological and/ or pathological phenotype of elderly and extremely old individuals with these initial events shaping the early GM establishment. For instance, in centenarian databases no data are available on the mode of delivery, breast-or other types of feeding (wet nurse, animal milk, etc.), nutrition and hygienic conditions in the early years of life. Historical anthropology studies could shed light on these points.
Starting from life in utero, the gastrointestinal tract is colonized by a wide range of bacteria of maternal, dietary and environmental origin, which, after assembling themselves into a highly interconnected bacterial community, co-operate in several vital host functions, including nutrient digestion and absorption, immune function, as well as the development of an appropriate stress response. This close symbiotic relationship makes humans inter-dependent "meta-organisms" [35,36], where the commensal bacteria function as a metabolic and endocrine organ [37] and, in turn, the human immune system has properly evolved to control the physiological life-long low-grade inflammatory response triggered by the GM.
The human GM metacommunity is estimated to consist of over 1000 different microbial species [38] belonging to 5 predominant phyla: Firmicutes and Bacteroidetes followed by Actinobacteria, Verrucomicrobia and Proteobacteria [39,40]. As previously discussed, the GM is a malleable ecosystem, being able to adapt its phylogenetic and functional profile to changes in diet, environment, lifestyle, antibiotic treatments and stress. In a mutualistic context, this plasticity is functional to optimizing the metabolic and immune performance of the host in response to environmental and physiological changes, preserving physiological homeostasis and health status [41].
The human GM is a complex and dynamic environment, which undergoes profound life-long remodeling, sometimes with a concrete risk of maladaptive changes. Indeed, in certain circumstances the age-related pathophysiological changes in the gastrointestinal tract, modification of lifestyle, nutrition [42] and behavior, as well as immunosenescence and "inflammaging" (the chronic low-grade inflammatory status typical of the elderly, [23]) strongly impact on the GM, eventually forcing maladaptive variations [43]. Inflammation, in particular, may result in a higher level of aerobiosis and production of reactive oxygen species that inactivate the strict anaerobic Firmicutes, while allowing a bloom of facultative aerobes, as frequently observed in the elderly [41]. These microorganisms (i.e., Enterobacteriaceae, Enterococcaceae, Staphylococcaceae), generally called "pathobionts", can prosper in an inflamed gut as they are relatively oxygen tolerant, getting the better of mutualistic symbionts and further supporting inflammation [44]. On the other hand, these age-related GM changes can compromise the host immune homeostasis in favor of a proinflammatory profile creating a vicious inflammatory circle and may contribute to the progression of diseases and frailty in the elderly [45][46][47]. Frailty has been negatively associated with GM diversity [48] and Eubacterium dolichum and Eggerthella lenta have been found to be more abundant among frail individuals, while Faecalibacterium prausnitzii was less abundant, thus identifying a GM signature of frailty [48]. A very recent publication has demonstrated that germfree mice are protected from inflammaging [49]. When these mice are co-housed with old, but not young, mice the levels of pro-inflammatory cytokines in the blood increase together with intestinal permeability and macrophage dysfunction [49]. On the whole, these data prove that age-related dysbiosis is responsible for the age-related increase in systemic inflammation. Thus, pursuing a wholesome and adaptive GM trajectory during aging is dramatically emerging as a key factor in the achievement of healthy aging and maintenance of host homeostasis [50].
The comparison of the GM among young adults, the elderly, and centenarians has highlighted that the mutualistic changes in the composition and diversity of the gut ecosystem do not follow a linear relation with age, remaining highly similar between young adults and 70-year-olds and markedly changing in centenarians. Thus, the GM seems to rest in a stable state from the third to the seventh decade of life [51], while after 100 years of symbiotic association with the human host it shows a profound, and possibly adaptive, remodeling. Further analyses are needed to fill in the age gap between 70 and 100 years of age and complete the reconstruction of age-related GM modifications.
Centenarians stand out as a separate population, their GM showing high diversity in terms of species composition (Table 1) [52]. Bacteroidetes and Firmicutes still dominate the GM of centenarians, but the Firmicutes subgroups go through specific changes, with a decrease in the contributing Clostridium cluster XIVa, an increase in Bacillus species, and a rearrangement of the Clostridium cluster IV composition [51].
The GM of centenarians is enriched in facultatively anaerobic bacteria, mostly belonging to the Proteobacteria, which have been redefined as "pathobionts" because, in some circumstances, e.g., inflammation, they may escape surveillance, prevail over mutualistic symbionts and induce pathology [44,53]. The age-related remodeling of the GM (i.e., proliferation of opportunistic Proteobacteria at the expense of symbiotic Firmicutes and Bacteroidetes) may contribute to inflammaging and/or is affected by the systemic inflammatory status in a sort of self-sustaining loop. Indeed, the changes in GM profile observed in centenarians correlate with an increase in pro-inflammatory cytokines in the peripheral blood. In particular, these exceptionally long-lived subjects show high levels of IL-6 and IL-8, which correlate with an enrichment in Proteobacteria and a decrease in the amount of certain butyrate-producing bacteria [51].
A recent paper reconstructs the longest human microbiota trajectory with age by phylogenetic GM analysis of a sizable number of Italian young, elderly and extremely long-lived subjects (centenarians and semi-supercentenarians, i.e., persons who reach the age of 105 years) [8]. According to the authors, a core GM comprised of dominant symbiotic bacterial taxa (Ruminococcaceae, Lachnospiraceae, Bacteroidaceae) loses diversity and relative abundance of its members with age, thus decreasing in size. In extreme longevity, this shrinkage is counterbalanced by an increase in longevity-adapted and possibly health-promoting subdominant species (e.g., Akkermansia, Bifidobacterium, Christensenellaceae) as well as in their co-occurrence network. In addition, the GM of semi-supercentenarians is invaded by microorganisms typical of other niches, such as Mogibacteriaceae and Synergistaceae, known to be abundant in the periodontal environment. In extremely aged people, centenarians and semi-supercentenarians, an overall increase has been observed in GM diversity. Thus, while extremely aged people lose some of the most important core components of the adult GM, they acquire in parallel a wealth of new microbial GM components, including potential pathobionts and allochthonous microorganisms. Along with extreme aging, it seems that the host tolerates the consolidation of new GM ecosystem balances in the gut, resembling a property typical of the ancestral human GM [54,55]. In particular, to understand the GM-host co-evolutionary trajectory, several studies have compared the GM ecosystem of small-scale rural societies with that found in a westernized lifestyle [54]. This comparison revealed specific GM adaptations to the respective subsistence strategies, including higher diversity and enrichment in microorganisms generally considered as pathobionts (e.g., Prevotella, Treponema, Bacteroidetes and Clostridiales) in the GM of ancestral populations [56,57]. For instance, the GM of Hadza hunter-gatherers from Tanzania showed a unique enrichment in metabolic pathways that align with dietary and environmental factors peculiar to their foraging lifestyle, characterized by a broad-spectrum carbohydrate metabolism reflecting the complex polysaccharides in their diet during the rainy season, while also being equipped for the branched-chain amino acid degradation and aromatic amino acid biosynthesis typical of their diet during the dry season [55,58]. Such research makes us appreciate the co-adaptive functional role of the GM in complementing human physiology. Along with these studies, we can thus hypothesize that extremely long-lived people are able to rearrange the "mutualistic pact" with the GM, at least partly changing the microbial partners which support host health and physiology. It remains to be seen how these persons achieve this goal, and if and which environmental and/or genetic host factors are involved in this highly adaptive human process.
Gut microbiota in centenarians from different continents: Italians versus Chinese and Japanese
In studying the age-related remodeling of the human GM, one of the most challenging aspects is to discriminate effects due to the aging process per se from those due to the modification of diet and lifestyle that aging entails [3]. In advanced age, tooth loss, chewing and swallowing problems, impaired sense of taste and smell and reduced physical activity strongly affect the quality of diet and lifestyle [59,60] and these, in turn, are very well known short-and long-term determinants impacting on GM composition and functionality [3].
One effective strategy to disentangle these aspects is to compare elderly and long-lived people with different nutritional habits, lifestyles and cultures. Thus, the comparison between the GM of Italian centenarians/semi-supercentenarians and that of Chinese old people (including centenarians) led to the identification of gut-microbial signatures of healthy aging [52]. The combination of the two datasets suggests significant differences in community membership and structure between the Italian and Chinese long-living groups, which can be attributed to geographic, genetic and nutritional factors (Table 2). However, common features discriminating long-lived from young people were identified in both groups [52]. Finally, the GM of the long-living groups in both the Italian and Chinese cohorts is also enriched in Ruminococcaceae, Akkermansia and Christensenellaceae, which have been classified as potentially beneficial bacteria and linked to body mass index, immunomodulation and healthy homeostasis [52].
Another paper, presenting the Illumina sequencing of 16S rRNA gene amplicons performed on the GM of centenarians living in one of the most long-lived villages in the world (Bama County, China), confirmed that the GM of centenarians was more diverse (count of the unique OTU numbers, Chao 1 index) than that of the younger elderly [61] ( Table 1). The diversity of the GM community is considered as a key health indicator since it markedly affects the health status of the hosts, while a reduced GM diversity has been associated with several pathological conditions, including autoimmunity (inflammatory bowel disease and psoriatic arthritis), antibiotic treatment, Clostridium difficile infections, obesity and other metabolic alterations [62][63][64][65][66][67][68][69]. These results contrast with previous studies suggesting that the microbial diversity of GM was significantly reduced in centenarians [51]. However, in Biagi et al. [51] the GM was characterized by a microarray-based approach, making it impossible to fully characterize any unexpected diversity of the human GM ecosystem. Among the distinctive features of the fecal microbial communities of Bama County centenarians, the authors showed certain similarities (abundance of Escherichia, reduction in Bacteroidetes, structural change in butyrate-producing bacteria in the Clostridium cluster IV and Clostridium cluster XIVa) and some differences (low level of Akkermansia) with Italian centenarians ( Table 2).
The paper by Odamaki et al. [70] provides a picture of the changes in GM composition throughout human life, from birth to extreme aging, in a large cohort of Japanese individuals [70]. However, even though children, adults and the elderly were abundantly represented, this analysis was not centered on longevity, as it included only six centenarians (100-104 years old) and seven over-95-year-olds. Importantly, a decrease in Faecalibacterium, Roseburia, Coprococcus and Blautia and an increase in Enterobacteriaceae were shown in 90- and 100-year-old subjects, resembling the age-related microbiota features found in Italian centenarians but with some differences from Chinese centenarians (Table 2). Regarding microbiota diversity, in the Japanese cohort the alpha diversity score and the Shannon index remained stable during adulthood and then increased in the elderly and centenarians, the latter data confirming previous observations (Table 1) [70].
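Since several of the studies compared here rest on alpha-diversity metrics (Chao1, Shannon, Simpson reciprocal index), the following illustrative Python sketch shows how such indices are typically computed from a vector of OTU counts. The toy count vector and function name are invented for illustration and are not drawn from any of the cited papers.

```python
import math
from collections import Counter

def alpha_diversity(otu_counts):
    """Compute common alpha-diversity indices from a list of OTU read counts."""
    n = sum(otu_counts)
    props = [c / n for c in otu_counts if c > 0]
    shannon = -sum(p * math.log(p) for p in props)          # Shannon index H'
    simpson_reciprocal = 1.0 / sum(p * p for p in props)    # 1 / Simpson's D
    s_obs = len(props)                                        # observed richness
    freq = Counter(otu_counts)
    f1, f2 = freq.get(1, 0), freq.get(2, 0)                   # singletons, doubletons
    chao1 = s_obs + (f1 * (f1 - 1)) / (2 * (f2 + 1))          # bias-corrected Chao1
    return {"shannon": shannon, "simpson_reciprocal": simpson_reciprocal,
            "chao1": chao1, "observed_otus": s_obs}

# toy example: read counts assigned to each OTU in one stool sample
print(alpha_diversity([120, 80, 40, 10, 5, 2, 1, 1, 1]))
```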
GM remodeling with age matches metabolome variations. Thus, centenarians showed a distinct metabolic pattern. A unique alteration of specific glycerophospholipids and sphingolipids [71] and decreased circulating levels of 9-hydroxy-octadecadienoic acid (9-HODE) and 9-oxo-octadecadienoic acid (9-oxoODE), markers of lipid peroxidation [7], are seen in the longevity phenotype in Italy. It has also been revealed that the longevity process deeply affects the structure and composition of the human GM-derived metabolome, as shown by the increased excretion of phenylacetylglutamine (PAG) and p-cresol sulfate (PCS) in Italian centenarians' urine [7]. In 647 individuals from the US, followed up for as much as 20 years, higher concentrations of the citric acid cycle intermediate, isocitrate, and the bile acid, taurocholate, were associated with lower odds of longevity, defined as attaining 80 years of age. In a larger cohort of 2327 individuals with metabolite data available, higher concentrations of isocitrate but not taurocholate were also associated with worse health conditions [72]. On the other hand, centenarians from the Bama County in China showed decreased levels of PCS but increased levels of fecal shortchain fatty acids (SCFAs) and total bile acids [73]. Intestinal commensal bacteria metabolize host-derived bile salts [74]. Bile acids are hormones that regulate their own synthesis, transport, glucose and lipid homeostasis and energy balance via activation of specific nuclear receptors and G proteincoupled receptors. The circulating bile acid pool composition consists of primary bile acids produced from cholesterol in the liver, and secondary bile acids formed by specific gut bacteria. The gut microbial community, through its capacity to produce bile acid metabolites distinct from the liver (i.e., secondary bile acids), can be thought of as an "endocrine organ" with the potential to alter host physiology, perhaps in their own favor. The term "sterolbiome" [74] describes the genetic potential of the GM to produce endocrine molecules from endogenous and exogenous steroids in the mammalian gut. Thus, changes in age-associated microbiome composition could impact on bacterial metabolism of steroid compounds and ultimately steroid hormones in peripheral tissues. Chinese centenarians have high levels of bile acids [73], suggesting a pro-longevity role. However, studies on different populations reported that increased levels of secondary bile acids are associated with an increased risk of age-associated diseases [72] and specific diseases of the gastrointestinal tract system [75].
On the whole, these data indicate that human GM alterations during aging are not univocal ( Table 2) but follow different trajectories depending on lifestyle, nutrition, geographic/population/social factors as well as host genetics. In extremely long-lived people the composition, functionality and diversity of this complex and dynamic microbial community seem to achieve a peculiar balance resulting from a continuous 100-year remodeling process. Thus, it still remains to be determined how and if this (optimally?) adapted GM contributes to the homeostasis of the aged host, enabling him/her to reach the extreme limits of human life.
Gut microbiota age-related changes, brain functions and neurodegenerative diseases
To this already complex scenario, it should be added that the gastrointestinal tract establishes a strong bidirectional connection with the Central Nervous System (CNS) named the "gut-brain axis", along which the GM plays a crucial role. A number of experimental observations have shown that even mild alterations in GM composition are able to cause modification of cerebral functions, while conversely the brain can deeply affect intestinal functions via the secretion of hormones, neuropeptides and neurotransmitters such as substance P, neurotensin, corticotropin releasing hormone, 5-hydroxytryptamine, and acetylcholine. The literature on this hot topic is extensive and more details can be found in recent reviews [76,77]. Here, specific topics relating to the impact of GM age-related changes on brain physio-pathology, with particular attention to the role of tryptophan, will be briefly addressed. Gut and the GM affect brain and upper cognitive functions by two distinct pathways: (1) a direct one via retrograde stimulation of the Vagus nerve and the production of hormones and cytokines such as IL-6, TNF-α and VIP; (2) an indirect one, via the production of bacterial components and metabolites. The main microbial bioactive molecules are: proteins that may cross-react with human antigens and stimulate abnormal responses by the immune system [78,79]; neurotoxic metabolites such as d-lactic acid and ammonia which are able to cross the blood-brain barrier and cause neurotoxicity or neuroinflammation [80][81][82]; hormones and neurotransmitters interfering with those of human origin (e.g., Lactobacillus and Bifidobacterium species are GABA neurotransmitter producers, Escherichia, Streptococcus and Enterococcus are serotonin synthesizers) [83][84][85]. Hence, instead of a "gut-brain axis", it would be more correct to refer to the "GM-gut-brain axis" integrating the GM with neuro-humoral signals from/to the CNS, neuroendocrine and immune systems, the autonomic nervous system, and the enteric nervous system (ENS). A growing amount of evidence has pinpointed the availability and metabolism of the essential amino acid tryptophan as a key regulator of this axis. Tryptophan is metabolized along the serotonin or the kynurenine pathway [86] with many implications for ENS and CNS functioning (Fig. 1). Serotonin is mainly (95%) located within the GI tract and in a small proportion (5%) in the CNS. In the gastrointestinal tract serotonin is responsible for motility, secretion and absorption as well as intestinal transit, while it can also modulate food intake by stimulating vagal afferent pathways involved in the reduction of obesity and metabolic dysfunction [87]. By contrast, most available tryptophan is transformed into quinolic and kynurenic acid, which are of particular interest for neurogastroenterology as they are neuroactive metabolites that act on N-methyl-d-aspartate (NMDA) and alpha 7 nicotinic acetylcholine receptors in the CNS and ENS. In the CNS, kynurenic acid has long been viewed as neuroprotective, whilst quinolinic acid is primarily considered an excitotoxic NMDA receptor agonist [88]. Within the gastrointestinal tract, both molecules appear to be involved in immunoregulation [89] and in particular kynurenic acid may have anti-inflammatory properties [90]. Due to its specific role on tryptophan metabolism and serotonergic system, there is some evidence that the GM is a pivotal player in the regulation of different behavioral domains such as pain, depression, anxiety and cognition [86]. 
Major studies about this relationship have been performed using germfree animals (free of all microorganisms, including those normally symbiotic in the gut) characterized by increased plasma tryptophan concentrations that can be normalized by colonizing the mice immediately post-weaning. These animals exhibited increased hippocampal 5-hydroxytryptophan (a serotonin precursor) concentration and significant CNS alterations, demonstrating that the GM is essential for normal brain development [91,92]. In fact, the GM can directly utilize tryptophan, limiting its availability to the host because bacteria require tryptophan for their normal growth and some strains such as Bacteroides fragilis may produce a tryptophanase, an enzyme that has recently been associated with autism spectrum disorders [93] (Fig. 1). Moreover, some bacterial strains are able to synthesize tryptophan and produce serotonin from tryptophan in vitro. Tryptophan, through the kynurenine pathway, is involved in the biosynthesis of nicotinamide adenine dinucleotide (NAD + ) [94], which has a key role in human health as it is an essential coenzyme for the cellular processes of energy metabolism, cell protection and biosynthesis. Moreover, decreased cellular NAD + concentrations occur during aging and supplementation with NAD + precursors can prolong both life span and health span [95,96]. NAD + is indeed an important co-substrate of sirtuins. Several papers have shown that in old animals, when the levels of NAD + are restored, there is an increase in sirtuin1 and a reduction in mitochondrial stress, DNA damage and inflammation [95].
Tryptophan-derived indoles are involved in the host-microbiome interaction in the intestine [97]. Indoleamine-2,3-di-oxygenase (IDO) is an interferonγ-induced enzyme involved in catabolizing tryptophan to kynurenine, which has been shown to be higher in nonagenarians than in young people [98]. Hence, inflammaging might induce IDO, leading to tryptophan degradation to kynurenine. Microbial tryptophan metabolites generated by induction of IDO have recently been identified as human aryl hydrocarbon receptor (AhR)-selective agonists [99]. AhR signaling has a role in various physiological processes including chemical/microbial defence and tissue development, while, recently, the IDO-AhR axis has been recognized as a fundamental player in the control of the "Disease Tolerance Defence Pathway", i.e., the ability of the host to reduce the effect of infection on host fitness [100] (Fig. 1). Data obtained on murine models have shown that tryptophan catabolism by IDO assumes an immunoregulatory role acting via AhR ligands, boosting regulatory T cells and protecting mice from chronic hyperinflammatory responses [101].
The balance between bacterial tryptophan utilization, metabolism and synthesis and serotonin/kynurenine production has a fundamental function in determining local gastrointestinal and circulating tryptophan availability for the host with implications for both ENS and CNS serotonergic neurotransmission [86].
Modifications to the composition of GM across the lifespan may deeply affect the availability of tryptophan and serotonergic signaling during aging. Shotgun analysis on the bacterial metagenome of young, old and centenarian subjects showed an age-related amplified abundance of genes involved in the tryptophan metabolism pathway [102] and this finding is in agreement with the reduction of tryptophan due to its altered bio-availability found in the serum of centenarians [7,71]. Little is known about plasma tryptophan disposition in aged experimental animals, while in humans the plasma concentration of tryptophan is moderately lower in the elderly [103]. It is interesting to note that in rodents, limiting the dietary intake of tryptophan and methionine may have a beneficial effect on health-and life span [104], while excess of tryptophan can be toxic and carcinogenic [105]. In addition, alteration of the kynurenine metabolites may contribute to neurotoxicity [106] and has been associated with Huntington disease [107], HIV dementia [108] and Parkinson's disease (PD) [109]. Surprisingly, in a mouse model of Alzheimer's disease (AD), a diet rich in tryptophan seems able to reduce the amyloid plaque content [110]. It is thus tempting to speculate that the GM of centenarians adjusted the tryptophan metabolism to support healthy aging (Fig. 2). These findings assume a particular importance in view of the fact that centenarians are remarkably free from neurodegenerative pathologies such as PD and AD. Indeed, although the prevalence of cognitive impairment in centenarian studies varies widely [111], some of them (15-20%) preserve cognitive function and, even among those who show cognitive impairment at 100 years, approximately 90% delay the onset of clinically evident dementia until the advanced average age of 92 years [112]. In addition, most centenarians have low levels of anxiety and depression [111], suggesting that such people should be chosen as "super-controls" in studies designed to evaluate the contribution of GM dysbiosis to cerebral degenerative diseases.
AD is one of the commonest neurodegenerative disorders and is associated with the cerebral accumulation of amyloid-beta fibrils driving neuroinflammation and neurodegeneration. The bacterial species residing in the intestine have been shown to release substantial amounts of amyloids and lipopolysaccharides, thereby promoting the production of pro-inflammatory cytokines and modulating the signaling pathways involved in the pathogenesis of AD [113,114]. Numerous research findings have shown that AD may start in the gut, and hence is closely associated with GM imbalance.

Fig. 1 Tryptophan metabolism through the serotonin and kynurenine pathways. Tryptophan (TRP) is an essential amino acid which must be supplied with the diet. Once absorbed from the gut, TRP is made available in the circulation as free TRP and an albumin-bound TRP fraction and/or is metabolized along the serotonin or the kynurenine pathway. TRP in circulation can cross the blood-brain barrier (BBB) to participate in serotonin (5-HT) synthesis in the CNS. TRP in the gut is metabolized to 5-HT in the enterochromaffin cells (ECs): TRP is first converted to 5-hydroxytryptophan (5-HTP) by the rate-limiting enzyme tryptophan hydroxylase (TPH), and the short-lived 5-HTP intermediate is then decarboxylated to 5-HT by aromatic amino acid decarboxylase (AAAD). However, the vast majority of available TRP is metabolized along the kynurenine pathway. Kynurenine (L-KYN) is produced from TRP by the action of the hepatic enzyme tryptophan-2,3-dioxygenase (TDO) or the ubiquitous indoleamine-2,3-dioxygenase (IDO). TDO can be induced by glucocorticoids or by TRP itself, whereas IDO is stimulated by inflammation, with IFN-γ as the most potent inducer. Once L-KYN is produced, it is further metabolized along one of two distinct arms of the pathway with the production of neuroprotective kynurenic acid (KYNA) or neurotoxic quinolinic acid (QUIN). KYNA can be neuroprotective against QUIN-induced excitotoxicity, but it can also induce cognitive impairment when abnormally elevated. Activation of the kynurenine pathway has a dual impact, limiting the availability of TRP for 5-HT synthesis and increasing the downstream production of neurotoxic/neuroprotective metabolites. TRP, via the kynurenine pathway, is involved in the biosynthesis of nicotinamide adenine dinucleotide (NAD+), which is an essential coenzyme for cellular processes of energy metabolism, cell protection and biosynthesis. The GM can also directly utilize TRP, limiting its availability to the host. Certain bacterial strains may produce a tryptophanase enzyme that synthesizes indoles from TRP. These microbial metabolites have recently been identified as human aryl hydrocarbon receptor (AhR)-selective agonists. AhR signaling has a role in chemical/microbial defense and tissue development, while, recently, the IDO-AhR axis has been recognized as a fundamental player in controlling the "Disease Tolerance Defense Pathway". Bacteria can also synthesize tryptophan via enzymes such as TRP synthase (TRP synt), and specific bacterial strains can also produce serotonin from TRP in vitro. The balance between bacterial TRP utilization and metabolism, TRP synthesis and 5-HT production plays an important role in regulating gastrointestinal and circulating TRP availability for the host, in addition to its dietary intake. Moreover, accumulating evidence supports the role of the GM in regulating TRP availability and 5-HT synthesis via modulation of the enzymes responsible for TRP degradation along the kynurenine pathway.
There is increasing evidence of a link between the GM and PD. Recent studies showed that PD is associated with gut dysbiosis [115]; the fecal concentration of SCFAs is significantly reduced in PD patients compared to controls, and this reduction could impact on CNS alterations and contribute to gastrointestinal dysmotility in PD [116]. In a mouse model of PD, it has been demonstrated that the GM is a key player in motor deficits and microglia activation [117]. Interestingly, alpha-synuclein aggregates, a pivotal marker of PD, are present in both the submucosal and myenteric plexuses of the ENS prior to their appearance in the brain, indicating a possible gut-to-brain route of "prion-like" spread [118].
The role of the GM has also been investigated regarding the regulation of hypothalamic-pituitary-adrenal (HPA) axis development [119,120]. In germ-free mice, exposure to restraint stress triggers an exaggerated HPA axis response compared to specific pathogen-free control mice. Such an aberrant response is normalized through intestinal colonization by Bifidobacterium longum subsp. infantis and fecal matter from specific pathogen-free mice. Importantly, fecal microbiota transplantation (FMT) proved effective only in the animals' early life [121]. These experiments demonstrated the crucial function of the GM in the development of an appropriate physiological endocrine response to stress in the postnatal stage of the animal model. During life, chronic HPA axis hyperactivation by stress exposure damages gut barrier integrity, causing intestinal dysbiosis, behavioral changes and stress-related symptoms, including mood disorders, anxiety and cognitive defects [122]. Patients suffering from hepatic encephalopathy are characterized by alterations of GM composition and endotoxemia. In particular, high levels of inflammatory cytokines were found in cirrhotic patients with cognitive decline, as compared to those with normal cognitive function, and the bacterial families Alcaligenaceae and Porphyromonadaceae proved positively correlated with cognitive impairment [123,124]. Other works have focused on the impact of the GM on depression or anxiety, showing that pathogen-free mice exhibit reduced anxiety and increased motor activity [91,125]. Tillisch et al. [126] demonstrated that brain activity and connectivity in healthy women following an emotive task could be attenuated by administering a 4-week course of a fermented milk beverage containing several probiotic bacterial strains [126]. Thus, the GM seems to modulate multiple effects, extending beyond adaptive immune functions to neurological/psychological ones. The two-way interaction between the GM and the brain can be modulated by diet and/or probiotic/prebiotic/symbiotic supplementation designed to positively impact on brain activity and behavior [127]. For these reasons, probiotics with psychotropic functions in humans, such as Lactobacillus helveticus and Bifidobacterium longum, have recently been termed "psychobiotics" given their ability to reverse anxiety or depression-like behavior [128].

Fig. 2 Gut microbiota and brain function in Italian centenarians. This figure summarizes our studies on the phenotypic characteristics of Italian centenarians. In extreme longevity, complex remodeling of the GM is reflected at a systemic level by specific signatures of blood and urine markers (inflammatory, lipidic and metabolic). The strong two-way connection between GM and brain is likely to positively affect the well-preserved cognitive function of centenarians until a very advanced age. The fundamental role of bacterial tryptophan metabolism, via the serotonin and/or kynurenine pathways, in the effect on the brain deserves to be further investigated. AD Alzheimer's disease, PD Parkinson's disease, SCFAs short-chain fatty acids, IL-6 interleukin-6, IL-8 interleukin-8, 9-HODE 9-hydroxy-octadecadienoic acid, 9-oxo-HODE 9-oxo-octadecadienoic acid, PCS p-cresol sulfate, PAG phenylacetylglutamine
Gut microbiota-targeted diets and interventions improving cognition and health
The marked potential effect of the GM on neurological and psychological pathways suggests the hypothesis that intestinal bacteria may be a bridge in the emerging relation between diet and the cognitive system [123]. For example, the pronounced consumption of fruit, vegetables and pulses typical of the Mediterranean Diet (MedDiet) has been associated with increased fecal SCFA levels. SCFAs (acetate, propionate and butyrate), produced by the GM (Firmicutes and Bacteroidetes strains) during fermentation of undigested polysaccharides, have a well-documented protective role against various inflammatory as well as behavioral disorders [129,130].
It has recently been shown that the GM rapidly responds to an altered diet in a diet-specific manner. It seems possible to modulate GM composition and activity within a single day by switching from a herbivorous to a carnivorous diet, and as a consequence to modulate GM metabolic pathways [131]. Thus, the dietary lifestyle represents a life-long stimulus for the GM, which responds by modifying its structure and functionality in the short term, with multiple effects on the organism.
Recently, it has been postulated that the MedDiet exerts its health effects through hormetic mechanisms [132]. A lifelong exposure to the specific components of the MedDiet may, therefore, very likely counteract the effects of inflammatory stimuli, including those that may come from the GM metabolism, by acting as hormetins [132]. Epidemiologic evidence also suggests that coffee drinkers have a lower risk of PD [133]. It has been proposed that this protective effect impacts on the composition of the GM, counteracting the development of intestinal inflammation which is associated with less misfolding of the protein alpha-synuclein in the enteric nerves. This would reduce the risk of PD development, minimizing propagation of the alpha-synuclein aggregates to the CNS [118].
In animal models, interventions aimed at reducing calorie intake have been shown to be accompanied by structural modulation of the GM [134]. For instance, a life-long low-fat diet significantly altered the overall structure of the GM in C57BL/6J mice. Calorie restriction was shown to enrich phylotypes positively correlated with longevity, such as the genus Lactobacillus, and to reduce phylotypes negatively associated with lifespan [135]. Since nutrient metabolism is highly dependent on the composition of the GM and vice versa [136], it can be assumed that certain anti-aging interventions may cause specific variations in the gut microbial communities resembling those of chronic calorie restriction, thus promoting both the health span and the life span. Several documented clinical trials have investigated the effect of prebiotics and probiotics, particularly those containing Bifidobacterium and Lactobacillus, as a microbiota-targeted intervention to improve health status in elderly populations [137-140]. Most of the benefits are mediated by the activation of anti-inflammatory pathways in the resident microorganisms. Probiotic supplementation may also improve metabolic and cardiovascular health status [141] and promote longevity by stimulating the innate immune response [142,143], improving resistance to oxidative stress [144], decreasing lipofuscin accumulation [145] and modulating serotonin signaling [146]. There is also evidence that probiotic treatment can promote longevity in mice, possibly through the suppression of chronic low-grade inflammatory processes in the colon [147]. Importantly, several findings suggest that direct modulation of the GM may not only be applied in treating particular age-related disorders, but can also be a promising therapeutic option to combat the aging process per se. For example, in a murine model, oral administration of purified exopolysaccharide fractions from Bifidobacterium animalis RH, isolated from fecal samples of centenarians residing in the Bama longevity villages (Guangxi, China), resulted in significantly increased activity of superoxide dismutase, catalase and total antioxidant capability in serum, as well as reduced levels of lipofuscin accumulation in the mouse brain [148].
Another approach to restoring the intestinal ecosystem is FMT, also called bacteriotherapy, a transfer of filtered liquid feces from a healthy donor into the recipient's gastrointestinal tract to treat a particular disease or condition [149]. Initially, bacteriotherapy was developed as an effective method of treating Clostridium difficile infection, which is a major cause of healthcare-associated diarrhea through perturbation of the normal GM [150]. More recently, its potential effectiveness and safety have been hypothesized in the prevention and treatment of non-gastrointestinal pathologic conditions, including those commonly associated with aging, e.g., atherosclerosis, metabolic syndrome, type 2 diabetes and neurodegenerative diseases [151,152]. In a preliminary study of the effectiveness of FMT in humans, transferring GM from lean donors to persons with metabolic syndrome [153] beneficially affected the GM composition in recipients by increasing amounts of butyrate-producing bacteria along with improved insulin sensitivity 6 weeks after the FMT procedure [154]. Improvements in symptoms of PD in patients receiving FMT were described in one case report [155], while no studies have been reported for AD so far.
In this scenario, the knowledge emerging from GM studies in centenarians may soon be exploited for therapeutic purposes. For example, transplantation of centenarians' GM into germ-free animal models will allow us to identify the bacterium or combination of bacteria that could be protective against neurodegenerative diseases.
Gut microbiota and host genetics: an intimate evolutionary-shaped relationship
During the last few years, an impressive amount of literature has been published on the different strategies to modify and improve the GM diversity structure with a view to promoting human health. Similarly, many pathologies ranging from obesity and inflammatory diseases to behavioral and physiological abnormalities with neurodevelopmental disorders have been associated with different types of bacterial species and their products [77], as described in the previous sections.
On the other hand, recent data have suggested a new and intriguing possibility that the host genome interacts with and shapes its own GM. In this connection, host genetics has been shown to influence the composition of the GM in twin studies [156,157], while more recently, in a wider population study, Christensenellaceae have been reported as the chief bacterial family associated with host genetics [158]. The abundance of Christensenellaceae was also associated with lower body mass index (BMI) in twins, and when introduced into a mouse model it led to reduced weight gain in treated mice compared with controls [158], suggesting that the microbiome can be an important mediator between host genetics and phenotype. Intriguingly, these bacteria were found to characterize the GM in extreme longevity [8], thus reinforcing the idea of a close association with the genetic background and suggesting a possible link to the inheritable component of human longevity. Both nuclear and mitochondrial DNA play a major role in the aging process, so the complex interaction between these two components of host genetics [159] should be taken into account if we are to properly address the GM remodeling occurring during the human life span.
The intimate symbiotic relationship between host genetics and the GM is very ancient since vertebrates coevolved along with their gut bacteria. Multiple lineages of the predominant bacterial taxa such as Bacteroidaceae and Bifidobacteriaceae in the gut arose via co-speciation within hominids over the past 15 million years [160]. The divergence times also indicate that nuclear, mitochondrial, and gut bacterial genomes diversified in concert during hominid evolution [160]. Interestingly, it seems that gut microbiomes have recorded the information of major dietary shifts that occurred during the evolution of mammals, allowing us to predict ancient diets from the reconstruction of ancient microbiomes [161].
Recently, genome-wide association screening for host genetic associations with GM composition identified 42 loci (mainly related to innate immunity) associated with GM variation and function in humans [162]. Another study identified significant associations between gut microbial characteristics and the VDR gene (encoding vitamin D receptor), in addition to a large number of other host genetic factors, and eventually quantified the total contribution of host genetic loci to diversity as 10.43% [163]. The non-genetic factors such as age, sex, BMI, smoking status and dietary patterns explain 8.87% of the observed variations in the GM [163]. Even though the effect of individual genes is small and comparable with the cumulative effect of key non-genetic covariates, the underlying biology of these studies provides a critical framework for future assessments of host-microbe interactions in humans with an adequate statistical power and sample size. Associations with gut microbial community composition at the VDR locus provide a link with secondary bile acids, which serve as ligands for VDR. Results from gene set enrichment analysis and the observation that the bile acid profile in serum associates with variation in the gut microbiome [163] further support this finding. A detailed description of the effect of host genetics on GM composition lies outside the scope of this review. Kurilshikov and colleagues recently published a comprehensive summary of the state of the art on host genetic determinants of GM with details as to techniques and populations analyzed, to which readers are referred [164].
A recent bioinformatics analysis predicts that long noncoding RNAs expressed in intestinal epithelial cells in murine models constitute molecular signatures reflecting the different types of microbiome [165]. In this direction, very recent data highlight the role of the host genome in shaping the GM, this time in terms of microRNAs (miRs). MiRs produced by gut epithelial cells enter bacterial cells, modifying bacterial gene expression in in vitro models [166]. In a DICER-deficient mouse model, in which miR maturation is impaired, severe dysbiosis develops. These important findings not only outline the tight coevolution and inter-organismal crosstalk leading to various profound cellular and metabolic changes, but also lay the foundations for new miR-based therapies to counteract gut-related diseases.
Many variables may be responsible for the GM remodeling associated with human longevity. Among these, the genetic makeup of extreme longevity [159,167] and the epigenetic changes associated with aging could have a deep impact, together with nutrition and lifestyle habits. These lifelong interactions among variables are expected to leave significant signatures in the form of specific blood/urine biomarkers or longevity-associated metabotypes. This is the case with centenarians. As reported above, Italian centenarians show increased excretion of bacterial products such as PAG and PCS in urine [7], specific blood lipid profiles and changes in amino acid levels [7,71] (Fig. 2). By contrast, centenarians from the Bama County in China showed decreased levels of PCS and increased levels of fecal SCFAs and total bile acids [73]. All these findings support the hypothesis of a complex remodeling of the lipid and amino acid metabolism correlated with GM changes [7], as a result of lifelong adaptation and coevolution processes that could also be ethnic specific. Of note, it still remains to be clarified what role gender plays in GM modification studies on long-lived subjects, since female centenarians outnumber males. A much deeper knowledge of the relationship between host genetics and the GM emerged from a recent paper, which used shotgun analysis on 250 adult twins from the UK [168]. These data showed that GM composition and functions are heritable and that twin pairs share microbial SNPs. Interestingly, this similarity is lost after decades of living apart [168], emphasizing the impact of household and geographic region on the GM.
Lifelong interaction among sex, sex hormones and gut microbiota
Several studies have shown that sex hormones also play a role in the host-microbiota interaction. Indeed, the term "microgenderome" defines the potential mediating and modulatory role of sex hormones on GM function and composition, with implications for autoimmune and neuroimmune conditions [169]. Sexual dimorphism is common in autoimmune diseases. Using the non-obese diabetic mouse model of Type 1 Diabetes, Markle et al. showed that the gut commensal microbial community strongly conditions the pronounced sex bias in Type 1 Diabetes risk by controlling serum testosterone and metabolic phenotypes [170]. Their results revealed evidence of sex-specific microbial communities and sex-specific responses to the same microbial communities. The same group also found that the recipients' GM was stably altered in a sex-specific way, since male-typical changes in the GM of female recipients were evident for several months. Unexpectedly, these experimental GM manipulations strongly protected the female mice from diabetes. The mechanism behind this protection critically depended on the impact of the GM on host metabolism and sex hormone signaling pathways [171]. A number of taxa have been found to differ between male and female mice, while the sex differences in GM composition depend in part on genetic background [172]. Studies using gonadectomized and hormone-treated mice clearly revealed hormonal effects on GM composition [172]. In humans, sex-specific interactions between Firmicutes and neurological, immune and mood symptoms of myalgic encephalomyelitis/chronic fatigue syndrome have been reported [173], but we are just beginning to appreciate the links between human microbiome composition and hormonal phenotypes. Twin studies have revealed that the once similar microbial composition of opposite-sex twins becomes distinctly different after puberty when compared to that of same-sex twins, which remains compositionally similar [57]. These data suggest that age-specific interactions of the host with specific microbes may exert beneficial and/or detrimental influences on the biology of the host, including either protection from or susceptibility to autoimmune disease. Furthermore, microbiota transfer studies in humans, mice, and rats reveal a high degree of host specificity on the part of the GM. Bacterial gene expression modulation by the host may partly explain the failure of FMT in certain specific cases, such as those related to Clostridium difficile infection treatment [174], and may eventually impact GM remodeling with age [8]. Efficient colonization and associated effects also seem to be most successful in young animals, most likely because their microbiota is not yet stabilized [169].
Dietary effects on the composition and diversity of the GM depend in part on sex-specific interactions [172,175]. An interesting work showed that GM composition depends on interactions between host diet and sex within populations of wild and laboratory fish, laboratory mice and humans. Inter-individual diet variation correlates with individual differences in the GM, and these diet-microbiota associations are sex dependent. In mice, experimental diet manipulations confirmed that diet affects the GM differently in males versus females. Thus, the prevalence of individual genotype-by-environment interactions (e.g., sex by diet) implies that therapies to treat dysbiosis might have sex-specific effects [176].
Conclusions
Overall, the data available on lifelong changes in the GM are still too few for us to draw any definitive conclusions as to the basic question of how much can be set down to variables such as population, diet, genetics and gender, and how much to the aging process per se. In particular, the GM changes occurring in the last two or three decades of life (in nonagenarians, centenarians, semi-supercentenarians and supercentenarians, i.e., persons who reach the age of 110 years) have been insufficiently investigated, especially regarding the possible contribution of GM to health and longevity or to cognitive decline and neurodegeneration. Longitudinal studies envisaging metagenomics sequencing and in-depth phylogenetic analysis as well as an extensive phenotypic characterization using up-to-date omics (metabolomics, transcriptomics and meta-transcriptomics, to mention a few) are urgently needed. The results of this comprehensive approach are likely to offer more satisfactory answers to the questions addressed in this paper.
|
v3-fos-license
|
2021-02-26T06:16:26.394Z
|
2021-02-25T00:00:00.000
|
232049897
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GREEN",
"oa_url": "https://zenodo.org/record/4618192/files/health-opportunity-costs-and-expert-elicitation-a-comment-on-soares-et-al.pdf",
"pdf_hash": "cbcabdc1f1d8d35f63aefe378730f401f345d32e",
"pdf_src": "Sage",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46601",
"s2fieldsofstudy": [
"Medicine",
"Political Science"
],
"sha1": "7e71b47486a04976c6f03937f116965f72966f5e",
"year": 2021
}
|
pes2o/s2orc
|
Health Opportunity Costs and Expert Elicitation: A Comment on Soares et al.
Chris Sampson, Isobel Firth, and Adrian Towse

Soares et al. 1 have published a valuable study on the use of expert elicitation techniques to identify elusive quantities in public policy. Their contribution may prove to be an important step in the development of robust methods to support evidence-based decision making.
The stated purpose of the structured elicitation is "to inform estimates of expected health opportunity costs in the UK NHS [National Health Service]." The primary conclusion of the study is that the impact of expenditure in the NHS is likely to have been underestimated, such that the shadow price of a quality-adjusted life year (QALY) in the NHS is less than £12,936.
In this commentary, we dispute the authors' conclusion. We outline 3 reasons why the findings are unlikely to provide valid inputs to a revised estimate of the opportunity cost of health care expenditures.
There Are No Experts
An expert is someone with relevant experience that would enable them to provide realistic point estimates and confidence levels for values not available from other (empirical) sources. For example, expert elicitation exercises are used to identify clinical end points associated with health care interventions. A common exercise is to elicit expert opinions from clinicians, to identify a relevant measure for patient response to a therapy, based on prior observations from their experience of treating a disease.
The quantities of interest in Soares et al. 1 related to associations between mortality, morbidity, and expenditures in health care at the system level. These quantities are not observable. This creates a difficulty for expert elicitation. It is not clear who, if anyone, could be considered an expert when judgments cannot be based on prior observation.
Participants were asked to consider parameters that related to an array of heterogeneous diseases and patient groups within a budgeting category. This introduces 2 important problems. First, nobody could be expected to be an expert in all of the diseases mentioned. Second, it seems unlikely that any ''expert'' could come to a reasonable aggregation across diseases, based on the prevalence of each disease within the budgeting category.
The lack of expertise is illustrated in the data provided by the authors. A quarter of clinical experts explicitly stated that they were either not an expert or only an expert in a single clinical field and therefore lacked relevant knowledge.
Most policy experts were from "governmental bodies." These people are likely to have diverse professional experience but are unlikely to have clinical experience relevant to the questions asked. Rather, they are likely to be experts in the process of policy analysis and advocacy, which has little relevance to this exercise. These participants identified many difficulties in the elicitation process, with some stating that they were not contributing their own opinions, deriving their views entirely from the clinical experts.
Clinical experts in one disease area are unlikely to have relevant knowledge at the level of aggregation required by this exercise. Given the complexity of the elicited quantities, particularly when they rely on the identification of causal relationships (e.g., between expenditure and mortality), it is difficult to see who could be an expert.
The Elicited Quantities Are Not Meaningful
There are several frameworks for effective expert elicitation in health care research. Notably, the Sheffield Elicitation Framework (SHELF) is regularly used for this purpose. 2 It is not clear whether a standardized protocol was adopted by Soares et al. 1 Nevertheless, it is useful to consider SHELF criteria as indicators of good practice.
Quantities of interest elicited using the SHELF protocol satisfy 3 conditions:
1. The definition must be clear and unambiguous.
2. It should be such that the quantity of interest will have a unique value.
3. It should be formulated to make the experts' judgments as simple as possible.
The elicitation described by Soares et al. 1 does not appear to satisfy any of these conditions. A key respect in which many of the quantities are neither clear nor unambiguous in their definition is in the comparison of heterogeneous groups of diseases. This was a concern raised by many participants. Experts were asked to consider the disease areas within each budgeting category where an increase in expenditure is more likely to fall. Whether or not any of the participants could reasonably interpret this point is unclear, and each may have interpreted it differently and in a way that could introduce bias. One possibility is that the consideration of specific disease areas introduced an ambiguity effect, whereby respondents focused on favorable outcomes and ignored the diseases for which they lacked information. Qualitative responses show that some participants considered the effectiveness of specific interventions.
The researchers elicited quantities of interest that characterize the proportional relationship between 2 other quantities, as shown by the examples in Table 1. While a given expenditure change might be expected to influence both quantities (X and Y), it is unlikely that the relative magnitude of effect for each is constant (i.e., that X ∝ Y) in reality, either through time or across diseases. As such, it is difficult to conceive of unique quantities that could be identified by experts.
The researchers characterized nonmortality consequences in a way that was not simple. That is, all consequences were related to mortality. The capacity of expenditure to improve quality of life-even in aggregate-may be independent of its capacity to extend life. Indeed, these outcomes may be substitutes. It is not clear how respondents would conceptualize these relationships.
The experts' judgments would have been simpler if based on absolute quantities. The authors justify the use of relative quantities on the grounds that they support conditional independence. However, it seems unlikely that conditional independence is a reasonable assumption, with expenditure in one disease area likely to have spillover effects into other areas and through time. Furthermore, the hypothetical change in expenditure was not explicitly specified as being temporary. Individuals may have interpreted the change in expenditure as either temporary or permanent, and we would expect their responses to the questions to differ accordingly.
Our suggestion that the quantities of interest could not be meaningfully quantified is supported by the participants' comments. One respondent stated that they were "not sure what I have based my estimates on," while others explained the problems associated with comparing budgeting categories and disease areas. One respondent said that there was "too much to aggregate." Participants were given very little information to support their judgments. Some of the quantities that were elicited required participants to speculate about quantities that could have been otherwise estimated, such as
There Is Significant Uncertainty in Responses
As the authors note, there is a high level of uncertainty in the pooled values. By design, the exercise did not allow for tradeoffs between mortality and morbidity or between current and future benefits; all values were bound by zero. For all quantities, credible intervals included values close to this lower bound, as well as very high positive values (which were unbounded). The values imply conflicting conclusions and are difficult to interpret. Only 14% of clinical experts (and 32% of policy experts) were confident that their answers represented their views on mortality effects, surrogacy, and extrapolation. It is important to note that this question is not about respondents' confidence about their answers representing reality but about their answers representing their own views. Thus, the majority of responses might be considered invalid.
Concluding Remarks
Soares et al. 1 ought to be commended for their research study, not least because they chose to make a wealth of material available, which in turn has enabled us to understand the study in greater detail. We hope that our commentary can support further development of methods and practice in this area.
Taken together, the concerns that we have outlined make the authors' conclusion untenable. While being a valuable exercise in methodological development, the study cannot support any practical conclusions about the validity of the assumptions used in previous work or the optimality of any cost-effectiveness threshold used in policy.
Notably, the authors have not chosen to estimate (or even approximate) a revised central estimate for the marginal productivity of health care in the NHS. If such a task were to be undertaken, it would raise other challenges that we do not discuss here. Taken at face value, the mean pooled estimates seem to imply that NHS productivity may be several orders of magnitude greater than previously reported. Such an estimate would lack face validity.
Even if the elicitation exercise were valid, these findings would tell us little about the true opportunity cost of expenditure in the NHS. Rather, they reinforce the high level of uncertainty associated with earlier estimates.
|
v3-fos-license
|
2021-04-07T06:16:52.684Z
|
2021-04-05T00:00:00.000
|
233036386
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/0886022X.2021.1908901?needAccess=true",
"pdf_hash": "8fb64a0408aca10abc2a2d18bb175b9d8d5b33a6",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46602",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "a15de7e17e0e5523426a18b410f3b9c68ea63cbf",
"year": 2021
}
|
pes2o/s2orc
|
Inhibitory effect of anti-malarial agents on the expression of proinflammatory chemokines via Toll-like receptor 3 signaling in human glomerular endothelial cells
Abstract Objective Although the anti-malarial agents chloroquine (CQ) and hydroxychloroquine (HCQ) are currently used for the treatment of systemic lupus erythematosus, their efficacy for lupus nephritis (LN) remains unclear. Given that upregulation of glomerular Toll-like receptor 3 (TLR3) signaling plays a pivotal role in the pathogenesis of LN, we examined whether CQ and HCQ affect the expression of the TLR3 signaling-induced representative proinflammatory chemokines, monocyte chemoattractant protein-1 (MCP-1), and C–C motif chemokine ligand 5 (CCL5) in cultured human glomerular endothelial cells (GECs). Methods We examined the effect of polyinosinic-polycytidylic acid (poly IC), an agonist of TLR3, on MCP-1, CCL5 and interferon (IFN)-β expression in GECs. We then analyzed whether pretreatment with CQ, HCQ, or dexamethasone (DEX) inhibits poly IC-induced expression of these chemokines using real-time quantitative reverse transcriptase PCR and ELISA. Phosphorylation of signal transducer and activator of transcription 1 (STAT1) was examined using western blotting. Results Poly IC increased MCP-1 and CCL5 expression in a time- and concentration-dependent manner in GECs. Pretreating cells with CQ, but not DEX, attenuated poly IC-induced MCP-1 and CCL5 expression; however, HCQ pretreatment attenuated poly IC-induced CCL5, but not MCP-1. HCQ did not affect the expression of IFN-β or the phosphorylation of STAT1. Conclusion Considering that TLR3 signaling is implicated, at least in part, in LN pathogenesis, our results suggest that anti-malarial agents exert a protective effect against the development of inflammation in GECs, as postulated in LN. Interestingly, CQ is a more powerful inhibitor than HCQ of TLR3 signaling-induced chemokine expression in GECs. In turn, these findings may further support the theory that the use of HCQ is safer than CQ in a clinical setting. However, further detailed studies are needed to confirm our preliminary findings.
Introduction
Given that viral infections may trigger either the development of inflammatory renal disease or the worsening of preexisting renal disease [1], activated signaling through Toll-like receptor 3 (TLR3) reportedly plays a crucial role in the pathogenesis of glomerulonephritis (GN) [2,3]. Concerning TLR3 in resident glomerular cells, both exogenous ligands derived from pathogens and endogenous ligands can activate TLR3 and downstream immune responses, leading to the development of 'pseudo' antiviral immunity-related inflammation in the kidney [3][4][5]. Activation of the signaling cascades of these TLRs, including TLR3, in resident renal cells, together with cross-talk with infiltrating monocytes, neutrophils, and lymphocytes, induces type I interferon (IFN) release, which may be involved in the pathogenesis of lupus nephritis (LN) [3][4][5]. Further, continuous activation of the type I IFN system has been reported to play a pivotal role in the pathogenesis of systemic lupus erythematosus (SLE) [6]. Therefore, this mechanism is probably involved in the pathophysiology of GN, especially in LN [3][4][5][6]. Expression of TLR3 in resident glomerular cells has been confirmed in biopsy specimens from patients with LN [7]. Among these cells, glomerular endothelial cells (GECs) are directly exposed to circulating viral particles in the glomerulus [8,9]. Thus, the specific molecular mechanisms underlying the initiation of glomerular inflammation through the activation of endothelial TLR3 signaling need to be determined. Thus far, we have found that endothelial TLR3 activation leads to inflammatory chemokine and adhesion molecule expression in cultured human GECs [8][9][10][11][12]. Despite some limitations, endothelial TLR3 signaling, which is associated with the continuous activation of type I interferon (IFN) as well as the regional expression of various inflammatory molecules in GECs, is thought to be involved in the pathogenesis of LN [3,6].
Although the European League Against Rheumatism and the European Renal Association-European Dialysis and Transplant Association (EULAR/ERA-EDTA) have recommended the use of the anti-malarial agents chloroquine (CQ) and hydroxychloroquine (HCQ) for patients with systemic lupus erythematosus (SLE) and LN [13], the beneficial effects of these agents against glomerular inflammation in LN have not yet been elucidated. Notably, CQ and HCQ have been reported to interact directly with nucleic acids and consequently cause structural modifications of the TLR ligands, preventing nucleic acids from binding to TLRs and thereby inhibiting TLR3 and TLR9 signaling [14]. Previously, we examined the effects of CQ on the expression of C-C motif ligand 5 (CCL5) via TLR3 signaling in human glomerular mesangial cells (MCs) and found that CQ attenuates mesangial TLR3 signaling in the early phase [15]. Intracellular signaling systems sometimes differ between cell types, and we therefore think it is important to examine the effect of anti-malarial agents on GECs, another type of resident glomerular cell.
In the clinical setting, the occurrence of retinopathy, a serious adverse event of anti-malarial agents, is a major concern. Notably, the incidence of HCQ retinopathy is lower than that of CQ retinopathy, suggesting that the two drugs have different modes of action in retinal cells [16]. However, it remains unclear whether such differences between CQ and HCQ exist in their inhibitory effects on the expression of TLR3 signaling-mediated proinflammatory functional molecules in resident glomerular cells. In this study, we examined whether CQ and HCQ differentially affect the expression of the TLR3 signaling-mediated representative proinflammatory chemokines monocyte chemoattractant protein-1 (MCP-1) and CCL5 in GECs.
Cells
GECs were purchased from ScienCell (Carlsbad, CA, USA) and were cultured in endothelial growth medium-2 (EGM-2; Lonza, Walkersville, MD). The culture medium was supplemented with 5% fetal bovine serum, 50 µg/mL gentamicin, and 50 µg/mL amphotericin B. Poly IC was dissolved in phosphate-buffered saline (PBS), pH 7.4, and the cells were treated with 0.5-50 µg/mL poly IC for up to 24 h [8][9][10][11]. In the experiments using immunosuppressive reagents, GECs were pretreated with 1 or 10 µg/mL CQ, 1 or 10 µg/mL HCQ, or 10 µM DEX for 1 h before the treatment with 30 µg/mL poly IC. In our previous studies, we found that cell viability was more than 95% when the cells were pretreated with up to 20 µg/mL CQ and HCQ. RNA interference experiments were performed with a specific siRNA against IFN-β [12], NF-κB p65, or a non-silencing negative control siRNA using Lipofectamine RNAiMAX.
Real-time quantitative reverse transcription (RT) PCR analysis
Total RNA was extracted from cells using the illustra RNAspin kit (GE Healthcare, Buckinghamshire, UK). Single-stranded complementary DNA was synthesized from 1 µg of total RNA using oligo (dT)18 primers and Moloney murine leukemia virus reverse transcriptase (MMLV-RT). The complementary DNA for MCP-1, CCL5, IFN-β, and glyceraldehyde-3-phosphate dehydrogenase (GAPDH) was amplified using SsoAdvanced Universal SYBR Green Supermix. All values were normalized to GAPDH mRNA levels. PCR was performed using primers specific for each target.

ELISA for MCP-1, CCL5, and IFN-β protein

The concentrations of MCP-1, CCL5, and IFN-β proteins in the cell-conditioned medium were measured in triplicate using an ELISA kit according to the manufacturer's protocol.
Western blotting
The cells were lysed in Laemmli's buffer after incubation, and the lysates were subjected to 5-20% polyacrylamide gel electrophoresis. The proteins were transferred to polyvinylidene difluoride membranes. After blocking, the membranes were probed with antibodies against STAT1 (1:10 000) or p-STAT1 (1:5000). The bands were visualized using horseradish peroxidase-labeled secondary antibodies and a chemiluminescent substrate.
Statistical analysis
All experiments were performed at least three times. Values are reported as the means ± standard deviation (SD). The significance of differences between groups was analyzed using Student's t-test. A p-value of less than 0.05 was considered statistically significant. All analyses were carried out using GraphPad Prism software version 7 (GraphPad Software, Inc., La Jolla, CA, USA).
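The group comparison described above (triplicate means ± SD compared with Student's t-test at p < 0.05) can be illustrated with a minimal Python sketch; the numerical values below are placeholders, not data from this study.

```python
# Minimal sketch of the comparison described above (Student's t-test, p < 0.05).
# The values are placeholders, not measurements from this study.
import numpy as np
from scipy import stats

control = np.array([1.00, 1.08, 0.95])   # e.g., relative chemokine mRNA, poly IC alone (n = 3)
treated = np.array([0.52, 0.61, 0.55])   # e.g., poly IC after pretreatment (n = 3)

print(f"control: {control.mean():.2f} ± {control.std(ddof=1):.2f}")
print(f"treated: {treated.mean():.2f} ± {treated.std(ddof=1):.2f}")

t_stat, p_value = stats.ttest_ind(control, treated)   # two-sample Student's t-test
print(f"t = {t_stat:.2f}, p = {p_value:.4f}",
      "significant" if p_value < 0.05 else "not significant")
```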
Results
Poly IC induced the expression of MCP-1, CCL5, and IFN-β in cultured human GECs

We confirmed that stimulation of GECs with poly IC resulted in increased expression of MCP-1 and CCL5 both at the mRNA and protein levels in a concentration- and time-dependent manner (Figures 1 and 2(a-d)). The expression levels of MCP-1 and CCL5 mRNA increased gradually up to 24 h (Figure 2(a,c)). On the other hand, IFN-β mRNA peaked at 2 h and then decreased rapidly thereafter (Figure 2(e)).
CQ inhibits the expression of poly IC-induced MCP-1 and CCL5
We examined the effect of CQ on poly IC-induced MCP-1 and CCL5 expression. Pretreatment of cells with 1 µg/mL CQ did not inhibit the expression of MCP-1 and CCL5 mRNA, but 10 µg/mL CQ inhibited the poly IC-induced expression of MCP-1 and CCL5 mRNA and protein (Figure 4).
HCQ inhibits the expression of poly IC-induced CCL5, but not MCP-1

We examined the effect of HCQ on the poly IC-induced expression of MCP-1, CCL5, and IFN-β. Pretreatment of cells with 1 µg/mL HCQ did not inhibit the mRNA expression of CCL5, but 10 µg/mL HCQ inhibited the mRNA and protein expression of CCL5 (Figure 5(c,d)).
On the other hand, pretreatment of cells with 1 or 10 µg/mL HCQ did not inhibit the mRNA and protein expression of MCP-1 (Figure 5(a,b)). Poly IC treatment induced expression of IFN-β protein and the phosphorylation of STAT1, but HCQ pretreatment did not result in a significant change in IFN-β protein or STAT1 phosphorylation (Figure 5(e,f)).
DEX does not inhibit the poly IC-induced expression of either MCP-1 or CCL5
Pretreatment of cells with 10 µM DEX did not inhibit the expression of MCP-1 or CCL5 mRNA (Figure 6).
Figure 4. GECs were pretreated with 1 or 10 µg/mL CQ for 1 h and subsequently treated with 30 µg/mL poly IC for 16 h. The medium was collected, and RNA was extracted from cells, after which quantitative real-time RT-PCR and ELISA analyses were performed. Data are shown as the means ± SD (n = 3, *p < 0.01, by t-test).

Figure 5. Pretreatment of GECs with HCQ inhibits the expression of CCL5, but not of MCP-1 and IFN-β. The cultured GECs were pretreated with 1 or 10 µg/mL HCQ for 1 h and subsequently treated with 30 µg/mL poly IC for 16 h. The medium was collected, and RNA was extracted from cells, after which quantitative real-time RT-PCR (a,c,e) and ELISA (b,d) analyses were performed. Data are shown as the means ± SD (n = 3, *p < 0.01, by t-test). (f) The cells were pretreated with 10 µg/mL HCQ for 1 h, and then treated with 30 µg/mL poly IC for 6 h. The cells were lysed and western blotting for phosphorylated STAT1 (p-STAT1) and STAT1 was performed.

Discussion

Since the activation of TLR3 signaling cascades results in the subsequent release of inflammatory chemokines, cytokines, adhesion molecules, and finally type I IFN [17], sustained activation of type I IFN via TLR activation is thought to be involved in the pathogenesis of SLE [6]. Renal biopsy specimens showed apparently higher glomerular expression of TLR3 in patients with GN [7]. Thus, regional viral and 'pseudo' viral immunoreactions via the activation of TLR3/IFN-β signaling in resident glomerular cells have been postulated to be involved, at least in part, in the pathogenesis of LN [3,4]. Recently, we found that activation of the TLR3/IFN-β axis induced endothelial expression of representative proinflammatory functional molecules, the neutrophil chemoattractant C-X-C motif chemokine 1 (CXCL1)/GROα, the macrophage chemoattractant CX3CL1/fractalkine, E-selectin, plasminogen activator inhibitor-1, and interleukin-6 (IL-6) in GECs [9][10][11][12]. Regarding type I IFNs, we confirmed that IFN-β, but not IFN-α, is released from MCs and GECs and acts as a mediator in resident renal cells [4,[9][10][11][12]. We postulate that IFN-β is released from resident renal cells and acts as an 'autocrine' mediator, whereas IFN-α may be released from infiltrating proinflammatory cells and acts as a 'paracrine' mediator, although this theory remains to be elucidated [18]. Based on these results, we also speculated that despite some limitations, endothelial TLR3 signaling, which is associated with the continuous activation of type I IFN as well as a regional expression of various inflammatory molecules, is possibly involved in the pathogenesis of LN [3][4][5].
In the present study, we found that poly IC induced the expression of MCP-1 and CCL5 downstream of IFN-β in a time- and concentration-dependent manner in cultured human GECs. Notably, CQ effectively inhibited poly IC-induced expression of both MCP-1 and CCL5 in GECs, whereas HCQ inhibited CCL5 expression only. Since anti-inflammatory steroids are commonly used for the treatment of patients with LN, we next examined the inhibitory effect of DEX in this experiment. Interestingly, DEX did not inhibit the expression of MCP-1 and CCL5; thus, we speculated that DEX did not affect the TLR3/IFN-β axis in GECs. On the other hand, the anti-malarial agents CQ and HCQ have some inhibitory roles in the TLR3/IFN-β axis in GECs, leading to decreased expression of MCP-1 and CCL5. In this context, the postulated mode of action of anti-malarial agents in TLR signaling is thought to be an inhibition of endosomal acidification [19]. On the other hand, it has been reported that anti-malarial agents do not influence the endosomal pH or the expression of TLRs, suggesting that the anti-malarial agents interact with nucleic acids. These interactions consequently cause structural modification of the TLR ligands, which prevents their binding to TLRs [20]. However, that may not be the case in the present study because the anti-malarial agents were added to the cells 1 h before poly IC treatment. Regarding TLR3 signaling in resident glomerular cells, our previous studies showed that CQ inhibits TLR3 signaling during the early phase of IFN-β production, that is, by inhibiting nuclear translocation of phosphorylated nuclear factor-κB [12,15]. In this study, however, HCQ did not affect the poly IC-induced IFN-β protein expression in GECs (Figure 5(e)). Accordingly, HCQ may act downstream of IFN-β expression in GECs, suggesting that different modes of action on endothelial TLR3/IFN-β signaling may exist between CQ and HCQ. However, we did not determine the detailed effect of HCQ on TLR3 signaling because of some difficulties in the experimental settings. Thus, it is imperative to conduct more detailed studies in the future.
A study reported a minimum HCQ target blood level >600 ng/mL to prevent flares in 171 lupus patients [21]. Considering the blood concentration level in clinical practice, the 10 µg/mL of CQ and HCQ used in our experiments may be higher than that postulated in the abovementioned study. However, the adequate blood level of the drugs in patients with LN is yet to be determined. In addition, further studies should clarify whether adequate blood levels of the drugs can prevent activated TLR3 signaling in GECs. A limitation of this study is that we examined only an in vitro cell culture model. Further in vivo studies may be needed to confirm our hypothesis in view of the clinical setting.
It has been reported that HCQ, a product obtained by adding a hydroxyl group to CQ, showed a marked decrease in the development of retinopathy. Among 647 patients treated with CQ for a mean period of more than 10 years, 16 (2.5%) patients developed definite retinal toxicity, whereas only 2 (0.1%) of 2043 patients who were treated with HCQ for a similar period developed retinopathy [16]. Based on the relationship between retinopathy and melanin, the affinity of melanin for HCQ is not as strong as that for CQ [22]. Further, HCQ was found to be a less potent enhancer of lipofuscinogenesis compared to CQ, apparently due to its less effective inhibition of lysosomal degradative capacity [23]. These different modes of action of CQ and HCQ may account for their different tendencies to cause retinopathy, although this theory remains speculative. However, different modes of action between CQ and HCQ in resident glomerular cells remain unclear. To the best of our knowledge, this is the first study to show that different modes of action may exist between CQ and HCQ in TLR3 activation-induced proinflammatory chemokine expression in human GECs. In addition, these differences may be involved in the occurrence of adverse events of the drugs, although this theory remains speculative.

Figure 6. The cultured GECs were pretreated with 10 µM DEX for 1 h and subsequently treated with 30 µg/mL poly IC for 16 h. The medium was collected, and RNA was extracted from cells, after which quantitative real-time RT-PCR analysis was performed. Data are shown as the means ± SD (n = 3, *p < 0.01, by t-test).
Conclusion
Our results may further support the regional renoprotective effects of the anti-malarial agents CQ and HCQ in the development of inflammation in GECs, as postulated in LN [9,13,21]. Different modes of action between CQ and HCQ may exist in TLR3 activation-induced proinflammatory chemokine expression in GECs, which may also be involved in the occurrence of adverse drug reactions. However, more detailed studies are needed to confirm our preliminary findings.
Ethical approval
This study did not involve human and animal subjects that required ethical approval.
|
v3-fos-license
|
2023-10-20T15:36:48.286Z
|
2023-10-16T00:00:00.000
|
264338501
|
{
"extfieldsofstudy": [],
"oa_license": "CCBYSA",
"oa_status": "HYBRID",
"oa_url": "https://www.ijfmr.com/papers/2023/5/7560.pdf",
"pdf_hash": "b7a7f542dd75a04aeac3f3f86ac3bce78fb3cc9c",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46604",
"s2fieldsofstudy": [
"Environmental Science",
"Biology"
],
"sha1": "0f337f715146cc3f299f6b53aae21809accff95c",
"year": 2023
}
|
pes2o/s2orc
|
An Analytical Data-Driven Framework for Advancing Tiger Conservation in India
India is home to a significant population of the world’s wild tigers, but it is disheartening to see that they are still classified as 'Endangered' on the IUCN's Red List. The efforts to protect these magnificent creatures are ongoing, with the Indian government working hand in hand with various Conservation organizations. Though we have seen some promising progress with a notable increase in tiger populations, it is not all smooth sailing. Challenges persist, including the loss of their natural habitats, the ever-present threat of poaching, and unfortunate conflicts between humans and these incredible animals. This paper dives deep into the challenging mission of conserving tigers in India, highlighting the importance of forward-thinking strategies and using data-driven methods. We take a closer look at how data analytics can play a crucial role in conservation, from collecting and managing data to analysing it and making informed decisions. Our main goal is to offer practical recommendations on how to harness the power of data analytics to boost tiger conservation efforts in India, drawing on the most recent data trends and discoveries.
Introduction
The thriving population of Panthera tigris, commonly referred to as the tiger, reflects India's remarkable biodiversity. Tigers are more than just charismatic apex predators; they are symbols of our country's ecological richness. As stated in a press release by the Indian Ministry of Environment, Forests, and Climate Change on July 29, 2023 [1], India currently hosts around 75% of the world's wild tiger population. India commemorated the 50th anniversary of 'Project Tiger' at Mysuru on April 9, 2023. Our Honourable Prime Minister, Shri Narendra Modi, officially announced a minimum tiger population estimate of 3,167 tigers based on camera-trapped areas. On subsequent data analysis by the Wildlife Institute of India, encompassing camera-trapped and non-camera-trapped tiger habitats, the result indicated an upper limit estimate of 3,925 tigers and an average population of 3,682 tigers. The report declared an annual tiger population growth rate of 6.1%, marking a notable triumph in our conservation efforts. Despite the increase in tiger populations, some regions, like the Western Ghats, experienced localized tiger population drops. Mitigating such declines needs better-targeted monitoring and conservation efforts. The report also stated that around 35% of the tiger reserves required immediate and enhanced protection measures. Big data and analytics methods for such enhanced protection can dramatically accelerate the progress of conservation efforts [2]. During his speech in Mysuru, the Indian Prime Minister pointed out that, although having only 2.4% of the world's land area, India contributes 8% of the world's known biodiversity. Yet, tiger conservation in India is an uphill battle and involves ecological, socio-economic, and geopolitical factors. Relentless encroachments by human activities have led to the loss of natural tiger habitats. Moreover, the persistent threat of poaching, fuelled by illegal wildlife trade networks, endangers tiger survival. Poachers mainly target tigers for their body parts, used in traditional Chinese medicines, and for luxury items derived from their fur [3]. Tiger habitats are rapidly vanishing, increasing tiger contact with human settlements. Unsurprisingly, human-wildlife conflicts are at an all-time high.
Knowing that such problems persist, we urgently need creative and collaborative efforts to protect and effectively oversee India's tiger populations. By providing a broad framework for using data analytics to bring about a transformative change in tiger conservation, we hope to contribute to well-informed strategies and decision-making tools. This framework covers every step of the data journey, i.e., from gathering data from diverse sources to employing advanced analytical methods for discovering patterns. We emphasize the importance of turning these insights into practical policy suggestions and stress the need for collaboration among various stakeholders, like researchers, conservationists, policymakers, and local communities.
We promote a comprehensive, science-based approach that enhances our knowledge of tiger behaviour and ecology. Using data analysis wisely, we can secure the future of these big cats in their natural habitats and reaffirm India's dedication to global biodiversity preservation. Figure (1) illustrates a simple framework that shows all the phases involved in using data analytics for decision-making in conservation.
Let us now go over each of these phases in detail.
Data Collection
To use data analytics in any field, we first begin with gathering thorough and accurate datasets. These datasets form the building blocks of the data analytics approach because they ultimately determine the quality of our analysis. For the same reason, we must go beyond conventional data collection methods.
Several new data-collection techniques, described below, have proven to be very effective in strengthening tiger conservation efforts.
Camera traps:
Motion-activated devices take photos and videos discreetly while recording wildlife in their natural habitats. These recordings can help estimate tiger populations, carry out behavioural studies, and identify individual tigers based on their stripe patterns. Camera traps, coupled with artificial intelligence algorithms, can automate species identification. ExtractCompare [4] is one of many such pattern recognition programs used by Indian conservationists. With their help, we can learn more about tiger movements, social interactions, and potential human-induced threats.
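As an illustration of how automated matching against a catalogue of known individuals can work, the sketch below performs a nearest-neighbour search over pre-computed stripe-pattern feature vectors. It is a generic illustration under assumed inputs (random placeholder descriptors and a hypothetical distance threshold), not the algorithm used by ExtractCompare or any specific tool.

```python
# Hedged sketch: matching a new camera-trap detection against a library of known
# individuals using pre-computed stripe-pattern feature vectors (placeholders here).
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
known_features = rng.normal(size=(50, 128))        # 50 catalogued tigers, 128-D descriptors
tiger_ids = [f"T{i:03d}" for i in range(50)]

matcher = NearestNeighbors(n_neighbors=1, metric="cosine").fit(known_features)

new_detection = known_features[7] + rng.normal(scale=0.05, size=128)  # simulated re-sighting
distance, index = matcher.kneighbors(new_detection.reshape(1, -1))
if distance[0, 0] < 0.2:                            # threshold would be tuned on labelled data
    print("Probable match:", tiger_ids[index[0, 0]])
else:
    print("No confident match; flag for manual review")
```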
Satellite tracking:
GPS collars are gaining immense popularity these days, and rightly so, because they furnish real-time location data about tigers and offer priceless insights into their ecology [5]. Movement patterns of wild mammals, like tigers, can be easily studied with the help of GPS collars. These patterns can consequently enable the study of ranging behaviours, habitat preferences, and environmental interactions.
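One simple quantity that can be derived from collar fixes is a minimum convex polygon home-range estimate. The sketch below assumes the coordinates have already been projected to metres (for example, a UTM zone) and uses synthetic fixes in place of real collar data.

```python
# Hedged sketch: minimum convex polygon (MCP) home range from GPS-collar fixes,
# assuming coordinates already projected to metres. Fixes here are synthetic.
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(1)
fixes = rng.normal(loc=[600_000, 1_300_000], scale=3_000, size=(500, 2))  # easting/northing (m)

hull = ConvexHull(fixes)
area_km2 = hull.volume / 1e6   # for 2-D input, ConvexHull.volume is the enclosed area in m^2
print(f"MCP home range over {len(fixes)} fixes: {area_km2:.1f} km^2")
```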
Acoustic Monitoring Networks:
Devices that monitor sounds, like acoustic monitoring networks, help us to go beyond what our eyes can see [6]. They use a network of carefully placed microphones that record the unique sounds made by tigers and their potential prey. With the help of machine learning, we can analyse this audio to identify their vocalizations, differentiate between individual tigers, and even learn about their emotional states. As a result, we learn a lot about their social interactions, stress levels, and breeding behaviours.
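A minimal sketch of the audio pipeline implied here is shown below: spectrogram-based features feed a simple classifier that separates tiger-like calls from background noise. The waveforms, labels, sampling rate and classifier choice are all illustrative assumptions; an operational system would rely on curated, labelled field recordings.

```python
# Hedged sketch: spectrogram features + a small classifier for call detection.
# All audio and labels below are synthetic placeholders.
import numpy as np
from scipy.signal import spectrogram
from sklearn.svm import SVC

def features(waveform, fs=8000):
    """Log-spectrogram averaged over time -> fixed-length feature vector."""
    _, _, sxx = spectrogram(waveform, fs=fs, nperseg=256)
    return np.log1p(sxx).mean(axis=1)

rng = np.random.default_rng(2)
clips = rng.normal(size=(40, 8000))          # forty 1-second placeholder clips
labels = np.array([0] * 20 + [1] * 20)       # 0 = background, 1 = tiger call (placeholders)

X = np.array([features(c) for c in clips])
clf = SVC(kernel="rbf").fit(X, labels)
print("training accuracy on placeholder data:", clf.score(X, labels))
```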
Environmental DNA (eDNA) Sampling:
Technologies like eDNA sampling allow us to noninvasively track the presence of tigers and count their numbers in desired areas. Researchers can collect water or soil samples from known tiger habitats to analyse the genetic material present in these samples. This method is both cost-effective [7] and gentle on the environment. Following that, tiger DNA traces found using advanced DNA sequencing techniques can be used for population assessment and monitoring. Monitoring tiger populations in remote or delicate ecosystems, where directly observing them is difficult or not possible, can be made easier with the help of such technologies.

Drones:

One strong point about drones is that they can be programmed to fly in pre-determined routes in protected areas while capturing high-resolution images and videos. Using this, we can automate the detection of tigers and potential poachers by combining the powers of drones and AI-driven image recognition. Advanced machine learning models can sift through the captured imagery to identify tigers, count their numbers, and promptly alert forest rangers about suspicious human activities.
Climate and Habitat Data:
For successful conservation, it is not sufficient to only track tiger movements. Monitoring the changes in their habitat and considering the impacts of climate change are equally important. We must collect data on vegetation, water sources, and climate conditions, as they can identify habitat degradation and potential threats to tiger survival.
Metagenomics for Diet Analysis:
To effectively manage tiger habitats, we must understand their eating patterns. Such learning is possible with the help of metagenomic analysis [9] of scat samples. Researchers can use metagenomics to pinpoint the species consumed by tigers and evaluate the nutritional contents of their diets. Consequently, this aids in the identification of potential prey population declines and human-wildlife conflict hotspots. This knowledge can shape effective and robust conservation strategies.
Community-Based Data Collection:
Involving local communities in data collection is, to say the least, tremendously valuable. Local residents can contribute their timeless wisdom and participate in citizen science initiatives. As such, communities can provide significant insights into tiger conservation. These include tiger sightings, behaviour, and potential poaching activities. Moreover, involving communities cultivates a sense of responsibility and ownership toward conserving these magnificent animals.
Integrating these data collection techniques into tiger conservation initiatives broadens the scope and richness of the data that is accessible to us. Undoubtedly, technology is a powerful tool and can tackle the tricky issues of tiger conservation in India. It enables us to develop forward-thinking conservation strategies that can easily adjust to the changing world.
Data Management
We need a scientifically grounded approach to data handling if we want successful conservation results through data analytics. Data gathered from numerous sources forms the foundation for generating helpful analytical insights. Thus, proficient data management is imperative for ensuring data precision, credibility, and usefulness, and it is more than just a technical requirement.
In simple terms, data management is an efficient organization of data that makes it easily accessible and guarantees that it is readily available for decision-making. Data management includes several steps, covering database development, data cleaning, and data integration. Figure (2) shows the steps involved in the data management process. We will now go over the fundamentals of each of these steps.
Database Development:
A centralized and well-organized infrastructure is elemental for consolidating diverse datasets gathered from various sources. Databases are the repositories for all collected data. They consist of spatial, temporal, and categorical attributes. It is vital to design database schemas that follow data normalization principles [10] because this lowers redundancy and improves the efficiency of data retrieval. We can create a coherent data system using modern database management systems (DBMS) supported by relational database models. Also, adopting data standards and metadata conventions helps maintain semantic consistency. This results in more streamlined data access and enhanced interoperability in collaborative research projects.
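A minimal sketch of a normalised schema of the kind described above is shown below, expressed through Python's built-in sqlite3 module; the table and column names are illustrative assumptions rather than a prescribed standard.

```python
# Hedged sketch: a small normalised schema for sighting records (illustrative names).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE reserve (
    reserve_id   INTEGER PRIMARY KEY,
    name         TEXT NOT NULL,
    state        TEXT NOT NULL
);
CREATE TABLE tiger (
    tiger_id     INTEGER PRIMARY KEY,
    stripe_code  TEXT UNIQUE,          -- identifier derived from stripe pattern
    sex          TEXT CHECK (sex IN ('M', 'F', 'U'))
);
CREATE TABLE sighting (
    sighting_id  INTEGER PRIMARY KEY,
    tiger_id     INTEGER REFERENCES tiger(tiger_id),
    reserve_id   INTEGER REFERENCES reserve(reserve_id),
    observed_at  TEXT NOT NULL,        -- ISO-8601 timestamp
    latitude     REAL,
    longitude    REAL,
    source       TEXT                  -- e.g., 'camera_trap', 'patrol', 'citizen'
);
""")
print("tables:", [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table'")])
```

Splitting reserves, individuals and sightings into separate tables avoids repeating reserve or animal attributes on every observation, which is the redundancy reduction that normalization aims for.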
Data Cleaning and Validation:
The raw data acquired from field or sensor networks require thorough review and correction procedures to handle inaccuracies and discrepancies. Data cleaning and validation are a part of this procedure. Rigorous statistical techniques, such as outlier analysis [11], hypothesis testing, and imputation algorithms [12], are applied to rectify erroneous or incomplete data points. Involving domain-specific expertise in data-cleaning procedures improves dataset reliability. The primary goal of data cleaning is to produce a dataset marked by high accuracy that provides a robust foundation for the analytical processes that follow.
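The sketch below illustrates the two steps mentioned above, outlier flagging and imputation, on a small synthetic table using pandas; the column names, the IQR rule and median imputation are illustrative choices, not a fixed protocol.

```python
# Hedged sketch: flag implausible movement speeds and impute missing covariates.
# Column names, thresholds and values are illustrative placeholders.
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "tiger_id": ["T001"] * 6,
    "speed_kmh": [2.1, 3.4, np.nan, 95.0, 1.8, 2.6],   # 95 km/h is almost certainly a fix error
    "ndvi":      [0.61, 0.58, 0.60, np.nan, 0.55, 0.59],
})

# 1) outlier flagging with a simple IQR rule on movement speed
q1, q3 = df["speed_kmh"].quantile([0.25, 0.75])
iqr = q3 - q1
df["speed_outlier"] = (df["speed_kmh"] > q3 + 1.5 * iqr) | (df["speed_kmh"] < q1 - 1.5 * iqr)

# 2) impute remaining missing values with per-column medians
for col in ["speed_kmh", "ndvi"]:
    df[col] = df[col].fillna(df[col].median())

print(df)
```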
Data Integration:
Conservation data streams come from a myriad of sources. These include field surveys, satellite observations, acoustic sensors, and public contributions. Combining these different datasets to make one clear and organized system is necessary for analysis. And this is the job of data integration. Data integration necessitates the development of data fusion methods [13] that can resolve discrepancies. These differences may be in spatial and temporal resolution, sensor types, and data formats.
For instance, geospatial data integration may require geo-referencing datasets to a standardized coordinate system. Spatial interpolation techniques can then harmonize varying spatial resolutions. We can use synchronization protocols to align data streams over time, ensuring consistency among data sources with temporal differences. Such well-maintained data repositories can help in tiger conservation by supporting evidence-based decision-making.
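As one concrete example of aligning data streams over time, the sketch below attaches the nearest preceding hourly weather record to each GPS fix with pandas.merge_asof; the column names and the two-hour tolerance are illustrative assumptions.

```python
# Hedged sketch: temporal alignment of an irregular GPS stream with hourly weather
# records. All values and column names are illustrative placeholders.
import pandas as pd

gps = pd.DataFrame({
    "time": pd.to_datetime(["2023-05-01 06:12", "2023-05-01 09:47", "2023-05-01 14:05"]),
    "tiger_id": ["T001"] * 3,
    "lat": [11.92, 11.94, 11.95],
    "lon": [76.55, 76.57, 76.60],
})
weather = pd.DataFrame({
    "time": pd.date_range("2023-05-01", periods=24, freq="60min"),
    "temp_c": [24 + 0.5 * h for h in range(24)],
})

merged = pd.merge_asof(
    gps.sort_values("time"), weather.sort_values("time"),
    on="time", direction="backward", tolerance=pd.Timedelta("2h"),
)
print(merged)
```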
Data Analytics
Using data analytics in tiger conservation is a crucial paradigm shift in our conservation strategy. It brings precision, objectivity, and a strong focus on empirical evidence. By combining analytical techniques with powerful computational models, conservationists can understand the intricate ecological tapestry that affects tigers to make better-informed decisions.
Visualizations are the compass of data analytics, guiding us through landscapes of data. Charts like heat maps and trendlines help understand the current scenarios better. We will examine some analytical methods designed to uncover insights from the data we collect about tigers.
Spatial Analysis:
Spatial analysis, aided by Geographic Information Systems (GIS) and remote sensing, helps understand the spatial dynamics of tiger habitats. GIS is the basis for identifying and characterizing critical tiger habitats, ecological corridors, and regions susceptible to human-wildlife conflicts. Geospatial data layers encompass land cover, topography, climate, and habitat suitability indices. Spatial analysis transcends mere visualization. It dives into complex spatial statistics, autocorrelation, and hotspot analyses [14]. It unveils patterns that explain the factors underpinning tiger distribution and movement. Hence, it provides a spatially unambiguous understanding of the ecological drivers shaping tiger populations.
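A minimal hotspot sketch is shown below: a kernel density surface is fitted to detection locations with scipy and its peak reported. The coordinates are synthetic and assumed to be in projected metres; a fuller analysis would also test significance, for example with local statistics such as Getis-Ord Gi*.

```python
# Hedged sketch: kernel-density "hotspot" surface from point detections (synthetic data).
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(3)
detections = np.vstack([
    rng.normal([10_000, 20_000], 1_500, size=(150, 2)),   # a dense cluster
    rng.uniform([0, 0], [40_000, 40_000], size=(50, 2)),  # diffuse background
])

kde = gaussian_kde(detections.T)                 # scipy expects shape (n_dims, n_points)
xs, ys = np.meshgrid(np.linspace(0, 40_000, 80), np.linspace(0, 40_000, 80))
density = kde(np.vstack([xs.ravel(), ys.ravel()])).reshape(xs.shape)

peak = np.unravel_index(density.argmax(), density.shape)
print("approximate hotspot centre (m):", xs[peak], ys[peak])
```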
Population Modelling:

Population modelling [15] relies on wide-ranging data about tiger demographics, births, deaths, and their movements. Using mathematical and computational frameworks, researchers have the capacity to develop resilient population models. Researchers often use Leslie matrix models or agent-based simulations [16] to create these models. The models shed light on patterns, fluctuations, and predictions concerning the tiger population. They incorporate significant parameters like carrying capacity, mortality, and reproductive rate. They enable the development of conservation strategies rooted in observed evidence. Among these strategies are habitat management, reintroduction efforts, and initiatives focused on conservation breeding.
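The sketch below shows a Leslie-matrix projection of an age-structured population in numpy; the fecundity and survival values are illustrative placeholders, not estimates for any Indian reserve. The dominant eigenvalue of the matrix gives the asymptotic annual growth rate.

```python
# Hedged sketch: Leslie-matrix projection with placeholder demographic rates.
import numpy as np

# age classes: cub, juvenile, sub-adult, adult
leslie = np.array([
    [0.00, 0.00, 0.20, 0.80],   # top row: per-capita fecundity of each age class
    [0.60, 0.00, 0.00, 0.00],   # survival cub -> juvenile
    [0.00, 0.70, 0.00, 0.00],   # survival juvenile -> sub-adult
    [0.00, 0.00, 0.80, 0.90],   # survival sub-adult -> adult, adult persistence
])
population = np.array([30.0, 25.0, 20.0, 60.0])   # placeholder starting age structure

for year in range(10):
    population = leslie @ population
print("projected total after 10 years:", round(population.sum(), 1))

# the dominant eigenvalue gives the asymptotic annual growth rate (lambda)
lam = max(abs(np.linalg.eigvals(leslie)))
print("asymptotic growth rate lambda:", round(float(lam), 3))
```

In practice, the matrix entries would come from long-term monitoring data, and uncertainty in those rates would be propagated by resampling or Bayesian estimation rather than a single projection.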
Poaching Detection:
Illegal poaching is an insidious threat to tigers, and we must prioritize combating it. One strategy to prevent poaching is using machine learning algorithms [17] to identify covert poaching activities. By analysing extensive data from camera traps, algorithms capable of detecting irregularities and unusual patterns in tiger behaviour can be developed. These serve as indicators of poaching incidents. They can also differentiate between legitimate human presence and illicit activities. Proactive approaches like these for poaching detection enable swift response mechanisms. They solidify anti-poaching efforts and protect tigers from the negative consequences of illegal wildlife trading networks.
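One possible shape for such an anomaly detector, sketched with scikit-learn's IsolationForest on synthetic event features; the feature set (hour of day and distance to the nearest ranger post) is invented for this example:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical features per camera-trap event: hour of day, distance (km) to nearest ranger post.
normal_events = np.column_stack([rng.normal(14, 3, 200), rng.normal(5, 1.5, 200)])
suspicious = np.array([[2.0, 14.0], [3.5, 12.5]])  # night-time events far from patrol routes
events = np.vstack([normal_events, suspicious])

model = IsolationForest(contamination=0.02, random_state=0).fit(events)
flags = model.predict(events)  # -1 marks anomalous events worth a ranger follow-up
print("flagged events:", np.where(flags == -1)[0])
```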
Human-Wildlife Conflict Analysis:
A complicated relationship between ecological, social, and economic factors maintains the delicate balance between tigers and human populations. Data analytics is crucial for perceiving this intricate relationship. Researchers can identify areas where human-wildlife conflicts are most prevalent by comprehensively analysing conflict data. They can do so while considering where and when conflicts occur, socio-economic elements, and land use. These human-wildlife conflict heatmaps, extracted through statistical models and geospatial analyses, are the basis for specific mitigation strategies [18]. These strategies include community-driven conflict resolution, restoring natural habitats, and establishing early warning systems.
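A minimal way to derive such a heatmap from point reports, assuming NumPy and synthetic coordinates on a local grid; a real analysis would use proper geospatial binning and significance testing:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical conflict-report coordinates (km on a local grid); purely illustrative.
x = rng.normal(25, 6, 300)
y = rng.normal(40, 4, 300)

# Bin reports into a coarse grid; dense cells indicate candidate conflict hotspots.
heatmap, x_edges, y_edges = np.histogram2d(x, y, bins=10)
hot_i, hot_j = np.unravel_index(np.argmax(heatmap), heatmap.shape)
print(f"hottest cell: x in [{x_edges[hot_i]:.1f}, {x_edges[hot_i + 1]:.1f}], "
      f"y in [{y_edges[hot_j]:.1f}, {y_edges[hot_j + 1]:.1f}], "
      f"reports = {int(heatmap[hot_i, hot_j])}")
```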
Decision Support Systems: Leveraging Data for Informed Conservation Strategies
A Decision Support System (DSS) is a computer-based tool or software that helps people make informed decisions. It relies on data analysis and modelling. DSS are central in using insights obtained from data analytics. They help evaluate various options and potential outcomes, leading to effective choices, and bridge the gap between data-driven insights and actionable strategies. We shall now review a few DSS applications: scientific risk assessment, scenario planning, and adaptive management strategies.
Risk Assessment:
Assessing risks involves evaluating and quantifying various threats that tigers and their habitats face. This evaluation uses extensive datasets and advanced geospatial analyses. Usually, these risk assessment [19] models rely on Bayesian networks and machine learning classifiers. They synthesize an array of factors, including ecological elements, human activities, and climatic variables. They offer a means to quantify the likelihood and potential consequences of threats such as habitat loss and poaching. Risk assessment guides the prioritization of conservation initiatives. Therefore, it ensures the allocation of resources toward mitigating the most immediate threats. Adaptive management relies on continuous data acquisition, monitoring, and modelling to detect changing ecological dynamics and emerging threats. These systems effectively employ incoming data to refine and optimize conservation efforts through an iterative process. It involves statistical models, machine learning algorithms, and decision trees. Its adaptability enables rapid responses to unforeseen challenges. Therefore, conservation strategies will stay robust in changing ecosystems and human interactions.
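A hedged sketch of a machine-learning risk classifier of the kind described, using scikit-learn's RandomForestClassifier on synthetic grid-cell features; the feature set and toy labelling rule are assumptions made only for illustration:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)

# Hypothetical grid cells described by forest cover (%), distance to roads (km),
# and a human-footprint index; labels mark cells that later lost habitat.
X = np.column_stack([rng.uniform(10, 90, 500), rng.uniform(0, 20, 500), rng.uniform(0, 1, 500)])
y = ((X[:, 0] < 40) & (X[:, 1] < 5)).astype(int)  # toy labelling rule, not real ecology

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Score two new cells; higher probability suggests prioritizing mitigation there.
new_cells = np.array([[35.0, 2.0, 0.8], [75.0, 15.0, 0.2]])
print("habitat-loss risk:", model.predict_proba(new_cells)[:, 1])
```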
Policy Recommendations: Advancing Tiger Conservation through Evidence-Based Strategies
We now present a set of evidence-driven policy suggestions that target strengthening tiger conservation initiatives in India. These recommendations are grounded in scientific rigor, empirical observations, and innovative approaches. Researchers can explore the efficiency of AI-enhanced drones in enhancing surveillance capabilities, thereby substantially mitigating poaching activities. Predictive analytics can uncover patterns related to where and when poaching incidents occur. This information can help us take proactive measures through law enforcement. Moreover, when we combine knowledge from criminology and wildlife conservation, we can better understand the psychology of why poachers do what they do. This insight can help us develop more effective strategies to combat poaching.
Habitat Restoration and Corridor Protection:
The restoration and protection of critical tiger habitats and corridors require visionary solutions. Ecologists can use sophisticated algorithms to pinpoint areas with the highest restoration potential [23]. These algorithms consider historical land use, soil composition, and ecosystem services, among other metrics. As they incorporate landscape genetics [24], they can help safeguard corridors by evaluating the genetic connections between tiger populations. Advanced remote sensing tools, like hyperspectral imaging [25], can detect early signs of habitat deterioration. This detection leads to proactive conservation actions. Habitat restoration through assisted natural regeneration using native plant species has advantages. It fits as a sustainable method for ecosystem restoration. The policy recommendations presented above are not strict commandments. They are flexible, research-based approaches adaptable to the evolving context of tiger conservation. Embracing innovative methods and interdisciplinary collaboration can steer policy decisions toward data-driven, adaptive strategies. These strategies ensure the long-term prosperity of tigers in India. Such research contributes to the scientific understanding of tiger conservation. It also aids the practical implementation of policies that resonate with complex human-tiger interactions.
Conclusion: Unleashing Data Analytics as a Catalyst for Tiger Conservation
As seen above, this paper highlights the transformative potential of data analytics in Indian tiger conservation. Beyond its theoretical importance, our study has far-reaching implications for on-the-ground conservation efforts. The power of analytics lies not solely in its capacity to generate insights but in its ability to catalyse tangible change. We advocate embracing innovative methods such as eDNA analysis, AI-driven monitoring systems, remote sensing, and interdisciplinary collaboration. This paper is a roadmap for conservationists, policymakers, and communities to navigate the intricate landscape of tiger conservation. Our findings stress the importance of collaborative governance models and community-led initiatives in fostering co-existence between humans and tigers. It sends a clear message to step up conservation efforts to stop poaching and unlawful activities within tiger habitats using technology for more effective surveillance.
This paper offers practical recommendations for conserving tiger habitats and ecological corridors, fostering tiger populations, supporting sustainable livelihoods, and ensuring equitable revenue-sharing among local communities. It takes a holistic approach, combining various data collection methods and advanced analytics, including spatial analysis, population modeling, poaching detection, and the study of human-wildlife conflicts. These insights create a comprehensive framework for conservationists and policymakers, enabling well-informed decisions to protect India's tiger population.
In the face of unprecedented ecological challenges, our study is a testament to the enduring spirit of conservation. It beckons us to translate knowledge into action to defend the legacy of the tiger for generations to come. The future of these majestic creatures in India, and indeed in the world, depends on our collective dedication to the cause. As stewards of the natural world, we stand at a pivotal juncture.
The journey ahead is one of discovery, resilience, and co-existence. It is a journey that champions the survival of tigers in India and sets an inspiring example for global conservation efforts. Through the strategic implementation of analytics, we can ensure that the tiger's roar continues to resonate through the forests of India. This roar echoes the promise of harmony between humanity and nature.
Acknowledgments
We extend our heartfelt gratitude to all the individuals and organizations who have made immeasurable contributions to tiger conservation in India. While this paper is rooted in a comprehensive literary review, it is a tribute to the shared efforts of these stakeholders. It would not have been possible without their unwavering commitment.
Our gratefulness goes to the field researchers who have bravely ventured into tiger habitats to gather critical data, the conservationists who work tirelessly to protect these magnificent creatures, and the local communities who have shown a deep sense of responsibility and ownership toward tiger conservation. We also acknowledge the pioneering scholars whose research forms the foundation of our understanding of tiger behavior and ecology, as well as the technological innovators who have developed cutting-edge tools and techniques for tiger conservation. In addition, we express our appreciation to the policymakers who have taken steps to support conservation efforts and the funding agencies that have provided the resources necessary for these conservation efforts. Undoubtedly, collaborative conservation efforts are the way to go, and together, we stand as a ray of hope for the survival and thriving of this glorious species. It is through the collective efforts of all these individuals and organizations that we can continue to protect the legacy of the tiger in India for generations to come.
Scenario Planning:
Scenario planning [20] with the help of DSS is like looking into the future. It gives us ideas of what might happen by considering factors like climate change, land-use dynamics, and human-wildlife interactions. Tools like predictive modelling, ecological niche modelling, and agent-based simulations help conservationists create robust plans that withstand environmental changes. With scenario planning, conservationists adopt a proactive stance. They are ready for what might happen. Thus, they can manage things better, use resources wisely, and make more sapient policy formulations.
Adaptive Management:
At the forefront of DSS implementation stands the concept of adaptive management. It is a dynamic feedback loop [21] that takes advantage of real-time data for continuous enhancement of conservation strategies.
Strengthening Enforcement:
Effectively enforcing measures against poaching and other illegal activities within tiger habitats can enhance conservation [22]. Beyond conventional approaches, new techniques involve technologies such as drones with artificial intelligence (AI)-driven image recognition.
Unmanned Aerial Vehicles (UAVs) and LiDAR Technology:
UAVs with LiDAR (Light Detection and Ranging) technology can perform highly detailed aerial habitat surveys [8] and can help conservationists learn more about tiger habitats. LiDAR has the remarkable capability to see through forest canopies and create intricate 3D maps of the landscape. These maps can identify concealed trails, water sources, and potential poaching locations. Conservationists can pair LiDAR and UAVs to observe tiger movements and population densities, allowing real-time tracking of threats to tiger populations.
Sustainable Livelihoods:
The importance of community-based conservation initiatives and of promoting sustainable livelihoods for conservation cannot be stressed enough [7]. Our researchers can focus on new ways to engage with local communities and study how they impact tiger conservation. For example, exploring measures to ensure equitable revenue sharing and evaluating the socio-economic effects of ecotourism on indigenous communities can foster community buy-in. Research that combines social sciences and conservation can focus on two main things. The first is understanding how people and wildlife co-exist. The second is figuring out what makes these interactions positive. Research can aim to decrease human-wildlife conflicts through novel strategies. Bioacoustic deterrents [26] and digital early warning systems are some of them. Highlighting research priorities is essential to address knowledge gaps and steer conservation efforts effectively. Research can focus on non-invasive tracking of tigers using technologies like environmental DNA (eDNA) analysis [7]. Machine-learning algorithms that automatically identify individual tigers based on their distinctive stripe patterns open new possibilities for population monitoring. Studies can investigate the ecological effects of climate change on tiger habitats, directing solutions for adaptive management. Analysing genetic diversity and resistance of tiger populations to environmental stressors may also be a top priority.
|
v3-fos-license
|
2021-03-27T13:33:25.515Z
|
2021-03-26T00:00:00.000
|
232369955
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://doi.org/10.1016/j.nicl.2021.102644",
"pdf_hash": "a8bd726a306cceb29866668268b9a583b3ccdcca",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46605",
"s2fieldsofstudy": [
"Biology"
],
"sha1": "445258144fd6bc57f93b9dd3a95137faa50bba9d",
"year": 2021
}
|
pes2o/s2orc
|
A novel local field potential-based functional approach for targeting the centromedian-parafascicular complex for deep brain stimulation
Highlights • A functional targeting method for the centromedian-parafascicular nucleus of thalamus. • Strong local field potentials that correlate with position during stimuli onset. • Differentiable signal along the trajectory of the electrode in thalamus.
Introduction
Deep brain stimulation (DBS) is a surgical procedure commonly used to treat movement disorders such as Parkinson's disease, dystonia and essential tremor (Eisinger et al., 2019). Recently, DBS indications have expanded to include neuropsychiatric disorders such as Tourette syndrome (TS) (Goodman and Alterman, 2012). TS is a neurodevelopmental disorder characterized by involuntary motor and vocal tics. Although for most patients, the tic symptoms subside by adulthood, some patients experience persistent and medically-refractory symptoms (Wand et al., 1993). Based on the basal ganglia-thalamocortical loop dysfunction hypothesis, bilateral lesions of the intralaminar nuclei of the thalamus, which is a collection of neurons including the centromedian-parafascicular (Cm-Pf) nuclei region, lead to reduction of tic symptoms in TS patients (Robertson et al., 1990). These and other lesioning studies demonstrated clinical improvement (Hassler and Dieckmann, 1970; Rickards et al., 2008) and the first reports of Cm-Pf high frequency TS DBS were later performed by Visser-Vandewalle and colleagues (Visser-Vandewalle et al., 2003). According to The International Tourette Deep Brain Stimulation Registry and Database, the most common brain target utilized worldwide has been the Cm-Pf region of thalamus (48.3% of all leads in the publicly available database) (Tourette Syndrome Association, 2019).
Current Cm-Pf targeting approaches for TS DBS are non-standardized and can result in substantial variability in final lead placement (Johnson et al., 2019). There is a paucity of literature focused on human microelectrode mapping in the Cm-Pf region as a potential technique to guide intraoperative DBS lead placement (Shields et al., 2008; Warren et al., 2020). As approximately 46% of DBS failures are due to lead misplacement (Okun et al., 2005), techniques that may improve targeting accuracy and intraoperative confirmation are extremely desirable. For example, previously we observed a Cm-Pf region TS DBS case where there was no benefit from DBS treatment following 4 months of programming and stimulation adjustment. Post-surgical measurement of the lead location as well as the functional neurophysiological recordings drawn from monthly post-operative visits indicated that the ventral portion of the lead was likely placed in the ventralis intermediate (Vim) nucleus instead of the Cm-Pf region. Following subsequent lead revision surgery, the patient achieved better treatment outcomes. This case highlights the importance of accurate lead placement and it should motivate continued efforts to refine neurosurgical targeting strategies.
Current targeting approaches of the Cm-Pf region typically involve an anterior-lateral entry angle mainly to avoid a trajectory through the lateral ventricles. This trajectory often proceeds through the Vim nucleus region immediately dorsal of the Cm-Pf region. The Vim nucleus is a well-studied motor nucleus of thalamus and common DBS target for essential tremor (Basha et al., 2014;Opri et al., 2019). In contrast, the Cm-Pf region has been investigated extensively for involvement in attention and limbic networks (Saalmann, 2014;Minamimoto and Kimura, 2002). Given the common use of a trajectory involving the Vim nucleus as well as the distinctive roles of both Cm-Pf region and Vim nucleus, we hypothesized that a functional mapping task could be utilized during awake DBS surgery. We posited that this task could differentiate the signal between the Vim and Cm-Pf and serve as a possible confirmatory marker of a lead placed in the intended Cm-Pf region.
Study overview
This observational study was part of a larger IRB-approved (IRB#201300850, NCT02056873) clinical trial of female and male human patients with TS undergoing DBS treatment. All participants provided written informed consent. Participants were implanted with a sensing-enabled DBS device (Activa PC + S, Medtronic PLC, MN) for the primary purpose of identifying neural correlates of tics and for implementing closed-loop stimulation. As part of this clinical trial, participants returned to the University of Florida (UF) for postoperative macroelectrode local field potential (LFP) recordings and DBS programming at monthly visits, primarily during the first six months following DBS surgery.
In addition to completing the primary objectives of the parent clinical trial, a subset of these participants (N = 5) completed a modified, rewarding Go/No-Go task post-operatively at multiple visits while LFP signals were acquired. To confirm and replicate the results obtained from these participants, a separate subset of participants (N = 2) completed the task intra-operatively during Cm-Pf region DBS lead implantation. In this paper, we present the complete dataset from the Go/No-Go task performed in both the post-operative and intra-operative settings.
Study participants
The inclusion criteria included a DSM-V diagnosis of TS, Yale Global Tic Severity Scale (Leckman et al., 1989) > 35/50 for at least 12 months, and a motor tic subscore > 15. The tics must have been disabling to the patient, causing severe distress, possible self-injurious behavior, and/or quality of life disruption. We did not exclude patients with ADHD, OCD, or depression provided that the tics were the major issue prompting surgical intervention. Patients were also required to have failed trials of at least three dopamine blocking drugs and one trial of an alpha-2 adrenergic agonist prior to the DBS surgical intervention. Patients exhibiting additional unstable psychiatric disorders were excluded.
Across all seven participants, the average age was 33.66 ± 3.81 years (mean ± standard error) with a disease onset age of 9.0 ± 1.05 years. The postoperative data in this study are from participants consenting to testing after completing all other motor tasks required in the parent trial. Due to differences in recruitment timeframes, some participants completed more recording sessions than others and thus more bipolar contact pairs could be tested prior to exiting the study (see below; Table 1). The intraoperative patients were included in the same table for easier comparison of behavioral performance in the surgery room. The optimal stimulation contacts were provided for the time of recording.
Surgery
Participants underwent simultaneous bilateral electrode implantation of CM-Pf region DBS. A Cosman-Roberts-Wells headframe was placed and a stereotactic CT scan was obtained for co-registration to a preoperative MRI. The pre-operative MRI was acquired using a 3T SIEMENS MRI scanner (Siemens Medical Solutions, PA) per standard surgical procedures for anatomical targeting (Maling et al., 2012;Sudhyadhom et al., 2009). A Schaltenbrand-Bailey deformable atlas was manually fitted on each participant's pre-operative MRI using a 9 degrees-of-freedom affine transformation in the UF-designed targeting software. Trajectories that were selected traversed the dorsal-medial aspect of the Vim nucleus en-route to the Cm-Pf region (Fig. 2). Microelectrode targeting and general anesthesia were not performed. Model 3387 DBS electrodes (Medtronic PLC, MN) were implanted at the targeted locations. Ground and reference electrodes were placed on the scalp for intraoperative macroelectrode LFP recordings. At this point the two intraoperative participants described in this study then completed the Go/No-Go task.
Go/No-Go Task
We hypothesized that the Cm-Pf nuclei of thalamus would be activated by tasks that engage the participants' attention, as such tasks are known to elicit responses in animal models (Minamimoto and Kimura, 2002). The modified Go/No-Go task is a complicated task that requires the participants' attention in order to achieve high accuracy. Although there are multiple segments of the task worthy of investigation in the traditional context of a Go/No-Go task, such as responses to reward, they are beyond the primary goal of the study, which was to use visual attention physiology for targeting the Cm-Pf nuclei region of thalamus. The visual cue phase is the most attention-demanding segment of the task, in which the participants pay attention to the type of stimuli presented and attempt to react accurately. The full Go/No-Go task is presented in this section; however, only the signals evoked by visual cue presentation, regardless of Go or No-Go cue, were considered for the main purpose of the study.
The experiment consists of four 20-second baseline recordings (two before the task and two after the task), two self-paced 10-button pressing recordings (one before the task, and one after the task; data not shown), two reaction time tests (one before the task, and one after the task; data not shown), and a Go/No-Go task. The task portion consisted of 120 trials for postoperative recordings and a shortened version of 60 trials for intraoperative recordings to reduce time taken in the operating room. The task setup is outlined in Fig. 1. The participant was presented with a colored rectangle (visual stimulus) with four possible colors (blue, orange, yellow, purple), each corresponding to a unique condition: 1) press to receive an award (blue; Go To Win), 2) do not press to receive an award (orange; No Go To Win), 3) press to avoid losing (yellow; Go To Avoid Loss), 4) do not press to avoid losing (purple; No Go To Avoid Loss). The order of stimuli was random. The stimuli were presented for 1000 ms followed by a 250 ms wait period after participants reacted (by pressing the button or not pressing the button) to the stimuli (Fig. 1). Based on the participant's reaction, feedback was then displayed for 500 ms. Win outcomes were +100 points, lose outcomes were −100 points, and avoid-loss outcomes were +0 points. After feedback, a cross was displayed on the screen for 500 ms during the inter-trial interval as the pretrial baseline. The feedback portion of the task was displayed mainly to provide motivational encouragement to complete the task. A pressure-based push button was given to the participants to hold in their dominant hand for responding during the task. The sensor was connected to the external synchronization box (Alcantara et al., 2020), which was connected to the external amplifier's digital input (see below).
Experimental setup
The Go/No-Go task was designed and written in BCI2000 (Schalk et al., 2004). Therefore, all state triggers, markers, and behavioral data were collected in the same framework and was set at a 2400 Hz sampling rate. An external monitor with a resolution of 1280 × 1024 was placed in front of the patient with an appropriate viewing angle. Participants were instructed how to play the game beforehand, provided an opportunity to practice, and all 4 possible colors were shown to the patient in random order prior to each recording to ensure that the participants were able to differentiate all stimuli.
In the postoperative setting, all neural data were collected using an Activa PC + S system (Connolly et al., 2015), which is limited in recording capability to one bipolar channel per thalamic electrode. All neural recordings were collected using a 422 Hz sampling rate with a gain of 2000. A different bipolar contact pair was selected during each recording session for each patient. (Table 1). The bipolar contact pairs were chosen based on the best signal-to-noise ratio during motor tasks (data not shown) and in an effort to obtain recordings with maximum spatial differences spanning the electrode (dorsal vs ventral contacts). The dorsal and ventral electrode contacts refer to the recording contacts along the trajectory of the Medtronic 3387 deep electrode (Contact E00-E01 are more ventral and E02-E03 are more dorsal). The ventral electrode contacts were generally located within the Cm-Pf nuclei of thalamus while the dorsal electrode contacts were generally closer toward the VIM nucleus. The positive contact in the bipolar contact pairs was always the more ventral contact. LFP data were aligned with behavioral data using an external electromyography (EMG) system. At the beginning of the recording, two wireless EMG sensors were placed over the participant's neck (where the electrode wires passed below), and a 5 Hz electrical stimulation was initiated for 3 to 10 s. In addition, the BCI2000 system delivered state triggers to an external synchronization box which then converted the signal to the EMG system for alignment with the BCI2000 data. Based on this setup, for both postoperative and intraoperative recordings of the Go/No-Go task, all neural data, EMG data, and game states were aligned for unified analysis. Data alignment was completed in MATLAB 2016a (Mathworks, Natick, USA), and the aligned BCI2000 data, EMG data, and neural data were stored as MATfile (version 7.3).
In the intraoperative setting, DBS electrodes were connected to a g.tec HiAmp (Guger Technologies, Schiedlberg, Austria) external amplifier. Two separate corkscrew electrodes were placed in the scalp as the reference and grounding electrodes. The monopolar channels obtained during intraoperative sessions from the external amplifier were converted to bipolar recordings (always the more ventral contact minus the more dorsal contact) to resemble postoperative signals through post-processing. However, one recording error was made during the first intra-operative recording due to a mistake in the reference electrode selection. The reference for the first patient was the left hemisphere thalamic electrode contact 3 instead of the scalp corkscrew electrode, which led to different bipolar pairs being used when comparing to other intraoperative leads (see below).
Imaging and lead measurements
Pre-operative T1-MRI and post-operative Stealth CT were obtained for each participant for lead measurements in anatomical space. The post-operative CT was acquired one month after surgery, and the post-operative CT was fitted to the pre-operative MRI using MATLAB multimodal co-registration for geometric transformation estimation (Image Processing Toolbox, MathWorks Inc, Natick, MA). After co-registration, the electrode position was measured in the T1-MRI space and reverse transformed into the Schaltenbrand-Bailey common atlas space for group analysis based on each participant's individual surgical atlas transformation, which was manually fitted by the surgeon. The cartesian coordinate of a bipolar recording was expressed as the root-mean-square distance between the midpoint between the two contacts contributing to the bipolar recording and the target coordinate, which is the ventral Cm border (X: ±9.23, Y: −8.56, Z: 2.49) in the digitized Schaltenbrand-Bailey atlas space. All thalamic electrodes from the right hemisphere were moved to the left by negating the X-axis in the Schaltenbrand-Bailey atlas space, as the deformable Schaltenbrand-Bailey atlas space is symmetric and each side of the atlas is fitted independently of the other.
Fig. 1. Task overview. A) Overall block design of the experiment. The experiment is divided into 3 sections: pre-task baseline, task, and post-task baseline. Each voluntary button press section is around 10 s based on a once-per-second rate of pressing. Each resting period is 20 s; however, segments of motor activity as identified by video and EMG recording were discarded. The reaction time calibration was used to ensure the participants are able to react within the 1000 ms window. B) The task design and timing. Each trial includes a 500 ms inter-trial interval, 1000 ms stimulus presentation, 250 ms wait, and 500 ms feedback. The stimulus presentation timing will be shortened if reaction occurred early. The 4 possible colors (or trial types) are displayed under the task overview.
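As a small sketch of the distance measure described above (read here as the Euclidean distance from the bipolar midpoint to the atlas target), written in Python rather than the MATLAB used in the study; the contact coordinates and the sign convention for the left-hemisphere target are assumptions made for illustration:

```python
import numpy as np

# Ventral Cm border in the digitized Schaltenbrand-Bailey atlas space, from the text;
# the choice of a negative X for the left hemisphere is an assumed sign convention.
TARGET_LEFT = np.array([-9.23, -8.56, 2.49])

def recording_distance(contact_a, contact_b, hemisphere):
    """Distance from a bipolar pair's midpoint to the atlas target.

    Contacts are (x, y, z) in atlas space; right-hemisphere points are mirrored
    to the left by negating x, as described for the symmetric deformable atlas.
    """
    midpoint = (np.asarray(contact_a, float) + np.asarray(contact_b, float)) / 2.0
    if hemisphere == "right":
        midpoint[0] = -midpoint[0]
    return float(np.linalg.norm(midpoint - TARGET_LEFT))

# Hypothetical contact coordinates, for illustration only.
print(recording_distance((-8.9, -8.1, 1.8), (-9.4, -8.8, 3.6), "left"))
```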
Data analysis
The data analysis was performed in MATLAB 2016a (MathWorks Inc, Natick, MA). Behavioral results including reaction time (RT) and accuracy for each recording session were calculated to ensure that participants were engaged in the task. The accuracy was calculated as the number of correct responses (pressing for Go trials, and not pressing for No-Go trials) divided by the total number of trials. RT was calculated as the time from stimulus presentation to button press for the Go trials performed correctly.
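A minimal sketch of these behavioural metrics, written in Python rather than the MATLAB used in the study; the per-trial field names and values are hypothetical:

```python
# Hypothetical per-trial records: trial type, whether the button was pressed, and latency in ms.
trials = [
    {"type": "go", "pressed": True, "rt_ms": 512},
    {"type": "go", "pressed": False, "rt_ms": None},
    {"type": "nogo", "pressed": False, "rt_ms": None},
    {"type": "nogo", "pressed": True, "rt_ms": 430},
]

# Correct responses: press on Go trials, withhold on No-Go trials.
correct = [t for t in trials
           if (t["type"] == "go" and t["pressed"]) or (t["type"] == "nogo" and not t["pressed"])]
accuracy = len(correct) / len(trials)

# RT is averaged over correctly performed Go trials only.
go_rts = [t["rt_ms"] for t in correct if t["type"] == "go"]
mean_rt = sum(go_rts) / len(go_rts)
print(f"accuracy = {accuracy:.2f}, mean RT = {mean_rt:.0f} ms")
```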
The neural recordings were filtered between 1 and 30 Hz using a 3rd-order Butterworth filter with zero-phase digital filtering. Due to the recordings spanning multiple months, all neural data were normalized prior to group analysis. For each recording session, the neural data were z-score normalized based on the average signal during the four 20-second baseline recordings (Fig. 1). However, baseline recordings that contained extensive EMG activities were excluded from the z-score calculation. This occurred in one of the baseline recordings in one subject only.
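The filtering and baseline normalization can be sketched as follows, in Python with SciPy rather than the MATLAB pipeline used in the study; the signals below are synthetic stand-ins:

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 422.0                                        # Activa PC+S sampling rate from the text (Hz)
rng = np.random.default_rng(0)
lfp = rng.standard_normal(int(fs * 60))           # stand-in for one minute of raw task LFP
baseline = rng.standard_normal(int(fs * 20))      # stand-in for a 20-s baseline recording

# 1-30 Hz, 3rd-order Butterworth applied forward and backward (zero-phase).
b, a = butter(3, [1.0, 30.0], btype="bandpass", fs=fs)
lfp_filt = filtfilt(b, a, lfp)
baseline_filt = filtfilt(b, a, baseline)

# z-score the task recording against baseline statistics, as in the normalization step.
lfp_z = (lfp_filt - baseline_filt.mean()) / baseline_filt.std()
print(lfp_z.shape)
```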
Since the primary signal of interest was the event-related potential (ERP) after stimulus presentation, the task recording was converted into 1000 ms epochs (-200 ms to 800 ms) around each visual stimulus presentation. Trials with incorrect responses or with overall amplitude more than 3 standard deviations above or below the mean amplitude at any time point were removed from the analysis to avoid artifacts from influencing the results. The remaining trials were averaged within each recording session to create the stimulus onset ERP for each session. Then, the ERPs from all recording sessions were averaged again to obtain a grand average ERP. The grand average ERP after stimuli presentation and before average reaction time was tested against 0 using Wilcoxon's signed rank test. The most significant features were identified as the positive or negative deflections above or below 0 occurring for more than 50 ms at the group level. These features were extracted from each participant's recording at the individual level by finding the maximum or minimum point within the feature window of each individual run for the positive and negative deflection, respectively.
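A hedged sketch of the epoching, 3-SD trial rejection, and signed-rank testing described above, again in Python with synthetic data and an assumed feature window:

```python
import numpy as np
from scipy.stats import wilcoxon

fs = 422.0
rng = np.random.default_rng(1)

# Hypothetical continuous z-scored signal and stimulus onsets (sample indices).
signal = rng.standard_normal(int(fs * 120))
onsets = np.arange(int(fs * 2), len(signal) - int(fs * 1), int(fs * 3))

pre, post = int(0.2 * fs), int(0.8 * fs)          # -200 ms to +800 ms window
epochs = np.stack([signal[o - pre:o + post] for o in onsets])

# Reject trials whose amplitude strays more than 3 SD from the across-trial mean.
mu, sd = epochs.mean(), epochs.std()
keep = np.all(np.abs(epochs - mu) <= 3 * sd, axis=1)
erp = epochs[keep].mean(axis=0)

# Test a candidate feature window (roughly 75-192 ms post-stimulus) against zero.
window = erp[pre + int(0.075 * fs): pre + int(0.192 * fs)]
stat, p = wilcoxon(window)
print(f"kept {keep.sum()}/{len(epochs)} trials, Wilcoxon p = {p:.3f}")
```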
To assess whether the ERP reflected anatomical specificity of the Cm-Pf, these feature magnitudes were used for a correlation with the z-position of the bipolar recording. The Shapiro-Wilk test was performed on both the z-positions and the ERP features to determine whether a parametric (i.e., Pearson) or non-parametric (i.e., Spearman) correlation should be used.
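A compact illustration of this normality-gated choice of correlation, with synthetic z-positions and peak amplitudes standing in for the real measurements:

```python
import numpy as np
from scipy.stats import shapiro, pearsonr, spearmanr

rng = np.random.default_rng(2)

# Hypothetical electrode z-positions (mm) and peak amplitudes; values are illustrative.
z_positions = rng.normal(0.0, 2.0, 18)
peak_amps = 0.5 * z_positions + rng.normal(0.0, 1.0, 18)

# Use the parametric test only if both variables look normally distributed.
normal = shapiro(z_positions).pvalue > 0.05 and shapiro(peak_amps).pvalue > 0.05
r, p = pearsonr(z_positions, peak_amps) if normal else spearmanr(z_positions, peak_amps)
print(f"{'Pearson' if normal else 'Spearman'} r = {r:.2f}, p = {p:.3f}")
```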
For the two intraoperative recordings, the neural signals were filtered and processed similarly to the postoperative recordings. All recordings were manually converted to bipolar recordings in MATLAB by taking the difference of two adjacent contacts (i.e., E00-E01, E01-E02, and E02-E03), with the exception of intraoperative subject #1 left hemisphere, where a monopolar recording was used due to contact E03 inadvertently being selected as the recording reference. ERPs for each bipolar recording were computed by averaging all trials, and both the positive peak feature and negative peak feature were extracted from the recordings. A one-tailed Wilcoxon rank sum test was performed between the three bipolar recordings based on an a priori hypothesis derived from postoperative recordings, namely, that the ventral contact pairs should have a stronger positive peak feature and a stronger negative peak feature compared to dorsal contact pairs.
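The adjacent-contact bipolar derivation can be expressed in a few lines; the monopolar traces below are random stand-ins for the recorded channels:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical monopolar recordings from contacts E00-E03 (rows), referenced to scalp.
monopolar = rng.standard_normal((4, 10000))

# Adjacent-contact bipolar derivations, each ventral contact minus its dorsal neighbor:
# E00-E01, E01-E02, E02-E03.
bipolar = monopolar[:-1] - monopolar[1:]
labels = ["E00-E01", "E01-E02", "E02-E03"]
for name, trace in zip(labels, bipolar):
    print(name, trace[:3])
```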
Study participants and behavioral results
All bipolar contact pairs used in the sessions, as well as the behavioral performance during each session, are provided in Table 1. A total of 18 sessions from five participants were recorded. There were no trials which were rejected due to large spikes in signals in postoperative patients. However, the intraoperative recordings contained more external electrical artifacts leading to large spikes manifesting with an amplitude that was 3 standard deviations above baseline. The percent of trials retained after rejection was 61.7% and 95%. The average reaction time for all sessions was 574 ms. Of the five post-operative participants, two of them had a unilateral neurostimulator with a depleted battery (Subject #1 left device and Subject #2 right device), and one participant had a malfunctioning sensing module in the implanted neurostimulator which prevented data collection (Subject #5 left device). Among the 18 sessions, 10 sessions were simultaneous bilateral recordings. To simplify the analyses, each simultaneous bilateral recording was treated as 2 independent recordings for thalamic electrodes in the left and right hemisphere, which led to a total of 7 electrodes across 5 participants. The positions of all electrodes are shown in Fig. 2.
Visual evoked potential features
The grand average ERP of all recording sessions from all patients, which contains recordings from both the ventral electrode contact pairs and dorsal electrode contact pairs, was computed and is shown in Fig. 3A. Two deflections above or below 0 were identified. The first was a positive deflection occurring between 75 ms and 192 ms after visual stimuli presentation with a peak occurring at 160 ms. The second feature was a negative deflection occurring between 256 ms and 526 ms with a peak occurring at 360 ms. The average ERPs from the ventral and dorsal recordings of the electrode are presented in Fig. 3B. The dorsal recordings (blue) show weaker evoked potentials when compared to the ventral recordings (red).
Shapiro-Wilk test indicated that neither the electrode positions nor the amplitude features were normally distributed, thus Spearman's correlations were used for the positive and negative peak features. The correlations of feature strength and position in the Z axis are shown in Fig. 3B. Both features correlated significantly with the recording locations. Namely, deeper electrodes were associated with more positive (p = 0.019) and more negative (p = 0.014) features.
Intra-operative verification
Electrode positions and LFP signals from the two intraoperative participants are provided in Fig. 4. All electrodes revealed a stronger positive peak around the ventral contact pairs, which tended to be just ventromedial to the CM region in the Schaltenbrand-Bailey atlas (Fig. 4). No significant peaks were found in VIM regions.
The Wilcoxon rank sum test for the positive feature in intraoperative subject #1 showed that the left hemisphere E00-E03 contact pair was not statistically different from E01-E03, but E00-E03 was higher than E02-E03 (p = 0.0257). In the right hemisphere, E00-E01 had a stronger peak feature than both E01-E02 and E02-E03 (p < 0.0458 and p < 0.0222, respectively), but E01-E02 and E02-E03 were not different from each other. There were no statistical differences between the negative peak features from all three bipolar pairs in the left hemisphere. For the right hemisphere, E00-E01 and E01-E02 had stronger negative peak features than E02-E03 (p < 0.0321 and p < 0.0028, respectively).
The Wilcoxon rank sum test for the intraoperative subject #2 positive feature showed that left hemisphere E00-E01 was significantly higher than E01-E02 and E02-E03 (p < 0.0070 and p < 0.0013, respectively), but E01-E02 and E02-E03 were not different from each other. In the right hemisphere, E00-E01 was not statistically different from E01-E02 and E02-E03, due to high variability, but E01-E02 was significantly higher than E02-E03 (p < 0.0001). There were no statistical differences between the negative peak features from all three bipolar pairs in the left hemisphere. For the right hemisphere, E00-E01 had stronger negative peak features than E01-E02 and E02-E03 (p < 0.0344 and p < 0.0115, respectively). In summary, the signals from both patients revealed the strongest response in the ventral contacts of the electrode, which were closer to the Cm-Pf region (and immediately inferior to CM), and a weaker response at recording locations closer to the VIM region.
Discussion
Using an attention-driven cognitive task, in this study we found distinct LFP activity patterns within different regions of the thalamus in TS patients undergoing Cm-Pf DBS. ERP features revealed a greater strength when recorded closer to the Cm-Pf region along the planned DBS trajectory when compared directly to features from recordings closer to the Vim region. Interestingly, we identified two significant ERP features occurring after visual stimuli presentation: an early positive peak within the first 200 ms of visual stimuli presentation, and a negative peak shortly after. Both features' strengths were significantly correlated with the electrode position. In contrast to the Cm-Pf nuclei region, bipolar contact pairs closer to the Vim region showed little to no ERP response following visual stimuli. This result was observed in all 5 post-operative recording participants and was robustly present across multiple sessions.
To confirm these results in the intra-operative setting, we performed the same task during DBS implantation surgery for two additional participants. Although the referencing error in intra-operative subject #1 prevented side-by-side bipolar contact pairs for smaller LFP volumes, the visual evoked potential was present and displayed greater strength in the more ventral area along the DBS lead. Both positive features and negative features existed, but the positive feature was more prominent and seemed to exist even in a monopolar configuration (intraoperative subject #1, left hemisphere). Intra-operative physiology for Cm-Pf region targeting has been explored in limited studies. Warren et al. presented Cm targeting for DBS in 19 epilepsy patients. They observed reduced firing rates as the microelectrode trajectory entered the Cm region from the ventrolateral nucleus, but this group level result was not confirmed in 20-25% of the patients in their cohort (Warren et al., 2020). Shields et al. presented a case of Cm-Pf nuclei targeting using microelectrodes; however, the main differentiating strategy was based on thalamic border identification rather than differentiating nuclei within thalamus (Shields et al., 2008).
Our potential solution was a novel LFP-based functional approach using the modified Go/No-Go task. This approach was intuitive because the Cm-Pf nuclei have been well described as being involved in attention. Most of the early evidence for the function of CM has been drawn from non-human primate and rodent studies involving ablation or visual cue pressing tasks (Minamimoto and Kimura, 2002; Kato et al., 2011). Recent human studies of the Cm-Pf nuclei also provide additional support for the role of attention processing during oddball tasks (Raeva, 2006; Schepers et al., 2017; Beck et al., 2020). In contrast, the Vim nucleus of the thalamus is heavily involved in motor control (Opri et al., 2019; Sommer, 2003). With an anterior-lateral entry angle, the DBS electrode typically passes through the Vim nucleus before entering the Cm-Pf nuclei of thalamus, and our results support and leverage the distinct functions of these nuclei regions. However, we were not able to examine the signals from various other nuclei within thalamus because they were not part of the surgical trajectory for these patients.
Fig. 3. Summary of the post-operative visual evoked potential features. A) Two features, a positive feature and a negative feature, were identified in the grand average ERP, the average neural response from all patients across all recording sessions of different electrode contact pairs. The positive feature occurred between 75 ms and 192 ms. The second feature was a negative deflection occurring between 256 ms and 526 ms. The dark gray interval represents 1 standard error above and below the grand average ERP. B) The average ERPs of recordings from the recording contacts more dorsal in the electrode and recording contacts more ventral in the electrode are shown. The dark red and dark blue intervals represent 1 standard error above and below the average ERPs. C) Correlation of the maximum peak during the positive feature period and D) minimum peak during the negative feature period with the electrode position (measured as mm above the AC-PC line). Both features are statistically correlated with the electrode position, with the positive peak feature emerging as the stronger feature. Colored shaded regions indicate the 95% confidence interval of the linear fit. (AC-PC: anterior commissure-posterior commissure; ms = milliseconds; mm = millimeters). (For interpretation of the references to colour in this figure legend, the reader is referred to the web version of this article.)
One limitation of all LFP macroelectrode recordings is that the specific anatomical source of the signals remains unknown. Originally, we selected an engaging task to elicit an attention-based signal from the Cm-Pf region of thalamus. However, combining post-operative imaging and multiple sessions with different electrode contact pairs, we observed that the strongest signal appears to originate a few millimeters inferior/posterior to the Cm nucleus. This can be seen clearly in intra-operative recordings from subject #1's left electrode (Fig. 4A), which is more inferior than the others, and from intra-operative subject #2's right electrode (Fig. 4D), which is more posterior than the others. Both (subject #1 and #2) electrodes revealed a similar strength of the visual evoked potential in the ventral contact pairs, while other electrodes showed a gradient of a weak to strong potential as the recording configuration spanned proximally to distally.
It is likely that if the electrodes are implanted too deep along the intended trajectory, we may possibly measure activity from the pulvinar nucleus region of thalamus. The pulvinar nucleus of thalamus, similar to the Cm-Pf nuclei, is heavily involved in cognitive processing, especially higher-order visual processing (Snow et al., 2009; Fischer and Whitney, 2012; Kaas and Lyon, 2007). Positioned just millimeters posterior to the Cm-Pf nuclei, the pulvinar nucleus could therefore potentially influence the LFP recorded from the distal contact pairs drawn from the DBS electrodes placed more posteriorly. The study was IRB approved and carried out in a manner to limit any additional risks to the study participants; therefore, electrophysiology signals were recorded based on the neurosurgeons' decision on the region of placement. We were not ethically able to record beyond the intended targeting position in a deliberate effort to prevent additional tissue damage in thalamus. Our strategy of targeting the Cm-Pf nuclei region of thalamus should be based on the emergence of evoked potentials in the ventral recording contacts and their weakness or absence in the dorsal recording contacts along the DBS electrode, as opposed to simple identification of the strongest response to the visual stimuli.
In addition, it is unclear if the recording region is influenced by proximity to the visual pathways giving rise to evoked potentials or by changes in the visual field itself. The current study design was not able to eliminate the visual change as a possible source of the evoked potential; however, further studies should incorporate a visual oddball design to control for visual changes while focusing on the attentional aspects of the oddball stimuli.
This study had several important limitations. First, this was a pilot study and included different numbers of recordings from a small group of patients. However, despite the recording limitation, data were normalized, and we demonstrated that the evoked potentials can be identified at the individual level in the operating room setting. Second, we did not perform a real-time mapping along the full trajectory of the DBS electrode. The current approach was designed to measure the presence of the evoked potential, which is shown to correlate with the proximity to the Cm-Pf nuclei region in Fig. 3, after the electrode was placed at the intended target location based on the anatomical targeting approach. Although this task takes less than ten minutes to complete, we envision that an even simpler task with less involvement of the patient during DBS electrode insertion would be a more practical way to translate the method into common practice. In addition, future studies can incorporate the clinical outcomes of each patient and their optimal therapeutic stimulation contacts to verify that the evoked potential can serve as an important biomarker for optimal therapy locations.
Overall, we have presented a novel functional mapping approach for the Cm-Pf nuclei region of thalamus which can be used for TS DBS awake human neurosurgery. The method may also be applied to CM region targeting for other neurological and neuropsychiatric disorders beyond TS; however, this would require confirmation that the findings from our study are not unique to the TS population. Future work should confirm these results prospectively and in a larger sample size. We envision that the task could be further refined, and auditory testing could be explored as a way to avoid pulvinar influences on the visual responses.
Funding
This work was supported by R01NS096008, the Norman Fixel Institute for Neurological Diseases, the University of Florida Pruitt Family Endowed Faculty Fellowship, National Science Foundation PECASE 155348, and National Institutes of Health/National Center for Advancing Translational Sciences Clinical and Translational Science Awards UL1TR001427, KL2TR001429, and TL1TR001428 to the University of Florida. This work was also supported by National Institutes of Health National Institute of Neurological Disorders and Stroke Award F30NS111841.
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
|
v3-fos-license
|
2019-08-20T13:23:49.340Z
|
2019-06-01T00:00:00.000
|
263855422
|
{
"extfieldsofstudy": [
"Economics"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://content.sciendo.com/downloadpdf/journals/izajolp/9/1/article-20190004.pdf",
"pdf_hash": "a8b96d4bfc97858281c0069da0684f52885ecff6",
"pdf_src": "MergedPDFExtraction",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46607",
"s2fieldsofstudy": [
"Economics"
],
"sha1": "73dcc512399eb65087de5fe1e391dc81c01319b3",
"year": 2019
}
|
pes2o/s2orc
|
Economics of Artificial Intelligence: Implications for the Future of Work
Abstract The current wave of technological change based on advancements in artificial intelligence (AI) has created widespread fear of job loss and further rises in inequality. This paper discusses the rationale for these fears, highlighting the specific nature of AI and comparing previous waves of automation and robotization with the current advancements made possible by a widespread adoption of AI. It argues that large opportunities in terms of increases in productivity can ensue, including for developing countries, given the vastly reduced costs of capital that some applications have demonstrated and the potential for productivity increases, especially among the low skilled. At the same time, risks in the form of further increases in inequality need to be addressed if the benefits from AI-based technological progress are to be broadly shared. For this, skills policies are necessary but not sufficient. In addition, new forms of regulating the digital economy are called for that prevent further rises in market concentration, ensure proper data protection and privacy, and help share the benefits of productivity growth through the combination of profit sharing, (digital) capital taxation, and a reduction in working time. The paper calls for a moderately optimistic outlook on the opportunities and risks from AI, provided that policymakers and social partners take the particular characteristics of these new technologies into account.
My research activities during the past decade have brought me in contact with developments in the use of electronic digital computers. These computers are startling even in a world that takes atomic energy and prospects of space travel in its stride. The computer and the new decision-making techniques associated with it are bringing changes in white-collar, executive, and professional work as momentous as those the introduction of machinery has brought to manual jobs. (Simon, 1960)
1 Introduction
Values, norms, and language have evolved over the last six decades. What has remained the same, however, is the fear of the machine. Herbert Simon, Nobel Prize winner in economics, expressed in 1956 what many observers were convinced of at the time: "Machines will be capable, within twenty years, of doing any work a man can do," and hence that new technologies would make many jobs obsolete beyond the traditional blue-collar work in the manufacturing sweatshops. Today, we have grown used to computers around us: at home, in the office, at the bank, when travelling, or simply ordering food at the next drive-in restaurant. Rarely do we think of the jobs that might have been lost because of these computers and machines. Today, we no longer fear the computer that Professor Simon was afraid of, but something more profound: artificial intelligence (AI) or the capacity of machines to make predictions using large amounts of data to take actions in complex, unstructured environments (Agrawal et al., 2018a).
Complex decision-making under uncertainty is at the heart of modern economies.
Whether as a consumer deciding which products and services to consume, as an employee when it comes to choosing the right job and career, or as a manager when running daily operations or planning the next factory, we all constantly and simultaneously face complex, interrelated problems for which our natural intelligence seems to have made us particularly well equipped. Indeed, until recently, no machines were remotely deemed to be capable of matching our intellectual capacity, even though the idea of an intelligent machine emerged as early as the invention of the computer in the 1930s. In 1936, long before the invention of modern, silicon-based computers, Alonzo Church and Alan Turing -independently from each other -discovered that any process of formal reasoning -such as the problems in economics and management described above -can be simulated by digital machines. In other words, the difference between a computer and a brain is one of degree, not of principle. Turing (1950) later argued that there might be a time when humans would no longer be able to distinguish between interacting with another human or a digital machine, passing the so-called "Turing test". Indeed, in light of recent experiences by leading AI firms, this time no longer seems to be too far away.
Intelligent digital assistants such as the "Google Assistant" which can be assigned to autonomously make appointments over the phone is but one possible application of AI (OECD, 2017). 1 Speech and image recognition, natural language processing, and machine translation figure prominently as key areas of development around AI. Others include automatic text generation such as the preparation of (short) journalistic pieces, automatic generation of company statements, or customer tele-assistants. More sophisticated applications include medical expert systems to analyze and diagnose patients' pathologies (medtech), automated review of legal contracts to prepare litigation cases (lawtech), 2 self-driving cars or trucks, and the detection of patterns in stock markets for successful trading (algorithmic trading). Even creative arts, an area supposedly specific to humans, has seen a proliferation of applications in AI, from computers composing new pieces of music to painting programs replicating pictures in the style of a Rembrandt. 3 Common to all these applications is that they concern tasks that are considered to require specific human capacities related to visual perception, speech, sentiment recognition, and decision-making. In other words, AI is replacing mental tasks rather than physical ones, which were the target of previous waves of mechanization.
These advancements in AI have been made possible thanks to the confluence of three different, albeit related developments:
• A phenomenal drop in computing costs has led to an explosion in installed computing power and storage capacity. Simple smartphones today are significantly more powerful than the computer that brought the first man to the moon. The costs for producing an iPhone 7, for instance, currently stand at around US$220; in the 1980s, it would have been around US$1.2 million in today's terms simply to pay for the memory capacity of such a phone.
• Second, the development and widespread adoption of the Internet and other forms of digital communication has led to a significant increase in the supply and storage of digital information, including in central locations (cloud computing), which allow the comparison and analysis of significant amounts of data for statistical purposes that are necessary to develop tools based on AI principles.
• Finally, the drop in capital costs for digital technologies has significantly lowered barriers of entry for start-ups, making it less necessary than in the past to mobilize huge amounts of capital before starting a new venture while at the same time offering substantial first-mover advantages. This shift in business models toward small, rapidly growing tech companies was often driven by university spin-offs funded through innovative financial products and supported by a seemingly endless supply of highly educated software engineers. A paradoxical consequence of the digital nature of the latest innovations is that the lower barriers to entry have allowed new players to uproot incumbents while at the same time quickly leading to new forms of industry concentration (Bessen, 2017a).
Together, these three developments triggered a rapid increase in AI patent applications across different patent offices worldwide (Fig. 1). As a result, an endless stream of new services and products appeared, with those surviving the test of the market growing rapidly in size and quickly overtaking large, well-established companies in traditional business lines. Indeed, within the short period of 15 years, companies such as Google, Apple, Facebook, and Amazon have belittled historic behemoths of American capitalism of the likes of Walmart, General Motors, or General Electric. This sudden burst in applications of AI has created the sentiment of vastly accelerating technological change that is feared to disrupt labor markets in yet unforeseen magnitude (Ernst, 2018). What is puzzling, however, is that so far and despite the apparent acceleration in technological change, productivity growth has continued to decline in advanced economies.
Similarly, no disruption seems to have struck global labor markets so far, which, on the contrary, seem to have recovered from their slump following the global financial crisis (ILO, 2018). What has changed is a continuous worsening of country-level income inequality, continuing a long-term trend that started in the 1980s. But even here, looking at the global level where inequality and poverty rates have fallen thanks largely to emerging economies catching up, neither the expected benefits nor the feared costs of automation (even less of AI) have yet materialized at a large scale.
Most observers are not reassured, however. Many analysts are warning that advances in both robotics and AI over the next few decades could lead to significant job losses or job polarization and hence widen income and wealth disparities (Korinek and Stiglitz, 2017; Méda, 2016). A recent report by Bank of America Merrill Lynch in 2015 pointed to the potential for a rise in inequality as a result of increased automation. The report cited research by Oxford University, which found that up to 35% of all workers in the United Kingdom, and 47% of those in the United States, are at risk of being displaced by technology over the next 20 years (Frey and Osborne, 2017). According to the World Bank (2016), in developing countries many more jobs are at risk: 69% in India, 72% in Thailand, 77% in China, and a massive 85% in Ethiopia. Other researchers, however, reach much less dramatic conclusions (Arntz et al., 2016, 2017). Nevertheless, what all these studies have in common is that they focus on potential gross job destruction and cannot provide an answer to actual job destruction, net job displacements, or labor market turnover, which would be necessary to assess the challenge of automation from a policy perspective. Moreover, it is unclear to what extent conclusions can be drawn from many of the existing studies for technologies such as AI, on which little is known and almost no data exist.
This paper aims at addressing this knowledge gap to gain a better understanding of the economic and social implications of AI. To do so, it suggests starting from a granular analysis of how previous waves of automation have changed occupations and employment opportunities in the past. Specifically, we look at experiences of advanced and emerging economies with the automation of physical tasks through the rise in robotization. This approach can shed some light on the likely impact that the development and widespread diffusion of AI might have on employment, incomes, and inequality through the automation of mental tasks -as per our distinction between AI and robots/mechanization above. We also look at offshoring, in as much as it affects the role that AI can play in the structural transformation in developing countries.
The paper then tries to answer the following questions. First, to what extent is the current digital transformation through the rise in AI labor augmenting rather than labor saving? Moreover, what will be the implications for productivity and inequality given the specific, digital nature of AI applications? In particular, can we expect an acceleration in productivity and earnings growth thanks to widespread diffusion of AI in areas that have not yet been subject to large-scale automation? Or, on the contrary, should we be afraid that technological rents arising from AI will be appropriated by the lucky few?
The answer that this paper gives to these questions is moderately optimistic. New, AI-based digital technologies may allow larger segments of the labor market to improve their productivity and to access better paying occupations and, thereby, may help promote (inclusive) growth.
This requires, however, that a certain number of policies are put in place that support the necessary shift in occupational demand, maintain a strong competitive environment to guarantee diffusion of innovation, and keep up aggregate demand to support structural transformation.
At the same time, AI applications raise the potential for productivity growth for interpersonal, less technical occupations and tasks, leading to higher demand for such work, which is likely to dampen the inequality trends observed over recent decades. A particular challenge arises for developing countries when they are part of a supply chain that forces them to adopt capital-intensive technologies despite an abundance of underutilized labor. Here, AI-driven automation might further drive up informality unless governments ensure a widespread adoption and diffusion of digital technological change beyond the supply chain sectors. In other words, the productivity-enhancing potential of AI is real but the specific characteristics of this new technology require policy responses that differ from those given during previous waves of technological change to generate shared benefits for the world of work.
To develop our argument, this paper starts with a historical perspective on automation.
It argues that the rise in educational attainment has led to an increasingly skill-biased nature of technological change, bringing fewer benefits for productivity but increasing inequality; it is against this background that the introduction of AI needs to be assessed, drawing on the particular experience that advanced and emerging economies have made during the recent wave of robotization. Section 3 shifts the focus away from jobs and onto tasks to help understand the implications this has had for employment and the organization of production.
In Section 4, our focus then turns toward AI and its various effects on job growth, earnings dynamics, and firm productivity. In Section 5, we develop possible policy answers that can help address the issues that AI brings to allow for a proper sharing of technological rents both within countries and between advanced and less developed economies.
2 Automation and productivity in historical perspective
Historically, productivity and living standards have increased thanks to a continuous division of labor (specialization) and replacement of more tedious, arduous, and routine tasks by machines. In agriculture, for instance, a modern farmer buys sophisticated machinery for the industrial production of farm goods to be sold through regional distribution centers, rather than using self-made tools to plow one's acre for self-consumption as it was done for centuries. Highly specialized labor at each level of such supply chains that work through automated processes allows for a timely production of goods and services at constant, predefined levels of quality and quantity. Moreover, agriculture was only the first sector to benefit from automation, given its dominance even in advanced economies until the 1950s in terms of total number of jobs. Thanks to the invention of the steam mill and later to widespread electrification, the manufacturing of goods from textiles to automobiles pushed out the boundaries of productivity through a combination of automation and an ever finer division of labor.
In contrast to fears expressed today, the wave of automation that came with the first and second industrial revolution during the 19th and early 20th centuries led to a rapid increase in demand for low- or unskilled labor, raising concerns about the demeaning nature of technological change (Braverman, 1974; Marglin, 1974). 4 As productivity growth in agriculture led to a massive shedding of labor in this sector, unskilled laborers often found new employment opportunities in manufacturing or other sectors such as mining and construction that were blossoming thanks to automation. As the division of labor progressed, workers were asked to concentrate on ever narrower, highly repetitive tasks to be performed at high speed. This so-called Taylorist approach to the organization of work - also dubbed a "scientific management" approach to production by organizational specialist Frederick Taylor - created significant strain among workers who were less and less able to identify themselves with the final outcome of their work. As a consequence, in the 1960s, social movements started to flare up to express demand for less demeaning work, better working conditions, and faster wage growth.
At the same time, this was also the moment when productivity growth was highest among advanced economies, lifting large parts of the population out of poverty and creating a quickly expanding middle class.
With the rise in income, educational attainment grew as well. As (young) workers became increasingly educated, technological change shifted gears, laying the ground for the advent of the third industrial revolution based on the introduction of computers (Acemoglu, 2002).
In the decades following the 1970s, technological change became skill biased, gradually increasing the demand for medium- and high-skilled workers at the expense of those with only primary education levels or less. Although the observed rise in unemployment was only partly technological and in large part driven by changes in the macroeconomic environment, work processes started to change, with manufacturing employment falling gradually in all major advanced economies as more and more sophisticated machines and robots - "automatically controlled, reprogrammable, multipurpose manipulator, programmable in three or more axes, which can be either fixed in place or mobile for use in industrial automation applications" (International Organization for Standardization, ISO) - would replace routine and repetitive tasks. At the same time, designing, implementing, and maintaining these robots and computers led to the emergence of a whole new industry, albeit one offering significantly fewer employment opportunities than those lost in the process of automation. Overall, existing studies suggest that employment effects specifically from the introduction of robots remained rather limited or - depending on the methodology used - were even positive in the aggregate (Acemoglu and Restrepo, 2017; Bessen, 2017b; Chiacchio et al., 2018; De Backer et al., 2018; Graetz and Michaels, 2015). When extending the analysis to developing countries, however, the introduction of robots shows significant and much more substantial negative effects on employment (Carbonero et al., 2018).
4 There is no commonly accepted classification of different stages of industrial advancements. The notion currently most in use is to talk about artificial intelligence and related innovations as the Fourth Industrial Revolution; see Schwab (2016). Previous stages include the introduction of the steam engine (first industrial revolution, IR), the widespread use of electricity (second IR), and the use of computers (third IR).
With the decline in manufacturing employment, the service sector took over the role of a jobs engine. Business services, transportation, and distribution (wholesale and retail) among others offered new jobs tailored to better educated and trained people in the workforce. From the 1990s onward, concerns over automation were limited to a smaller and smaller workforce in manufacturing and attention shifted toward working conditions and opportunities in services.
In particular in the United States, the advent of information and communication technologies (ICTs) produced a boom in investment in new technologies, which accelerated - temporarily - productivity growth and offered new employment opportunities, albeit often under less favorable conditions than what had been experienced during the boom in manufacturing employment.
Nevertheless, this third industrial revolution based on ICT innovations and the introduction of robots has brought far fewer economic benefits than the previous two waves of technological change. Indeed, looking at economic development in seven selected leading economies from a long-term perspective, a deceleration in productivity growth can be detected, despite a short-lived acceleration during the 1990s (Fig. 2). This is also reflected in similar developments of Gross Domestic Product (GDP) per capita (i.e., including the inactive population) that show a remarkable absence of accelerating improvements in living standards. This observation had already perplexed economists during the 1980s, when Robert Solow famously stated that "you can see the computer age everywhere but in the productivity statistics" (David, 1990). Besides measurement issues related to the digital nature of ICT innovations, this might be related to the fact that improvements in ICT impacted only a few sectors (notably transportation and logistics industries besides telecommunications) in contrast to previous, general purpose technologies such as electricity (Gordon, 2016). 5
5 The debate on the slowdown in measured productivity growth has been an active field of research in recent years and goes beyond the scope of this paper. For an overview of the different arguments, see https://www.brookings.edu/research/the-productivity-slump-fact-or-fiction-the-measurement-debate/.
The introduction of robots also offered new opportunities for automation along global supply chains, triggering a flourishing discussion about the global employment effects of off- and re-shoring in both developed and emerging economies. The United Nations Conference on Trade and Development (UNCTAD, 2016) argues that the historical labor cost advantage of low-income countries might be eroded by robots if they become cheap and easily substitutable for labor. According to this scenario, the most affected industry should be manufacturing. This adverse effect might be strengthened by the growing labor quality in developing countries and the ensuing rise in labor costs. The Boston Consulting Group, for instance, reports that wages in China and Mexico increased by 500% and 67% between 2004 and 2014, respectively (Sirkin et al., 2014). This convergence in cost competitiveness is likely to continue in the future, eroding the incentives for producers to move their activities from developed to developing countries.
Offshoring, re-shoring, and robotization are part of a general rethinking of business strategies that have become more complex and based on a wider set of variables than simple cost comparisons (De Backer et al., 2016). On the one hand, the need to face different types of risk and to deal with increased volatility in demand, exchange rates, or commodity prices has shaped outsourcing decisions. These and other issues might have pushed several companies to re-shore production (e.g., Adidas, General Electric, and Plantronics). On the other hand, the possibility of using cloud-based solutions has reduced the advantage of having low-cost programmers in developing countries. A study by A.T. Kearney has produced projections of job losses in India, the Philippines, Poland, and the United States, imputing different automation paces for different outsourced business processes. Its results suggest that countries that have previously benefited from offshoring business processes stand to suffer more job losses than those where this type of job is still onshore (A.T. Kearney Global Services Location Index, 2017).
A shared concern about robotization arose from job polarization, or the fact that middle-skill, middle-income jobs are disappearing to the benefit of job creation both at the high and at the low end of the wage distribution (Autor, 2010; Autor et al., 2003). Such developments toward worsening inequality seem to have eroded the benefits brought by earlier waves of productivity increases that lifted all boats in the long run. Moreover, this change in occupational growth does not seem to affect only advanced economies but represents a widespread phenomenon that is also shared by emerging and developing economies (Figure 3).
Indeed, recent evidence suggests that structural change as experienced in advanced economies since the 1950s is characterized by a "hollowing-out" of the middle class, with negative consequences for income inequality and inclusiveness but potentially also for economic development more broadly (Bárány and Siegel, 2018).
The current wave of technological change in the form of AI, therefore, comes at a time when the anticipated benefits from the previous wave have not (yet?) fully been felt and where costs -in the form of higher inequality and lower income growth for the middle class -are becoming manifest. Consequently, concerns are rising that this time, unemployment might actually increase and earnings fall, not least because in periods of stagnating output, increases in labor productivity induced by new technologies necessarily lead to a fall in labor demand.
Even if it does not lead to fewer jobs, such shifts could cause working conditions to deteriorate and earnings to fall further behind productivity, as they already have in the past (ILO, 2016).
To better understand this, however, we need to look more closely at the linkages between productivity, organization of production, and employment.
3 Jobs, tasks, and the organization of production
When firms automate production, job growth is affected through three channels (Acemoglu and Restrepo, 2017; Chiacchio et al., 2018; Vivarelli, 2014). First, new technologies lead to a direct substitution of jobs and tasks currently performed by workers (the displacement effect); second, there is a complementary increase in jobs and tasks necessary to use, run, and supervise the new machines (the skill complementarity effect); and third, there is a demand effect both from lower prices and a general increase in disposable income in the economy due to higher productivity (the productivity effect). Typically, these effects do not materialize simultaneously, and the standard narrative runs that unemployment is initially going to rise with automation before falling again when prices and productivity adjust broadly across the economy, often at a much later stage. When distinguishing between different time horizons, these differences in short- vs. long-term effects of productivity growth on unemployment can indeed be discerned in historical trends for the total economy (Semmler and Chen, 2017), even though effects at the industry level might differ and depend on the price elasticity of demand for industrial goods (Bessen, 2017b).
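To fix ideas, the net effect on labor demand in an automating industry can be sketched as the sum of these three channels. The decomposition below is a stylized, first-order illustration of our own (the notation is not taken from the studies cited above); it assumes that cost savings are fully passed through to prices and that the labor content of the remaining, non-automated tasks is unchanged:

\[
\Delta L \;=\; \underbrace{\Delta L_{\text{displ}}}_{<\,0} \;+\; \underbrace{\Delta L_{\text{compl}}}_{\geq\,0} \;+\; \underbrace{\Delta L_{\text{prod}}}_{\geq\,0},
\qquad
\Delta L_{\text{prod}} \;\approx\; \eta\,\hat{c}\,L ,
\]

where $L$ is initial employment, $\hat{c}$ the proportional cost (and hence price) reduction brought about by automation, and $\eta$ the price elasticity of demand for the industry's output. Employment in the automating industry rises on net only if the demand response $\eta\,\hat{c}\,L$, together with the complementary jobs created around the new machines, outweighs the displacement term - which is precisely why the industry-level outcome hinges on the price elasticity of demand highlighted by Bessen (2017b).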
This analysis of how technological change impacts employment is, however, based on three shortcuts. First, it is assumed that when tasks are being substituted by machines, entire jobs disappear (almost) immediately. Second, occupational supply is assumed to be inelastic so that a skill-biased change in labor demand induced by technological change will lead to technological unemployment or worsening working conditions (Autor et al., 2006; ILO, 2015); over- or under-qualification does not exist. Finally, the increase in demand that is made possible through higher productivity is supposed to be uniformly distributed across sectors, irrespective of the extent to which these are being automated. In consequence, sectors with higher degrees of automation will experience a relative drop in the share of demand and therefore create less employment, in comparison to those that do not benefit from automation, which again will lead to job polarization and rising income inequality (Bessen, 2018). To understand whether AI will force labor markets through the exact same pattern of adjustment, it is useful to take a closer look at these three assumptions.
Changing jobs and tasks
Jobs are constituted by sets of tasks. If some of these tasks are automated, job profiles might change by adding new tasks or modifying existing ones instead of suppressing a job entirely. The task description of an administrative assistant over time can demonstrate how similar jobs continue to perform certain tasks that have not (yet) been automated alongside other, new tasks that either did not exist before or were performed by a different group of workers. Hence, whether or not jobs disappear depends on whether it remains profitable to group certain tasks into specific job profiles and hire workers specifically for these (new) jobs, which is more a question of demand for the particular products and services that these jobs are supposed to deliver than of the supply of skills to fill them (Acemoglu and Autor, 2011; Bessen, 2017b).
Importantly, cross-country differences exist regarding how jobs are being designed and tasks regrouped into jobs. Ernst and Chentouf (2014) show that tasks have different characteristics regarding their training, supervisory, and production requirements, which are not necessarily aligned. Depending on the importance a company puts on training its workers, supervising them or aligning their workflows, different tasks may be regrouped to jobs from one company to another. Partly, this will depend on country characteristics regarding education and training infrastructure, tax incentives, and social benefit systems (Sengenberger, 1987). Hence, even companies operating in the same industry but in different countries might react to institutional differences with a very different setup of their internal work processes and job profiles, as exemplified by the differences between Apple and Samsung in the way they externalize their production chains. Consequently, whether the automation of tasks will lead to jobs disappearing is as much a technological question as it is an institutional one and cannot be determined a priori by looking at the automation process alone. Recent evidence seems to confirm the importance of institutional factors in determining the outcome of occupational changes, as seemingly similar patterns of job polarization across countries can be driven by different factors (Albertini et al., 2017).
Even when tasks can be automated, they might not disappear altogether. Rather than executing a particular task, for instance, an employee might be charged with ensuring that the machine is conducting the task properly and with intervening in case of an emergency or error (MGI, 2018a). In the case of air pilots, for instance, the introduction of automatic pilots has not made their role obsolete. Even though on average a pilot only flies a plane for roughly 7 minutes during an entire flight, having a human sitting at the control panel is as essential as before to intervene in extreme situations, sudden disruptions, or technical malfunctions not foreseen by the auto pilot (such as a simultaneous breakdown of both engines). 6 Similarly, it might still require a worker to ensure that machines are properly parameterized and set up, especially when orders change or a new production line is introduced. Also, the relative time spent on each individual task might change: thanks to AI support in diagnosing diseases, doctors, for instance, might spend less time on analyzing symptoms and more time on attending to a patient's well-being and individual needs. Either way, automation of a task might not necessarily lead to that task no longer requiring human assistance. Rather, the question becomes whether it remains profitable to bundle a set of tasks into a specific job, as well as how quickly a worker can shift within the current job to perform slightly modified tasks or task sets.
If that requires new skills that are costly to learn, automation can be expected to lead to inequality within occupations rather than across them (Bessen, 2015a). 7
Capital-skill complementarity
Inequality and joblessness among (low-skilled) workers will also depend on the extent to which machines are complementary to high-skilled labor. The complementarity between skills and machines is not bound by technological factors alone, as the historical account above of different waves of industrial revolutions has demonstrated. Rather, whether or not firms introduce skill-biased technologies depends on whether these are profitable (Acemoglu, 2002). In the 19th century in particular, workers seem to have had comparative advantages over machines in certain repetitive tasks that required high dexterity, for which machines at the time were not yet ready. The relative abundance of unskilled labor at the time made it unprofitable for companies to develop technologies that would allow them to substitute for unskilled labor, as can still be observed in sweatshops around the developing world today. However, as soon as the supply in skilled labor increased and hence relative prices of skilled vs. unskilled labor fell, technologies that made their use profitable began to be developed, leading to the pattern of skill-biased technological change that we can see today (Goldin and Katz, 1998).
With the installation of ever more complex machines, the demand for workers capable of operating and maintaining them rose constantly. Nevertheless, the number of supervisory and skilled workers that these new machines commanded was nowhere near sufficient to create enough jobs to compensate for the loss in demand for the low-skilled workers they were replacing. Hence, capital-skill complementarity became synonymous not only with rising income inequality but also with an increase in technological unemployment to the extent that low-skilled workers were not able to switch occupations or sectors. Most importantly, it was a key explanation of why an increase in the relative supply of the educated workforce did not lead to a fall in the skill premium, that is, the wage difference between high- and low-skilled workers, as one would expect in the absence of such a complementarity. As technological progress gradually reduced the price of capital, investment in new equipment continued and led to a gradual rise in the skill premium.
The extent to which new technologies require the complementary input of skilled labor is, therefore, a main determinant as regards the effect of AI on employment and inequality.
Indeed, even modest changes in the degree of complementarity can produce vast differences in labor market outcomes (Berg et al., 2018a; IMF, 2018). Given that AI is expected to replace mental tasks, as explained above, it is, however, not entirely obvious that AI-based innovations will be characterized by strong capital-skill complementarities. Indeed, the entire logic of AI-based systems is to offer expert knowledge to nonspecialists. Whether these systems concern sophisticated medical devices such as activity trackers, agricultural expert systems to guide farmers in selecting and planting the right variety of seed at the right time, or sharing platforms for optimizing multimodal transportation, they often require little or no prior knowledge, connect a vast array of users, and provide advice and guidance that help lift productivity, particularly in sectors dominated by low-skilled workers. In construction, for instance, still an area of low productivity that continues to absorb a significant share of low-skilled workers, new computer-based planning systems could help to speed up the construction time, cutting waste and optimizing the maintenance cycle of buildings, without changing the skill composition of the sector (MGI, 2017). In other words, part of the promise of AI is that it actually can help lift productivity especially of low-skilled workers, while cutting demand for high- and medium-skilled professionals, quite the opposite of what has been observed in the past.
The evolution of demand and the emergence of new tasks
The rise in productivity that is generated by technological change will help expand incomes and demand. Whether unemployment increases or working conditions worsen will then depend on the types of goods and services to which this additional demand is directed (Bessen, 2018). Typically, technological change does not progress uniformly across sectors. Hence, the additional income that is generated by automation in one sector might not lead to more demand for that same sector, contributing to a fall in labor demand for that sector. In contrast, if demand for products or services from the automated sector reacts very strongly to changes in price, that is, if demand is highly price elastic, any effects from labor-saving automation might be more than offset by increases in demand (Bessen, 2018). A well-known example is the introduction of automated teller machines (ATMs) in the banking industry starting in the 1970s. Despite the labor-saving nature of the ATM, employment in banking grew continuously as the cost of opening new outlets fell, helping to attract a larger customer base while at the same time shifting tasks among bank employees away from clerk services to sales and counseling (Bessen, 2015b).
Similarly, as demand grows overall, highly price elastic but labor-intensive sectors might benefit, creating additional job opportunities or helping to create new tasks. In the United Kingdom, for instance, demand for recreational and cultural activities has increased by more than 5 percentage points in the consumer basket between 1988 and 2017, in part thanks to the gains made from automation that allowed people to spend less on apparel or food. Similarly, in the United States over a shorter period (1998-2017), spending on health care increased by 2 percentage points in the average consumer basket. Such changes in relative spending patterns toward more labor-intensive sectors can be widely observed and are one of the key factors explaining why technological unemployment, where it appeared at all, has often remained a temporary phenomenon. 8 At the same time, with consumers getting richer, demand for luxury goods and services increases, as can be observed from the steady rise in the numbers of personal coaches and trainers. 9
8 Data on consumer basket spending items are taken from ILO statistics.
9 Absolute numbers are small, though, despite a global growth rate of around 12% between 2011 and 2016. Currently, an estimated 53,300 people are classified as personal coaches and roughly 128,300 people have part of their tasks related to coaching (ICF, 2016).
The impact of AI on jobs and wages
Taken together, the impact of a large-scale introduction of AI on jobs and wages will depend on three factors: the price elasticity of the supply of capital relative to that of labor, the substitution elasticity between capital and labor, and the direction of technical change induced by AI, that is, whether AI is capital or labor augmenting. The more inelastic the supply of AI, the higher the substitution elasticity between AI and jobs and the more labor-saving AI-based innovations are, the higher will be the extent of technological unemployment and the lower will be any wage gains. Based on the discussion in this section, a nuanced picture arises, in particular as regards the implications of AI for labor markets in developing countries.
First, the elasticities of supply of capital and labor depend to a large extent on how heterogeneous both factors are. The more homogeneous a factor input is, the more elastic its supply will be and the less will this factor be in a position to generate high returns. 10 In this sense, skilled labor is less elastic than unskilled labor, a key factor behind the wage premium for skills.
Similarly, intangibles, such as AI, or robots might not easily be reproducible due to intellectual property rights, data (collection) ownership, or physical limits to the consumption of energy and natural resources, which makes the supply of such high-tech capital less elastic. This is likely to be more problematic in advanced economies, where overall access to financial markets is well developed and intellectual property rights are enforced, leading to a low relative price of traditional capital. In developing countries, on the other hand, the capital price of AI relative to traditional capital is likely to be lower, given more restricted access to capital and higher risk premia overall as regards investment. Investment in AI might, therefore, be relatively more elastic given the generally higher profits to be made in such an environment. At the same time, developing countries still have a large supply of unskilled labor, which prevents wages from rising (faster) but which also reduces incentives to invest in AI technology. Only when the supply of (unskilled) labor slows down will the incentive for a shift toward automation become stronger, as can currently be observed in China and other emerging countries (see Carbonero et al., 2018, who document the rapid rise in robotization in some of these countries). 11
10 A. Marshall uses the concept of "quasi-rents" to describe excess returns over and above the marginal product that will erode over time as factor supply adjusts. In this regard, the less elastic factor commands a higher quasi-rent and will benefit more from the increase in productivity. In modern, search-theoretical approaches to the labor market, quasi-rents are linked to the degree of specificity that is determined by both search/transaction costs and the value of the outside option (Marshall, 1890).
11 The point at which real wages start to accelerate in the process of a country's economic development is also known as the "Lewis turning point," at which the supply of low-skilled labor slows down or declines, for instance due to slower population growth, lower internal migration from rural areas, or a general increase in the level of education and skills (for the Chinese experience, see Zhang et al., 2011).
Second, a high elasticity of substitution between capital and labor leads to a reduction in labor demand with the introduction of new technologies. Previous waves of high-tech innovations came with a strong complementarity between capital and skilled labor, leading to increases in wage premia and job polarization. As we argued before, however, with AI the degree of complementarity between capital and skilled labor might actually be lower, as AI has the potential to increase the productivity of low-skilled labor. Finally, to the extent that investment in AI is capital or factor augmenting, it will increase capital productivity or scale up production without displacing labor. In this case, the productivity effect is stronger and leads to more jobs and higher wages, although the impact on the wage premium for skilled labor is unclear. In the case of labor-saving technical change induced by AI, however, the situation is more complex, as labor is replaced and the overall impact on labor markets depends on the size of the productivity effect and the extent to which induced demand is big enough to compensate for displaced labor. As discussed previously, the impact of labor-saving technological change on labor demand will also depend on the price elasticity of demand for the goods and services that are being automated: to the extent that automation happens in (services) sectors with large unmet demand, price elasticity might be high and a reduced price thanks to automation will lead to a strong increase in demand that compensates for the substitution effect. Moreover, in the next section, we discuss that many applications of AI are capital and factor augmenting rather than labor saving, for instance, when they improve the matching process on different (labor and product) markets and enhance the productivity of installed capital (for instance, in the energy sector).
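To make the role of the substitution elasticity and of the direction of technical change concrete, consider a stylized constant-elasticity-of-substitution (CES) production function. This is an illustrative sketch of our own under textbook assumptions (competitive factor markets, factor-augmenting technology), not a model taken from the studies cited here:

\[
Y=\Bigl[\alpha\,(A_K K)^{\frac{\sigma-1}{\sigma}}+(1-\alpha)\,(A_L L)^{\frac{\sigma-1}{\sigma}}\Bigr]^{\frac{\sigma}{\sigma-1}},
\qquad
\frac{rK}{wL}=\frac{\alpha}{1-\alpha}\Bigl(\frac{A_K K}{A_L L}\Bigr)^{\frac{\sigma-1}{\sigma}},
\]

where $\sigma$ is the elasticity of substitution between (AI) capital $K$ and labor $L$, and $A_K$ and $A_L$ capture capital- and labor-augmenting technical change. Under these assumptions, a fall in the price of AI capital lowers labor's income share only if $\sigma>1$ (capital and labor are gross substitutes); if $\sigma<1$, cheaper or more effective AI capital raises rather than lowers labor's income share. Together with whether AI enters mainly through $A_K$ (capital augmenting) or $A_L$ (labor augmenting), this means that the labor market consequences sketched in the text hinge on empirical parameters rather than on foregone conclusions.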
Considering these three factors leads to a more optimistic outlook as regards the impact of AI on jobs and wages, in particular when looking at its potential to support the catching up process in developing countries. The extent to which AI supports labor demand and wage growth will, however, depend on the concrete applications that are currently being developed.
Moreover, the distributional consequences of AI are linked to broader considerations about the implications of the rise of intangibles - to which AI capital belongs - and competitive forces on product markets. This is what we will turn to next.
4 What is different about AI?
Can we expect AI to have labor market effects similar to previous waves of automation such as those resulting from robotization? Many observers believe, indeed, that AI - given its focus on mental rather than physical tasks - has the potential to become another "general purpose technology" with a wide range of applications in various sectors and occupations (e.g., Furman and Seamans, 2018). This could mean that the effects documented so far, which are based on the robotization of only a few sectors, understate what is to come: even more significant (negative) employment effects could materialize when AI affects a far larger set of industries and occupations. However, as we have argued at the beginning of this paper, not all the insights that studies on robotization have generated might carry over to a situation where AI-based technologies are being developed and adopted more widely. Most notably, whether AI-based technologies are characterized by the same degree of capital-skill complementarity as robots is not entirely obvious. In this section, we look more closely at the specific applications that look feasible from a current perspective, and at their potential labor market implications, making use of the discussion in the previous sections.
Specific characteristics of AI
As discussed in Section 1, the development of AI has benefited from three interrelated trends: the availability of large (unstructured) databases, the explosion of computing power, and the rise in venture capital to finance innovative technological projects. These have allowed the rapid development of new applications in areas where humans were thought to have a particular advantage: making predictions and taking decisions regarding routine yet nonmechanical tasks. Typically, these types of tasks were mainly found in the services sectors, which employ - even in emerging economies - more than half and sometimes up to 70% of the workforce.
Three main groups of tasks have become the focus of AI applications, in particular:
• Matching tasks: The most prominent group of tasks concerns all those jobs that consisted in matching supply and demand, especially on markets with a heterogeneous product and service structure. Whether in ride-hailing services (Uber, Lyft, Didi Chuxing), hotel and accommodation services (AirBnB, Ebookers, Booking.com), retail (Amazon), or human resource management (LinkedIn), among others, machines have proved to be significantly faster and more efficient in identifying matches in these markets. This, in turn, helps companies to cut the costs of finding customers or suppliers and to offer less expensive solutions to their growing customer base, often, however, at the cost of worsening working conditions of their suppliers and their employees. In particular in the gig economy, where demand for micro-tasks such as image classification or survey responses is matched with workers available for short-term, on-demand tasks, working conditions are often below (national) minimum standards (Berg et al., 2018b). An additional concern arises where privacy rights are not, or only insufficiently, protected, leaving employers in a strong position to (further) undermine worker rights and working conditions (De Stefano, 2018).
• Classification tasks: Early applications of AI concentrated on image and text recognition techniques, especially facial recognition, partly in relation to the increase in surveillance cameras and techniques. In the meantime, however, an explosion of applications has taken place in this area, including medical applications (X-ray image diagnosis), legal services (reading and classifying legal documents), accounting and auditing (analyzing balance sheets, fraud detection), and recruitment (screening applicants), potentially threatening the jobs of a significant number of well-paid workers in the services industry. Yet it also promises to enhance the productivity of the most productive workers in these industries even further: automatic text generation software allows journalists and editors to concentrate on those key, high value-added pieces that attract a large customer base to their employers.
Similarly, automated research designs help scientists to focus on the most promising areas of their experiments (for instance, in the development of new drugs) while allowing the computer to discard all those research avenues that are likely to fail (Cockburn et al., 2018). The democratization of expert knowledge that these AI applications bring, however, also runs the risk of expert deskilling and abuse, for instance, in the case of facial recognition, which has recently led industry leaders to call for a careful regulation of these technologies. 12
• Process management tasks: A final set of applications concerns a combination of the two previous sets of tasks, identifying patterns and bringing different suppliers and customers together along a supply chain (Culey, 2012). This type of complex network management also arises in the management of electric grids and complex infrastructure and building projects, including the maintenance of finished projects (through the Internet of Things, IoT) or multimodal transportation solutions to curb inner-city traffic.
In combination with decentralized tracking and certification schemes (Blockchain), it includes the implementation of expert systems across supply chains, allowing upstream producers to integrate diversified supply chains through better information about product quality, certification schemes, and market conditions. These types of expert and complex management systems are of particular relevance in developing and emerging countries, helping local producers to gain access to a wider set of expertise on production conditions, supply chains, or simple learning tools. 13
It is this latter group of tasks that currently bears no resemblance to what robots used to automate in the past. Rather, these new AI-based innovations constitute a new group of tasks that either cannot be properly carried out by humans due to their complexity or have been too expensive to be performed by human workers, even in combination with traditional technologies (Benhamou and Janin, 2018). 14
12 See https://blogs.microsoft.com/on-the-issues/2018/07/13/facial-recognition-technology-the-need-for-publicregulation-and-corporate-responsibility/.
13 The potential of applications of AI to developing countries has already been recognized by major tech companies. Google recently announced that it was to open its first African AI research lab in Accra (Ghana) to develop tools specifically designed for local market conditions; see https://www.blog.google/topics/google-africa/google-ai-ghana/
14 One of the insights from the earlier endogenous growth literature was indeed that new types of goods and services become available only once they are sufficiently profitable to be carried out. In other words, labor demand for the production of certain products or services is essentially zero at any point in time to the extent that current technologies do not allow them to be carried out profitably. By one account, AI has added up to 7% of GDP in the United States due to these additional services that were hitherto not accessible to humans (Cohen, 2018).
Without stretching the task-based methodology discussed previously too much, these three fields of applications of AI can be categorized as (a) task substitution; (b) task complementarity; and (c) task expansion. In the case of matching applications, existing tasks are being taken over, often in a more efficient way, through algorithms that allow the matching of supply and demand more rapidly and more precisely. In the case of classification tasks, AI-based applications help workers involved in such tasks to concentrate on those that require specific attention while leaving the more routine, repetitive tasks to a computer. Finally, as regards process management tasks, here AI-based applications often carry out tasks for which no human workforce was available to begin with, precisely because of the complexity of the tasks; in this case, the computer essentially expands the number of tasks that are being carried out in an economy, thereby enhancing total factor productivity regardless of whether production is based mainly on skilled or unskilled labor. A priori, therefore, it is not possible to determine whether the development and diffusion of AI-based applications will contribute to widespread job destruction or to an increase in inequality. The effects of AI will depend on the relative importance of these three different areas of applications of AI. In particular, they will depend on the direction that technological change will take in the future, under the influence of policies, tax incentives, and public and private investment in technological research (Mazzucato, 2013). In other words, the extent to which AI will lead to a recomposition of tasks and jobs will partly depend on the particular technology and innovation policies in place to orient the technological progress in socially desired ways. We will get back to this point in our final section on policy options.
The economic and social implications of large-scale applications of AI
The large-scale application of AI might yet generate additional economic and social implications, irrespective of whether these applications are substitutes, complements, or extensions of existing tasks. These implications have to do with the particular nature of AI: AI is digital in nature and therefore non-rivalrous, similar to other digital products and services, that is, digital services can be used by more than one person without affecting each other. Moreover, AI aims at providing individual solutions to economic problems, allowing not only for greater product and service diversification than ever seen before but also for much finer price discrimination than on existing markets. Such price discrimination is, however, a double-edged sword, as the additional opportunities it might provide for some have to be compared against the proliferation of preexisting biases this might entail. Nevertheless, and related, the use of AI in helping to reduce matching frictions - irrespective of its task substitution nature - also creates more opportunities for market interconnection and exchange.
Finally, AI systems by their very nature represent embodied technological change, with specific implications for the skill-biased nature of this form of economic progress. Let us look at these issues in more detail.
First, digital technologies that are characterized by non-rivalry in the use of their products and services often provide cumulative advantages to those first entering a particular market (segment). Once the fixed costs for the development of new digital services have been incurred, a growing market can be served (almost) at zero marginal costs, with economies of scale significantly larger than during previous waves of technological change based on automation of mechanical tasks (Moretti, 2012). This gives rise to superstar firms where few companies dominate and occupy a privileged, highly profitable position, potentially limiting competitive pressure by erecting barriers to entry (Rosen, 1981; Autor et al., 2017a, 2017b). Second-movers often face uphill battles to enter the market or have to focus on small market niches with less profitable opportunities, producing large inequalities between individuals and between firms. Korinek and Ng (2017) argue that recent technological changes have transformed an increasing number of sectors in the economy into the so-called "superstar sectors," in which a small number of entrepreneurs or professionals concentrate the demand of a large range of consumers. Examples include the high-tech sector, sports, the music industry, management, finance, etc. Importantly, these superstar dynamics are not limited to firms producing digital goods and services, but increasingly include those using them, thereby affecting a potentially much larger group of sectors and occupations. As a result, superstar firms and employees concentrate enormous rewards in a wide range of activities, widening the gap with the rest of the economy and reducing the share of income going to labor (Autor et al., 2017a).
The superstar dynamic is further reinforced through business practices that enhance the first-mover advantage. Indeed, some companies are adopting data-driven business models and strategies to obtain a competitive "data advantage" over rivals. Data-driven mergers (e.g., Facebook's acquisition of WhatsApp) are increasing the risk of abuses by dominant tech firms.
Data-driven exclusionary practices and mergers have significant implications not only for privacy and consumer protection, but also for competition law. Due to network effects, data-driven mergers may increase entry barriers and enable some big firms to become bigger until they dominate the whole industry (Stucke and Grunes, 2016). In this light, some commentators within the antitrust community are raising concerns about the potential harm of data-driven mergers and abuse by dominant companies built on data. The Organisation for Economic Co-operation and Development (OECD) recently warned that data-driven markets can lead to a "winner takes all" result (OECD, 2015a). These network-driven market concentrations are likely to grow larger with AI, which is very much based on large, centrally available databases.
A second source of change comes from the fact that AI-based systems allow for a much finer discrimination between different customer groups. Indeed, market segmentation and differential pricing are nothing new and have been practiced for some time. However, AI allows firms to predict individual customer behavior and price sensitivity in much more detail. Based on previous consumer and search patterns, for instance, on online shopping platforms or as revealed by credit card transactions, suppliers can essentially charge individual prices or suggest individualized price-service quality combinations that allow them to reap a much larger part of the consumer surplus than in the past. Such so-called third-degree price discrimination has not yet been a matter of active research in relation to AI, but some insights from previous research allow a couple of conclusions to be drawn (see Tirole, 1988; Gifford and Kudrle, 2010). 15 With this form of price discrimination, producers offer (groups of) consumers the same type of product or service at different prices, based on the relative willingness of consumers to pay for these products. A typical example consists of internationally traded goods, such as pharmaceuticals, that are priced differently depending on a country's consumer characteristics, which may depend on differences in regulation and taxation. One general conclusion from this research is that welfare can increase only if the total output produced by serving different market segments at different prices exceeds the output in a situation where all consumers pay the same price. Output does expand under fairly general conditions, in particular when discrimination opens markets that would otherwise remain unserved, but discrimination also implies a shift of (part of) the consumer rent to producers, thereby worsening any prior trends toward higher levels of inequality (Varian, 1985).
15 Arguably, AI-based price discrimination could be considered as first-degree price discrimination, allowing full extraction of consumer rents by producers. This, however, would require perfect prediction of a consumer's willingness to pay, something that runs counter to the underlying principle of AI systems as stochastic prediction machines.
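A simple numerical illustration - our own, based on hypothetical linear demand curves and zero marginal cost rather than on any figures from the cited literature - shows why the output condition matters:

\[
\begin{aligned}
&\text{Case 1 (both markets served): } q_A = 10-p,\; q_B = 6-p,\; MC=0\\
&\quad \text{uniform price } p=4:\; Q=8,\; \pi=32,\; CS=20,\; W=52\\
&\quad \text{discriminatory prices } p_A=5,\; p_B=3:\; Q=8,\; \pi=34,\; CS=17,\; W=51\\[3pt]
&\text{Case 2 (small market otherwise unserved): } q_B = 3-p\\
&\quad \text{uniform price } p=5:\; Q=5,\; \pi=25,\; CS=12.5,\; W=37.5\\
&\quad \text{discriminatory prices } p_A=5,\; p_B=1.5:\; Q=6.5,\; \pi=27.25,\; CS\approx 13.6,\; W\approx 40.9
\end{aligned}
\]

In the first case, discrimination leaves total output unchanged and total welfare falls; in the second, it opens a market that a uniform price would leave unserved, so both output and welfare rise, even though part of the consumer surplus shifts to the producer.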
Recent developments have aimed at applying this to human resource management as well. Indeed, the area of what has become known as "Human Resources (HR) analytics" aims exactly at this type of price discrimination to attract workers to companies, differentiating between categories of employees in terms of working conditions, wages, fringe benefits, or responsibilities. A particular concern with this type of discrimination in working conditions arises from the fact that differences in the reservation wages of otherwise similar groups of jobseekers may be caused by past discrimination observed in the labor market. Women or ethnic minorities, for instance, might be ready to accept lower wage offers, as they experienced higher entry barriers in the past. An automated recruitment system based on analyzing historic data would replicate this type of bias, thereby reinforcing preexisting discrimination (Ponce Del Castillo, 2018). Hence, even though price discrimination might, in general, allow expansion of the number of available jobs, it is suboptimal in cases where differences in willingness to pay (or to accept job offers) depend on previous discriminatory practices. So far, however, it seems that people continue to hold favorable views of algorithmic decision-makers vis-à-vis humans, suggesting that even though algorithms come with their own biases, these might be (seen as) less harmful than those perpetrated by humans (Logg et al., 2018).
At the same time, however - and this is a third area of economy-wide applications of AI-based systems - matching frictions on labor markets can be substantially reduced when automated systems allow a significantly larger pool of applicants to be processed. Indeed, mobility of workers, whether across occupations, sectors, or locations, seems to have declined in recent decades (Bunker, 2016; Danninger, 2016; Molloy et al., 2014). Part of this fall in labor mobility has to do with regulatory barriers such as occupational licensing or barriers to geographic mobility. But a significant part relates to informational frictions and difficulties for employers in properly identifying competencies from past experiences or education. Similar to applications in the area of HR analytics discussed above, AI-driven matching systems are helping to identify the appropriate mix of competences available inside and outside the firm and to bring them together for specific projects such as the development of new products or services. Indeed, AI has already started to shift the boundaries of the firm in favor of more and more services being insourced from external (labor) markets, such as through micro-tasks available through gig platforms (Berg et al., 2018b). Job search platforms such as Monster.com or LinkedIn are already offering detailed models of job vacancies and available candidates to help recruitment managers and applicants in matching job requirements with candidates' (self-declared) competences. The benefit of using AI in this area comes not only from the larger pool of applicants and vacancies that can be matched against each other (thereby enhancing labor market fluidity). It also lies with the improved identification of competences based on self-declaration and historic professional experiences that might be difficult for an individual recruitment manager to properly discern. So far, these systems still seem to be far from perfect and riddled with biases, as anyone who has used them can confirm. Nevertheless, the expected efficiency gains promise to be large: according to MGI (2015), for instance, enhanced matching efficiency thanks to such online job platforms could yield an additional 72 million jobs worldwide and spur global GDP by 2% within the next decade. These efficiency gains must nevertheless be weighed against a likely increase in employment volatility and job insecurity, especially when such newly created jobs are only temporary in nature.
A final, economy-wide implication of AI concerns the fact that technological change driven by AI is embodied in new and often cheap equipment, accessible to a wide range of users. 16 Its digital nature and the fact that many AI-based expert systems can be run from currently available mobile phones have contributed to its significant diffusion, including among users in emerging and developing countries. The particularly steep fall in capital prices that is being fueled by AI is likely to help boost productivity especially in those regions and parts of the world where lack of finance and other barriers have prevented the implementation and diffusion of existing technologies. As discussed above, expert systems are currently being developed to help, for instance, smallholder farmers to get better information on what, when and how to seed to improve the agricultural yield. In particular in certain semiarid regions in Africa, precise advice on meteorological conditions in combination with proper farming and irrigation techniques has been shown to yield substantial potential for productivity gains through water savings and more appropriate seeds. 17 Given that today more than one-third of all workers worldwide still work in the agricultural sector, such productivity increases promise to alter significantly the development potential and income opportunities, including among low-income countries. Similarly, using AI-based matching and supply chain systems holds the potential to cut down on logistics and transportation costs, an issue particularly relevant for producers in developing countries that often lack access to large distribution networks. 18 Finally, the delivery and implementation of public policies often depends on timely and precise information about areas in need of intervention. AI-based expert systems have been shown to help policymakers, in particular in countries with limited fiscal resources, in better managing their interventions, delivering better, more granular information, and allowing an improved coordination of various actors necessary to, for instance, deploy medical care or emergency interventions. 19
5 Policies
The previous sections have demonstrated the wide and varied job-specific and economy-wide implications of AI that result from its general purpose nature. AI's potential to generate major productivity enhancements, in particular in sectors and countries that so far have not benefited from significant structural change, has to be matched against the risk of worsening gaps in income inequality as first-mover advantage looms large and can easily be reaped. The following section discusses some of the policy implications that this assessment warrants. Specifically, it will focus on four areas of policy interventions: (1) supporting the adjustment of the workforce to be able to transit to jobs and tasks in which workers continue to benefit from a comparative advantage, while being able to make use of the new technologies; (2) guaranteeing an equal playing field between firms by maintaining a competitive environment and preventing individual companies from reaching market dominance, a tendency that has already worsened inequality and hampered productivity growth; (3) reinforcing existing tax and social protection systems in order to mitigate both the impact of the ongoing transformation of the world of work as well as the deepening of income inequalities; and (4) enhancing international cooperation and social dialogue to broadly share technological rents.
16 Economists distinguish between embodied and disembodied technological change. The former relates to all those forms of innovation that are being implemented through investment in new tools, machines, and equipment. The latter arises from innovations in the way existing labor and capital is being organized, for instance through organizational innovations or innovations in infrastructure and regulation that help to make more efficient use of existing technologies.
17 See, for instance, the Tunisian start-up iFarming, http://www.jeuneafrique.com/501309/economie/start-up-de-lasemaine-ifarming-future-licorne-tunisienne-de-lirrigation-en-temps-reel/.
18 https://medium.com/@KodiakRating/6-applications-of-artificial-intelligence-for-your-supply-chain-b82e1e7400c8.
19 https://channels.theinnovationenterprise.com/articles/ai-in-developing-countries.
Skills and occupational mobility
The current education systems need to be examined given the arrival of the AI-based wave of technological change. Their current setup as a once-and-for-all system of skill provision at a young age is no longer sufficient when it comes to retraining workers who expect to have an increasingly lengthy work career. Most current proposals, however, start from the premise that what is required is a general uplifting in technical skills for workers to be able to cope with the coming changes. The previous discussion has argued that this is not necessarily the case beyond the capacity to use these new technologies. Importantly, even if the expected increase in the demand for technological skills materializes as currently predicted, social and emotional skills remain the dominant driver for total hours worked, at least in advanced economies, according to a recent study by McKinsey Global Institute (Fig. 4). This dovetails well with the general considerations developed previously about the generic nature of AI-driven technological change. Indeed, technological skills will mainly be asked for in areas where new digital products and services are being developed, which by the nature of this digital industry will remain relatively limited. However, in the areas of application and use of these technologies, new opportunities emerge. In this regard, a certain generic understanding of the availability and use cases of new technologies will be necessary as a broad skill, much as reading and basic mathematics skills are considered to be required for today's low-skilled workforce. However, in an age where there are more mobile subscriptions than actual users and a smartphone penetration rate of more than 60% of the total population in most advanced economies, many users are already exposed to new technologies and possess basic numeric skills. 20 As routine tasks such as verification, compliance, and system processing are increasingly being taken over by machines, human work will shift toward sales, market development, and consulting/coaching, all of which are tasks that require strong social, empathic, and interpersonal competences rather than relying exclusively on technical skills. The latter will still be necessary, but mostly in order for workers to use rather than to develop new technologies.
These are not new competences, and social and emotional skills have been emphasized by employers in the past. Indeed, an increasing need for social skills has already been observed over the past decades (Deming, 2017). However, current education systems with their strong focus on providing technical skills will need to integrate competence development in this area to a larger extent than in the past. At the same time, this shift in the skill basis also holds the promise that even those people who might find it challenging to access highly technical skills will have a higher chance to integrate into the labor market successfully, provided that they hold the right social and interpersonal skills. In this regard, AI-driven technical change will not necessarily be as skill biased as the previous wave of digital technologies. In particular, in those countries where only few people possess the right technical skills to contribute to the development of AI applications, users of these new tools can expect to enter the labor market successfully even with a diverse and nontechnical skill set. This is especially promising for currently low-income countries that often do not possess the resources to set up education systems with a similar scope and breadth as more advanced economies. In these countries, AI-based tools can play a particularly productive role in overcoming educational challenges, as they allow local consumer behavior and production characteristics to be sourced to provide tailor-made solutions, for instance for smallholder farmers.
Indeed, whereas previous generations of expert systems were often based on hardwired expertise gathered in different countries and contexts, the learning capacity of AI tools makes them particularly amenable to be deployed in a variety of situations without much prior knowledge about local circumstances. Local users of these technologies, therefore, are not required to know much about the underlying technology, nor need they provide sophisticated input into such devices. Rather, their day-to-day usage will allow AI-based tools to generate advice based on overall best practices in combination with local circumstances. This creates low entry barriers for the diffusion of these new technologies and allows training and education to be focused on basic numeric and literacy skills. Hence, even though developing countries might find it challenging to upgrade their education systems quickly and thoroughly enough to expect to be able to produce AI applications, even with limited resources they might expect to be able to use these applications on a broader scale, with large benefits for their growth potential.

20 See http://resources.newzoo.com/global-mobile-market-report.
A final point concerns occupational and geographic mobility. As new applications will emerge in yet unknown areas of the labor market or new locations, maintaining fluidity between occupations and geographical areas remains important. In this sense, activation and education systems need to account for flexibility both across and within occupations over a lifetime and between locations. Younger generations currently entering the labor market can and do expect to work until their mid-to late 60s, including in emerging economies. 21 Education systems that provide skills only at a young age are unlikely to fit the purpose of an ageing society with (fast) technological change (Agrawal et al., 2018b). Several attempts have already been made to promote (incentives for) lifelong learning, but opportunity costs are typically very high for workers in their prime working years, and skill provision for those on a job search often focuses on a speedy return to employment rather than a more long-term sustainable solution to any shortcomings in skills. Activation systems more broadly need to integrate the perspective of employability over the life span with a focus on competence development that can be used across a range of locations and possibly countries.
In this regard, education systems will need to focus increasingly on competences rather than on skills and promote the certification and portability of these competences. Partly, this will require a widening of the currently narrow occupational licensing that continues to hamper successful labor market integration, even in the absence of AI-based technological change. 22 Moreover, international coordination on a broad set of competences will be required to allow for more labor mobility and better international comparability of those competences, which should help workers more easily to find employment opportunities in new occupations, sectors or locations. Recent initiatives to develop "skills passports" allow to document and certify competences acquired on the job. These could be extended toward a broader, potentially mandatory industry-or nation-wide scheme that helps workers over their working life to assess and identify both their current competences and possible gaps in light of a labor market transition. 23
Ensuring a level playing field among firms
Besides ensuring a properly prepared workforce, policymakers also face the challenge of maintaining a dynamic labor demand. As discussed above, the digital nature of AI creates significant and persistent first-mover advantages that deepen the gap between early adopters at the technological frontier and the remaining firms. As a consequence, productivity differentials have widened across all OECD countries and firm-level concentration has increased globally, with potentially pernicious effects on productivity growth and job creation (Andrews et al., 2016; Autor et al., 2017a, 2017b). Large productivity differentials between firms have, indeed, been shown in the past to constitute a barrier for wider diffusion of technological progress and innovation among lagging firms, a pervasive problem in countries with large informal economies (Aghion et al., 2005; Boone, 2001). The danger is that with the early adoption of AI-based technologies in leading companies, the productivity differential is set to widen, leading to a rise in market concentration and a push toward "informalization" of those companies that are falling further and further behind the productivity frontier, with consequences for wage growth and working conditions. In addition, the concentration of profit and wealth among a few, large companies creates the risk of regulatory capture by the rich, with adverse consequences for open markets, innovation diffusion, enforcement of (labor) regulation, and countries' capacity to collect taxes (see Naudé and Nagler, 2015).

21 Although not strictly the subject of this paper, the issue of population ageing cannot be abstracted from when discussing labor supply and incentives for (higher) education. High levels of educational investment pay off when people grow older. At the same time, given (fast) technological change, educational obsolescence requires constant (and more important) renewal of competences and skills as average retirement ages recede. Despite a long-lasting recognition of the importance of lifelong learning, so far very little has been undertaken to allow workers to benefit from a continuous upgrading of their skills.
22 For a summary of the effects of occupational licensing on the foreign born in Germany, see Runst, 2018; for an overview of their effects in the United States, see https://www.brookings.edu/research/occupational-licensing-and-the-americanworker/.
23 See, for instance, the initiative at the European level, the Europass: https://europass.cedefop.europa.eu/documents/european-skills-passport.
Establishing and maintaining a competitive environment for AI to benefit the economy more broadly can be achieved through three different policy measures.
Investing in digital infrastructure to share the benefits of AI more broadly
Investing in digital infrastructure is a key measure to ensure that companies across a broad spectrum of sectors and locations can successfully compete. For emerging economies, this creates new opportunities, as in the absence of a legacy infrastructure (e.g., in high-speed fiber-optic Internet) new public infrastructure can be deployed without interference from an incumbent, thereby helping to create a level playing field. Certain successful non-AI innovations in electronic payment systems (M-pesa in Kenya) or electric vehicle development (China) can testify to the success of such a strategy. However, even in the presence of an incumbent, policymakers need to ensure that the latest infrastructure with high scalability is being deployed to allow companies to take full advantage of the new technologies. Without such an (public) investment in digital infrastructure, the applications and deployment of AI will remain limited, in particular in developing countries where such infrastructure is significantly lacking.
Providing basic AI tools in the form of open source to enhance access to AI for all
AI is first and foremost a set of (statistical) methods that need to be implemented in a concrete business case. Often, however, the initial step to explore and evaluate opportunities that arise from AI might not be fully anticipated by market players, especially if they are not operating at the technological frontier. This might be a particular problem in developing and emerging economies. Keeping access to official statistics and basic AI functions as a public good will be essential in maintaining a competitive environment and preventing further industry concentration. Governments could, for instance, pursue an Open Data Policy that would guarantee free access to official statistics, including (anonymized) access to large micro datasets, which currently often require a significant amount of time and financial resources to work with. Such a policy could be complemented by setting up public (research) institutions that help codevelop new algorithms for Big Data analysis under the requirement that these algorithms remain open access as well.
Adjusting antitrust policies to prevent first-movers from establishing market-dominant positions
A final set of policy measures consists in adjusting antitrust legislation and intellectual property rights to the particular challenges posed by the digital economy and AI in particular.
In this regard, the question of how to properly account for, price and tax the data input that is essential for developing and training new AI algorithms constitutes a key issue. In addition, the legal framework needs to be extended to not only grant ownership to data but also to the predictions generated from these data. This will have direct implications for the sharing of technological rents. 26 Currently, intellectual property around AI is governed by different regulations and laws.
Data (collections) are protected by copyright laws, whereas AI algorithms fall under the premise of patents, which are characterized by stricter time limits (and hence potentially weaker protection). The output of AI tools (for instance, creative works) is, so far, not protected. Similarly, individual data are not being protected (but rather are considered confidential), including against false information (Scassa, 2018). This is a particular challenge when workers, customers, or debtors are shunned from market opportunities because of false information recorded in the databases on which AI algorithms base their assessment. Accounts of credit scoring systems shunning potential debtors from financial services because of erroneous information regarding birthdates or names are well known. With more and more matching taking place through algorithmic processes, these problems are likely to multiply as market participants do not have the possibility to verify and potentially contest the data recorded about their digital profiles (avatars). Extending the existing copyright framework to individual data may, however, be too strict as it would also prevent the development of efficiency-enhancing applications. Rather, a more balanced protection of ownership across different categories (data, data collection, algorithms, services) seems to be necessary to balance better the need for privacy and data truthfulness with business interests to innovate and develop new products and services.
In this regard, switching costs between networks prove to be a particularly challenging issue that locks customers to specific service providers (Stucke and Grunes, 2016, ch. 10). For instance, signing up and matching with potential employers on gig work platforms entail substantial costs for workers (Berg et al., 2018b). If the time and energy invested to create and update a profile on one particular platform cannot be transferred to another one (for instance, because the platform does not allow the download and transport of the entire network tree of a particular worker), this creates a significant position of dominance for the platform provider and distorts the terms of trade in its favor. Given that the value of such a network and the precision of the AI matching algorithm depend on the number of network members, the first to open a platform and attract members enjoys a substantial advantage over possible competitors and can pocket a significant share of the consumer surplus.

26 https://www.techworld.com/data/ip-rights-for-ai-who-owns-copyright-on-content-created-by-machines-3671082/.
Another issue arises from the ownership of the products and services produced by AI.
As mentioned, copyright law currently does not protect work produced by nonhumans (the famous monkey who reproduces a work by Shakespeare by accident). Therefore, AI-based creativity, for instance, in plastic arts or music, is currently not being covered by copyright law.
At the same time, individual software code used to produce these works is highly protected, limiting the replication, diffusion, and use of this software in a different, potentially more productive, context. Again, the market barriers that this creates are substantial, besides distorting the incentives for developing AI rather than benefiting from its outcomes. Specifically, this creates significant adoption barriers for applications of AI in low-income and developing countries, despite the potentially large benefits they could confer in these countries. In both cases, therefore - the portability of network information and the protection of copyrights - legislation is bound to evolve to recognize the new reality and to weigh the benefits of open competition against the challenge of worsening (pre)existing inequalities. Open source projects and the development of Creative Commons as an alternative to traditional copyrights can be seen as first steps in this direction.
Taken together, this discussion suggests moving the current intellectual property rights system away from the (strict) protection of upstream data input toward patents and copyrights on downstream products and services. This would help strengthen competition in the sourcing of new data, while giving data providers strong incentives and the means to ensure data truthfulness. Lower switching costs and stronger competition around algorithm development would erode current monopoly rents, while enforcement of rights at the level of the end consumer of digital products and services would shift business models back toward more traditional pricing models.
Potentially, this will require the development of data industry standards in order to allow a smooth interoperability of different data systems.
Social protection and taxation to tackle inequality and job polarization
Providing support for those in transition and ensuring social cohesion through a reduction in income inequality remains a key challenge given the AI-based wave of technological change.
In this regard, tax-benefit systems play a key role in helping workers to cope with transitions to new opportunities in different occupations, sectors, or locations. This seems particularly important in light of the reduction in labor mobility discussed above, which entails significant adverse consequences for the possibilities of workers to benefit from new opportunities. Besides limits to occupational mobility related to lack of skills or industry concentration, the tax-benefit system and other institutional barriers are hampering mobility as well. Portability of benefits, including within the same jurisdiction, is not always guaranteed, lowering incentives for people to move. Often, public providers of employment services (PES) are not well connected across different locations either, preventing jobseekers from getting to know about interesting opportunities. Information sharing is hampered by the lack of a digital infrastructure, outdated modes of information gathering and storage, or simply the use of incompatible standards across different branches of the social protection system. Besides a general investment in the digital infrastructure, AI-based matching tools could provide instruments to address some of these issues, provided that regulatory barriers to PES are lowered and incentives strengthened to make use of these services. Well-designed social protection systems, therefore, need to include elements of a strong and well-maintained digital infrastructure, portability of rights across occupational and geographical boundaries, and a proper incentive and support structure to help workers in successfully undertaking their transition to a new job opportunity.
Importantly, these social protection systems need to be well funded to provide a sufficient economic stimulus. Indeed, among the difficulties for a successful transition is the lack of a dynamic economic environment to stimulate structural change. Historically, social protection systems have played a significant role in providing insurance against large shortfalls in demand in such situations, and smoothing of aggregate demand thanks to social protection is typically the largest component among labor market policies in contributing to job creation (Ernst, 2015). In this regard, social protection systems can only function when they are supported by well-funded, stable government revenues. Given the increasing importance of superstar firms in the economy, taxing, and redistributing excess profits these firms earn will become increasingly important in ensuring that AI will not lead to an unequal society. Indeed, besides securing funding to social protection systems, an efficient tax system is also an important tool in addressing rising inequality. However, in this age of fast technological change and digitalization, using tax policies to address income inequality faces particular challenges. Technological changes are altering parts of the tax system in important and sometimes dramatic ways, providing both new risks for policymakers and tax administrations to ensure adequate and equal taxation. For instance, digitalization has accelerated the spread of global supply chains in which multinational enterprises integrate their worldwide operations. In this context, taxing rights on income generated from cross-border activities is a challenging task for policymakers.
These changes in the age of digitalization and globalization can exacerbate base erosion and profit shifting risks (see OECD, 2015b).
A possible solution to such base erosion is to move from resident taxation to a customer-based tax system (Falcão, 2018a). Such a system would allow tax revenues to be levied where they are generated (i.e., at the level of the individual customer), especially when a large part of the customer base is outside the resident country of the (content) provider. Moving toward a consumption-based tax system is, however, not without its own risks, as such taxes might exacerbate income inequalities. In this regard, some recent studies argue that despite their apparently regressive nature, general indirect taxes (e.g., value-added tax, sales tax) can potentially reduce income inequality, provided they lead to an increase in labor force participation rates (OECD, 2018; Ciminelli et al., 2017). Nevertheless, indirect taxation of digital contents might need to be complemented with new forms of corporate taxation, which can, when properly designed, stimulate innovation rather than deter it. As pointed out by Acemoglu et al. (2018), taxing incumbents rather than subsidizing their R&D activities can help strengthen innovation as it will force companies to either innovate or exit the market. Governments can stimulate market exit of low-innovation companies, thereby lifting innovation and productivity growth while still ensuring sufficient government revenues. In other words, responding to the need to properly tax the digital economy provides opportunities for changes in the tax system that can help maintain a stable tax revenue base while strengthening economic efficiency through higher labor force participation and stronger innovation incentives.
Among the most prominent implications of technological change is that it affects the prices of factors of production (including wages) and of produced goods. A possible way to address both rising income inequalities and skill-biased technological change, therefore, consists in introducing differential taxation to favor labor over capital. Low-skilled workers might benefit, for instance, from wage and hiring subsidies or tax credits, to keep labor demand high for this type of work. Alternatively, tax policies might focus on making capital more expensive, such as the much-discussed robot tax famously advocated by Bill Gates. Such a tax might help to generate significant fiscal revenues without distorting investment incentives, provided that the supply of capital or of inputs complementary to capital are sufficiently inelastic (Korinek and Stiglitz, 2017). More promising solutions include broad resource taxation such as carbon taxes, which would encourage resource-saving instead of labor-saving innovation. It would thus simultaneously address two of the most serious global problems, global climate change and inequality (Falcão, 2018b). Similarly, the elimination of tax deductions for interest and the imposition of a tax on capital would increase the cost of capital and induce more capital augmenting rather than labor-saving innovation. Nevertheless, given the challenges of taxing excess profits arising from digital technologies, alternative ways for a fair distribution of technological rents will need to be considered, which is what we turn to in the next section.
How to share technological rents more broadly?
Rather than trying to tax away excess profits, some policy proposals take issue directly with the way technological rents are currently being appropriated. Indeed, part of the growing inequality produced by the digital economy (and specifically by AI applications) relates to the fact that consumers share their data for free in exchange for "free services." This "zero marginal cost society" was long heralded as the new business model (Rifkin, 2014) but increasingly shows its limitations both in terms of people's privacy concerns and in terms of its economic and social impact, as discussed previously. One solution to address at least the economic side of the issue could be that consumers continue to share their data freely but restrict their use for specific purposes that provide only limited profit opportunities. As soon as a company expects to develop new, profitable products or services -for instance, thanks to medical information that is being shared -consumers' consent needs to be requested and rewarded, for instance, through participation in the expected profits. Given the essential role of data in building and training algorithms for AI tools, such a system could reestablish proper, marginal cost-based incentives in comparison to the current free data-free services model (Ibarra et al., 2018).
Such a reward not only rectifies the inequalities that arise from the current system but also maintains incentives for people to share their personal data, a prerequisite for new tools to be designed and developed.
Related to properly setting incentives for data sharing is the issue of data privacy and data control. As mentioned earlier, matching algorithms that rely on large, unstructured databases run the risk of establishing biased profiles of candidates -for instance, on gig or recruitment platforms -that limit employment opportunities and depress working conditions for (certain groups of) candidates, thereby perpetuating preexisting biases. In the case of algorithmic platforms, however, litigation processes are currently underdeveloped or absent, making it difficult to state a case against unfair treatment on these platforms. Several initiatives have already been started, involving social partners in supporting gig platform workers when faced with such a situation (Berg et al., 2018b). Policymakers and social partners will, however, need to become more alert to these developments, as new applications of AI in areas such as HR analytics are indicating that companies will increasingly cross-analyze large amounts of data in analyzing their workforce performance, some of which is likely to undermine existing national and international labor regulations (De Stefano, 2018). Concrete policy proposals in the use of data for particular purposes will need to be determined and negotiated among social partners on a case-by-case basis, but might involve the restriction of different sources of personal information to be matched for analytical purposes. In this regard, the application and impact of the recently introduced European legislation on General Data Protection Regulation (GDPR) will need to be closely monitored and analyzed to draw useful inferences for other countries and for specific labor market applications.
Several observers have suggested the adoption of policies to share productivity gains more broadly. The two most prominent suggestions are a reduction in working time (possibly combined with universal basic income support) and shared capital ownership by encouraging workers - either individually or through collective funds - to participate in capital gains and profits ("shared capitalism", Freeman et al., 2009; Freeman, 2016). Neither of these two policy proposals is specific to AI-driven technological change, but given the speed and extent to which AI seems to affect the economy, both proposals can rely on historical experience and might, therefore, easily be implemented and scaled up. A reduction in average working time comes at a moment when productivity gains have not been shared through shorter working weeks for several decades and when - in particular in advanced economies - labor supply is slowing or even decreasing, which might make such a policy difficult to defend politically. Profit sharing models have also been around as policy proposals for some time and have been implemented - gradually - among companies and countries in advanced economies (e.g., participation in France). At present, this proposal continues to face strong political resistance, not least because of the fear by capital owners of being restricted in their use of profits and investment. Empirical evidence shows, however, that such a policy could effectively reduce inequalities while at the same time improving company performance (Kurtulus and Kruse, 2017).
Large economies of scale and first-mover advantages from AI (as described above) run the risk of worsening the income gap not only within but also between countries. Convergence achieved over the past three decades by moving people in the developing world out of poverty thanks to increased access to technological transfer and international trade might be put at risk when few companies in advanced economies reap most of the benefits from new, AI-based technologies. In the absence of a fairer international system, many benefits that could accrue to low-income countries thanks to their significantly reduced price of capital might not materialize when leading innovating firms are setting up new barriers to the entry and diffusion of technologies. Many of the potentially development-enhancing AI applications discussed above are developed and patented in advanced economies, leaving access to their use and their benefits among rich nations. Developing countries, therefore, lose out from the benefits of AI on two fronts: first, by not having access to the tax income generated by innovating companies due to the particular way in which international tax treaties allow digital services to be taxed (Falcão, 2018a); and second, as discussed above, by not having an open access to patented AI applications that would be particularly beneficial for their economic development. As patents are creating a legal monopoly (albeit temporarily), this reinforces the first-mover advantages of digital innovations such as AI, to the detriment of those countries that have less capacity to develop these systems for themselves. Besides open AI approaches previously mentioned, this also calls for action by international development agencies in supporting the implementation of AI and big data strategies in developing countries, helping them to access and develop these technologies for their own national benefit and supporting their diffusion among both private and public actors that would significantly benefit the delivery, implementation and monitoring of policies, such as upholding international labor standards (Grabher-Meyer and Gmyrek, 2017).
Outlook and open questions
The current wave of applications based on AI promises to be the largest and most widely ranging technological change observed over the past decades. Its general purpose nature that allows this new technology to be applied in a large span of sectors and occupations, irrespective of the skill level of the involved workforce, creates a broadly shared fear of job loss and control over people's lives. Previous experience with automation, in particular stemming from robotization over the last three decades, seems to suggest that this new wave of technological change might bring significant challenges, especially to developing countries as they face both automation and re-shoring of existing tasks and thereby lose their advantage of lower labor costs that were underpinning their development model over the recent past.
This paper has argued that there are significant opportunities arising from these new, AI-based technologies, including for developing countries, and that the risks, rather than being on the side of job losses, are linked to further worsening income inequalities, both within and across countries. The particular digital nature of AI makes it easy to diffuse but creates large first-mover advantages that can contribute to further rising market concentration and inequality. At the same time, its versatility and general purpose nature allow the creation of expert systems that are potentially beneficial in a large range of occupations, even among low-skilled or low-productivity ones. In this respect, the paper has argued that, given the large reduction in capital costs brought about by AI applications, together with the fact that the direction of technological change is, in part at least, driven by the relative supply of low- vs. high-skilled labor, developing countries stand to benefit from AI, provided it diffuses widely and technological rents are broadly shared.
For the opportunities to exceed the risks, however, policies need to be adjusted at both the national and the international levels. This paper argues that skills policies in and of themselves, albeit necessary, will not be sufficient in this regard. Policymakers and social partners need to ensure that individual companies cannot gain market dominance, thereby excluding users from their algorithm or maintaining and replicating existing biases. The paper argues that a different way of protecting data is required, giving people more control over their individual information. In addition, existing initiatives such as those undertaken by social partners in the platform economy need to be developed further and implemented more widely. At the international level, a better sharing of the benefits of the new digital economy, possibly through an adjustment in international tax treaties, will also be necessary to prevent digital companies from undermining a country's fiscal revenue base. Finally, long-standing policy proposals for a fairer global economy should be brought to new life in the light of the significant economic rewards that AI-based innovations promise. This includes a continuous reduction in working hours, especially among those countries where long hours are still the norm, as well as sharing the receipts of innovation rents through profit sharing policies that have already been successfully implemented in some countries in the past.
Given the novelty of this technological innovation, a continuous observation and monitoring of its applications and impact will be necessary, both by national and international actors. Several possible consequences can already be distinguished, such as those discussed in this paper. Others, in particular regarding the specific impact AI-based innovations will have on workplace organization and the employment relationship more broadly, remain highly uncertain. As the technology is evolving quickly, new risks and opportunities might arise that will require constant regulatory adjustment to ensure that technological rents are broadly shared. Also, constant exchanges among policymakers and regulators are necessary to avoid regulatory capture, as well as proper support for local actors to benefit from the advantages of AI. The international community and the ILO, in particular, are well suited to provide this important platform for exchange and experience and to support countries and social partners in adjusting their regulations, as well as negotiation with the necessary information and policy recommendations.
Several questions on the potential long-term consequences of the development of AI arise.
One concerns the particular form that AI will take in the future and whether humans will be able to apprehend the decisions being taken by machines. As discussed, current applications of AI run the risk of replicating biases from human decisions (e.g., in hiring). This poses obvious ethical questions, in particular as these applications no longer allow a transparent account of how the decisions have been taken. Recent developments in this area that rely on a different methodology (genetic algorithms rather than neural networks) might offer a more transparent alternative, but for the moment it is too early to assess their full potential. Another, more fundamental, question concerns the shift from automating workforce to automating "brain force," with machines (autonomously) acquiring new skills and competencies at a much faster pace than humans will be able to. If such a shift from specific AI (as discussed in this paper) to general AI takes place, human capital will no longer be the constraining factor in the technological evolution, which could happen much faster than before. In other words, evolution would no longer be constrained by "biological computers" (i.e., humans) but could move to machines, a vision recently put forward by Harari (2016). Ultimately, however, the type of AI algorithms that will be used and the decision to develop and implement general AI -independently of its technical feasibility -will eventually be determined by policymakers and customers who might deliberately vote and decide against some of the more harmful manifestations of AI. For the moment, at least, it remains the case that robots cannot vote.
Availability of data and materials
The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.
Competing interests
The authors declare that they have no competing interests.
|
v3-fos-license
|
2023-11-17T05:21:43.388Z
|
2023-10-01T00:00:00.000
|
265218761
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://sciendo.com/pdf/10.2478/jccm-2023-0031",
"pdf_hash": "859554a4fcdb536763f8bf6929214a1757652914",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46608",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "859554a4fcdb536763f8bf6929214a1757652914",
"year": 2023
}
|
pes2o/s2orc
|
Is Carboxyhaemoglobin an Effective Bedside Prognostic Tool for Sepsis and Septic Shock Patients?
Abstract Introduction Proper management of sepsis poses a challenge even today, with early diagnosis and targeted treatment being the most important steps. Easy, cost-effective bedside tools are needed in order to pinpoint towards the outcome of sepsis or septic shock. Aim of study This study aims to find a correlation between Sequential Organ Failure Assessment (SOFA), Acute Physiology and Chronic Health Evaluation II (APACHE II) and Simplified Acute Physiology Score II (SAPS II) severity scores, the Neutrophil-Lymphocytes Ratio (NLR) and carboxyhaemoglobin (COHb) levels in septic or septic shock patients with the scope of establishing a bed side cost-effective prognostic tool. Materials and methods A pilot, prospective, observational, and ongoing study was conducted on 61 patients admitted with sepsis or septic shock according to the SEPSIS 3 Consensus definition. We followed clinical and paraclinical parameters on day 1 (D1) and day 5 (D5) after meeting the inclusion criteria. Results On D1 we found a statistically significant positive correlation between each severity score (p <0.0001), r = 0.7287 for SOFA vs. APACHE II with CI: 0.5841–0.8285, r = 0.6862 for SOFA vs. SAPS II with CI: 0.5251–0.7998 and r = 0.8534 for APACHE II vs. SAPS II with CI: 0.7663 to 0.9097. On D5 we observed similar results: a significant positive correlation between each severity score (p <0.0001), with r = 0.7877 for SOFA vs. APACHE II with CI: 0.6283 to 0.8836, r = 0.8210 for SOFA vs. SAPS II with CI: 0.6822 to 0.9027 and r = 0.8880 for APACHE II vs. SAPS II., CI: 0.7952 to 0.9401. Nil correlation was found between the severity scores, NLR and COHb on D1 and D5. Conclusion Cost-effective bedside tools to pinpoint towards the outcome of sepsis are yet to be found, however the positive correlation between the severity scores point out to a combination of such tools for prognosis prediction of septic or septic shock patients.
Introduction
In 2016 the SEPSIS 3 Consensus updated the definition of sepsis and septic shock. Thus, sepsis represents a life-threatening organ dysfunction caused by a dysregulated host response to infection. Moreover, septic shock, a subset of sepsis, is defined by profound circulatory, cellular, and metabolic abnormalities, and a greater risk of mortality than sepsis [1,2]. Early diagnosis, infection control and aggressive resuscitation are part of the globally standardized management [3][4][5][6].
Early assessment of the severity of sepsis and its prognosis remains imprecise. The scarcity of comparative studies of different prognostic markers such as C-reactive protein (CRP), procalcitonin (PCT) and prognostic scores such as SOFA, Acute Physiology and Chronic Health Evaluation II (APACHE II) and Simplified Acute Physiology Score II (SAPS II) requires further investigation.
Starting with the year 2000, Roman Záhorec studied more closely the ratio between neutrophils and lymphocytes (Neutrophil to lymphocyte ratio - NLR) and its predictive role regarding the evolution of sepsis [7,8]. It can be easily calculated by dividing the number of neutrophils by that of lymphocytes. Neutrophils contribute to the innate immune response through phagocytosis and the release of cytokines, whilst lymphocytes, representing the adaptive immune response, decrease in number under stress conditions. This decrease during inflammation is due to demargination and accelerated apoptosis [8].
Neutrophilia and lymphocytopenia are reactions to physiological stress; thus, NLR represents the balance between the innate and adaptive immune response [8,9]. However, NLR cannot pinpoint the exact cause of the patient's condition; high values are found in severe inflammation, trauma, major surgery and neoplasm, and are associated with high mortality and morbidity [3,8]. NLR is increased by any source of physiologic stress and might be used in sorting the "sick versus not sick" [9].
Optimal cut-off values for measuring stress intensity and inflammatory response have been refined according to clinical studies and observations. The normal cut-off value of NLR is approximately 1-3, and values increase in proportion to the degree of physiological stress, especially in septic shock [9,10]. Values above 3 and below 0.7 are pathological and are associated with significant mortality and morbidity [8]. The gray area, which corresponds to an NLR of 2.3-3.0, raises the suspicion of a latent, subclinical inflammation. Values between 3 and 17 represent different degrees of inflammation; septic shock is found between 17 and 23, while critical systemic inflammation, terminal cancer, major surgical interventions and polytrauma correspond to NLR ≥ 23.
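As a rough illustration only, the short Python sketch below computes the ratio from absolute cell counts and maps it onto the NLR-meter bands described above; the band boundaries are taken from the text, while the function names, labels and example values are hypothetical.

```python
def nlr(neutrophils: float, lymphocytes: float) -> float:
    """Neutrophil-to-lymphocyte ratio from absolute counts (same units)."""
    if lymphocytes <= 0:
        raise ValueError("lymphocyte count must be positive")
    return neutrophils / lymphocytes


def nlr_band(value: float) -> str:
    """Map an NLR value onto the bands of Zahorec's NLR-meter as summarized above."""
    if value < 0.7:
        return "pathologically low"
    if value < 2.3:
        return "approximately normal"
    if value < 3.0:
        return "gray area: latent, subclinical inflammation"
    if value < 17.0:
        return "mild to severe inflammation"
    if value < 23.0:
        return "range typically seen in septic shock"
    return "critical systemic inflammation / supraphysiological stress"


# Example: 12.6 x 10^9/L neutrophils and 0.9 x 10^9/L lymphocytes give NLR = 14,
# which falls into the "mild to severe inflammation" band.
print(nlr_band(nlr(12.6, 0.9)))
```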
The role played by the liver in sepsis has been thoroughly studied across the years; the inhibition of hepatocyte clearance of bilirubin and the elevation of transaminase levels mirror the intensity of liver function impairment [11]. The matter in question is represented by the large amount of carbon monoxide (CO) produced by the liver in sepsis, by catabolism of heme via the heme oxygenase-1 (HO-1) pathway. CO is detected in blood as carboxyhemoglobin (COHb) and as CO excreted in breath [12].
Therefore, COHb levels might be utilized as a bedside prognostic tool to monitor the progression of sepsis and could provide early information regarding the outcome of both bacterial and viral infection [13].
APACHE II, SOFA and SAPS II scores are well-known mortality predictors calculated for septic patients in the Intensive Care Unit (ICU) to assess disease severity, treatment response, and mortality risk [10,14].
This study aims to find an easy, quick, bedside and cost-effective tool for predicting sepsis and septic shock outcome by correlating the SOFA, APACHE II and SAPS II severity scores with NLR and COHb levels, in order to guide the clinician on the evolution of the disease.
Materials and methods
This is a pilot, prospective, observational, and ongoing study conducted on a number of 61 patients admitted to the Anesthesia and Intensive Care Department of the Târgu Mureș Emergency Clinical County Hospital, Mureș County, Romania between July 2021 and September 2022 (Figure 1).
The inclusion criteria were: age above 18 years and diagnosis of sepsis or septic shock according to the SEPSIS 3 Consensus.The exclusion criteria were: current neoplasia, current chemo-or radiotherapy, corticosteroid treatment or immunosuppressive medication, or evidence of autoimmune disorders.
Patient data were obtained on Day 1 (D1) and Day 5 (D5) after meeting the criteria of diagnosis of sepsis or septic shock.Clinical and paraclinical parameters followed were: blood count, biochemical blood tests, arterial blood gas (ABG) analysis, serial bacteriological tests, along with calculating the severity scores (SOFA, APACHE II and SAPS II).The need for vasoactive medication, along with the ventilation parameters were recorded.COHb was determined by an arterial puncture using a standard heparinized syringe (Stat Profile Prime Plus, Manufacturer: Nova Biomedical, Waltham, MA 02454-9141 USA, year of manufacture 2018).All the obtained data were recorded in a database.
Regarding the interpretation of Neutrophil to lymphocyte ratio presented in our study, we used the NLRmeter implemented by Roman Záhorec [8].
This study was conducted with the approval of the Hospitals Ethics Committee approval no 5416/25.02.2021 for septic patients.The General Data Protection Regulation (GDPR) agreement was respected, and the obtained data was used for research purposes only.
Statistical Analysis
The obtained data were recorded in a database and statistically analyzed using GraphPad Prism 8. Data series normality was tested using the D'Agostino & Pearson test. Descriptive statistics are reported as median, minimum, maximum, percentiles (25th, 75th) and interquartile range. For each day (D1 and D5), we performed correlation analysis (Pearson and Spearman two-tailed correlation tests) between each severity score (SOFA, APACHE II and SAPS II), NLR and COHb values. All statistical tests used a significance threshold of p = 0.05.
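The analysis above was performed in GraphPad Prism; purely as an illustration, an equivalent workflow could be sketched in Python with pandas and SciPy (the file name and column labels below are hypothetical, not from the study).

```python
import pandas as pd
from scipy import stats

df = pd.read_csv("sepsis_day1.csv")  # hypothetical file with one row per patient
variables = ["SOFA", "APACHE_II", "SAPS_II", "NLR", "COHb"]

# Descriptive statistics: median, min, max and 25th/75th percentiles
print(df[variables].describe(percentiles=[0.25, 0.75]))

# D'Agostino & Pearson omnibus normality test for each data series
for col in variables:
    stat, p_norm = stats.normaltest(df[col].dropna())
    print(f"{col}: normality p = {p_norm:.4f}")

# Pairwise two-tailed correlations; Spearman shown here, Pearson is analogous
for i, a in enumerate(variables):
    for b in variables[i + 1:]:
        rho, p = stats.spearmanr(df[a], df[b], nan_policy="omit")
        print(f"{a} vs {b}: rho = {rho:.3f}, p = {p:.4f}")
```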
Results
The average age of patients was 68 years, minimum age was 33 years, maximum age 90 years old (Figure 2).The majority of cases were situated in the 60 -80 years interval, particularly due to decreased immunity and the presence of comorbid disorders, increasing the frailty of patients.The distribution by gender compiled 23 females and 38 males.Out of the 61 patients included in this study, on the first day of enrollment 24 presented with septic shock and 37 with sepsis, and of all the patients 15 survived (Figure 3).Descriptive statistics for SOFA, APACHE II, SAPS II, NLR and COHb on D1 are displayed in Table 1.
On D1 we found a statistically significant positive correlation between each severity score (p < 0.0001), r = 0.7287 for SOFA vs. APACHE II with CI: 0.5841-0.8285, r = 0.6862 for SOFA vs. SAPS II with CI: 0.5251-0.7998 and r = 0.8534 for APACHE II vs. SAPS II with CI: 0.7663 to 0.9097.
We found no correlation between the severity scores and either NLR or COHb levels; however, we found a statistically significant negative correlation between NLR and COHb levels with a CI: -0.4815 to 0.0044956, r = -0.2543 (Figure 4 - Figure 7).
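For readers who wish to reproduce confidence intervals of the kind reported for these correlation coefficients, the usual approach is the Fisher z-transformation; the minimal sketch below assumes the day-1 sample of 61 patients and is for illustration only.

```python
import math

def pearson_ci(r: float, n: int, alpha: float = 0.05):
    """Approximate (1 - alpha) confidence interval for a Pearson correlation
    coefficient using the Fisher z-transformation."""
    z = math.atanh(r)              # Fisher transform of r
    se = 1.0 / math.sqrt(n - 3)    # standard error of the transformed value
    crit = 1.96                    # ~97.5th percentile of the standard normal
    lo, hi = z - crit * se, z + crit * se
    return math.tanh(lo), math.tanh(hi)

# SOFA vs. APACHE II on day 1: r = 0.7287 with n = 61 patients gives
# approximately (0.584, 0.829), close to the interval reported above.
print(pearson_ci(0.7287, 61))
```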
Descriptive statistics for SOFA, APACHE II, SAPS II, NLR and COHb on D5 are displayed in Table 2.
On D5 we observed a statistically significant positive correlation between each severity score (p < 0.0001), with r = 0.7877 for SOFA vs. APACHE II with CI: 0.6283 to 0.8836, r = 0.8210 for SOFA vs. SAPS II with CI: 0.6822 to 0.9027 and r = 0.8880 for APACHE II vs. SAPS II with CI: 0.7952 to 0.9401.
Nil correlation was found between the severity scores, NLR and COHb on D5 (Figure 8 -11).
In regards of mortality, we evaluated the predictability of evolution of either sepsis or septic shock by observing the changes in NLR and COHb from D1 to D5 and we found the following in Table 3. Survivors of either sepsis or septic shock presented an improvement in the NLR and COHb levels from D1 to D5.In non-survivors we observed a decrease of NLR from D1 to D5 whilst COHb levels increased.Also, in 16 nonsurviving patients we managed to record data only on D1 due to death before D5 of study inclusion.
We studied the evolution of both NLR and COHb levels on D1 and D5 for sepsis and septic shock in survivors and non-survivors, and the results are illustrated in Figure 12 -15.
On D1 of study inclusion, we found that most sepsis non-survivors presented with NLR values over 23, characterized as critical systemic inflammation, maintaining high values in D5 as well.NLR values on D1 of sepsis survivors ranged between 3 and 23, with a decrease towards D5.
On the other hand, septic shock survivors and non-survivors presented high values of NLR throughout D1 to D5; no patients presented values between 0.1 and 0.7 or between 2 and 3, the latter corresponding to the gray area of latent, subclinical, low-grade inflammation.
For COHb, we used increments of 0.5% to categorize the groups of patients, the reference range values from the ABG analyzer were between 0.5 -1.5%.For sepsis non-survivors we observed that on D1 the majority presented values of COHb between 1 -1.5%, whilst sepsis survivors were more uniformly distributed between 0.5% and 2%.
Regarding septic shock patients, most non-survivors presented values between 1 -1.5% in D1 of establishing the criteria for septic shock.In comparison with sepsis survivors, we observed values of COHb in the range of 2 -2.5% on D1 for septic shock survivors.
A more comprehensive view of the pathology of pa-tients included in the study is achieved by examining the site of infection, cause and the pathogens involved.
Regarding the site of infection, the majority of patients included in the study presented pulmonary and abdominal infections (Table 4), hence the predominant aetiology was bronchopneumonia and peritonitis.Pathogens involved in the infectious process vary greatly, highest incidence is represented by Acinetobacter baumanii, followed by Klebsiella pneumoniae with its forms and Pseudomonas aeruginosa (Table 5).
Discussions
Proper management of sepsis poses a challenge even today, with early diagnosis and targeted treatment being the most important steps. Easy, cost-effective bedside tools that can point towards the outcome of sepsis and septic shock have been the pinnacle of research for many years, especially since the world economy has suffered severe headwinds amid weak growth prospects and heightened uncertainties [15,16]. In a fast-paced society, easy-to-read tests should be considered an adjunct to the physician's assessment. The severity scores are still a reliable tool used in the ICU for assessing the prognosis of either sepsis or septic shock by evaluating disease severity, response to treatment and risk of mortality. The importance of appreciating the evolutionary path of the pathology plays an important role in effectively prescribing the needed medication. On the first and fifth day of our study we found a positive correlation between each severity score (SOFA, APACHE II and SAPS II), results that are supported by recent scientific literature [14,17]. However, the debate concerning the accuracy of predicting morbidity and mortality remains, since the scores reflect the state of disease at a certain moment of the patients' stay in the ICU. Therefore, constant reevaluation of either SOFA, APACHE II or SAPS II, independently or together, ought to be done. Nonetheless, the severity scores do not take into consideration other factors, such as lifestyle, medications received or the quality of follow-up care, when estimating the long-run morbidity and mortality of septic or septic shock patients [18].
Fluid resuscitation and vasoactive agents are part of the treatment of sepsis and septic shock; the impact of hemodynamic instability is also translated as an impairment of microcirculation. Liver dysfunction occurs synergistically with tissue hypoxia and impaired hepatic microcirculation [19]. Production of endogenous CO is the result of heme catabolism by heme oxygenase enzymes [12]. Disruption of heme metabolism and liver dysfunction associated with sepsis lead to an increase of COHb levels, mainly due to oxidative stress, hypoxia, cytokines, endotoxins, and inflammatory mediators [13,20]. COHb is a parameter measured frequently in the ICU during routine arterial blood gas analysis; an abrupt change of trend in COHb levels could lead the physician to notice a new course of the disease [12].
Although we observed no statistically significant correlation between COHb level and severity scores, on D1 of meeting the criteria for sepsis the majority of non-surviving patients presented elevated levels of COHb. Septic shock patients, both survivors and non-survivors, presented elevated COHb levels on D1 of inclusion in the study.
Literature on COHb and the role it plays in sepsis is scarce; an overview on PubMed returns 34 results from 1974 to 2023. R. Palmieri and V. Gupta published in 2023 a review regarding carboxyhemoglobin toxicity stating that carbon monoxide, after displacing oxygen from hemoglobin, decreases its oxygen-carrying capacity, causing tissue hypoxia and acidosis, and plays a role in inhibiting aerobic metabolism by binding to mitochondrial cytochrome oxidase [21].
Another study, published in 2023 by G. Vadar and E. Ozek, examined COHb levels in late-onset sepsis in preterm neonates and found increased levels of COHb at the beginning of sepsis that decreased in response to antibiotics; the variation of COHb, when used in conjunction with other sepsis biomarkers, could predict the outcome of sepsis [22]. Our study found an increase in COHb levels from D1 to D5 in both sepsis and septic shock non-survivors, results comparable to recent literature [13,22].
The NLR was introduced as a prognostic tool for sepsis and septic shock because of its simplicity and cost-free [23].Calculated as a ratio between the neutrophil and lymphocyte counts in peripheral blood, it found its usefulness in predicting outcomes in oncology patients [24].
Drewry et al. found that persistent and profound immunosuppression prior to succumbing to sepsis or septic shock manifests as a low count of circulating lymphocytes on the fourth day following the diagnosis of sepsis and could predict short- or long-term survival [25]. Our study identified, on D1, values of NLR over 23 in 11 sepsis non-survivors, corresponding to critical systemic inflammation and supraphysiological stress [7,8].
Another study, by Buonacera A. et al., suggested that an increase of NLR within 6 hours of acute physiological stress supports the use of NLR as a marker of acute stress that reacts before other laboratory parameters, such as C-reactive protein or white blood cell count [26]. We found increased values of NLR on D1 in 16 patients who succumbed before D5 of the study, suggesting a profound inflammatory status, a finding also reported in a recent meta-analysis by Huang Z. and collaborators, which describes high NLR values as being associated with poor prognosis [27]. An exaggerated immune response associated with cytokine storm, impaired microcirculation causing alteration of mitochondrial function, and massive tissue damage are responsible for untimely death [28].
Drăgoescu A. and collaborators, in a study published in 2021, described NLR as a more reliable tool for identifying patients with more severe forms of sepsis when comparing it to the SOFA severity score [4]. In our study we found statistically significant positive correlations on D1 and D5 between each severity score (SOFA, APACHE II and SAPS II), and no correlation between NLR and the severity scores. However, the high values of NLR on D1 for both sepsis and septic shock non-survivors and the correlations we found between the severity scores suggest profound immune system derangement.
CO is an endogenous gas found in exhaled human air, studied since the 1920s, when it was attributed to pollution and smoking. Its involvement in inflammation, cell death, and metabolism control is well established [29]. It is mainly produced in the liver by catabolism of heme via the HO-1 pathway, along with other products of heme degradation such as free ferrous iron and biliverdin [31]. Other sources of CO production are myoglobin, cytochromes, peroxidases, and catalase, contributing 20-25% of the total amount of endogenous CO [30,32]. CO is detected in blood as COHb and by CO excretion in breath. In sepsis, the microcirculation of the liver is impaired, hence the intense degradation of heme and increased production of CO. COHb measurement by arterial puncture is a fast, easy, and cost-effective method of obtaining information on liver function impairment caused by tissue hypoxia and affected microcirculation. With regard to exogenous CO intoxication, debate is ongoing, with studies supporting and contesting the use of COHb as a useful method to detect CO toxicity [29,30]. Our study had a number of limitations: a small number of enrolled patients and a single-center design. This is a pilot and ongoing study aimed at establishing different bedside prognostic tools for sepsis and septic shock.
Conclusions
NLR and COHb levels are straightforward biomarkers, easy to calculate and cost-effective, that can offer a perspective on the complex relations of the immune process: inflammation and immunity.
Increased levels of CO produced by the liver, tracked by following the variation of COHb rather than cut-off values, should alert the physician to worsening of the condition, even if the patient is a smoker and has higher COHb values per se.
A combination of prognostic tools should be utilized when aiming to predict the evolution of sepsis or septic shock. Future in-depth studies should focus on identifying the power of NLR and COHb levels as mortality predictors in sepsis, as well as considering other easily available biomarkers.
Figure 1. Visual description of the study (N - number of patients; SOFA - Sequential Organ Failure Assessment; APACHE II - Acute Physiology and Chronic Health Evaluation II; SAPS II - Simplified Acute Physiology Score II)
Table 2. Descriptive statistics on D5 (SOFA, APACHE II, SAPS II, NLR, COHb).
SOFA - Sequential Organ Failure Assessment; APACHE II - Acute Physiology and Chronic Health Evaluation II; SAPS II - Simplified Acute Physiology Score II; NLR - Neutrophil-Lymphocyte Ratio; COHb - carboxyhemoglobin.
|
v3-fos-license
|
2021-09-03T13:10:28.738Z
|
2021-08-27T00:00:00.000
|
237392456
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fcvm.2021.710334/pdf",
"pdf_hash": "7520375a41d1393acbb3a659e2e50a4ec4c03032",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46610",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "7520375a41d1393acbb3a659e2e50a4ec4c03032",
"year": 2021
}
|
pes2o/s2orc
|
The Role of Transthoracic Echocardiography in the Evaluation of Patients With Ischemic Stroke
Background: Ischemic stroke can be classified into five etiological types, according to the Trial of Org 10172 in Acute Stroke Treatment (TOAST) classification, and its adequate investigation and characterization can aid in its clinical management and in preventing new events. Transthoracic echocardiography (TTE) plays a key role in investigating its etiology; approximately one-third of the patients remain without an adequate definition of the etiology or are classified as the undetermined TOAST type. Objectives: To evaluate if the percentage of patients with indeterminate etiology according to the TOAST classification decreased after transthoracic echocardiography, to determine whether or not the prognosis after ischemic stroke is worse among patients classified as the undetermined TOAST type, and to verify the predictive capacity of echocardiography on the prognosis after ischemic stroke. Methods: In this retrospective cohort study, clinical, neurological, and echocardiographic examinations were conducted when the patient was hospitalized for stroke. In-hospital mortality and functional capacity were evaluated at hospital discharge and 90 days thereafter. Multiple linear regression and multiple logistic regression models were adjusted for confounding factors. The level of significance was 5%. Results: A total of 1,100 patients (men = 606; 55.09%), with a mean age of 68.1 ± 13.3 years, were included in this study. Using TTE, 977 patients (88.82%) were evaluated and 448 patients (40.7%) were classified as the undetermined TOAST type. The patients who underwent TTE were 3.1 times less likely to be classified as the undetermined TOAST type (OR = 0.32; p < 0.001). Echocardiography during hospitalization was a protective factor against poor prognosis, and reduced the odds of in-hospital death by 11.1 times (OR: 0.090; p < 0.001). However, the presence of the undetermined TOAST classification elevated the chance of mortality during hospitalization by 2.0 times (OR: 2.00; p = 0.013). Conclusions: Echocardiography during hospitalization for ischemic stroke reduces the chances of an undetermined TOAST classification and the risk of in-hospital mortality. However, being classified as the undetermined TOAST type increases the chance of mortality during hospitalization, suggesting that evaluating patients using echocardiography during hospitalization for acute ischemic stroke is important.
INTRODUCTION
A stroke is characterized by an acute neurological deficit attributed to a focal lesion of vascular origin in the central nervous system (CNS), which may be secondary to an ischemic infarction, or a parenchymal or subarachnoid hemorrhage (1). A CNS infarction is defined as the death of the brain, retinal, or spinal cord cells due to ischemia, as confirmed by pathological evidence on imaging examination. A CNS infarction may also be defined by other evidence of injury to the vascular territory or by the persistence of symptoms for more than 24 h after excluding other causes (1).
It is estimated that during their lifetime, one in six men and one in five women present with stroke (2), which is the second leading cause of death and is responsible for approximately one in eight deaths worldwide (3). In Brazil, stroke is the second leading cause of death and the leading cause of disability (4,5).
Stroke can be classified according to the pathology, etiology, and clinical presentation (6). According to the pathological classification, a stroke may be hemorrhagic or ischemic, with the latter corresponding to 80% of the total stroke cases. Etiologically, ischemic stroke can be categorized into five types, according to the Trial of Org 10172 in Acute Stroke Treatment (TOAST) classification: (1) large-artery atherosclerosis, (2) cardioembolism, (3) small-vessel occlusion, (4) stroke of other determined etiology, and (5) stroke of undetermined etiology (7). The proportion of patients in each group differed among the studied populations. The definition of the etiological mechanism is important for evaluating severity, progression, and prognosis. The cardioembolic type is responsible for 14-30% of ischemic conditions and has higher mortality, greater severity, and worse functional outcome compared to the other etiologies (8,9); the affected patients are predisposed to early recurrence. Atrial fibrillation (AF) is the main finding associated with this type of stroke.
Rücker et al. studied 3,346 ischemic stroke patients to determine the long-term survival and recurrence after ischemic stroke according to the etiological subtype (the TOAST classification) in a population-based stroke registry in Germany. Their study showed that the 5-year survival rate was higher in patients with stroke due to the occlusion of the small arteries, and lower in patients with cardioembolic stroke. Furthermore, the 5-year recurrence rates were lower in women with stroke due to small artery occlusion, and in men with large artery atherosclerosis. The highest recurrence rates, in both women and men, were seen in indeterminate stroke (10). Existing literature still reports a certain degree of conflict in the clinical prognosis, mortality, and recurrence rate in the undetermined TOAST type, and this can be attributed to the heterogeneity of this etiological subtype, which comprises different pathophysiological mechanisms.

Abbreviations: TOAST, Trial of Org 10172 in Acute Stroke Treatment; TTE, transthoracic echocardiography; mRS, Modified Rankin scale; OR, odds ratio; 95% CI, 95% confidence interval; LA, left atrium diameter; LVM, left ventricular mass; LVEF, left ventricular ejection fraction by the Teichholz method; ASC, alteration of segmental contractility; LVH, left ventricular hypertrophy; LVR, left ventricle remodeling; SDD, severe diastolic dysfunction; MoDD, moderate diastolic dysfunction; MiDD, mild diastolic dysfunction; S AoV Insuf, severe aortic valve insufficiency; Mo AoV Insuf, moderate aortic valve insufficiency; Mi AoV Insuf, mild aortic valve insufficiency; NIHSS, National Institute of Health Stroke Scale; LACS, lacunar syndromes; PACS, partial anterior circulation syndromes; POCS, posterior circulation syndromes; TACS, total anterior circulation syndromes; ACEI, angiotensin-converting enzyme inhibitor; ARB, angiotensin II receptor blocker.
The cardiovascular risk profile and echocardiographic findings in patients with AF detected after a stroke are comparable to those of patients previously diagnosed with AF, but differ from those of patients without AF. Preexisting heart disease is the major cause of AF and is first diagnosed after a stroke (11).
Some disorders are considered to be high-risk sources for the cardioembolic type, such as mitral stenosis, heart valve prosthesis, myocardial infarction in the previous 4 weeks, mural thrombus in the left cavities, left ventricular aneurysm, any documented history of permanent or transient fibrillation or atrial flutter with or without spontaneous contrast echocardiogram or left atrial thrombus, sinus node disease, dilated cardiomyopathy, ejection fraction <35%, endocarditis, intracardiac mass, patent foramen ovale with in situ thrombosis, and patent foramen ovale associated with pulmonary thromboembolism or peripheral venous thrombosis prior to the ischemic stroke (12). Furthermore, with regard to structural heart diseases, four studies considered left ventricular dysfunction defined as recent heart failure, a 25% reduction in left ventricular ejection fraction, and an ejection fraction inferior to 50% as independent risk factors for stroke, despite a population overlap in three of the four studies. Two of the studies also considered ventricular hypertrophy and a left ventricular mass >110 g/m² in women and 134 g/m² in men as independent risk factors for stroke (13).
Left atrial enlargement is an independent factor for stroke and is associated with a 20% chance of thromboembolism per year in the presence of a left atrium >2.5 cm/m² with moderate to severe left ventricular contractility changes (14).
Left ventricular dysfunction and left atrial size were the strongest independent predictors of late thromboembolism. Patients without these two predictors on echocardiography, or without the three identified clinical predictors of thromboembolism (history of hypertension, recent heart failure, and previous thromboembolism) had a low risk of thromboembolism (1% per year). However, patients with no thromboembolism predictors but with one or both echocardiographic predictors had a 6% risk of stroke per year, showing that in addition to clinical assessment, echocardiography can stratify patients with AF and guide their therapy (15).
Despite all investigations, about a third of ischemic stroke patients cannot be categorized etiologically and are classified as the undetermined TOAST type, which can comprise potential cardiac sources of embolism, atherothrombotic causes, and cerebral embolism from indeterminate sources (16). The American Heart Association and American Stroke Association (AHA/ASA) guidelines recommend echocardiography for evaluating a patient with ischemic stroke only in selected cases (class IIa/class IIb). A recent study published by Harris et al. aimed to investigate the utility of transthoracic echocardiography (TTE) as a part of an acute ischemic stroke workup and revealed that the overall yield of TTE in acute ischemic stroke was low (17). Conversely, TTE has been performed as part of the assessment of stroke patients in recent years. More recently, point-of-care ultrasound (POCUS) has increased its field of application and TTE has been used as a screening method in the stroke unit (18). Robust registries, such as The Cornell Acute Stroke Academic Registry (CAESAR), routinely perform echocardiography as a strategy for evaluating patients with ischemic stroke (19).
Although current literature has not been able to clarify the role of echocardiography in the routine examination of patients with ischemic stroke, it is an important investigation in them. It is an easily available, non-invasive, relatively inexpensive method, which is easy to perform in centers that have integrated stroke and cardiology units, providing information that can change both the treatment and the understanding of the etiological mechanism of stroke.
Therefore, the objectives of this study were to assess the following: (1a) Whether or not the percentage of patients with ischemic stroke classified as the undetermined TOAST type decreased as a result of echocardiographic examination, (1b) Whether or not the prognosis after ischemic stroke is worse in patients with an undetermined TOAST classification, and (2) The predictive capacity of echocardiography in determining the prognosis of patients with ischemic stroke.
It was hypothesized that transthoracic echocardiography, in the routine investigation of patients with ischemic stroke, permits better etiological assessment, and consequently, improves the prognosis after the event.
Study Design
This retrospective cohort study was performed at the Stroke Unit (SU) of the Clinical Hospital of the School of Medicine of Botucatu (HC-FMB-UNESP), and included 1,100 inpatients diagnosed with ischemic stroke. Data collection was conducted at two time points: at hospital admission and 90 days after hospital discharge.
The study was approved by the Research Ethics Committee (REC) of the School of Medicine of Botucatu under no. 2,698,569.
Study Population
The sample size was estimated based on simple random sampling, with a normal distribution for the numerical outcomes, type I error = 0.05, and, of all possible associations, the association between left ventricular remodeling (one of the echocardiographic examination variables) and an unfavorable modified Rankin scale (mRS) at 90 days to estimate the test power. Based on the descriptive findings obtained from this association, the test power was estimated to be above 80% for the analyzed association, indicating that the sample size to analyze objective 1a (n = 1,100), objective 1b (n = 994), and objective 2 (n = 927) was large enough to ensure test powers >80%.
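As an illustration of the kind of post-hoc power check described above, the following Python sketch (using statsmodels) estimates power for an association between a binary echocardiographic exposure and a binary outcome; the proportions and group sizes are assumptions chosen only for demonstration, not values reported by the study:

```python
# Post-hoc power check for an association between a binary exposure
# (e.g., left ventricular remodeling) and a binary outcome (unfavorable mRS).
# Proportions and group sizes are illustrative assumptions, not study values.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

p_exposed, p_unexposed = 0.45, 0.30   # hypothetical outcome rates per group
n_exposed, n_unexposed = 300, 600     # hypothetical group sizes

effect = proportion_effectsize(p_exposed, p_unexposed)  # Cohen's h
power = NormalIndPower().solve_power(
    effect_size=effect,
    nobs1=n_exposed,
    alpha=0.05,
    ratio=n_unexposed / n_exposed,
)
print(f"Estimated power: {power:.2f}")  # adequate if it exceeds 0.80
```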
The study included adults diagnosed with ischemic stroke after clinical evaluation and imaging, such as computed tomography (CT) at admission, and control evaluations, between October 2012 and February 2018.
Clinical Evaluation
The following data were collected from the electronic medical records of clinical evaluations performed by the assistant medical team during the hospitalization period: age, sex, race (white/non-white), the presence of comorbidities (systemic arterial hypertension, type 2 diabetes mellitus, dyslipidemia, smoking, alcohol use, illicit drug use, arrhythmias such as AF or atrial flutter, and a history of previous stroke), the continuous use of medications [acetylsalicylic acid, clopidogrel, anticoagulants, angiotensin-converting enzyme (ACE) inhibitors, angiotensin II receptor blockers (ARB), and statins].
Neurological Evaluation
Data pertaining to neurological assessments conducted by the medical team during the hospitalization of the patients at the SU and at 90 days after hospital discharge were collected from the electronic medical records. This data included the score obtained using the National Institutes of Health Stroke Scale (NIHSS) (20) at admission, on hospital discharge, and at 90 days after discharge; the clinical condition classified by the Oxfordshire or Bamford scale (21); the TOAST classification (7); the modified Rankin scale (22) (previous, at hospital discharge, and 90 days after discharge); the recurrence of stroke; the presence of carotid and vertebrobasilar system stenosis and its quantification.
All the variables were obtained from the stroke data bank of Botucatu Medical School. The database is audited monthly by the stroke unit coordinator.
Echocardiographic Evaluation
The patients underwent TTE during hospitalization at the SU. Transesophageal echocardiography was performed when a right-left intracardiac shunt was suspected on TTE, or in case of other findings that required better diagnostic interpretation.
The following parameters were verified with these examinations
The Etiological Investigation Protocol in Patients With Ischemic Stroke
The investigation protocol at the institution was based on the TOAST classification. All the patients underwent a brain CT at admission, while some underwent an additional scan after 24 h. Depending on the clinical progression, MRI was done for the patients with posterior circulation events or in those with a doubtful diagnosis.
CT angiography of the cerebral and cervical arteries was performed when the patient arrived within 8 h of the ictus, and duplex ultrasound of the cervical arteries and transcranial Doppler were performed 8 h after the ictus. The study was complemented by an anatomical examination (CT angiography or digital angiography) whenever required.
TTE was conducted to locate a cardioembolic source other than AF, while a transesophageal echocardiogram was requested to assess a right-left circulation shunt, left atrial appendage thrombus, and an atheroma in the thoracic aorta.
All patients underwent electrocardiography at admission followed by 24 h of cardiac monitoring. The 24 h Holter test was performed for patients older than 55 years with suspected arrhythmias, and for cryptogenic strokes.
The patient underwent laboratory investigations for syphilis, Chagas disease, glycated hemoglobin, thyroid stimulating hormone (TSH), total cholesterol and fractions, and triglycerides. An autoimmune panel was also performed for patients aged <55 years.
Statistical Analysis
Continuous variables were expressed as mean and standard deviation, while categorical variables were presented as absolute values and percentages. The statistical models were built to separately answer each objective defined in the study.
The potential confounders (variables identified in the maximal model that were clinically relevant, with p < 0.20) considered for all the objectives of this study were as follows: age; sex; race; systemic arterial hypertension; type 2 diabetes mellitus; dyslipidemia; smoking; alcoholism; the use of illicit drugs; AF; previous stroke; the continuous use of acetylsalicylic acid, clopidogrel, anticoagulant, ACEI, ARB, and statins; NIHSS at admission; TOAST classification at admission; mRS at admission.
The association between the echocardiographic examination and being classified as the undetermined TOAST type was analyzed using the multiple logistic regression model, including the potential pre-established confounders. The variables included were those presenting statistical significance in the univariate analysis (Objective 1a).
To verify the association between the classification as the undetermined TOAST type and the NIHSS scale score at discharge and 90 days after hospital discharge, the multiple linear regression model was used independently after adjusting for potential confounders. The multiple logistic regression model, adjusted for potential pre-established confounders, was also used to verify the association between the classification as the undetermined TOAST type and unfavorable mRS (mRS > 3 at discharge and 90 days after hospital discharge), and in-hospital mortality. The variables included were those presenting statistical significance in the univariate analysis (Objective 1b).
The association between the echocardiographic variables previously described and the NIHSS scale score at discharge and 90 days after hospital discharge was analyzed using the multiple linear regression model independently and adjusted for potential confounders. The multiple logistic regression model was used to verify the association between the echocardiographic variables and unfavorable mRS scores (mRS > 3) at discharge and 90 days after hospital discharge, and in-hospital mortality. The models were adjusted for potential pre-established confounders. The variables included were those presenting statistical significance in the univariate analysis (Objective 2).
A comparison between the TOAST types and the echocardiographic variables was performed using the Kruskal-Wallis non-parametric test, followed by the Dunn test for multiple comparisons. Statistical significance was set at p < 0.05. The analysis was performed using the SPSS version 21 software.
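As a minimal sketch of the adjusted logistic regression described above, assuming a tabular dataset whose file and column names (e.g., undetermined_toast, tte_performed) are placeholders rather than the study's actual database fields, the model could be fitted as follows in Python:

```python
# Adjusted logistic regression: outcome = undetermined TOAST type (1/0),
# exposure = TTE performed (1/0), plus pre-established confounders.
# File name and column names are assumptions for illustration only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("stroke_cohort.csv")  # hypothetical dataset

model = smf.logit(
    "undetermined_toast ~ tte_performed + age + sex + hypertension + "
    "diabetes + af + nihss_admission",
    data=df,
).fit()

odds_ratios = np.exp(model.params)    # odds ratio for each term
ci = np.exp(model.conf_int())         # 95% CI on the odds-ratio scale
print(odds_ratios, ci, sep="\n")
```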
Patient Inclusion in the Study
A total of 1,508 patients were admitted to the SU between October 2012 and February 2018. Of these, 1,243 patients had a confirmed diagnosis of cerebral infarction, and the 1,100 patients diagnosed with ischemic stroke were included in this study (Figure 1). Table 1 shows the demographic characteristics of the patients admitted with ischemic stroke, as well as the neurological assessments regarding the TOAST classification, Bamford clinical classification, and the degree of disability using the modified Rankin scale. Echocardiography was performed in 977 patients (88.82%). Table 2 shows the risk factors for cardiovascular diseases and the medications being used at the time of hospitalization.
Demographic and Clinical Characteristics of Patients Admitted With Ischemic Stroke
Association Between Echocardiography and Classification as the Undetermined TOAST Type (Objective 1a)
Table 3 shows that patients undergoing TTE were 3.1 times less likely to be classified as the undetermined TOAST type (OR = 0.32; 95% CI: 0.21-0.51; p < 0.001).
The number needed to treat was calculated to be 3.4, implying that for every 3.4 TTEs performed, one patient would be prevented from being classified as the undetermined TOAST type.
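The "3.1 times less likely" phrasing is the reciprocal of the reported odds ratio, and the NNT is the reciprocal of the absolute risk reduction. The short sketch below shows the arithmetic, using the reported OR and illustrative proportions (the underlying group risks are not given in the text):

```python
# "3.1 times less likely" is the reciprocal of the reported OR of 0.32.
or_tte = 0.32
print(f"1 / OR = {1 / or_tte:.1f}")   # ≈ 3.1

# NNT = 1 / absolute risk reduction. The proportions below are hypothetical,
# chosen only to show how an NNT near 3.4 could arise.
risk_without_tte = 0.70   # assumed proportion undetermined without TTE
risk_with_tte = 0.41      # assumed proportion undetermined with TTE
arr = risk_without_tte - risk_with_tte
print(f"NNT = {1 / arr:.1f}")         # ≈ 3.4
```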
Association Between Being Classified as the Undetermined TOAST Type and the Outcomes at Discharge and at 90 Days After Hospital Discharge (Objective 1b)
There was no association between being classified as the undetermined TOAST type and the outcomes at hospital discharge, as can be seen from the following: NIHSS score (β: −0.040; p = 0.871) and mRS score >3 (OR: 0.901; p = 0.544) using the multiple linear regression model corrected for confounding variables (alcoholism, history of previous stroke, TACS ischemic stroke, POCS ischemic stroke, PACS ischemic stroke, echocardiogram, age, and NIHSS at admission), and by the multiple logistic regression model corrected for admission, LACS ischemic stroke, PACS ischemic stroke, POCS ischemic stroke, and TACS ischemic stroke). With regard to the prognosis 90 days after hospital discharge, no association was found between the classification as the undetermined TOAST type and NIHSS score outcomes (β: −0.160; p = 0.560) and mRS score >3 (OR: 0.812; p = 0.261) 90 days after hospital discharge, using the multiple linear regression model corrected for confounding variables (AF, history of previous stroke, use of clopidogrel, use of oral anticoagulants, TACS ischemic stroke, POCS ischemic stroke, PACS ischemic stroke, echocardiogram, age, NIHSS at admission, previous mRS), and the multiple logistic regression model corrected for confounding variables (age, male sex, diabetes mellitus, dyslipidemia, alcoholism, use of oral anticoagulants, NIHSS at admission, LACS ischemic stroke, PACS ischemic stroke, POCS ischemic stroke, TACS ischemic stroke, previous mRS, and echocardiogram).
Undergoing an echocardiogram was a protective factor against death during hospitalization, and reduced the possibility of in-hospital death by 11.1 times (OR: 0.090; p < 0.001). Conversely, being classified as the undetermined TOAST type increased the chances of mortality during hospitalization by 2.0 times (OR: 2.00; p = 0.013), as shown in Table 4.
Association Between Echocardiographic Variables and Outcomes at Discharge and at 90 Days After Hospital Discharge (Objective 2)
There was no association between the echocardiographic variables and NIHSS score outcomes (Table 5) or mRS > 3 (Table 6) at discharge. At 90 days after hospital discharge, there was no association between the echocardiographic findings and NIHSS score outcomes. Furthermore, there was no association between the echocardiographic findings and mRS > 3 at 90 days (Table 7).
DISCUSSION
In the present study, echocardiography during hospitalization due to ischemic stroke reduced the possibility of being classified as the undetermined TOAST type and was associated with lower in-hospital mortality.
The distribution of the different TOAST classifications was similar to that reported previously in literature; >30% (40.7%) of the patients were classified as the undetermined type, despite the investigation protocol (23). The incidence of small vessel TOAST classification was 16.45%, that of large vessels was 14.81%, and that of other causes was 4.72%. Of the 23.14% patients classified as the cardioembolic type, the main associated factor was the presence of AF (in 18% of all ischemic strokes), with a higher incidence than seen in previous studies on Brazilian patients (24).
In this study, echocardiography decreased the number of patients classified without a defined etiology, and this relationship confirmed the study hypothesis. TTE is a noninvasive and low-cost examination, and the association described above proves the importance of including it in an investigation protocol for patients hospitalized with ischemic stroke.
Although echocardiography did not correlate with a better patient prognosis, as measured by the NIHSS and Rankin scale scores, both at discharge and after 90 days, it increased the chance of identifying specific TOAST classifications, thereby decreasing the chance of inappropriate patient treatment. Although the highest NIHSS was found among patients who did not undergo echocardiography, this variable was taken into account and adjusted in the multiple regression for the mortality outcome. The correlation between echocardiography and the lower frequency of death at admission reinforces the importance of this examination for proper patient management.
The 5-year survival probability was higher in patients with small artery occlusion stroke (73.8%). There was no association between the variables assessed on the echocardiogram and the NIHSS and mRS scores at 90 days. These data do not corroborate those of studies that evaluated systolic function through ejection fraction and diastolic dysfunction and reported a worse outcome in stroke (25,26). Ventricular mass was reported to be a risk factor for non-fatal ischemic stroke (27), as well as for recurrence and death in cases of severe LVH (28).
A possible reason for this difference in the correlation between the echocardiogram measurements and prognosis in previous literature is the evaluation of these variables without considering the TOAST etiological classification. Patients with a cardioembolic TOAST classification generally present with more changes in the echocardiographic measurements. In this study, for example, patients with a cardioembolic TOAST classification had a higher ventricular mass, larger atrial diameter, and a lower EF. Thus, such echocardiogram findings are possibly collinear with a cardioembolic TOAST classification and are not independent prognostic factors.
A large number of patients with ischemic stroke were on antiplatelet therapy (31.54%). A comprehensive investigation of these patients can help in identifying conditions in which anticoagulation would be the appropriate prophylaxis; echocardiography may have an important role here. Thus, a thorough investigation of the patient is important for the proper characterization and management of ischemic stroke. Emerging evidence suggests that atrial enlargement may be a biomarker of AF-independent underlying thrombogenic atrial heart disease with an independent risk of indeterminate or recurrent cardioembolic stroke (17).
Despite its findings, this study has its limitations. It is a retrospective study involving a single center only, making it impossible to obtain the measurement of the left atrial volume for analysis. Left atrial volume has been shown to be a powerful prognostic variable in heart disease. We also understand as a limitation that the design of this study did not allow the identification of mechanisms by which the echocardiogram correlated with a reduction in mortality, which opens up frontiers for future studies in this area.
Based on the results of the present study, we can conclude that echocardiography during hospitalization for ischemic stroke may be associated with a decreased chance of an undetermined TOAST classification, and also with lower mortality during hospitalization. On the other hand, an undetermined TOAST classification may correlate with higher mortality during hospitalization, suggesting the importance of including echocardiography in the hospital investigation protocol for patients with ischemic stroke.
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/Supplementary Material, further inquiries can be directed to the corresponding author/s.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by Research Ethics Committee (REC) of the School of Medicine of Botucatu under no. 2,698,569. The patients/participants provided their written informed consent to participate in this study.
AUTHOR CONTRIBUTIONS
RT contributed to the literature search, study design, data collection, data analysis and interpretation, and writing of the manuscript. GS, GM, and ST participated in the literature search, study design, data analysis and interpretation, and in the writing of the manuscript. JS, GL, LM, and RB conducted the literature search, data analysis and interpretation, and wrote the manuscript. HN and SZ participated in the literature search, study design, data analysis and interpretation, and wrote the manuscript. All authors contributed to manuscript revision, read, and approved the submitted version.
|
v3-fos-license
|
2023-12-05T16:26:54.043Z
|
2023-09-30T00:00:00.000
|
265621151
|
{
"extfieldsofstudy": [],
"oa_license": "CCBYSA",
"oa_status": "GOLD",
"oa_url": "https://e-journal.unair.ac.id/CCJ/article/download/48180/26322",
"pdf_hash": "2b0f48267faa9686340b82eb68c851a9c5981a32",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46612",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "e7d7c9928977bc77d556ede5e6a50b91a74aff62",
"year": 2023
}
|
pes2o/s2orc
|
Case Report Atrial Septal Defect with Paroxysmal Atrial Tachyarrhythmia in Middle Age Soldier Patient: A Case Report
Background: Atrial septal defects (ASDs) are frequently asymptomatic and can remain undiagnosed until adulthood. Atrial tachyarrhythmias are not uncommonly seen in patients with ASDs. Atrial fibrillation and atrial flutter are relatively rare in childhood, but become more prevalent with increasing age at the time of repair or closure. Case Summary: The present case was an active duty 50-year-old male soldier, referred to the arrhythmia division of Gatot Soebroto Army Hospital with palpitations and physical intolerance. Holter examination and electrophysiology study revealed atrial tachyarrhythmias. Transesophageal echocardiography was performed before radiofrequency catheter ablation, and unexpectedly found a left-to-right shunt ostium secundum ASD. Right heart catheterization confirmed a left-to-right shunt ASD with high flow and low resistance. He then underwent paroxysmal atrial tachyarrhythmia catheter ablation, followed by percutaneous transcatheter ASD closure using an occluder device without fluoroscopy within six months. Both procedures went well without any complications. His symptoms had improved during follow-up, although he had an episode of rapid paroxysmal atrial fibrillation on Holter evaluation six months later. Conclusion: We conclude that ASD closure is still recommendable even in late middle-aged patients, combined with arrhythmia management.
Introduction
Atrial septal defect (ASD), a direct communication between the right atrium (RA) and left atrium (LA), has a uniquely slow clinical progression.
Ostium secundum type ASD (ASD II), characterized by a communication at the level of the fossa ovalis, is the most frequent type, representing 80% of ASDs diagnosed [1]. Isolated ASDs represent about 7% of all cardiac anomalies and can be diagnosed at any age [2]. Patients may be asymptomatic into their fourth and fifth decade [3], and the defect is sometimes found incidentally on imaging studies. For this reason, many individuals can be undiagnosed early in life and will be able to serve in the military.
However, the majority of patients with ASDs will develop symptoms including reduced functional capacity, exertional shortness of breath, and palpitations (supraventricular tachyarrhythmias), and less frequently pulmonary infections and right heart failure [4]. One of the major sources of morbidity is atrial tachyarrhythmias (ATs). ATs are defined as atrial fibrillation (AF), atrial flutter (AFL), and supraventricular tachycardias (SVTs). In patients above the age of 40 with unrepaired ASDs, the rate of ATs is even higher, with one study reporting a prevalence as high as 19% [5], which itself may be an underestimation.
Percutaneous closure has lately become the primary treatment option for ASD II and, according to European Society of Cardiology (ESC) guidelines, should be the therapy of choice when anatomical conditions are favorable [1]. The association between percutaneous ASD closure and atrial arrhythmias is controversial. On the one hand, reverse atrial remodeling after closure might lead to a decreased chance of supraventricular arrhythmias [6]. On the other hand, the presence of a closure device has a possible pro-arrhythmogenic effect [7].
Case Presentation
A 50-year-old male presented to the arrhythmia division with palpitations and physical intolerance. A hemodynamically significant left-to-right shunt causes RV volume overload and pulmonary overcirculation [1]. A unique feature of ASD is its slow clinical progression, with most children and young adults being free of symptoms, contributing to late diagnosis; hence, ASD represents the most common congenital heart disease (CHD) diagnosed in adulthood, accounting for 25-30% of new diagnoses [8]. Thus, it is important for all cardiologists to have a solid foundation in the basic pathophysiology and management of CHD and understand when to make a referral. Besides that, as many forms of simple or moderate-complexity CHD can be asymptomatic at a younger age, many such individuals will be able to serve in the military [9].
When symptoms occur, patients often first notice dyspnea, fatigue, exercise intolerance, or palpitations [10]. Some patients may present with syncope or even with peripheral edema from overt right heart failure, and others may develop recurrent pulmonary infections [11]. ATs, including AF and AFL, are present preoperatively in about one-fifth of adults with ASDs [12]. Our patient had only a one-month history of palpitations and physical intolerance.
In adults, an ASD may not be initially considered in the differential diagnosis because there is considerable overlap in symptoms. TTE is one of the main initial tests for the evaluation of patients with this constellation of symptoms. The guidelines recommend diagnosing an ASD by demonstration of shunting across the interatrial septum, with evaluation of the right heart and of associated abnormalities [10]. However, the interatrial communication may remain undiagnosed unless there is a high index of suspicion. As with other diagnoses, the sensitivity of echocardiography depends on the echo machine, acoustic windows, ultrasonographer, and echo reader. TEE provides higher-definition visualization of the interatrial septum; it can more precisely assess the size of an ASD and guide procedural planning [11]. TEE provides a better appreciation of cardiac anatomy and hemodynamic evaluation than TTE in patients with ASD [12]. Because our patient is a male active-duty officer in his fifties, he was not suspected of having a CHD and was underdiagnosed in the first place.
The primary indication for ASD closure is a hemodynamically significant shunt (i.e. one that cause RA or RV enlargement), irrespective of age and symptoms, unless severe and irreversible pulmonary arterial hypertension (PAH) is present [1,13] .Available approaches to ASD II closure include percutaneous device closure and surgical closure.Surgical closure is reasonable when the anatomy of the defect is not amenable to a percutaneous approach or when concomitant tricuspid valve repair or replacement is planned.For those who have an ostium primum, sinus venosus ASD, or coronary sinus defect, surgery is the recommended technique [11] .Surgical repair has low mortality <1% in patients without significant comorbidity, and good long-term outcome when performed early (childhood, adolescence) and in the absence of pulmonary hypertension (PH) [13] .A percutaneous approach is preferred when the anatomy of the defect is suitable as it avoids the need for cardiopulmonary bypass, cardioplegia, thoracotomy, sternotomy and related bleeding, or central nervous system complications, while carrying a cosmetic advantage, also allowing a shorter hospital stay with faster rehabilitation [11,14] .
A meta-analysis suggested that transcatheter ASD closure is safer in terms of in-hospital mortality, perioperative stroke, and post-procedural AF compared to traditional surgery [15].
Percutaneous closure of ASD II under fluoroscopic guidance is now considered a routine procedure. Studies using a variety of devices have reported good success and low complication rates in children and adults, even in the elderly [16,17]. A low dose of radiation exposure during fluoroscopy can be achieved for transcatheter ASD closure, even in complex ASDs, by reduction of the frame rate, avoidance of lateral view and cine acquisition, and limitation of fluoroscopic time by avoiding unnecessary maneuvers and using echocardiographic guidance as much as possible [18]. It has also been suggested that echocardiography alone could be used to guide device placement. TEE or TTE without fluoroscopy has been used successfully to guide peratrial or periventricular repair of ventricular septal defects [19]. Some studies have reported the use of TEE or TTE to guide percutaneous ASD closure without fluoroscopy [19,20]. The first successful transcatheter closure of ASD II using a TEE-guided, fluoroscopy-free technique in Indonesia was performed by Prakoso R, et al. in 2018 [21]. Percutaneous ASD closure under TEE guidance alone is an effective and safe procedure.
Nevertheless, the distance to the mitral valve must be considered carefully because it can complicate the procedure if the distance is too short. A potentially important advantage of TEE-guided percutaneous closure over fluoroscopy-guided closure is that it avoids exposure to radiation and contrast agents. In addition to reducing the risks for the patient, TEE-guided percutaneous closure without fluoroscopy also prevents radiation to the medical staff and avoids the need for heavy lead clothing [22]. The chronic left-to-right shunt associated with ASDs leads to increased hemodynamic load and geometric remodeling, at both a cellular and macroscopic level. This is most commonly seen in the RA and RV, but has also been described in left heart structures [17,23]. Furthermore, this chronic volume stress leads to the electrical remodeling that may precipitate the development of arrhythmias. Atrial myocyte electrophysiologic properties are altered, with increased intra-atrial conduction time a common finding, likely from a combination of interstitial fibrosis and chamber enlargement [24,25].
Sinus node conduction properties may also be altered, even in the pre-operative state [25,26]. ATs are commonly seen in patients with ASDs, regardless of ASD type. AFL and AF are relatively rare in childhood, but become more prevalent with increasing age at the time of repair or closure [17]. AFL and AF in patients with ASDs may be treated in a similar fashion to the general population, with appropriate consideration of rhythm control strategies with anti-arrhythmic medications and electrical cardioversion as indicated [23]. Appropriate anti-coagulation guidelines should also be followed [27]. All patients with symptoms consistent with potential arrhythmias should be referred for EP assessment prior to ASD closure, and assessed with at least 24-hour Holter ECG monitoring. If indicated, any EP study with or without ablation must be performed before device implantation, as this will make access to the LA more complicated afterwards, although still feasible [28]. Closure of an existing ASD, in isolation, is generally insufficient to abolish an existing AT, and catheter ablation should be considered before defect closure [29]. Ablation procedures have inconsistent medium-term results in patients with documented atrial arrhythmia prior to device closure, with about 50% having symptomatic arrhythmia on follow-up [30]. However, this should not preclude ablation procedures wherever possible. Surgical treatment of ASD, which had been the only treatment method for more than 45 years, may be associated with the occurrence of rhythm disorders such as AF or SVT, although some authors noted a reduction in supraventricular arrhythmic burden after closure [31]. As a treatment option, percutaneous ASD II closure is also associated with this. A prospective study showed that transcatheter closure of ASD II does not reduce arrhythmia that appears prior to ASD closure [32]. It is associated with a transient increase in supraventricular premature beats and a small risk of AV conduction abnormalities and paroxysmal AF in early follow-up. Larger device size and longer procedure time are associated with increased risk of supraventricular arrhythmia on early follow-up [33]. Atrial septal defect closure after the age of 40 years appears not to affect the frequency of arrhythmia development during follow-up. However, the patient's morbidity benefits from closure at any age (exercise capacity, shortness of breath, right heart failure), particularly when it can be done by catheter intervention [32]. The remodelling process and associated increase in cardiopulmonary function commence immediately after closure and continue for several years [34]. Decreased RV volume improves ventricular interaction and LV filling.
The subsequent increase in LV stroke volume and cardiac output is probably the main mechanism behind the improvement of exercise capacity after closure. These effects occur in patients of all ages, both symptomatic and asymptomatic [35]. This supports timely closure of a sizeable ASD II, regardless of age and symptoms [36]. Patients who have had percutaneous ASD device closure should have a TTE performed at 24 hours to assess for device malposition, residual shunt, and pericardial effusion. Repeat TTE is recommended at 3, 6, and 12 months. A routine clinical follow-up and TTE should be done every 1 to 3 years thereafter [37]. Following closure of ASD, other considerations arise for the evaluation and treatment of ATs. The incidence of ATs is decreased post-closure, but the recurrence rate may still be significant, particularly in patients who underwent ASD closure at an older age, had larger shunts, or had other comorbidities [23,25,30]. It is therefore advisable to conduct a thorough follow-up after ASD II closure, including ECG monitoring, especially in the early post-procedural period [33]. In this case report, our patient had arrhythmia catheter ablation after AT was uncovered on Holter examination and EP study.
Figure 2. Chest X-ray P/A view showing mild cardiomegaly with prominent right pulmonary artery.
He had percutaneous transcatheter ASD II closure without fluoroscopy six months later, because of the many considerations mentioned above. He reported physical improvement after the procedures and was able to carry out activities as before. However, this case report had a limitation: the absence of an objective assessment of the patient's quality of life. We did not perform the 6-minute walking test (6MWT) or another cardiopulmonary exercise test as an assessment of functional capacity. Because he is still at risk of heart rhythm disturbances in the future, he should have thorough periodic follow-up.

Conclusion

Atrial septal defect, a common congenital heart disease in adults, is still under-suspected and can remain undiagnosed. Early diagnosis and follow-up of ASDs offer the best opportunity to avoid late complications.
|
v3-fos-license
|
2021-01-03T19:20:29.547Z
|
2020-12-01T00:00:00.000
|
230345049
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://doi.org/10.15562/bmj.v9i3.1905",
"pdf_hash": "8859b28163a6540e93166f098c0342e0c871e912",
"pdf_src": "MergedPDFExtraction",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46614",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "8859b28163a6540e93166f098c0342e0c871e912",
"year": 2020
}
|
pes2o/s2orc
|
Investigating the relationship between accountability and job satisfaction: A case study in hospitals affiliated to Yazd University of Medical Sciences 2017
Introduction: With high accountability of employees, an organization can be helped to achieve its goals. This research investigates the relationship between accountability and job satisfaction among nurses in educational hospitals affiliated to Yazd University of Medical Sciences. Method: This was a descriptive, correlational, cross-sectional study conducted in 2017. We involved 190 nurses from selected educational hospitals affiliated to Yazd University of Medical Sciences in the city of Yazd, chosen by a stratified random sampling method. The data collection tools were the Costa and McCrae Accountability Questionnaire with 12 items and the Smith and Kendall Job Satisfaction Questionnaire with 37 items. A 5-point Likert scale was used to determine the accountability and job satisfaction scores. The data were analyzed with descriptive-analytical statistical tests and the Pearson correlation coefficient, at a significance level of 0.05, in SPSS version 20. Results: There was no significant relationship between accountability and job satisfaction except for one dimension of job satisfaction (nature of work) (p=0.009). Also, significant associations were observed between three demographic variables, the age group of 31-35 years (p=0.047), work experience of 11-15 years (p=0.03), and female gender (p=0.005), and the two variables of accountability and satisfaction. Conclusion: According to the results, more attention should be paid to the nature of work in order to implement responsibilities more efficiently and increase job satisfaction.
INTRODUCTION
The place of nursing among direct and immediate services is indisputable; nursing has covered a wide variety of patients, clients, groups, and communities throughout its history. 1 Nursing is one of the most important professions within the hospital, and a nurse's job performance is therefore affected by several factors, particularly organizational commitment. 2 Clinical research nurses (CRNs) play a significant role in enhancing the quality of clinical trials. 3 Nursing performance is most apparent where the relationship between the nurse and the client is established to meet the needs and demands of clients receiving nursing services. 1 Given the currently increasing scope of nurses' authorities and responsibilities compared to the past, the knowledge and skill of nurses in this area must be promoted; 3,4 they should also have the power of decision making. 4 The healthcare system has entered an age of accountability in which rapid and unpredictable changes occur, and professional accountability in the nursing area has taken on high importance. Nursing as a professional career should be responsible for providing its care legally and ethically. 1 One of the goals of the nursing profession is to promote the human personality and dignity of individuals who are under care. Nursing care should lead to the benefit of clients and prevent harm to them. Therefore, ethical decision making and awareness of the reasons for choosing one decision over others are inseparable components of the everyday work of nurses. 5 Accountability is the commitment that each person gives to the organization to perform the assigned duties. 4 Becoming a nurse means acquiring knowledge and learning specific skills, but it also includes the assimilation of the attitudes and values of the nursing profession. 5 Organizational commitment results in increased effort, motivation, job satisfaction, lower absenteeism from work, and increased retention in the organization. 6 In this regard, the more accountable a person is, the sooner the organization will advance and the more it is helped to achieve its goals. In general, accountability is best exercised alongside job satisfaction. 4 Responsibility is defined as the degree
of an individual's perseverance, conscientiousness, and organization versus laziness, lack of accountability, and action without thinking. These dimensions are summarized in a specific attribute indicative of accuracy, accountability, and credibility, in contrast to people who are lazy and undisciplined. 5 Job satisfaction is a general, overall emotional reaction that people have about their job. 7 Job satisfaction is one's attitude toward one's job. 8 In other words, job satisfaction is considered a positive emotional feeling: when an individual works with satisfaction, he or she reaches job satisfaction, and with job satisfaction an individual has motivation in his or her work. 4 Job satisfaction is defined as the feeling a person has when comparing his or her evaluation of previous work, expectations, and job experiences with the current job. 7 The importance of job satisfaction lies in its very constructive role in the development and improvement of an organization. 8 One of the most common internal factors affecting an organization is personality characteristics. 9 Also, by increasing the job satisfaction of nurses in their work environment, they become accountable for their work. 10 Job satisfaction is one of the branches of job pleasure. Nurses' job satisfaction is an essential part of their lives, which can affect patient safety and the performance, usefulness, and quality of care. 7 One study found an impact of job satisfaction on organizational commitment and staff retention rate among Malaysian employees. Nawab studied the influence of staff compensation on job satisfaction in the educational sector. 11 Other studies analyze to what extent people with different individual, motivational, and socioeconomic characteristics are correlated with others at work; 13 to what extent they are accountable and participate in decision-making; and how much they are alienated from their jobs. 14 Job satisfaction, following Smith, Kendall, and Hulin, is defined in the following dimensions: satisfaction with the work, satisfaction with the superior, satisfaction with the colleagues, satisfaction with the promotion, and satisfaction with the salary. 19 Furthermore, organizational citizenship behavior has been defined based on the theory of Podsakoff in the dimensions of assisting behavior, chivalry, organizational loyalty, organizational obedience, individual initiative, civic virtue or behavior, and self-development. 19 People lacking the trait of conscientiousness may be blamed and criticized for a lack of credibility, discipline, and high endeavor, but they enjoy life in the short term; in other words, they are never described as bored and tired. 20 The expanding shortage of nurses and the high exit of nurses from this profession is a global problem that exists in both developed and developing countries. 21 There are many matters that nurses are struggling with, which can affect their satisfaction. 21 Kurtz defines job satisfaction as a positive or pleasurable emotional state resulting from an individual's assessment of his or her job or job experiences. 22 Considering the high volume of patients admitted to the educational hospitals of Yazd University of Medical Sciences, the significant number of nurses working in these centers compared to other hospitals, and the importance of the subject variables, conducting this research in the mentioned setting seemed necessary.
In the present study, by investigating the relationship between accountability and job satisfaction in nurses working in educational hospitals, we have tried to present some suggestions to enhance job satisfaction in terms of the nurses' accountability.
METHOD
One hundred ninety working nurses were selected randomly from three educational hospitals in Yazd, using a stratified sampling method according to the size of the hospitals.
Data were collected from the statistical sample with a questionnaire arranged around the research variables to make them operational for testing the hypotheses. The research questionnaire consists of two categories of questions; the first category is coded with alphabetical letters. For collecting data, two questionnaires were applied: the Costa and McCrae Accountability Questionnaire 23 and the Smith PC, Kendall LM, and Hulin CL Job Satisfaction Questionnaire. 24,25 The accountability questionnaire has three areas (conscientiousness with four questions, perseverance with two questions, and organizing with six questions), and these 12 questions are measured on a 5-point Likert scale. Also, to determine the statistical characteristics in terms of gender, age, marital status, place of employment, and work experience, the second category is arranged in two parts, consisting of two separate questionnaires to test the hypotheses of the present research.
Questions are scored on a 5-point Likert scale: totally disagree (1), disagree (2), to some extent (3), agree (4), and totally agree (5). Some questions are scored in reverse. 25 The total score of this questionnaire varies between 12 and 60; scores of 12-24, 24-48, and 48-60 indicate low, medium, and high accountability, respectively. 23 The second part contains job satisfaction questions based on the theory of Smith, Kendall, and Hulin. The questions cover five dimensions (salary with seven questions, the nature of work with eight questions, supervision with eight questions, colleagues with seven questions, and promotion and upgrade with seven questions), arranged as a total of 37 questions on a 5-point Likert scale. In this research, the validity of the data collection tool was assessed by the face validity method and confirmed using the viewpoints of professors and experts.
The scoring of this questionnaire on the 5-point Likert scale is as follows: very high = 5, high = 4, medium = 3, low = 2, and very low = 1. The total score varies from 37 to 185: a score of 87 or lower indicates that job satisfaction is at a low level, a score between 87 and 135 indicates that job satisfaction is at a medium level, and a score between 135 and 185 indicates that job satisfaction is at an optimal level.
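A minimal sketch of the scoring logic described above, assuming responses are already coded 1-5; the indices of the reverse-scored accountability items are hypothetical, since the specific items are not listed in the text:

```python
# Score the 12-item accountability questionnaire and classify the total.
# REVERSED holds hypothetical indices of reverse-scored items (not listed above).
REVERSED = {2, 7}

def total_score(responses, reversed_items=REVERSED):
    """responses: twelve answers coded 1-5 on the Likert scale."""
    assert len(responses) == 12 and all(1 <= r <= 5 for r in responses)
    return sum(6 - r if i in reversed_items else r
               for i, r in enumerate(responses))

def accountability_level(score):
    # The source gives overlapping bands (12-24, 24-48, 48-60); boundaries
    # here are resolved as <=24 low, <=48 medium, otherwise high.
    if score <= 24:
        return "low"
    if score <= 48:
        return "medium"
    return "high"

answers = [4, 5, 2, 4, 3, 5, 4, 1, 4, 3, 5, 4]   # one example respondent
s = total_score(answers)
print(s, accountability_level(s))                # 50 high
```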
Demographic characteristics were summarized using frequencies and descriptive statistics, and the study hypotheses were examined with Pearson correlation tests.
FINDING
The following tables show the results of the study performed on 190 nurses in selected hospitals affiliated with the Shahid Sadoughi University of Medical Sciences in the city of Yazd.
The results in Table 2 show the mean score of the accountability dimensions (M = 45.56 ± 6.16) for the people under study. The mean score of the conscientiousness dimension (M = 19.52) was higher than that of the other accountability dimensions. According to these results, the accountability of the nurses working in the hospitals was at a moderate level (i.e., not markedly accountable). Table 3 shows the mean total score (M = 109.23 ± 15.41) and the dimensions of job satisfaction of the nurses. Among the satisfaction dimensions, the salary dimension had the highest mean (M = 24.35 ± 3.74), and the colleagues dimension had the lowest mean compared with the other satisfaction dimensions (M = 19.74 ± 4.45). Overall, the job satisfaction of the nurses working in the selected hospitals was evaluated as moderate.
Based on the Pearson correlation results presented in Table 4, there was an inverse correlation between the job satisfaction score and the accountability score (r = -0.05), but this correlation was not statistically significant (p = 0.45). In examining the correlation between the accountability score and the job satisfaction dimensions, a significant inverse correlation (r = -0.19, p = 0.009) was observed only for one dimension of job satisfaction, the nature of work.
According to the results of Table 5, a positive correlation was observed between job satisfaction scores and accountability scores in the nurses of Shahid Rahnamon Hospital (r = 0.77), whereas inverse correlations were observed in the nurses of Shahid Sadoughi and Afshar hospitals (r = -0.56 and r = -0.326, respectively); these results were not statistically significant.
DISCUSSION
Based on the results of this research, which investigated the relationship between the independent variable of accountability and the dependent variable of job satisfaction among nurses of the educational hospitals of the Shahid Sadoughi University of Medical Sciences in the city of Yazd, it can be argued that there is no significant relationship between these two variables; a significant relationship was observed only between accountability and the nature-of-work dimension. Comparing this with similar research in the area: a study entitled "Investigating the relationship between accountability and job satisfaction in nurses working in the educational hospitals of Tabriz University of Medical Sciences" found a significant relationship between accountability and job satisfaction and two of its dimensions (work satisfaction and promotion satisfaction), but no significant relationship between accountability and the three other job satisfaction dimensions (satisfaction with supervision and monitoring, satisfaction with colleagues, and satisfaction with salary and payment). 12 A study of the relationship of job satisfaction and motivation for progress with mental health and accountability among female educators of Ahvaz educational institutions used a correlational design in which simple and multiple correlation coefficients between the predictor and criterion variables were calculated; the two-variable correlation of job satisfaction and motivation for progress with mental health was higher than the simple correlation of either predictor with that variable, and likewise the two-variable correlation of job satisfaction and motivation for progress with accountability was higher than the corresponding simple correlations. 13 A study of job satisfaction in relation to socioeconomic and demographic factors, a case study of one of the organizations in Fars Province, found that the higher a person's group correlation in the organization, the more satisfied he was with his job and the organization, while independent variables (such as age and gender) showed no direct significant relationship with the dependent variable. 14 Dennis Brooks et al. (2014), in research entitled "Policy as a factor of relationship between job satisfaction and accountability," concluded that those with higher job satisfaction toward their work in the organization show better accountability during work. 15 In 2014, Hall, in research entitled "Accountability and organizational support satisfaction as management among the managers," concluded that accountability and job satisfaction are strongly linked: managers with high accountability had high job satisfaction, and notably, when satisfaction was low, accountability was poor as well. 16 In 2009, Sorensen, in research entitled "The relationship between job satisfaction of nurses and their accountability," concluded that accountability was relatively high and job satisfaction was at a medium level; accountability and job satisfaction were significantly related at a medium level, and the correlations between the accountability subscales were significant but low. 17 In a study conducted by Abdolali Lalsayizadeh et al.
(1999), entitled "Investigating job satisfaction concerning socioeconomic and demographic factors," the case study was one of the organizations in Fars Province. The higher a person's group correlation in the organization, the more satisfied he was with his job and the organization, while independent variables (such as age and gender) did not directly show a significant relationship with the dependent variable. 14
CONCLUSION
The results of the research show that there is no overall relationship between accountability and job satisfaction among the nurses working in the educational hospitals of the Shahid Sadoughi University of Medical Sciences in the city of Yazd; only some of the dimensions affect this relationship. Accordingly, the following suggestions are presented. First, increase the level of salary payment to employees, especially nurses, given the high mean score of the salary-and-wages dimension and its bearing on increased job satisfaction. Second, recruit nurses who show greater perseverance in terms of personality, since a significant relationship was found between the work nurses do and accountability, with a positive impact on the performance of other staff. Third, employ more nurses from women's gender groups, since the analysis of the questionnaires, taking individual and statistical characteristics into account, showed a direct relationship between gender and job satisfaction.
Consequently, with respect to accountability, more accountable people can be employed in the nursing area. Based on the research findings concerning the relationship between accountability and the job satisfaction of nurses, it can be concluded that the positive correlation between job satisfaction and accountability scores among the nurses of Shahid Rahnamon Hospital (where job satisfaction tended to rise with accountability) and the inverse correlations observed among the nurses of Shahid Sadoughi and Afshar hospitals (where job satisfaction tended to fall with accountability) were not statistically significant. Also, when the relationship was examined separately by demographic variables, a significant relationship was observed only for two variables: the age group of 31-35 years and the work experience group of 11-15 years.
According to the information obtained from the analyses, it would be better to provide conditions and facilities tailored to the different age groups and levels of work experience of nurses, which would result in more motivation and, ultimately, greater job satisfaction. This project faced limitations, such as the scarcity of scientific studies in this specific field and the poor cooperation of the research population, which made the research conditions difficult.
CONFLICT OF INTEREST
There is no competing interest regarding the manuscript.
ETHICS CONSIDERATION
This study was conducted after obtaining ethical clearance from the School of Public Health, Shahid Sadoughi University of Medical Sciences (ethical clearance reference number IR.SSU.SPH.REC.1398.155).
FUNDING
The current study has not received any specific grant from the government or any private sectors.
|
v3-fos-license
|
2019-04-24T13:13:18.033Z
|
2005-07-01T00:00:00.000
|
140676222
|
{
"extfieldsofstudy": [
"Environmental Science"
],
"oa_license": "CCBY",
"oa_status": "GREEN",
"oa_url": "http://digital.bl.fcen.uba.ar/Download/paper/paper_01480227_v110_n7_p1_Bianchi.pdf",
"pdf_hash": "b5701844161db98bc94ef80d80b13db5766bd6f5",
"pdf_src": "ScienceParseMerged",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46616",
"s2fieldsofstudy": [
"Environmental Science"
],
"sha1": "b5701844161db98bc94ef80d80b13db5766bd6f5",
"year": 2005
}
|
pes2o/s2orc
|
Vertical stratification and air-sea CO 2 fluxes in the Patagonian shelf
[1] The thermohaline structure across the tidal fronts of the continental shelf off Patagonia is analyzed using historical and recent summer hydrographic sections. The near-summer tidal front location is determined on the basis of the magnitude of vertical stratification of the water column as measured by the Simpson parameter. Sea surface and air CO2 partial pressures based on data from eleven transects collected in summer and fall from 2000 to 2004 are used to estimate CO2 fluxes over the shelf. The near-shore waters are a source of CO2 to the atmosphere while the midshelf region is a CO2 sink. The transition between source and sink regions closely follows the location of tidal fronts, suggesting a link between vertical stratification of the water column and the regional CO2 balance. The highest surface values of Chl a are associated with the strongest CO2 sinks. The colocation of lowest CO2 partial pressure (pCO2) and highest Chl a suggests that phytoplankton blooms on the stratified side of the fronts draw the ocean's CO2 to very low levels. The mean shelf sea-air difference in pCO2 (ΔpCO2) is −24 μatm and increases in magnitude to −29 μatm if the shelf break front is included. Peaks in ΔpCO2 of −110 μatm, among the largest observed in the global ocean, are recorded. The estimated summer mean CO2 flux over the shelf is −4.4 mmol m−2 d−1 and increases in magnitude to −5.7 mmol m−2 d−1 when the shelf break area is taken into account. Thus, during the warm season the shelf off Patagonia is a significant atmospheric CO2 sink.
Study Area: Background
[2] The area covered by the Argentine shelf is close to 1,000,000 km². It is characterized by a smooth slope and scarce relief features [Parker et al., 1997]. The shelf broadens from north to south, ranging from 170 km at 38°S to more than 600 km south of 50°S. The main source of the shelf water masses is the subantarctic water flowing from the northern Drake Passage, through the Cape Horn Current [Hart, 1946] between the Atlantic coast and the Malvinas Islands, and the Malvinas Current in the eastern border of the shelf [Bianchi et al., 1982]. The fresh water sources of the shelf are the small continental discharge and the low-salinity water outflowing from the Magellan Strait. The latter is due to high precipitation in the South Pacific, close to the west coast of Tierra del Fuego, and the melting of continental ice [Lusquiños, 1971a, 1971b; Lusquiños and Valdés, 1971; Piola and Rivas, 1997]. The present study is limited to the area south of 39°S and consequently south of the region influenced by the input and variability of the Plata river. In addition, the evaporation-precipitation imbalance locally affects the salinity of some areas.
[3] Because seasonal variability precludes the utilization of temperature to classify the water masses, salinity is frequently used for this purpose. Four water masses can be defined according to salinity (Figure 1): Malvinas water (>33.8), coastal water (low salinity, <33.4), shelf water or midshelf water, ranging from 33.4 to 33.8, and a high-salinity coastal water close to the San Matías and Nuevo Gulfs, where the low-salinity tongue arising from the Magellan Strait turns offshore [Bianchi et al., 1982; Guerrero and Piola, 1997]. The surface salinity in the San Matías Gulf (SMG) is higher than 34.0 because of enhanced evaporation [Scasso and Piola, 1988]. The SMG positive thermal anomaly relative to the adjacent shelf waters [Krepper and Bianchi, 1982] is probably due to a greater residence time in the gulf [Piola and Scasso, 1988; Rivas and Beier, 1990].
[4] Abrupt changes in the water properties define ocean fronts.In the Argentine shelf, these fronts are the shelf break front, between the Malvinas waters and midshelf waters [Martos and Piccolo, 1988;Carreto et al., 1995] and tidal fronts [Carreto et al., 1986;Glorioso, 1987], that develop in the warm season between vertically homogeneous coastal waters and stratified midshelf waters.Both the shelf break front and the tidal fronts are the borders that delimit the above defined water masses (Figure 1).
[5] South of 41°S, the shelf width is close to one quarter of the semidiurnal tide wavelength leading to favorable conditions for resonance [Piola and Rivas, 1997].The tidal amplitude in the Patagonian shelf is one of the highest in the world ocean [Kantha et al., 1995], and tidal currents are very energetic.Consequently, by vertically mixing the coastal waters, the tidal current bottom friction is a key mechanism in the generation of the tidal fronts.These processes can generate upwelling regions in the Patagonian shelf [Simpson and Hunter, 1974;Bakun and Parrish, 1991].Using numerical models, Glorioso and Flather [1997] identified regions where the tidal energy dissipation is likely to produce these conditions, i.e., offshore Valde ´s Peninsula (VP), Cabo Blanco (CB), and the region between 50°S and De los Estados Islands.Palma et al. [2004] used a high-resolution barotropic model to determine the location of the main frontal systems of the Patagonian shelf and found a good correspondence with summer sea surface temperature (SST) gradients.Their results are similar to those of Glorioso and Flather [1997], showing the most intense fronts near of VP, north of SMG and off Grande Bay (GB).The analysis of 11 years of satellite-derived SST shows that the tidal fronts are persistent throughout the year, except during winter [Bava et al., 2002].
[6] Generally, front creation is linked to two physical mechanisms [Hoskins and Bretherton, 1972]: (1) differential advection, either convergences or shear deformation in the horizontal flow, or (2) differential vertical mixing.In coastal regions, tidal fronts are a clear example of the last case.The biological response to fronts at every trophic level involves the extreme sensitivity of the ocean ecosystem to vertical motion [Olson, 2002].Thus air-sea CO 2 fluxes at both sides of tidal fronts are related to the response of phytoplankton to the frontal environment.
CO 2 Balance
[7] The world ocean plays a major role in the global carbon cycle budget. Carbon in the oceans is unevenly distributed because of complex circulation patterns and biogeochemical cycles, neither of which is completely understood. In addition to circulation patterns, part of the CO2 is absorbed by the biota, both on land and in the ocean. The ocean CO2 concentration is related to the biology of the organisms through photosynthesis, respiration, and calcification. The oceans are estimated to hold 38,000 gigatons (GT = 10^12 kg) of carbon, which is 50 times the amount contained in the atmosphere. The annual air-sea exchange is 15 times larger than the amount produced by the burning of fossil fuels, deforestation, and other human activities [Williams, 1990]. The magnitude and direction of the annual net CO2 uptake flux by the global ocean are governed by the sea surface and air partial pressure of carbon dioxide (pCO2) differences and the wind speed. However, because pCO2 spatial and seasonal variations in the ocean surface are much larger than in the atmosphere, the oceanic pCO2 is the main regulator of the sea-air transfer flux. The ocean surface pCO2 varies about 25% above and below the current atmospheric pCO2 level of approximately 360 μatm [Takahashi et al., 2002]. Small regional sea areas can produce intense sequestration or release of CO2. Because continental shelves are active regions in biological production, they are believed to play a significant role in the global CO2 balance. Spring-summer observations from the Bering Sea shelf revealed variability in CO2 fluxes associated with biological processes, insolation and vertical mixing [Codispoti et al., 1986]. Recently, several authors have studied air-sea CO2 fluxes in continental shelves and marginal seas [e.g., Tsunogai et al., 1999; Frankignoulle and Borges, 2001; Sarma, 2003; Thomas et al., 2004; Hales et al., 2005]. The South Atlantic Ocean between 14°S and 50°S is believed to be a sink of about −0.3 to −0.6 GT C yr−1 [Takahashi et al., 2002]. On the basis of the research in the East China Sea, Tsunogai et al. [1999] proposed that the cross-shelf circulation enhances the absorption of atmospheric CO2 over the shelf and transfers these waters to subsurface layers in the open ocean. If the world continental shelf zone absorbed the atmospheric CO2 at the rate observed in the East China Sea, this mechanism would account for a net oceanic uptake of CO2 of 1 GT C yr−1. Extrapolating the CO2 uptake of the North Sea to the global scale, Thomas et al. [2004] estimated that 0.4 GT C yr−1 would sink in the coastal seas. However, more field data are needed to constrain a worldwide extrapolation, because tropical continental shelves and river plumes could act as sources of atmospheric CO2 [Frankignoulle and Borges, 2001]. Thus it is useful to evaluate the role of the large Patagonian shelf on the global CO2 balance of shallow seas.
Figure 1. Climatological sea surface salinity horizontal distribution. The isohalines that separate water masses are shown (33.4 and 33.8). Abbreviations are as follows: low-salinity coastal water (LSCW), high-salinity coastal water (HSCW), shelf water (SW), Malvinas water (MW) and shelf break front (SBF).
[8] Large marine ecosystems (LME) are oceanic regions encompassing coastal areas to the seaward boundaries of continental shelves and are characterized by their primary productivity, hydrography, trophically dependent populations, and bathymetry [Bakun, 1993]. On a global scale, 64 LME have been identified, and they produce 95% of the world's annual marine fisheries. On the basis of global primary production estimates from satellite images, the Patagonian continental shelf (LME 14) has been classified as a LME Class 1 (primary productivity >300 gC m−2 yr−1 [Behrenfeld and Falkowski, 1997]). Improving our knowledge of the relationship between vertical stratification, nutrient availability, and phytoplankton abundance is crucial for the management of living resources. It is well known that the distribution of fisheries is tied to frontal regions and their variability [Bisbal, 1995]. Economically and ecologically important species such as anchovy, Patagonian scallop, and atherinids take advantage of the Patagonian tidal zone [Acha et al., 2004; Quintana and Yorio, 1997].
[9] A cooperative research program named ARGAU (Programme de Coopération avec la Argentine pour l'étude de l'océan Atlantique Austral) between the Laboratoire de Biogéochemie et Chemie Marines (LBCM) at the Université Pierre et Marie Curie in Paris, France, the Instituto Antártico Argentino (IAA) and the Servicio de Hidrografía Naval (SHN), both from Argentina, was launched in 2000. One specific objective is to study the relation between frontal regions and air-sea CO2 fluxes [Piola et al., 2001].
[10] In this work, we discuss the relation between frontal regions, vertical stratification, phytoplankton biomass (expressed as chlorophyll a), and air-sea CO 2 fluxes.In section 2 the data and methods are described.In section 3 the characterization of the tidal fronts in the Patagonian shelf and the relations between environmental features with the shelf CO 2 budget are presented.Finally, the results are discussed in section 4. The role of planktonic community production and respiration processes in the regulation of CO 2 fluxes will be subject of future works.
Data and Methods
[11] Two different sets of data were used for the present study: historical hydrographic data (Figure 2a) from the Argentine Oceanographic Data Center (CEADO) and data from the ARGAU cruises (Figure 2b and Table 1). In addition, the Instituto Nacional de Investigación y Desarrollo Pesquero (INIDEP) furnished recent data from 13 stations in the area of GB. The CEADO database holds 433 stations collected during austral summer (January, February, and March) from 1927 to 1996. Vertical sections and horizontal distributions of temperature, salinity, and density were used to characterize frontal systems in four selected regions. To determine the location of ocean fronts, the Simpson parameter was estimated [Simpson, 1981]. The Simpson parameter, a measure of the mechanical work required to vertically mix the water column, is defined as

Φ = (1/h) ∫_{−h}^{0} (ρ0 − ρ(z)) g z dz,

where g is the gravity, h is the depth, and ρ0 is the mean density of the water column. Small values of Φ indicate poorly stratified waters while high values are associated with stratified waters.
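As an illustration of how the stratification parameter can be evaluated from a discrete density profile, the following sketch (Python with NumPy) computes Φ by trapezoidal integration of the density anomaly over the water column. It is an assumption of this note added for clarity, not code from the ARGAU project, and the two-layer example profile is purely illustrative.

```python
import numpy as np

def simpson_parameter(z, rho, g=9.81):
    """Simpson stratification parameter (J m^-3).

    z   : depths in metres, negative downward, increasing from -h (bottom) to 0 (surface)
    rho : in situ density (kg m^-3) at the depths z
    """
    h = -z.min()                              # water depth
    rho_mean = np.trapz(rho, z) / h           # depth-averaged density
    integrand = (rho_mean - rho) * g * z      # work needed to redistribute mass
    return np.trapz(integrand, z) / h

# Example: a two-layer profile, ~20 m mixed layer over a 100 m water column
z = np.linspace(-100.0, 0.0, 101)
rho = np.where(z > -20.0, 1025.0, 1026.5)
print(simpson_parameter(z, rho))   # well above the 50 J m^-3 frontal threshold
```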
[12] Near sea surface (9 m depth) temperature, salinity, fluorescence, and ocean and atmosphere pCO2 were collected underway along 14 transects (Table 1). Data were averaged and recorded at 10 min intervals. The sampling system for pCO2 in air and seawater and associated parameters was developed at LBCM [Poisson et al., 1993]. The system uses an infrared technique described by Takahashi [1961] and Copin-Montégut [1985]. It consists of a flow-through equilibrator and an IR analyzer (SIEMENS, type Ultramat 5F). The analyzer was calibrated every 6 hours with three gas standards containing 270.0, 361.0, and 489.9 ppm mole fraction of CO2. Atmospheric pCO2 was also measured every 6 hours from an intake placed on the bow of the ship. Using temperature data obtained from high-accuracy sensors placed in the equilibrator and in the seawater intake, pCO2 was corrected for warming effects. Water pCO2 was also corrected for atmospheric pressure variations, drift, and moisture effects. Except when measurements are in a region of very high variability (e.g., subantarctic front), the standard deviation of water pCO2 is lower than 0.3%, about 1 μatm [Metzl et al., 1995]. The observed range of atmospheric pCO2 is 11 μatm, while for the sea pCO2 it is about 260 μatm. Therefore the oceanic pCO2 is the driving factor in the computation of the sea-air pCO2 difference (ΔpCO2). Chlorophyll a (Chl a) was determined from 1.5 to 2 liter samples taken every 3 hours from the same water source entering the pCO2 system and GF/F filtered. The filters were stored in the dark at −20°C, and their analysis was carried out 3 months later after adding 8 ml of 90% acetone. The extracts were read in a Beckman DU 650 spectrophotometer. Calculations of pigment concentrations were done according to Strickland and Parsons [1972].
[13] To compute CO2 fluxes, the effect of two variables must be taken into account: the wind and the CO2 solubility in seawater (kS). The coefficient of gas transfer velocity (kW) is obtained as a cubic function of the wind, according to Wanninkhof and McGillis [1999] (hereinafter referred to as WMG99). The CO2 fluxes were estimated as follows:

F = kW kS ΔpCO2,

where the solubility was computed according to Copin-Montégut [1996]. We also calculated the CO2 fluxes using kW based on a quadratic expression by Wanninkhof [1992] (hereinafter referred to as W92).
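The bulk flux calculation above can be illustrated with the following sketch (Python). It is only an illustration of the formula F = kW kS ΔpCO2 for a single observation: the numerical coefficients used for the W92 and WMG99 transfer velocities are the commonly quoted values and should be checked against the original papers, and the solubility in the example is a nominal figure rather than the Copin-Montégut [1996] computation used in this study.

```python
def gas_transfer_velocity(u10, scheme="WMG99", schmidt=660.0):
    """Gas transfer velocity k_W in cm hr^-1 for 10 m wind speed u10 (m s^-1).

    WMG99: cubic wind dependence, k660 = 0.0283 * u10**3 (commonly quoted form)
    W92  : quadratic wind dependence, k660 = 0.31 * u10**2
    Both are referenced to a Schmidt number of 660 and rescaled by (Sc/660)^-0.5.
    """
    if scheme == "WMG99":
        k660 = 0.0283 * u10 ** 3
    elif scheme == "W92":
        k660 = 0.31 * u10 ** 2
    else:
        raise ValueError("unknown parameterization")
    return k660 * (schmidt / 660.0) ** -0.5

def co2_flux(delta_pco2_uatm, u10, solubility_mol_l_atm=0.045, scheme="WMG99"):
    """Air-sea CO2 flux in mmol m^-2 d^-1 (negative = uptake by the ocean).

    F = k_W * k_S * DpCO2, with k_W in cm hr^-1, k_S in mol L^-1 atm^-1,
    and DpCO2 in microatmospheres; unit factors convert to mmol m^-2 d^-1.
    """
    k_w_m_d = gas_transfer_velocity(u10, scheme) * 24.0 / 100.0      # cm hr^-1 -> m d^-1
    k_s_mol_m3_uatm = solubility_mol_l_atm * 1e3 * 1e-6              # mol m^-3 uatm^-1
    return k_w_m_d * k_s_mol_m3_uatm * delta_pco2_uatm * 1e3         # mol -> mmol

# Example: DpCO2 = -110 uatm (a strong sink) under a 7 m s^-1 wind
print(co2_flux(-110.0, 7.0))
```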
[14] Since only seven of the transects had in situ wind data, wind speed for the remaining transects was obtained from the QuikScat winds. The QuikScat wind data are available twice a day, with a resolution of 0.25° latitude × 0.25° longitude. The grid nodes recovered are those closest to the in situ observations. Cubic spline interpolation was performed to obtain wind data every 2 hours for each grid node. After interpolation, satellite wind data closest in time to the in situ data were selected. These data are located on the grid nodes over the ship transects. To fill spaces between two consecutive grid nodes over a transect, satellite data were linearly interpolated in space. For three transects where in situ wind data were available (transects 6, 7, and 9, see Table 1), the standard deviation of the QuikScat data versus ship wind data was in a range of 1.3 to 2.2 m s−1. The propagation of this error on the CO2 flux estimates leads to relative errors ranging from 20 to 50%.
Fronts Characterization
[15] The vertical sections presented in this work are based on stations from transects of the project Pesquerı ´a [Villanueva, 1968].The main characteristics of the studied fronts are presented in Table 2. Four transects were selected to characterize tidal fronts (Figure 2a): SMG, VP, CB, and GB.Though salinity was used to characterize the water masses, in summer and in a synoptic scale, the vertical sections show that temperature dominates the density field since both isotherms and isopycnals describe similar patterns (Figure 3).The density sections reveal the transition between well-mixed near-shore waters and midshelf stratified waters.The low-salinity tongue described in section 1.1 is evident in every transect but only appears to have a significant effect on the density structure in the near coastal region in the CB and GB section.
[16] Horizontal scales of the tidal fronts are not well defined because of the relatively large distance between stations. Analysis of thermal fronts from satellite SST data (NOAA-AVHRR Global Area Coverage, resolution 4.5 km × 4.5 km) indicates spatial scales of 36 km in the VP and SMG fronts and 12 km to 32 km in the CB front [Bava et al., 2002].
[17] At SMG, VP, and GB the quasi-homogeneous waters on the western side of the front are warmer than the midshelf stratified region.This is due to the resolution of the available transects, where the distance between stations is larger than the one appropriate to observe the frontal scale.Furthermore, the positive thermal anomalies of SMG can mask the cooler waters at the front of SMG.The first station of the VP transect is at the mouth of Nuevo Gulf where temperature warmer than offshore has been observed [Rivas and Ripa, 1989;Rivas and Beier, 1990].
[18] At SMG, coastal temperature and salinity (Table 2) are related to the circulation of the SMG which results in an outflow of warm and saline waters [Piola and Scasso, 1988].In this vertical section (Figure 3), at about 340 km from the coast, near bottom high salinity (>33.8) and low temperature (<6°C) suggest mixing with Malvinas waters.Stratification at GB (Figure 3) is weaker than in other transects, throughout the entire section.This is due to the lower mixed layer temperature at higher latitudes.Sabatini et al. [2000], using recent hydrographic data from southern Patagonia, pointed out that south of 51°S the density structure of the water column is nearly homogeneous.Less stratified waters are apparent close to the easternmost station, which presents higher salinity and lower temperature than farther offshore, probably due to the influence of quasi-homogeneous waters of southern origin.
[21] 2. The midshelf is where the thermocline is well developed and stratification reaches the highest values of the shelf (Table 2).The more stratified waters are located in the shelf area between 75 and 100 m isobaths.This region can be represented by two layers, with a $20 m mixed layer separated from the near bottom layer by a strong thermocline.
[22] 3. The outer shelf is where stratification decays in the proximity of the shelf break.
Simpson Parameter
[23] To locate fronts on the continental shelf, the Simpson parameter Φ was computed using all available summer data. Since tidal fronts are boundaries between quasi-homogeneous and well-stratified waters, they are characterized by intermediate Simpson parameter values. After inspection of several vertical sections, a value of Φ = 50 J m−3 was chosen to estimate the mean frontal position. Similar values (Φ = 40 J m−3) have been used by Sabatini et al. [2000, 2004]. The Simpson parameter horizontal distribution in the shelf is presented in Figure 4.
[24] In the inner San Jorge Gulf, waters are stratified (values higher than 100 J m−3) and present relatively high surface salinity and temperature (up to 33.6 and 17°C, respectively) compared to the midshelf. This can be observed by the position of the Φ = 50 J m−3 contour that intersects the coast at the northern tip of San Jorge Gulf and reappears in the area of CB. Between 51°S and 54°S, the area of well-mixed waters remarkably broadens, reaching the 150 m isobath at about 200 km from the coast (Figure 4) [see also Sabatini et al., 2004].
Air-Sea CO 2 Fluxes
[25] The frontal scale (about 30 km) [Bava et al., 2002] is not always well resolved by the transects because sometimes the ship track runs closely along the fronts. However, sharp changes in temperature and ΔpCO2 across the tidal fronts are observed, i.e., close to VP and CB (Figure 5), in most midshelf transects. At VP, temperature rises from 14.7°C to 15.6°C, ΔpCO2 drops from 60 to −90 μatm, while the fluorescence increases from 0.4 to 1.8 arbitrary units (a.u.), from the cold to the warm side of the front. Across the front at CB, temperature increases from 10.8°C to 13.4°C, ΔpCO2 varies from 0 to −120 μatm, and the fluorescence rises from 1.9 to 2.8 a.u. The cross-front temperature increase is much larger at CB, while the ΔpCO2 is slightly negative at the unstratified side of the front, and the ΔpCO2 and fluorescence changes are larger at VP. Cold waters are associated with a well-mixed water column while the warmer waters are stratified. The high fluorescence in the stratified side of the front is probably linked to increased biological activity, which leads to the very low ΔpCO2 (see also the Chl a distribution in Figure 7).
[26] The ΔpCO2 horizontal distribution in the study region is presented in Figure 6a. The areal mean ΔpCO2 corresponding to the Patagonian shelf is −24 μatm and increases in magnitude to approximately −29 μatm if the shelf break front is included. It can be observed that most of the continental shelf is a strong CO2 sink area, reaching ΔpCO2 values of −110 μatm (i.e., off San Jorge Gulf), among the strongest negative values in the global ocean [Takahashi et al., 2002]. Historical nutrient data (not shown) present a relative maximum off San Jorge Gulf. Moreover, the fluorescence distribution pattern presents a maximum (Figure 5) at the stratified side of the front, probably due to a rapid uptake of nutrients in bloom events, which could cause the very low ocean pCO2 values.
[27] The horizontal surface distribution of the sea-air CO2 fluxes (Figure 6b) shows that while the coastal region acts as a source of CO2 to the atmosphere, the midshelf and outer shelf are sinks, reaching fluxes of −55 mmol of CO2 m−2 d−1. The latter fluxes correspond to values of ΔpCO2 as low as −110 μatm. Only one cruise carried out in 2004 sampled the shelf break region, where the highest CO2 fluxes into the ocean were recorded (−65 mmol of CO2 m−2 d−1). On the other hand, positive ΔpCO2 with peaks that exceed 80 μatm, leading to fluxes >40 mmol m−2 d−1, are found northeast of the SMG front and east of the GB front. The area-averaged (~800,000 km²) CO2 flux over the continental shelf is −4.4 mmol m−2 d−1 and increases in magnitude to −5.7 mmol m−2 d−1 if the shelf break front is included. The transition between positive and negative CO2 fluxes closely follows the critical Simpson parameter isoline (50 J m−3), suggesting a link between changes in vertical stratification and the regional dynamics of CO2 fluxes. Although the Simpson parameter and ΔpCO2 are not based on the same data sets, throughout the region the climatological values are negatively correlated (r² = 0.38) at a 99% confidence level. In addition, an increase in biological activity at the stratified side of the front can reinforce the CO2 sink into the ocean. The dissolved oxygen was sampled with a low spatial resolution, and it is therefore not shown. The surface distribution presents supersaturation (110%) in the midshelf and slight undersaturation (95%) in the tidal front coastal regions, further suggesting interaction of biological activity and the physical structure of tidal fronts.
[28] To analyze the influence of biology on the air-sea CO2 fluxes, the Chl a horizontal distribution obtained from the discrete samples along the transects is shown in Figure 7. Although these samples have lower spatial resolution than the underway data, variations of Chl a across the above described fronts can be observed. At VP, Chl a values increased from 0.7 mg m−3 at the homogeneous side of the front to >2 mg m−3 at the stratified side. A remarkable variation is evident at CB, where maximum Chl a values are usually present, averaging close to 3 mg m−3 at the stratified side of the front, contrasting with much lower values in the vicinity of the well-mixed coastal waters. Similarly, high Chl a is found north of San Jorge Gulf, with maximum Chl a (>2 mg m−3), and at GB (3 mg m−3). A Chl a maximum, not associated with tidal fronts, is observed at the shelf break front from 47°S to 50°S.
Discussion
[29] Previous works on the tidal fronts in the Patagonian shelf have shown that the abundance of certain plankton species, mainly diatoms [Balech, 1964, 1965] and dinoflagellates [Carreto et al., 1981, 1986], is related with stratification. These studies have focused on relatively small frontal areas. The present work further discusses the physical structure of the tidal fronts and the contrasting environmental features of coastal and outer shelf waters on a regional scale.
[30] The CO 2 data set presented here is the first available in the Patagonian continental shelf.In the coastal region the location of tidal fronts closely matches the transition between positive and negative DpCO 2 regions.The DpCO 2 transition across the front can be a result of physical, biological, and combined physical-biological processes.Physical processes include the dissipation of the tidal energy, by means of bottom friction, which vertically mixes the water column leading to enhanced mixing and homogenization of CO 2 .Because the CO 2 concentration generally increases with depth because of demineralization of dissolved organic matter [Broecker and Peng, 1982], the surface CO 2 will increase in the mixed side of the tidal front.In addition, since convection governs the vertical displacements of phytoplankton, intense vertical mixing may remove planktonic algae from the euphotic layer, inhibiting their growth and biological consumption of CO 2 .Carreto et al. [1986] reported the presence of typically benthic diatom species and resting cysts of Gonyaulax excavata in the near-coastal zone off VP that are absent in the stratified side of the front and other tidal fronts (i.e., CB, J. I. Carreto, personal communication, 2003).Biological processes would include the addition of CO 2 to the water column by means of the respiration of benthic organisms in the onshore side of the tidal front.The source of the organic matter respired by these organisms, like in other coastal heterotrophic environments [Duarte and Agustı ´, 1998], must be at least in part allochtonous, from land-derived organic carbon sources.In addition, cross-front exchanges are also a possible source of particulated and dissolved organic matter to the coastal region.Note that the offshore upper layers are isopycnally connected to the inshore lower layers providing a possible pathway for tracer exchange (see Figure 3).
[31] During summer and fall, east of the tidal fronts the midshelf region is a strong CO2 sink (−4.4 mmol m−2 d−1). The shelf break increases the mean absorption rate to −5.7 mmol m−2 d−1, but only one cruise sampled this area in late austral summer 2004. The mean is virtually identical when the CO2 fluxes are calculated with the quadratic parameterization from W92: −4.6 mmol m−2 d−1, and −5.7 mmol m−2 d−1 if the shelf break region is included. Similar results have been obtained by Hales et al. [2005] off the Oregon coast in summertime. They found that surface water over most of the shelf was a strong sink for atmospheric CO2, while a near-shore strip was an intense source. Similarly, Thomas et al. [2004] found that during summer in the North Sea the ΔpCO2 distribution shows clear differences between stratified (northern) and nonstratified (southern) regions. As in the present work, the stratified area in the North Sea is strongly CO2 undersaturated (up to −150 ppm), while the homogeneous area is supersaturated (up to 100 ppm). Though the data reveal that overall the midshelf region is a CO2 sink (approximately −10 mmol m−2 d−1), the magnitude of the CO2 flux increases significantly (less than −30 mmol m−2 d−1) at the Chl a maxima (see Figure 6b), suggesting a modulation of the CO2 fluxes by biological activity.
[32] The CO2 fluxes estimated at GB are −45 mmol m−2 d−1 using the W92 parameterization and −35 mmol m−2 d−1 using WMG99. At the shelf break the flux difference is the largest: −50 mmol m−2 d−1 using W92 and −65 mmol m−2 d−1 using WMG99. However, typically CO2 flux differences between estimates based on the two gas transfer velocity parameterizations are within ±5 mmol m−2 d−1.
[33] Extrapolation of the overall shelf air-sea CO2 flux to a whole year is not straightforward. Stratification in winter is removed by heat lost to the atmosphere and wind mixing [Rivas and Piola, 2002], and the incoming solar radiation is weaker than in summer and fall. Both effects must decrease photosynthesis and primary production. In the inner shelf the mean air pCO2 difference between summer and winter is less than 1 μatm, which is the standard error of measurement. On the other hand, cooler surface water increases the solubility, and this will enhance sequestration of CO2 from the atmosphere. The winter SST decrease over the Patagonian shelf is >3°C [Bianchi et al., 1982]. At typical Patagonian shelf temperatures a decrease of 1°C leads to a seawater pCO2 decrease of more than 10 μatm, leading to enhanced sequestration in winter. Furthermore, since the time for equilibration of the CO2 system with the atmosphere is long (~1 year) compared to the time for thermal equilibration of the mixed layer (a few months), the thermal CO2 signal may remain measurable even after the thermal forcing has ceased [Watson and Orr, 2002]. Consequently, opposed effects will affect the pCO2 over the shelf, but the lack of observations during winter precludes the extrapolation of yearly CO2 fluxes.
[34] The largest CO2 source to the atmosphere (60 mmol m−2 d−1) is observed inshore of the GB front. In this region the coastal surface cyclonic circulation may enhance primary production by favoring nutrient trapping [Sabatini et al., 2004]. However, the input of deeper CO2-rich waters to the sea surface will also increase the pCO2. Thereby, the food availability leads to a "hot spot" of accumulation of zooplankton biomass (with peaks of up to 3500 mg m−3, wet weight) observed from 1994 to 2000 during summer [Sabatini et al., 2004]. The respiration of zooplankton and the increase of CO2 by convection in this area would explain the maximum CO2 flux to the atmosphere (Figure 6b).
[35] Studies carried out in the East China Sea have shown that the CO 2 uptake, and export to subsurface layers in the open ocean, may strongly enhance the absorption of CO 2 in continental shelves [Tsunogai et al., 1999].Because throughout the year the Patagonian shelf is occupied by waters less dense that the neighboring Malvinas Current, the so-called continental shelf pump cannot work in this region.The sense of the cross-shelf circulation in the Patagonian shelf is likely opposed to that of the East China Sea, with export of high CO 2 concentration shelf waters in the upper layer and input of open ocean waters in a subsurface layer.Such circulation scheme is supported by high-resolution numerical simulations [Palma et al., 2004].
[36] Phytoplankton blooms generally occur in relatively small spatial scales and are short-lived.As the Argau data were collected in relatively short time intervals (typically 3 days) in the period from 2000 to 2004, it may be argued that they may not adequately display the mean summer-fall situation.However, the historical hydrographic data collected during several decades reveal changes in vertical stratification that closely match the transitions between positive and negative CO 2 fluxes and are also associated with sharp changes in fluorescence and Chl a as observed during the Argau experiment.This observation suggests that the Argau data are representative of the mean summer-fall conditions.Reports of high satellite-derived Chl a at VP, CB, and GB [Acha et al., 2004], where we observed the largest atmospheric CO 2 sinks, provide independent evidence indicating that tidal fronts play a significant role in shaping the sea-air CO 2 fluxes.Though the satellite imagery presents significant interannual variability, during the summer, relatively high-Chl a regions are always evident at the location of tidal fronts (S.I. Romero, personal communication, 2005).Thus the summer and fall CO 2 flux structure associated with spatial changes in vertical stratification appears to be a semipermanent feature of the Patagonian shelf.
Figure 2. (a) Historical summer hydrographic data. The locations of the four transects used for the characterization of the fronts are shown: San Matías Gulf (SMG), Valdés Peninsula (VP), Cape Blanco (CB), and Grande Bay (GB). (b) Tracks performed by Icebreaker Almirante Irizar during ARGAU cruises from 2000 to 2004.
Figure 4. Simpson parameter (J m−3) computed from summer hydrographic data. The bold line is the critical Simpson parameter (Φ_C = 50 J m−3), shaded contours correspond to values < Φ_C, and the contour interval for values > Φ_C is 50 J m−3. Dots show the four transects used to describe the tidal fronts.
Figure 5. ΔpCO2, SST, and fluorescence versus latitude for (a-c) Cape Blanco and (d-f) Valdés Peninsula fronts. Cold (well-mixed) and warm (stratified) regions are identified by C and W, respectively. The shaded area corresponds to the front (F), identified as the region between Simpson parameter values of 40 and 60 J m−3. The inset shows the transects used to build the latitudinal series for Cape Blanco (25-26 January 2001) and Valdés Peninsula (10 April 2002).
Figure 6. (a) Surface distribution of ΔpCO2 (μatm). The bold line is the ΔpCO2 zero value (contours every 30 μatm). Positive and negative values are east and west of the bold line, respectively. (b) Surface distribution of air-sea CO2 fluxes. The critical Simpson parameter is represented by the dashed bold line.
Figure 7. Surface distribution of Chl a (mg m−3). Contour lines are every 0.5 mg m−3. Note high values (>2 mg m−3) offshore Valdés Peninsula, north and south of San Jorge Gulf and Grande Bay, related to the stratified side of tidal fronts.
Table 1. Details of the ARGAU Transects Used. Cruises were carried out aboard Icebreaker Almirante Irizar.
Table 2. Thermohaline Characteristics of the San Matías Gulf. Δσt values are the differences between surface and bottom data.
|
v3-fos-license
|
2019-03-18T14:03:48.442Z
|
2018-05-01T00:00:00.000
|
81345818
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBYSA",
"oa_status": "GOLD",
"oa_url": "http://jurnal.unsyiah.ac.id/IJTVBR/article/download/11432/9081",
"pdf_hash": "8bd686b0dc2bd5e7e370cdf6a7e74b7d2a28ca6b",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46620",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "31a97fbc244e85d8757ddfed27634de03c971d5f",
"year": 2018
}
|
pes2o/s2orc
|
Mammary Gland Tumor In Cat And Therapeutic Approach: A Case Report
This report describes a case of mammary gland tumor in a 1-year-old female cat managed with a mastectomy approach. The tumor was located on the dexter (right) side of the mammae. Blood analysis showed that the patient was in good condition and ready for surgery. Mastectomy was conducted to remove the mass; the tumor measured 7x5x4 cm and had a solid appearance. The cat was given antibiotics to prevent infection, and the wound dried within five days.
Background
A tumor, or neoplasm, is one of the most pressing health issues in humans and animals and needs to be handled appropriately. The causes of this disease are varied and complex, which makes it difficult to manage. It does not show any clinical symptoms in the early stage and therefore requires regular check-ups (Soedjono, 2009). Generally, mammary gland tumors are treated with chemotherapy to avoid an invasive approach. Unfortunately, chemotherapy alone or in combination with herbal medicine still cannot terminate tumor cell growth. Therefore, mastectomy is so far the best treatment for mammary gland tumors.
Feline mammary gland tumors have been recorded as the most frequent neoplasms after haemopoietic and skin tumors. As many as 80 percent of mammary gland tumors become adenocarcinomas, which can further metastasize to the lungs, lymphoid tissue, and liver (Hughes and Dobson, 2012). This kind of tumor is primarily influenced by age, nutritional status, inbreeding behavior, obesity, lack of vaccination, and administration of certain medicines. Polton (2009) reported other contributing factors, such as genetics, blood hormone levels, virus infection, UV light, carcinogenic substances, and the environment.
In cats, mammary gland tumors are more aggressive than in dogs: 80% to 90% are likely to be malignant, with the majority of these tumors being adenocarcinomas. The cells are closely related to progesterone receptors, and if cats are treated with progestational drugs, the cells may develop into mammary carcinoma. Tumor size determines the prognosis: the smaller the tumor, the longer the disease-free intervals and survival times (Ehrhart, 2008; Morris, 2013). Yulestari et al. (2014) reported the malignancy level of histopathological lesions in canine mammary gland tumors in Bali, examining 22 histopathological samples of canine mammary gland tumor. However, reported cases of mammary tumors in cats are very limited in Indonesia. In March 2017, the animal hospital of Syiah Kuala University in Banda Aceh received a suspected case of mammary gland tumor in a thirteen-month-old female cat that had previously given birth. In the Syiah Kuala Animal Hospital (SKAH), the rise of this neoplastic disease requires continuous attention from the oncology department. This report is aimed at translating a field case into relevant scientific information that may be used as a basis for experimental studies (Salas et al. 2015).
Results and Discussion
A one-year-old female Persian cat was examined at the Syiah Kuala University Animal Hospital in March 2017. The cat was brought to the hospital with a mammary gland tumor, and haematology analysis was carried out to assess the cat's health condition.
The signalment and medical history of the cat were evaluated. A physical examination was performed, supported by laboratory tests including a complete blood count (CBC). Regional lymph node palpation and ultrasonography of the abdominal area were conducted. In order to remove the tumor, a mastectomy procedure was applied. Before surgery, the cat was given atropine sulfate as premedication at a dosage of 0.04 mg/kg body weight subcutaneously.
Enlarged nodules were found on the dexter side of the mammary gland, with anatomical and pathological changes on both caudal glands. The cat was therefore diagnosed with a mammary gland tumor, and mastectomy was necessary to remove the tumor tissue. The haematology results are provided in the Table. Ten minutes after premedication, anaesthesia (zoletil) was administered intramuscularly at a dosage of 10-15 mg/kg body weight. During the anaesthesia stage, heart rate and respiration were monitored every 5 minutes. The tumor is shown in Figure 1.
Figure 1. Mammary gland tumor in feline
An incision was made on the caudal side of the mammary gland, and the blood vessels were ligated using bipolar cautery. The area around the tumor was cleaned of fatty tissue, and the tumor was removed using forceps and Metzenbaum scissors. The tumor measured 7x5x4 cm and had a solid appearance. A combination of penicillin and streptomycin was sprayed around the incision area, and the wound was closed with a subcuticular suture using chromic catgut size 2/0 metric. Simple interrupted stitches of silk 2/0 metric were used to close the skin. On the outside of the wound, iodine tincture 3% and gentamicin 0.1% were applied to minimize secondary infection. The surgery process is shown in Figure 2.
Figure 2. The removal process of the mammary gland tumor in a feline.
After surgery, an Elizabethan collar was fitted to prevent scratching and biting. Medications, namely meloxicam, clindamycin, vitamin C, and Calnex, were prescribed. Three days after surgery the wound had not dried completely and was still swollen; five days later, the wound had dried and recovered well, and the scar had almost entirely faded (Figure 3). Seixas et al. (2011), Munson and Moresco (2007), Misdorp (2002), and Mayr et al. (1990) stated that mammary tumors in cats have several causal agents, the most common type being tubulopapillary carcinoma. The incidence of this disease increases with age, and most cases occur in animals over 8 years old, as with other malignant mammary gland tumors. Mammary gland carcinoma in cats tends to be aggressive and locally invasive and in many cases metastasizes to other organs (Avci and Toplu, 2012). In this case, further observation should be conducted to find out whether the tumor metastasized to other tissues.
Figure 3. The recovery process after surgery.
In recent years, cases of mammary and other tissue sarcomas and/or carcinomas in cats and dogs have increased significantly, probably in association with exogenous and endogenous factors such as age, environmental pollution, viruses, helminths, and other carcinogenic substances (Misdorp 2002, Munson and Moresco 2007). However, in this case, the cause of the tumor was unknown, possibly related to environmental pollution.
|
v3-fos-license
|
2018-12-06T03:59:03.596Z
|
2017-01-01T00:00:00.000
|
54895559
|
{
"extfieldsofstudy": [
"Mathematics"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "http://downloads.hindawi.com/journals/ijde/2017/2464759.pdf",
"pdf_hash": "a70847d5553926bce2e876b619e9a0271e5919a3",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46623",
"s2fieldsofstudy": [
"Mathematics"
],
"sha1": "a70847d5553926bce2e876b619e9a0271e5919a3",
"year": 2017
}
|
pes2o/s2orc
|
A Family of Boundary Value Methods for Systems of Second-Order Boundary Value Problems
A family of boundary value methods (BVMs) with continuous coefficients is derived and used to obtain methods which are applied via the block unification approach. The methods obtained from these continuous BVMs are weighted the same and are used to simultaneously generate approximations to the exact solution of systems of second-order boundary value problems (BVPs) on the entire interval of integration. The convergence of the methods is analyzed. Numerical experiments were performed to show efficiency and accuracy advantages.
Introduction
In what follows, we consider the general system of second-order boundary value problems y″ = f(x, y, y′), x ∈ [a, b], with boundary conditions prescribed at x = a and x = b, where f : R × R^{2m} → R^m is a continuous function and m is the dimension of the system. These second-order boundary value problems are encountered in several areas of engineering and applied sciences such as celestial mechanics, circuit theory, astrophysics, chemical kinetics, and biology. Most of these problems cannot be solved analytically, thus the need for a numerical approach. In practice, (1) is solved by the multiple shooting technique and the finite difference methods. The construction and implementation of higher order methods for the latter approach are difficult, while the former approach suffers from numerical instability if the BVP is stiff [1-3] and singularly perturbed.
In the past few decades, the boundary value methods (BVMs) have been used to solve first-order initial and boundary value problems [4][5][6][7][8].Their stability and convergence properties have been fully discussed in [5].These BVMs are also used to solve higher order initial and boundary value problems by first reducing the higher order differential equations into an equivalent first-order system.This approach increases the computational costs and time and also does not utilize additional information associated with specific differential equations such as the oscillatory nature of some solutions [9,10].
Lambert and Watson [11] have derived symmetric schemes for periodic initial value problems of the special second-order equation y″ = f(x, y). Brugnano and Trigiante [4-6] have also derived BVMs for first-order initial and boundary value problems. Amodio and Iavernaro [12] used BVMs to solve the special second-order problem y″ = f(x, y). Biala, Biala and Jator, and Jator and Li [13-15] applied the BVMs to solve the general second-order problem y″ = f(x, y, y′), and Aceto et al. [16] constructed symmetric linear multistep methods (LMMs) which were used as BVMs for the special second-order problem y″ = f(x, y). In this paper, we have derived a class of BVMs and given a general framework, via the block unification approach, on how to use the BVMs on systems of BVPs for general second-order ordinary differential equations (ODEs).
The boundary value technique simultaneously generates an approximate solution (y_1, y_2, . . ., y_{N−1}) to the exact solution (y(x_1), y(x_2), . . ., y(x_{N−1})) of (1) on the entire interval of integration. The BVMs can only be successfully implemented if used together with appropriate additional methods [5]. In this regard, we have proposed methods which are obtained from the same continuous scheme and are derived via the interpolation and collocation approach [15, 17-19].
The paper is organised as follows.In Section 2, we derive a continuous approximation () of the exact solution ().Section 3 gives the specification of the methods.The convergence of the methods is discussed in Section 4. The use and implementation of the methods on ODEs and partial differential equations (PDEs) are detailed in Section 5. Numerical tests and concluding remarks are given in Sections 6 and 7, respectively.
Derivation of Methods
In this section, we shall use the interpolation and collocation approach [17] to construct a 2ν-step continuous LMM (CLMM) which will be used to produce the main and additional formulas for solving (1).
Our starting point is to construct the CLMM, which has the form

y(x) = α_0(x) y_n + α_ν(x) y_{n+ν} + h² Σ_{j=0}^{2ν} β_j(x) f_{n+j},   (2)

where α_0(x), α_ν(x), and β_j(x) are continuous coefficients and ν is chosen to be half the step number so that each formula derived from (2) satisfies the root condition. The main and additional methods are then obtained by evaluating (2) at x_{n+i} (i = 1(1)2ν, i ≠ ν) to obtain formulas of the form (3), while additional derivative formulas of the form (4) are obtained from the first derivative of (2). Next, we discuss the construction of (2) in the theorem that follows.
Using Cramer's rule, the elements of the coefficient vector are determined and given as ratios of determinants, where the ith numerator determinant is obtained by replacing the ith column of the coefficient matrix by the right-hand-side vector.
We rewrite (12) using the newly found elements, as in (6); that is,
Specification of Methods
In this section, we specify the family of methods by evaluating the CLMM (2) at x_{n+j}, j = 1, ..., ν − 1, ν + 1, ..., 2ν, which is also used to obtain the derivative formula; imposing the appropriate conditions produces derivative formulas of the form (4).
BVMs of Orders 4, 6, and 8
For ν = 1, the BVM of order 4 is given as follows (where we have denoted a BVM with step number k as BVMk), together with the corresponding derivative formulas. For ν = 2, we obtain the BVM of order 6, with its derivative formulas. For ν = 3, we obtain the BVM of order 8, with its derivatives.
We introduce the matrices such that systems (24) and (4) can be written in the form (26), and the exact form of the system is (27), where the lower-right block is an identity matrix, so that the norm of its inverse equals 1. Thus, to obtain an estimate for the norm of the inverse of the full matrix, it suffices to show the existence of the inverse of the upper-left block. Now, we define the diagonal part of this block, so that the bound required below holds. Proof.
Theorem 4. Let an approximation of the solution vector for the system be obtained on a partition of the interval of integration. Proof. Subtracting (27) from (26), we obtain the error equation. Under the conditions of Lemma 3, the relevant inverse exists and is nonnegative. Therefore, the error estimate follows, provided h² times the product of the corresponding norms is less than 1. Hence the result.
Use of Methods
In this section, we discuss the use of the methods in (16) and (17) for n = 0(2ν)(N − 2ν), where N is a multiple of 2ν.
We emphasize that the methods in (16) and (17) are all main methods, since they are weighted the same and their use leads to a single matrix equation which can be solved for the unknowns. For example, for BVM6, we make use of each of the methods above in steps of 6; that is, n = 0, 6, ..., N − 6. This results in a system of 2N equations in 2N unknowns which can be easily solved for the unknowns. Below is an algorithm for the use of the methods. The methods are implemented as BVMs by efficiently using the following steps.
Step 1. Use the methods in (16) and (17) to form the unified block system.
Step 2. The unified block system from Step 1 results in a system of 2N equations in 2N unknowns which can be easily solved.
Step 3. The values of the solution and the first derivatives of (1) are generated by the sequences {y_n} and {y'_n}, n = 0, ..., N, obtained as the solution in Step 2.
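To make the block-unification idea concrete, the following Python sketch stacks all unknowns of a scalar problem y'' = f(x, y, y') into one vector and solves the resulting system at once with a root finder, in the spirit of the FindRoot-based implementation mentioned below. Since the BVM coefficients themselves are not reproduced in this excerpt, simple second-order central differences stand in for them, and the test problem y'' = y with y(0) = 0, y(1) = sinh(1) is chosen purely for illustration.

import numpy as np
from scipy.optimize import fsolve

def solve_bvp_block(f, a, b, ya, yb, N):
    """Solve y'' = f(x, y, y') on [a, b] by stacking all interior unknowns
    into one vector and solving the whole block system at once.
    Central differences stand in for the (omitted) BVM coefficients."""
    x = np.linspace(a, b, N + 1)
    h = x[1] - x[0]

    def residual(u):
        y = np.concatenate(([ya], u, [yb]))        # enforce the boundary values
        r = np.empty(N - 1)
        for i in range(1, N):
            yp = (y[i + 1] - y[i - 1]) / (2 * h)   # central first derivative
            r[i - 1] = (y[i - 1] - 2 * y[i] + y[i + 1]) / h**2 - f(x[i], y[i], yp)
        return r

    u0 = np.linspace(ya, yb, N + 1)[1:-1]          # linear initial guess
    u = fsolve(residual, u0)
    return x, np.concatenate(([ya], u, [yb]))

# illustrative test problem: y'' = y, exact solution y = sinh(x)
x, y = solve_bvp_block(lambda x, y, yp: y, 0.0, 1.0, 0.0, np.sinh(1.0), N=40)
print(np.max(np.abs(y - np.sinh(x))))              # maximum error, O(h^2) here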
We note that all computations were carried out in Mathematica 10.0 enhanced by the feature FindRoot[ ].
Numerical Examples
In this section, we consider seven numerical examples. Examples 1 to 5 were solved using the BVMs with k = 4, k = 6, and k = 8 (derived in this paper), of orders 6, 8, and 10, respectively. Also, these examples were solved using the Extended Trapezoidal Methods of the second kind (ETRs) and the Top Order Methods (TOMs) given in [5], of orders 6 and 10, respectively. Comparisons are made between the BVM with k = 4 and the ETRs [5], as well as between the BVM with k = 8 and the TOMs [5], by obtaining the maximum errors in the interval of integration. We also compared our methods with the Sinc-Collocation method [20]. Examples 6 and 7 were solved using the BVMs of order 6. We note that the number of function evaluations (NFEs) involved in implementing the BVMs is N × 2ν in the entire range of integration. The code was based on Newton's method, which uses the feature FindRoot[ ] or NSolve[ ] for linear problems in Mathematica. The efficiency curves show the plot of the logarithm of the maximum error against the number of function evaluations for each method.
Example 1. We consider the linear system of second-order boundary value problems given in [20]. This problem was solved using the ETRs and BVM of order 6 as well as the TOMs and BVM of order 10. The maximum Euclidean norm of the absolute errors in the two solution components was obtained in the entire interval of integration. In Table 1, we compared the Sinc-Collocation method [20] with the BVM of order 8. Table 2 shows the comparison between the ETRs, BVM4, TOMs, and BVM8. While the results of these methods are of comparable accuracy, we emphasize that the TOMs and ETRs use 20 function evaluations per step while the BVM4 and BVM8 use 8 and 16 function evaluations for this system. Hence, the BVMs are quite accurate and efficient. We also calculated the Rate of Convergence (ROC) using the formula log2(E_2h / E_h), where E_h is the error obtained using step size h. The ROC of the BVM4 and ETRs shows that these methods are consistent with the theoretical order (order 6) behavior of the methods. We omit the ROC of the TOMs and BVM8 because their errors are mainly due to round-off errors rather than to truncation errors. Figure 1 also shows the efficiency curves of these methods.
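As a quick illustration of the Rate of Convergence formula used above, the following Python lines compute ROC = log2(E_2h / E_h) from maximum errors obtained at two step sizes; the error values are invented purely for demonstration.

import math

def rate_of_convergence(err_2h, err_h):
    """ROC = log2(E_{2h} / E_h): halving h should divide the error by 2**p
    for a method of order p."""
    return math.log2(err_2h / err_h)

# hypothetical maximum errors at step sizes 2h and h for an order-6 method
print(rate_of_convergence(3.2e-7, 5.0e-9))   # close to 6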
Example 2. Consider the nonlinear BVP given in [22]
where 2 () = 3 (1 − ) The maximum Euclidean norm of the absolute errors in 1 and 2 was obtained in the range of integration.Table 3 shows the comparison between the ETRs, BVM4, TOMs, and BVM8.While the results of these methods are of approximate accuracy, we emphasize that the TOMs and ETRs use 20 function evaluations per step while the BVM4 and BVM8 use 8 and 16 function evaluations for this system.We also calculated the ROC of the BVM4 and ETRs which shows that these methods are consistent with the theoretical order (order 6) behavior of the methods.We do not calculate the ROC of the TOMs and BVM8 because their errors are mainly due to round-off errors rather than to truncation errors.Figure 2 shows the efficiency curves of these methods.
Example 3. We consider the nonlinear BVP with mixed boundary conditions given in [23] This problem was chosen to demonstrate the performance of the BVMs on nonlinear BVPs with mixed boundary conditions.The maximum absolute errors were obtained in the range of integration.Table 4 shows the comparison between the ETRs, BVM4, TOMs, and BVM8. Figure 3 shows the efficiency curves of these methods.
Example 4. Consider the second-order BVP given in [24] (bvpT17). In order to assess the efficiency of our methods, we solve the boundary layer problem given in [24] (bvpT17). The maximum absolute errors were obtained in the range of integration. Tables 5 and 6 show the comparison between the ETRs, BVM4, TOMs, and BVM8 with ε = 1 and 0.1, respectively. Figure 4 shows the plot of the solution for values of ε = 1, 0.1, 0.01, 0.001; the solution has a boundary layer at x = 0. Example 5. Consider the second-order BVP given in [24] (bvpT20) (45). Also, the efficiency of the scheme is shown by solving the problem given in [24] (bvpT20). The maximum absolute errors were obtained in the range of integration. Tables 7 and 8 show the comparison between the ETRs, BVM4, TOMs, and BVM8 with ε = 1 and 0.1, respectively. The next example shows the performance of the BVMs on the Poisson equation. In order to solve the equation using the BVMs, we carry out the semidiscretization of the spatial variable using the second-order finite difference method to obtain a second-order system in the second variable t: u'' = Au + g, where Δx = (b − a)/M, x_i = a + iΔx, i = 0, 1, ..., M, u = [u_1(t), ..., u_{M−1}(t)]^T, g = [g_1(t), ..., g_{M−1}(t)]^T, u_i(t) ≈ u(x_i, t), and g_i(t) ≈ g(x_i, t), which can be written in the form u'' = f(t, u) subject to the prescribed boundary conditions, where f(t, u) = Au + g, A is an (M − 1) × (M − 1) matrix arising from the semidiscretized system, and g is a vector of constants. Table 9 shows the comparison between the BVM and the method in [25]. Figure 6 shows the plot of the exact, approximate, and error function of the problem.
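To make the semidiscretization step concrete, the Python sketch below builds the standard second-order central-difference matrix A and a forcing vector for a Poisson-type problem, so that the PDE reduces to the second-order ODE system u'' = Au + g to which the BVMs can then be applied. The interval, grid size, and forcing term are placeholders chosen only for illustration, and the boundary-value contributions to g are omitted for brevity.

import numpy as np

def semidiscretize(a, b, M, forcing):
    """Second-order central-difference semidiscretization in x:
    returns the (M-1)x(M-1) matrix A and a callable g(t) such that
    the PDE becomes the ODE system u''(t) = A u(t) + g(t)."""
    dx = (b - a) / M
    x = a + dx * np.arange(1, M)                 # interior nodes x_1 .. x_{M-1}
    main = -2.0 * np.ones(M - 1)
    off = np.ones(M - 2)
    A = (np.diag(main) + np.diag(off, 1) + np.diag(off, -1)) / dx**2
    g = lambda t: forcing(x, t)                  # contribution of the source term
    return A, g

# placeholder forcing term f(x, t) = sin(pi x) exp(-t), with M = 20 subintervals
A, g = semidiscretize(0.0, 1.0, 20, lambda x, t: np.sin(np.pi * x) * np.exp(-t))
print(A.shape, g(0.0).shape)                     # (19, 19) (19,)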
Conclusions
This paper is concerned with the solution of systems of second-order boundary value problems.This has been achieved by the construction and implementation of a family of BVMs.The methods are applied as a block unification method to obtain the solution on the entire interval of integration.We established the convergence of the methods.
We have also shown that the methods are competitive with existing methods cited in the literature.
In the future, we would like to develop a variable step size version of the BVMs with an automatic error estimation.
Figure 6: Plot of solution for the Poisson equation.
Figure 7: Plot of solution for the Sine-Gordon equation.
Table 1: Maximum errors for Example 1.
Table 2: Maximum errors for different step sizes for Example 1.
Table 3: Maximum errors for different step sizes for Example 2.
Table 4: Maximum errors for different step sizes for Example 3.
Table 5: Maximum errors for different step sizes for Example 4 for ε = 1.
Table 6: Maximum errors for different step sizes for Example 4 for ε = 0.1.
Table 7: Maximum errors for different step sizes for Example 5 for ε = 1.
Table 8: Maximum errors for different step sizes for Example 5 for ε = 0.1.
Table 9: Maximum error for the Poisson equation at t = 1.
Table 10: Maximum errors for the Sine-Gordon equation.
|
v3-fos-license
|
2019-03-12T13:06:32.250Z
|
2015-09-15T00:00:00.000
|
74268886
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "http://ijcto.org/index.php/IJCTO/article/download/ijcto.33.27/ijcto.3327pdf",
"pdf_hash": "1ae2cfb15588b6bb13c7f10fcc599e89d76fda95",
"pdf_src": "MergedPDFExtraction",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46624",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science"
],
"sha1": "cc0d5881a43ff2eefd7a7b544400d8f7bb7c714a",
"year": 2015
}
|
pes2o/s2orc
|
A survey of paediatric CT radiation doses in two selected hospitals in Kampala, Uganda: a radiation safety concern
Purpose: We describe radiation doses imparted to pediatric patients during Computed Tomography (CT) scan examinations by estimation weighted CT dose index (CTDIw) and dose length product (DLP) and compare these doses with the International dose reference values. Methods: Demographic data and acquisition parameters of 257 pediatric CT scans done using multi-slice CT (MSCT) and dual slice CT (DSCT) were collected from request forms and CT scan consoles. The values of CTDIw, CTDIvol and DLP were calculated using ImPACT (imaging performance and assessment of computed tomography) dosimetry software for Philips MX-1800 scanner and GE Hispeed Dual scanner. Data was analyzed using mean, range, 3rd quartile, as well as chi square. Results: The commonest indication was head injury with the majority patient aged 0-4 years and 10-14 years for MSCT and DSCT, respectively. There were significantly higher doses imparted by MSCT compared to DSCT on both the head CTDIw (mGy) (40 vs 22, p = 0.000), CTDIvol (mGy) (60 vs 7, p = 0.000), DLP mGy.cm (1022 vs 114, p = 0.000) and body CTDIw (mGy) (41 vs 18, p = 0.000), CTDIvol (mGy) (27 vs 6 p = 0.000) and DLP (782 vs 73 p = 0.001) respectively. Pediatric 3rd quartile values for CTDIvol (mGy) (57.7 vs 31) 0-1 year, (74.5 vs 47) 4-7 years and DLP mGy.cm (1068 vs 333) 0-1 year and (1168 vs 374) 4-6 years respectively for MSCT were higher than the recommended international values. The calculated CTDIvol for the head were significantly higher than the values displayed on the console (p = 0.000, 95% Confidence Interval) for MSCT. Conclusion: The radiation dose values for CTDIw, CTDIvol and DLP for MSCT were significantly higher than those for DSCT and other countries which raise a radiation safety concern. Studies to establish the factors responsible for these high doses are recommended.
Introduction
Optimization of radiation doses during Computed Tomography (CT) examinations in paediatrics is a major radiation protection concern. There is increased use of CT examinations in clinical practice due to its short examination times, user friendliness and superior contrast resolution. A study conducted in developing countries estimated an increase in pediatric CT scan examination in Africa when compared to Asia and Eastern Europe. 1 CT scan examinations have a higher effective dose 2 and yet children have increased sensitivity to radiation 3,4 . This group of patients have more life years remaining to develop cancer and as well have proportionally more growing and developing tissue and organ systems than adults which calls for a special radiation protection concern. 5 Children are 10 times more sensitive to the effects of radiation than middle-aged adults and girls are more radiosensitive than boys. 6,7 The 2010-2011 annual report of one of the study sites showed that 8% of all diagnostic imaging procedures were CT scans, out of which 15% are pediatric CT scan examinations.
There are several strategies to limit CT radiation doses, which include performing necessary examinations, limiting the region of coverage and adjusting individual CT settings based on indication, region imaged and size of the child. 8 Contrary to this, studies showed a lack of awareness and malpractice leading to higher radiation doses to children. 9 Hollingsworth et al. noted 20-25% of the CT scan operators did not know the scan parameters that they use for scanning children. 10 Similar findings were noted in several African countries as noted by Muhogora et al. 9 All the mentioned discrepancies call for the establishment of standard operating parameters during CT scan procedures, which is a major concern in pediatric patients' radiation protection. This begins by assessing the magnitude of radiation doses imparted on pediatric patients during CT scan examinations in the form of CT dosimetry. The assessment of CT radiation doses imparted during CT scan examinations entails the knowledge of essential radiation dose descriptors as described below: 11
Computed tomography dose index (CTDI)
CTDI is the average absorbed dose, along the z axis, from a series of continuous irradiations. 12
Weighted computed tomography dose index (CTDIw)
CTDIw is the average CTDI across the field of view.
Volume computed tomography dose index (CTDIvol)
CTDIvol is the average absorbed radiation dose over the x, y and z directions as shown in
Dose length product (DLP)
DLP is the total energy absorbed (and thus the potential biological effect) attributable to the complete scan acquisition.
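As a concrete illustration of how these descriptors relate to each other, the short Python sketch below computes CTDIw from centre and peripheral CTDI100 phantom measurements, CTDIvol from CTDIw and the pitch, and DLP from CTDIvol and the scan length. The numerical values are placeholders, not measurements from this study.

def ctdi_w(ctdi100_centre, ctdi100_periphery):
    """Weighted CTDI (mGy): 1/3 of the central + 2/3 of the peripheral CTDI100."""
    return ctdi100_centre / 3.0 + 2.0 * ctdi100_periphery / 3.0

def ctdi_vol(ctdi_w_value, pitch):
    """Volume CTDI (mGy): weighted CTDI divided by the spiral pitch."""
    return ctdi_w_value / pitch

def dlp(ctdi_vol_value, scan_length_cm):
    """Dose length product (mGy.cm): volume CTDI times the scanned length."""
    return ctdi_vol_value * scan_length_cm

# placeholder head-phantom values: 30 mGy centre, 32 mGy periphery, pitch 1.0, 15 cm scan
w = ctdi_w(30.0, 32.0)
v = ctdi_vol(w, 1.0)
print(w, v, dlp(v, 15.0))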
The aim of this survey was to determine radiation doses imparted on pediatric patients during CT examination in two selected hospitals and compare these doses with the International Dose Reference values.
Study design and setting
This was a retrospective cross-sectional survey carried out in two hospitals. One was a 1,500-bed capacity National Referral and Teaching Hospital with a 16-slice Philips MX1800 CT scanner, i.e., MSCT. The second one was a private for-profit hospital with 100 beds, with a GE Hispeed Dual slice CT scanner, i.e., DSCT. Review of records at the two centers showed that approximately 3,722 and 1,200 CT examinations were performed annually at the MSCT and DSCT hospitals, respectively, out of which approximately 400 and 150 were performed on pediatric patients, respectively (Figure 2).
FIG. 2: Flow diagram of recruited paediatric patients who underwent CT examination.
Data collection
All CT scan examinations performed on paediatric patients at the two study centres between 1 st December 2012 and 29 th February 2013 were reviewed and those that met the inclusion criteria were recruited in the study. The following information was collected from the request forms and recorded in standardized data collecting sheets: age, sex, clinical indication, anatomical site scanned, scanner models and acquisition parameters from the CT scan console.
The MSCT scan machine calculates and stores automatically the CTDIvol and DLP doses of each CT examination carried out in the CT console. This data can be retrieved from the console retrospectively. In contrast the DSCT scan machine calculates but does not archive the values of the CTDIvol and DLP on the CT console. Hence the comparison of the calculated CTDIvol and DLP with that of the displayed values was not possible for DSCT. Those examinations which lacked information on age, sex, anatomical sites and clinical indications were excluded from this study. CTDIw, CTDIvol as well as the DLP were calculated by using internet based software developed by the imaging performance assessment of CT scanners (ImPACT) group. The acquisitions parameters were used to generate CTDIw and DLP by employing Imaging Performance and Assessment of Computed Tomography (ImPACT) patient dosimetry calculator (Version 1.0.4 27/05/201) work sheet software and entered into the data collection form. All CT dosimetry values were computed separately for the two hospitals.
Data management and analysis
Frequency tables and graphs were used to measure and summarize the relative frequency of sex and anatomic sites, and the range was used to measure their distribution by age. The arithmetic mean of all information was entered on the data collection forms. The forms were cross-checked and edited for errors. The data were then entered into a computer using EPI INFO, version 7.0.8.0 (2011), software for storage and initial analysis. Further analysis was done using SPSS software version 19.0 (2011).
Ethical consideration
The University research and ethics committee, the committee on human research, the hospital institutional review board and the national council for science and technology approved the protocol and gave a waiver of consent. Informed consent was obtained for screening and for enrolment into the study.
Background characteristics
A total of 268 patients met inclusion criteria. Eleven cases were excluded as nine cases did not have clear indications and two cases were difficult to clearly categorize the region scanned ( Figure 2). The patients' age ranged from 1 week-17 years (mean of 8.1 years) and 3 weeks-17 years (mean 9.1 years), Table 1. The male to female ratio in both hospitals was 1:1.
The majority of CT scan examinations were done on children in the 0-4 (33%) and 10-14 (29%) year age groups for DSCT and MSCT, respectively (Table 1). The most common clinical indication for pediatric CT scan examinations in both hospitals was head injury (Table 1). The mean values of CTDIw and CTDIvol for the head were higher than those for the body, and these increased with age at both hospitals, as demonstrated in Tables 2 and 3. There were significantly higher doses imparted by MSCT compared to DSCT on both the head and the body (Tables 4 and 5). The 3rd quartile values of CTDIvol (mGy) and DLP (mGy.cm) for the 0-1 year and 4-6 year age groups for MSCT were higher than the values of the 2004 quality criteria survey (Table 6). When the MSCT 3rd quartile CTDIvol (mGy) values were compared to other countries, they were higher (Table 7). The CTDIw, CTDIvol and DLP of the DSCT scanner were less than those from the UK 2003 (Shrimpton et al. 2003) survey (Table 8).
Discussion
The use of CT scan as an imaging modality for examining children is on the increase due to short examination times, user friendliness, superior spatial resolution and high-quality contrast. The need for adjustments in parameters in this population has been of great importance lately due to the serious radiation risk involved. There were slightly more girls than boys (52% vs. 48%) for DSCT. Similar findings were noted in a study by Nasoor et al. 15 In contrast, more boys were scanned by the MSCT. Boys tend to be more physically active than girls hence prone to head injuries.
Head CT scans were more common than body CT scans in both hospitals. Similar findings were noted by Mark et al. 13 and Buls et al. 16 This is because of the frequency of head injury, which warrants CT scans. In developing countries, most abdominal imaging and cranial imaging in the very young is done by ultrasound scans.
The findings from our study and other studies showed CT dose values increase as the age of the patient increases ( Table 2 and 3). 17 This finding agrees with the principles of radiation protection in paediatrics patient which entails the use of less radiation doses in the very young children. The mean values of CTDIw, CTDIvol and DLP were higher for males when compared to females in both hospitals. Brenner et al 5 noted the estimated lifetime cancer risk from CT radiation is greater for girls than for boys. Thus, the findings of lesser radiation dose given to girls in our setup may be an encouraging finding.
The calculated CTDIvol and the scanner CTDIvol values for the head using the MSCT were significantly higher than those from the DSCT (p = 0.000, 95% CI). The CTDIvol and DLP values (mean, range and 3rd quartile) for the MSCT were significantly higher compared to the DSCT (p = 0.00, Table 5) and other countries (Table 7). Radiation dose increases as the number of slices acquired per tube rotation increases. Children that were scanned by the MSCT were exposed to higher doses compared to adults in the western world, while those scanned using the DSCT received radiation doses that were within the acceptable range (Table 6). This study did not explore the likely causes for these differences, which will be the next stage of the research. However, inappropriate technical parameters and a lack of appropriately calibrated parameters in the existing protocols according to the age, weight and size of the children led to the use of adult protocols. This is a radiation safety concern during the use of medical exposures which should be appropriately addressed. This highlights the concern of previous studies that children are more sensitive to radiation than adults. 5,18 There were limitations to this study. First, the values of CTDIw were not displayed on the MSCT console, so comparison of the calculated values with the displayed scanner values was not possible. Second, CTDIw, CTDIvol and DLP values were not displayed on the DSCT console, so comparison of the calculated values with the displayed scanner values was not possible. Third, the majority of the CT scan examinations were performed on the head. Thus it was difficult to draw statistically significant conclusions from the values obtained for body CT scan examinations.
In conclusion, the calculated CTDIvol value for CT of the head performed by the MSCT was significantly higher than the displayed value on the console. This could be due to a lack of appropriately calibrated acquisition parameters in the existing protocols.
Furthermore, the radiation dose values for CTDIw, CTDIvol and DLP for MSCT were significantly higher than those of the DSCT and relatively higher when compared to values from other countries as well as the National Radiation Protection Board UK.
Spiral pitch
The scanning pitch (table travel per rotation / total collimated slice width). For axial scanning, (couch increment)/(collimated slice width) should be used.
mAs/rotation
The total mAs per gantry rotation. Do not enter data in this box - it is calculated automatically.
Effective mAs
The mAs per rotation divided by the spiral pitch. This is a calculated value that provides a basis for comparison of spiral protocols with different pitches.
Collimation
The total nominal x-ray beam width along the z-axis, selected from a range of possible values in the drop down box. This determines the relative CTDI compared to the reference (usually 10 mm) collimation.
Rel. CTDI
The CTDI at the selected collimated x-ray beam thickness, relative to the CTDI at the reference collimation (usually 10 mm).
CTDI (air)
The free-in-air CTDI100 value (in mGy/100 mAs), as defined in EUR 16262: European Guidelines on Quality Criteria for Computed Tomography, pub. European Commission (link to this document at bottom of page). CTDI values for most of the scanners are listed on the Scanner Worksheet. Pressing the 'Look up' button will enter the value in this cell. The value in this cell is corrected for the relative CTDI value in the cell above.
Start Position
The start position of the scan series. The diagram on the Phantom worksheet shows the position of the phantom's organs relative to the number scale, which is 0 at the base of the trunk. This value can be entered manually in the worksheet, or can be taken from the shaded area on the Phantom worksheet diagram. This can be adjusted using the up and down arrows. Pressing the 'Get From Phantom Diagram' button enters these values into the start and end position boxes in Scan Calculation.
End Position
The end position of the scan series - note that this should include the slice thickness, so, for example, a single 5 mm slice 20 cm from the base of the trunk would have a start position of 20 and an end position of 20.5 cm. Start and End position values are interchangeable. The weighted CTDI (CTDIw), volume CTDI (CTDIvol) and dose length product (DLP) are also displayed.
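The two derived quantities in this worksheet, the spiral pitch and the effective mAs, can be reproduced in a couple of lines of Python; the input values below are placeholders rather than settings from either scanner in this survey.

def spiral_pitch(table_travel_per_rotation_mm, total_collimated_width_mm):
    """Pitch = table travel per gantry rotation / total collimated beam width."""
    return table_travel_per_rotation_mm / total_collimated_width_mm

def effective_mas(mas_per_rotation, pitch):
    """Effective mAs = mAs per rotation / pitch, used to compare spiral protocols."""
    return mas_per_rotation / pitch

# placeholder protocol: 18 mm table travel, 12 mm collimation, 120 mAs per rotation
p = spiral_pitch(18.0, 12.0)
print(p, effective_mas(120.0, p))   # 1.5, 80.0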
|
v3-fos-license
|
2023-03-11T15:48:32.645Z
|
2023-03-01T00:00:00.000
|
257433660
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s11009-023-10020-7.pdf",
"pdf_hash": "5a2c9e86d46029f22a11e0bafecb3f94e065efe3",
"pdf_src": "Springer",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46625",
"s2fieldsofstudy": [
"Mathematics"
],
"sha1": "5a2c9e86d46029f22a11e0bafecb3f94e065efe3",
"year": 2023
}
|
pes2o/s2orc
|
The Inverse First-passage Time Problem as Hydrodynamic Limit of a Particle System
We study a particle system without branching but with selection at timepoints depending on a given probability distribution on the positive real line. The hydrodynamic limit of the particle system is identified as the distribution of a Brownian motion conditioned to not having passed the solution of the so-called inverse first-passage time problem. As application we extract a Monte-Carlo method to simulate solutions of the inverse first-passage time problem.
Introduction
Given a random variable with values in (0, ∞) the inverse first-passage time problem for reflected Brownian motion consists of finding a lower semi-continuous function b ∶ [0, ∞] → [0, ∞] such that the first-passage time of b by a Brownian motion (B t ) t≥0 has distribution according to . A first application was proposed by Hull and White (2001) and Avellaneda and Zhu (2001) in the context of credit risk, in order to use the solutions to model the default time of a company as first-passage time, when data about the distribution of the default time is given. Since then, several methods have been found in order to simulate the unknown solutions of the inverse first-passage time problem, e.g. see Zucca and Sacerdote (2009), Song and Zipkin (2011). Regarding this, another computational objective is often to sample from the conditional distribution If the boundary b was known, a possible approach would be acceptance-rejection-sampling or a particle system similar to the model in Burdzy et al. (2000). But in the inverse first-passage time problem the boundary b is unknown, and it is natural to ask, whether given one can construct an interacting particle system only depending on the distribution of , which yields, as macroscopic limit, the unique solution to the inverse first-passage time problem or the corresponding conditioned distribution Eq. (1). In the special case that is exponential, the distribution Eq. (1) was found in De Masi et al. (2019a) as the hydrodynamic limit of the so-called N-branching Brownian motion (N-BBM) in terms of the solution of a free boundary problem. In the N-BBM finitely many particles evolve as independent Brownian motions but branch individually with rate 1. At each branching time the rightmost particle is removed from the system, in such a way that the population size is kept constant. A natural way to obtain a particle system corresponding to more general distributions of would be to adjust the branching rate of the system in De Masi et al. (2019a) into the hazard rate of , if it exists. But from a computational point of view a more efficient way would be to dismiss the branching and to only keep the selection mechanism at certain removal times.
In our approach we aim to choose these removal times in such a way, that the particle system macroscopically behaves in the desired way Eq. (1). In order to motivate this let us present one of the two main situations for which our main result Theorem 1 is designed. Let be a random variable with values in (0, ∞) . For N ∈ ℕ let be the order statistics corresponding to N independent samples from . Let be the process, which results from the following scheme. We start with N independent Brownian motions on ℝ . At every timepoint T i we remove the particle with the largest absolute value from the system. Between the timepoints T i the particles perform independent Brownian motion. We define the index set A(t) of surviving particles up to a time t, as the particles which have not been removed up to this time. See for instance Fig. 1 for a realization of the process initialized with 4 particles. A more formal definition will be given in Section 2. The consequence of our main result Theorem 1 is the following.
Let b be the solution to the inverse first-passage time problem corresponding to . For every t > 0 with ℙ( > t) > 0 holds almost surely for every a ≥ 0 . This means that, roughly speaking, drawing the removal times from the distribution of results in removing the particles which pass the boundary b. Correspondingly, this matches with the property that the distribution of the first-passage time b is equal to . The particle system from Eq. (2) can be seen as a very simple case of a more general class of particle systems with topological interactions. Prototypes of this generic class are the basic model presented in Carinci et al. (2016) and the N-BBM model of De Masi et al. (2019a) discussed above, in which removed particles are re-injected into the system. The latter model has been further modified in De Masi et al. (2019b) and generalized in Groisman and Soprano-Loto (2021), where the generalization consists of a branching rate dependent on the position of the particles. On top of that, the work of Atar (2020) presents two very general systems, namely the so-called RAB model (removal at boundary) and RAQ model (removal at quantile). In the RAB model, the injection of new particles is governed by a given function and a so-called injection measure, and the removal of particles is also governed by a given function, but restricted to the right-most particle. Under suitable conditions existence of the hydrodynamic limit is proven, where it is identified as a solution to a partial differential equation with an additional so-called order-respecting absorption condition. In the RAB model it is possible to set the injections to zero and to choose specific removal times such that the RAB model becomes a special case of the particle system in our main result Theorem 1. For details see Remark 3.
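The scheme described above is straightforward to simulate. The Python sketch below runs the particle system on a time grid: the removal times are the order statistics of N independent draws from the lifetime distribution, and at each removal time the surviving particle of largest absolute value is killed. The exponential lifetime distribution, the start in 0, and the grid resolution are illustrative choices, not requirements of the model.

import numpy as np

def simulate_particle_system(n_particles, lifetime_sampler, t_max, dt, rng):
    """Simulate N Brownian particles; at each order statistic of the sampled
    lifetimes, remove the surviving particle with the largest absolute value."""
    removal_times = np.sort(lifetime_sampler(n_particles, rng))
    times = np.arange(0.0, t_max, dt)
    x = np.zeros(n_particles)                      # all particles start in 0
    alive = np.ones(n_particles, dtype=bool)
    k = 0                                          # next removal time to process
    history = []
    for t in times:
        x[alive] += np.sqrt(dt) * rng.standard_normal(alive.sum())
        while k < n_particles and removal_times[k] <= t and alive.any():
            idx = np.flatnonzero(alive)
            alive[idx[np.argmax(np.abs(x[idx]))]] = False   # kill largest |X|
            k += 1
        history.append((t, x[alive].copy()))
    return history

rng = np.random.default_rng(0)
# illustrative lifetime distribution: Exp(1)
hist = simulate_particle_system(200, lambda n, r: r.exponential(1.0, n), 2.0, 1e-3, rng)
print(len(hist[-1][1]), "particles survive at t = 2")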
The inverse first-passage time problem is a well studied problem in probability. The existence of solutions was established in Anulova (1980) and general uniqueness results were shown in Ekström and Janson (2016) and Klump and Kolb (2022). Likewise, qualitative properties of solutions were studied, such as the behavior at zero in Cheng et al. (2006), continuity in Chen et al. (2011), Ekström andJanson (2016), Potiron (2021), higher regularity in Chen et al. (2022) or the shape in Klump and Kolb (2022). For further references see Klump (2022). For an overview of the methods for simulating the boundary see Section 4. This paper is organized as follows. In Section 2 we introduce the formal definition of the particle system and present our main result. The proof of the main result Theorem 1 is to be found in Section 3. In Section 4 we present a method for the simulation of the unknown solutions of the inverse first-passage time problem, which is related to the particle system of Theorem 1.
Notation and Main Result
We call a function g ∶ [0, ∞] → [0, 1] survival distribution, if g(t) = ℙ( > t) for a random variable with values in (0, ∞) . We denote Let P denote the space of probability measures on ℝ . Given ∈ P and a standard Brownian motion (W t ) t≥0 independent from X 0 , denote by ℙ a measure under which is a Brownian motion with initial state X 0 ∼ . For ∈ P and a survival distribution g we denote the set of solutions of the inverse first-passage time problem by and abbreviate by abuse of notation ifpt(g, 0) ∶= ifpt(g, 0 ) , where 0 is the Dirac measure. Anulova showed existence of solutions in (1980). Uniqueness of the solution was established in Ekström and Janson (2016) and Klump and Kolb (2022), in the sense that all solutions coincide on (0, t g ).
Let us prepare the formal definitions of the particle system. For a particle number N and random vector of starting points (X 1 0 , … , . Furthermore, from now on let be N + 1 fixed timepoints. Let the number of timepoints up to a time t be denoted by Set A 0 ∶= {1, … , N} and define inductively for ∈ {1, … , N} The continuous time particle system we want to consider is then the system with empirical measure In words, we do no more than remove the particle with largest absolute value from the system at the removal times Eq. (3). Recall that, given a survival distribution g, our goal is to choose the removal times in such a way that the empirical measure behaves as in Eq. (2). Observe that then, for b ∈ ifpt(g, ) , it is necessary that almost surely. Regarding this, note that we have |A k N (t) | = N − k N (t) . Our following main result shows that the necessary behavior Eq. (5) actually gives rise to a sufficient condition, where it is worth mentioning that we do not impose conditions on g.
Theorem 1 Let g be a survival distribution. Assume that, for every t ∈ (0, t g ) , the sequence of Eq. (3) fulfills Let ∈ P be symmetric with finite first absolute moment. Let b ∈ ifpt(g, ) and define for t ∈ (0, t g ) . Let (X 1 0 , … , X N 0 ) ∼ ⊗N . Then almost surely for every a ≥ 0.
Remark 1
The condition that shall be symmetric with finite first absolute moment is needed for the application of the convergence result of Theorem 2.2 of Klump and Kolb (2022) and is expected to be a technical condition.
Remark 2
The assumption Eq. (6) is to be understood as an assumption on the sequence of ordered timepoints from Eq. (3). The two main situations which we want to cover with this assumption are the following.
• As in the situation of Eq. (2), let T_k denote the kth order statistic of N independent samples from the distribution given by g. Then, if we choose t^N_k := T_k, we have, for fixed t ∈ (0, t_g), lim_{N→∞} N^{-1} k_N(t) = 1 − g(t) almost surely by the law of large numbers.
• A deterministic choice of removal times is given by quantiles of g, where g^{-1} denotes the generalized inverse as defined in Eq. (21). The property lim_{N→∞} N^{-1} k_N(t) = 1 − g(t) is established in Lemma 14 in the appendix.
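Both choices of removal times from the remark above are easy to generate numerically, as the sketch below shows. The exponential lifetime distribution is only an illustrative example, and the quantile grid g^{-1}(1 - k/N) is one natural reading of the deterministic choice; the paper's exact definition via Eq. (21) is not reproduced in this excerpt.

import numpy as np

def removal_times_order_statistics(n, sample_lifetime, rng):
    """First choice: order statistics of n independent draws from g."""
    return np.sort(sample_lifetime(n, rng))

def removal_times_quantiles(n, g_inverse):
    """Second (deterministic) choice: quantiles of the survival distribution."""
    return np.array([g_inverse(1.0 - k / n) for k in range(1, n + 1)])

rng = np.random.default_rng(1)
# illustrative lifetime ~ Exp(1): g(t) = exp(-t), so g^{-1}(u) = -log(u)
t_random = removal_times_order_statistics(10, lambda n, r: r.exponential(1.0, n), rng)
t_quantile = removal_times_quantiles(10, lambda u: -np.log(u) if u > 0 else np.inf)
print(t_random)
print(t_quantile)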
Before we begin with the preparation for the proof of Theorem 1 we give an overview of the relation between this particle system and the two particle systems in Atar (2020).
Remark 3
In the RAB model of Atar (2020), the injection of new particles is governed by a given function I ∶ [0, ∞) → [0, ∞) and a so-called injection measure, and the removal of particles is also governed by a function J ∶ [0, ∞) → [0, ∞) , but restricted to the rightmost particle. Under suitable conditions the hydrodynamic limit is identified as a solution to a partial differential equation with an additional so-called order-respecting absorption condition.
Let g be a given survival distribution. Then, if we choose in the RAB model of Atar (2020) the injections to be zero, i.e. I ≡ 0 , and the removal function to be J(t) ∶= 1 − g(t) , we end up with the one-sided version of the system from Theorem 1 with the specific removal times which corresponds to the second point of Remark 2 and meet the condition of Theorem 1 by Lemma 14. The conditions of the result of Atar (2020) regarding the hydrodynamic limit of the RAB model are fulfilled if g is absolutely continuous and Hölder continuous with exponent strictly larger than 1/2. In the RAQ model of Atar (2020) there are also no injections, but the removal is restricted to empirical quantiles among the particles, where the target quantile does not depend on the number of particles. Hence, the system cannot be adjusted to remove the right-most particle.
Thus, the common statement of the work of Atar (2020) and this article is the result that for survival distributions, which are absolutely continuous and Hölder continuous with exponent larger than 1/2, the hydrodynamic limit of the system with the quantile timepoints of the second point of Remark 2 exists. However, in Atar (2020) the hydrodynamic limit is described by means of a partial differential equation, whereas we interpret it probabilistically in terms of the inverse first-passage time problem.
Construction of Stochastic Barriers
We denote the index of the timepoint at the left of time t ∈ (0, t g ) by and the corresponding timepoint by (t) m ∶= t (m) . Parametrized with m ∈ ℕ we will construct two processes whose empirical measures serve as almost sure lower and upper bounds of N t in the two-sided stochastic order for every N ≥ m . The following technique of notation and construction for the particle system is inspired by De Masi et al. (2019a).
In words, for the construction of A + k , from timepoint t (m) k−1 to t (m) k we count the number of particles, which would have been removed in between these timepoints in the non-branching process and remove this number at time t (m) k at once from the system by cutting off the particles with largest absolute value at time t (m) k . For an illustration see Fig. 2. Further let A − 1 be a uniformly chosen random subset of {1, … , N} with N − k N (t (m) 1 ) elements and for k ∈ {2, … , m} define inductively where In words, for the construction of A − k , we again count the number of particles which would be removed from the non-branching process between t (m) k and t (m) k−1 , but in contrary to A + k we remove this number of particles from the system by cutting off the particles with largest absolute value at time t (m) k−1 . For an illustration see Fig. 3. Define the empirical measures and Fig. 2 Illustration of the A + -process for the non-branching system, N = 4 We want to compare the empirical measures of the processes at the timepoints (t (m) k ) k∈{1,…,n m } by a suitable coupling, which in our case demands that the particle numbers of both processes are equal. Note that by the definitions it follows immediately that and Therefore, we have at time t (m) k that It is more convenient to couple rather the discrete time step processes instead of just the continuous time process, even if it is the case that for a k ∈ {1, … , n m − 1} holds , which then implies that in the discrete time process from k to k + 1 does not happen anything. We define and
Coupling of Stochastic Barriers
for every a ≥ 0 . In the following we call this ordering the two-sided stochastic order. There exists a coupling (̃+ ,N , m,N ,̃− ,N ) of the tripel of random measures ( +,N , m,N , −,N ) such that, for every k, almost surely. Thus, for t ∈ (0, t g ) there exists also a coupling ( Before proving Lemma 2 we give some preliminaries. Our strategy for the first part of the lemma will be an induction over k. In each single step, the only thing we have to ensure about our constructed coupling is then, that the ordering property is preserved by the dynamics of the involved processes. In order to achieve this in a rigorous way, we will first introduce notation which enables us to crystallize out the dynamics of the involved processes in one step. Then we state some auxiliary statements about set orderings, which will help us to construct the desired couplings. Let where For x ∈ ℝ n and a subset A ⊆ ℝ let us introduce the notatioñ Definition 1 For x = (x 1 , … , x n ) ∈ ℝ n and y = (y 1 , … , y m ) ∈ ℝ m define the partial order In this case, call x dominated by y.
Note that, if x ⪯ y , it directly follows that n ≥ m.
Proof Let a ∈ ℝ . We have Thus which shows the statement.
Lemma 4 Let x ∈ ℝ n and y ∈ ℝ m . Then the following conditions are equivalent: If |y i | and |x i | are non-decreasing as functions of i then can be chosen as the identity. Proof Without loss of generality we can assume that |y i | and |x i | are non-decreasing in i. Then the implication from (ii) to (i) follows from Lemma 3 above. For the remaining direction note that (i) directly implies n ≥ m . By our initial assumption it is left to show that |x j | ≤ |y j | for every j ∈ {1, … , m} . For this we will carry out an induction over m. For m = 1 the statement is clear. Let now m ≥ 2 , x ⪯ y , and assume the implication from (i) to (ii) holds for all n-tuples x and (m − 1)-tuples ỹ . We will first show that |x 1 | ≤ |y 1 | . In order to see this consider where the last equality holds since |x 1 | ≤ |y 1 | ≤ a . This means x = {x 2 , … , x n } ⪯ỹ = {y 2 , … , y m } . But by the assumption of the induction for (m − 1)-tuples this means that for j ∈ {2, … , m} we have |x j | ≤ |y j | . All in all we have shown that |x j | ≤ |y j | for all j ∈ {1, … , m}.
Definition 2 For a m-tuple z = (z 1 , … , z m ) ∈ ℝ m define k with k ≤ m as the function, which assigns to z the k-tuple consisting of the first k entries of z, which has the smallest absolute value. This means that k (z) is defined by Lemma 5 Let x ∈ ℝ n and y ∈ ℝ m with n ≥ m and x ⪯ y . Then, for every m ≤ k ≤ n , holds k (x) ⪯ y.
Proof By Lemma 4 we can assume without loss of generality that |x i | and |y i | are nondecreasing in i. Note that then k (x) = (x 1 , … , x k ) . For every j ∈ {1, … , m} we have by the lemma above that |x j | ≤ |y j | . By Lemma 3 above the statement follows.
In the following situation the set ordering can be expressed by the two-sided stochastic order. The proof is immediate.
Lemma 6 Let x ∈ ℝ n and y ∈ ℝ m and n = m . Then x ⪯ y if and only if Now we begin with the preparation of the coupling, where the key idea is to imitate the dynamics of the particle process with coupled Brownian paths. The existence of the required couplings of Brownian paths in the following statements can be seen from the explicit construction in the proof of Lemma 3.2 in Klump and Kolb (2022). For the formulation of the following lemmas we use the partial order defined in Eq. (10). Recall We have then, by Lemma 4, that for all i ∈ A −,z . By Lemma 5 follows that where k is defined in Definition 2. By induction the statement follows.
With these coupling lemmas we are now ready to prove Lemma 2.
Proof of Lemma 2 Assume the statement for
By the assumption of the induction we have y ⪯ x ⪯ z . The underlying process from m,N k to N k+1 has its particles removed at the timepoints with altogether particles removed. (As mentioned before j = 0 is possible.) To achieve a coupling between m,N k+1 and −,N k+1 we can take an arbitrary Brownian motion B x and corresponding to that, B z as produced in Lemma 8 with starting points x and z. For the coupling between m,N k+1 and +,N k+1 we take a Brownian motion B y with starting point y coupled to B x in the way required by Lemma 7. We obtain a coupling of +,N k+1 , m,N k+1 , −,N k+1 by defining where we use the notation of Lemma 8 and Lemma 7. Since we have by Lemmas 7 and 8 combined with Lemma 6 that this coupling fulfills the desired ordering property, namely that The statement for the discrete time processes follows therefore by induction. Now observe that, if for t ≥ 0 we have t > t (m) k (m) (t) we can start with the coupled configura- and choose the increments of B y and B z in such a way that the particle systems stay ordered up to time t. See for example the proof of Lemma 3.2 in Klump and Kolb (2022) for such a coupling.
Hydrodynamic Limit of Stochastic Barriers
As next step we will establish a hydrodynamic limit for the lower and upper stochastic barriers ±,N t . We define P t as the operator of convolution of measures with the Gaussian probability kernel, this is for t ≥ 0 and ∈ P with the convention P 0 ( ) = . Furthermore, we define the quantiletruncation T of measures by for ∈ (0, 1] and ∈ P , where and for k such that t (m) Theorem 9 Assume that, for every t ∈ (0, t g ) , it holds Let ∶ ℝ → ℝ be a measurable and bounded function and t ∈ (0, t g ) . Then almost surely as N → ∞.
Proof In order to bring the empirical process together with the deterministic quantiles, in the following we use ideas from the proof of Proposition 3 in De Masi et al. (2019a).
We claim that, for every k ∈ {0, … , k (m) (t)} , we have that almost surely as N → ∞ . With this at hand the statement would follow since we have for any s ≥ t (m) k that (11) This sum tends to zero as N → ∞ by observing that and noting that, by the law of large numbers, almost surely as N → ∞ in the F + -case. For the F − -case first note that by definition and thus, by Lemma 13 and the law of large numbers, it follows that almost surely. Now assume that Eq. (13) holds true for fixed k ∈ {0, … , k (m) (t) − 1} . We have The last term is tending to zero by assumption while the remaining term can be written as follows.
Again by assumption the last term tends to zero. An analogous reasoning can be made for the upper barrier. Thus the statement left to show is as N → ∞ . But on the one hand we have almost surely On the other hand it holds almost surely which can be seen from the arguments in Eqs. (14) and (15) for ≡ 1.
Proof We have that which converges to 0 as m → ∞ by assumption.
Proof of Theorem 1
Now we will put the results together to yield Theorem 1.
Proof of Theorem 1 Let t ∈ (0, t g ) . Without loss of generality we assume that g(t) < 1 .
Using the coupling of Lemma 2 we get that for a number a ≥ 0 we have yielding by Theorem 9 that Now we take the specific choice t (m) k ∶= k2 −n t . Note that then (t) m = t and S ±,m k ( ) coincide, respectively, with +,m k and ̃− ,m k from Theorem 2.2 in Klump and Kolb (2022). By the convergence of Theorem 2.2 in Klump and Kolb (2022) we have then, since t is nonatomic, that As a consequence of Eq. (16) and Lemma 10 we obtain that almost surely.
Application: Simulation of Inverse First-passage Time Solutions
This section is devoted to present simulations of solutions of the inverse first-passage time problem. This is done by a Monte-Carlo method, which is extracted from the proof of Theorem 1.
We give a short overview of the existing methods to simulate the solutions of the inverse first-passage time problem. In the context of credit risk modeling Hull and White (2001) and Avellaneda and Zhu (2001) proposed approximation approaches in the case of very regular survival distributions g. In Avellaneda and Zhu (2001) the idea is based on a free boundary problem related to the problem in Cheng et al. (2006). The idea from Hull and White (2001) is based on a numerical approximation of a discrete scheme of quantiles, which is related to the our sequence of quantiles from Eq. (12). The work of Zucca and Sacerdote (2009) presents two methods for the one-sided inverse first-passage time problem. The socalled PLMC method is based on a continuous, piecewise linear approximation, which is estimated by a Monte-Carlo method. The so-called VIE method numerically approximates the solution of a Volterra integral equation, the so-called Master equation (cf. Section 14, Peskir and Shiryaev (2006)). The author of Abundo (2006) transfers the latter method to the case of reflected Brownian motion. The authors of Gür and Pötzelberger (2021) propose a modified VIE method by estimating the integral equation by using the empirical distribution of g. In Civallero and Zucca (2019) an approach related to the VIE method is used to obtain numerical solutions if the underlying process is a component of a two dimensional Ornstein-Uhlenbeck process instead of Brownian motion. A further approach can be found in Song and Zipkin (2011), which is related to the tangent-method for the first-passage time problem. Regarding the literature on numerical solutions to the first-passage time problem see for example Herrmann and Tanré (2016), Herrmann and Zucca (2019), Herrmann and Zucca (2020).
In the context of the inverse problem, bounds for the discretization error of the methods of Zucca and Sacerdote (2009) were given therein, but all in all a rigorous study and comparison of the existing methods for the solutions of the inverse first-passage time problem has yet to be provided.
Here, for a given survival distribution g and m ∈ ℕ , we consider the sequence of lower semicontinuous functions where q +,m k is the quantile from Eq. (12) and t (m) k , k ∈ {1, … , n m } , are the timepoints from Eq. (7). Note that the quantiles q +,m k , k ∈ {1, … , n m } , are given by the following inductive scheme: As long as g(t (m) k ) > 0 , if q +,m 1 , … , q +,m k−1 are already given, q +,m k is the unique element from [0, ∞] such that Note that, in terms of Eq. (17), this means that This discretization scheme was already used in the existence result by Anulova (1980) and the uniqueness results of Ekström and Janson (2016), Klump and Kolb (2022), see Remark 4 for details on the convergence. In our setting, heuristically, a Monte-Carlo approximation is given by the random functions where q +,N,m k is the empirical quantile from Eq. (8) and was given by the following inductive scheme: For N ∈ ℕ , timepoints as in Eq. (3) were given. For q +,N,m 1 , … , q +,N,m k−1 already known, we defined where k N is the function defined in Eq. (4) and We begin with the following statement.
Lemma 11 Let g be a survival distribution. Assume that the timepoints from Eq.
Proof Without loss of generality we can assume that k = k (m) (t (m) k ) . By assumption we have that, for every t ∈ (0, t g ) , holds Assume that lim inf N→∞ q +,N,m k < R < q +,m k . We obtain, by Theorem 9, that If we assume that lim sup N→∞ q +,N,m k > R > q +,m k , we analogously obtain a contradiction. These contradictions show that almost surely.
Remark 4 Note that the discrete scheme of Eq. (17) is a deterministic approximation of the inverse first-passage time solution. For this type of approximation the author of Anulova (1980) uses a notion of convergence, which is equivalent to the Γ-convergence of lower semicontinuous functions as it was used in Klump and Kolb (2022) and Klump (2022). Let b be unique solution to the inverse first-passage time problem which vanishes off (0, t g ) . If t (m) k , k ∈ {1, … , n m } , are suitably chosen, the statement of Lemma 2.3.35 in Klump (2022) yields that where Γ → denotes the Γ-convergence. This includes the choices in Eq. (19) and Fig. 6 below.
For an implementation, we have to make specific choices of the sequence of timepoints (t (m) k ) k∈{1,…,n m },m∈ℕ . In the following we will work with two choices.
Timesteps as Quantiles of the Survival Distribution
In this subsection we make the specific choice of quantiles of the survival distribution g, where g −1 denotes the generalized inverse from Eq. (21). This choice is motivated by Remark 5.
Remark 5 If we take as removal times then, by Lemma 14, we have that, for every t ∈ (0, t g ) , it holds The specific choice of t (m) for some ∈ ℕ , using the definition Eq. (8) we have Hence, in the procedure at every time step the constant number of particles is removed. Note that for the implementation, the choice of t N k becomes irrelevant.
As first example we use a known solution of the two-sided first-passage time problem.
Example 1 (Lerche (1986)). We will call b_L(t) = 1_{(0,1)}(t) √(−t log t) Lerche's boundary. If the Brownian motion has initial distribution δ_0, the corresponding survival distribution of the first-passage time of b_L can be written down explicitly. For examples with unknown boundaries see Fig. 5. The timesteps given by the quantiles of the survival distribution will avoid the regions with comparatively sparse probability mass. This can result in an inappropriate simulation of the unknown boundary, as would be the case for the log-logistic distribution from Fig. 6.
Timesteps as Equidistant Points
As an alternative to the quantiles of the survival distribution we could also use any other form of timesteps. Note that, in general, the examples use the Weibull distribution g_{Weibull(1,2)}(t) = e^{−t²} and the Gamma distribution g_{Γ(2,1)}(t) = 1 − γ(2, t), where γ is the lower incomplete gamma function, for m = 10^5 and N = 10^7. If, for every t ∈ (0, t_g), the corresponding convergence holds, then it is reasonable to substitute b_N^m(t^{(m)}_k) by the simpler empirical quantile for k ∈ {1, ..., n_m}, which is heuristically another form of Monte-Carlo approximation. From the inductive scheme it follows, for k ∈ {1, ..., n_m}, by induction and |Ã^+_0| = N, that the corresponding identity holds. The following statement is proved analogously to Lemma 11 by using Theorem 15. For examples with the use of equidistant timesteps t^{(m)}_k = k/m see Fig. 6.
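A minimal Monte-Carlo implementation of this empirical-quantile scheme with equidistant timesteps could look as follows: at every timestep the surviving particles are thinned so that the surviving fraction matches the target survival distribution, and the resulting cut-off level is recorded as the boundary estimate. The Weibull survival function g(t) = exp(-t^2) is taken from the example above, while the particle number, horizon, and grid are illustrative; the exact quantile convention of Eq. (12) may differ in detail from this simplified version.

import numpy as np

def estimate_boundary(g, n_particles, t_max, n_steps, rng):
    """Monte-Carlo estimate of the inverse first-passage time boundary:
    keep at each equidistant timestep the ceil(N * g(t_k)) particles of
    smallest |X| and record the cut-off level as the boundary value."""
    dt = t_max / n_steps
    x = np.zeros(n_particles)                       # particles start in 0
    alive = np.ones(n_particles, dtype=bool)
    times, boundary = [], []
    for k in range(1, n_steps + 1):
        t = k * dt
        x[alive] += np.sqrt(dt) * rng.standard_normal(alive.sum())
        target = int(np.ceil(n_particles * g(t)))   # number of survivors wanted
        idx = np.flatnonzero(alive)
        if target < idx.size:
            order = idx[np.argsort(np.abs(x[idx]))]
            alive[order[target:]] = False           # kill the particles of largest |X|
        survivors = np.abs(x[alive])
        times.append(t)
        boundary.append(survivors.max() if survivors.size else 0.0)
    return np.array(times), np.array(boundary)

rng = np.random.default_rng(2)
t, b = estimate_boundary(lambda s: np.exp(-s**2), 100_000, 2.0, 200, rng)
print(t[::50], b[::50])                             # coarse view of the estimate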
Appendix A: Auxiliary Law-of-Large-Numbers-Lemma
The following statement is an auxiliary lemma for the application in the context of the nonbranching particle system with selection and will be used in the proof of Theorem 9.
Lemma 13 For N ∈ ℕ let A N ⊂ {1, … , N} be a random set of deterministic cardinality a N with N −1 a N → a > 0 and X N 1 , … , X N N real-valued random variables. We assume that there exists M < ∞ such that |X N i | ≤ M for every N and i and that almost surely as N → ∞ , where c ∈ ℝ . Let Z 1 , Z 2 , … be independent from each other and from all other randomness and identically distributed random variables with Z 4 1 < ∞ . Then it holds that almost surely as N → ∞.
Proof Set p ∶= Z 1 . In the proof we drop the dependency of X N i on N in the notation. If we can prove that S N ∶= S N − a −1 N ∑ i∈A N pX i converges almost surely to zero the statement is proven. For this we define Z i ∶= Z i − p and calculate We have on the one hand and on the other hand and Since for > 0 we have the probability ℙ |S N | > is summable over N which shows that S N → 0 almost surely by the Borel-Cantelli lemma.
|
v3-fos-license
|
2021-02-14T06:16:17.370Z
|
2021-02-12T00:00:00.000
|
231910184
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0246135&type=printable",
"pdf_hash": "3eea673e38eca2ae1a9348c305f0605f22732aef",
"pdf_src": "Adhoc",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46626",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "0b5dd60e2420968bea0ac2c04e642ea3ec12f608",
"year": 2021
}
|
pes2o/s2orc
|
Estimation of time of HIV seroconversion using a modified CD4 depletion model.
INTRODUCTION
Several methods have been proposed to estimate the time of HIV seroconversion, including those based on CD4 cell depletion models. However, previous models have failed to consider the heterogeneity that exists in CD4 trajectories among different sub-populations. Our objective was to estimate the time from HIV seroconversion relative to the HIV diagnosis date in a population-based cohort of people living with HIV (PLWH) in the province of British Columbia, Canada.
METHODS
We used linked administrative and clinical data from the British Columbia Seek and Treat for Optimal Prevention of HIV/AIDS (STOP HIV/AIDS) cohort, which contains longitudinal individual-level data on all PLWH ever diagnosed in the province. Eligible participants were aged ≥18 years and diagnosed with HIV between 1989 and 2013. The outcome was pre-antiretroviral treatment CD4 cell count measurements assessed every six months. Models were stratified by age and stage of HIV infection at diagnosis. Several explanatory variables were considered including longitudinal viral load measurements. Longitudinal CD4, square root transformed, was modeled via a non-linear mixed effects model; time was modeled using an exponential decay function. We assumed a Gaussian distribution (identity link), an AR(1) correlation structure, and a random intercept and slope for the longitudinal viral load measurements. Due to the population variation in CD4 count among uninfected individuals, we assumed 500 to 1500 cells/mm3 as the normal range when estimating the time of HIV seroconversion.
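To illustrate how a fitted CD4 depletion curve can be inverted to date seroconversion, the short Python sketch below back-extrapolates a square-root-scale CD4 trajectory with exponential time decay to the moment the predicted count was still inside the 500 to 1500 cells/mm3 normal range. The parametric form sqrt(CD4)(t) = sqrt(CD4_0) * exp(-r t) and all numerical values are illustrative assumptions, not the fitted mixed effects model from this study.

import numpy as np

def duration_of_infection(cd4_at_diagnosis, cd4_at_seroconversion, decay_rate):
    """Assumed trajectory sqrt(CD4)(t) = sqrt(CD4_0) * exp(-decay_rate * t):
    time for CD4 to fall from its seroconversion value to the value at diagnosis."""
    return np.log(np.sqrt(cd4_at_seroconversion) / np.sqrt(cd4_at_diagnosis)) / decay_rate

# hypothetical values: CD4 of 1000 cells/mm3 at seroconversion (inside the 500-1500
# normal range), 280 cells/mm3 at diagnosis, 9% per-year decay on the sqrt scale
years = duration_of_infection(280.0, 1000.0, 0.09)
print(round(years, 1))   # about 7.1 years for these illustrative values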
RESULTS
Longitudinal data on 1,253 individuals were analysed: 80% male, 33% White, and the median age at diagnosis was 38 years (25th-75th percentile [Q1-Q3], 31 to 45). CD4 decay differed by stage of infection at diagnosis and age, with those ≥50 years in Stages 1 and 2 experiencing a faster decline in CD4 over time. The median duration of infection from seroconversion until HIV diagnosis was 6.9 (Q1-Q3, 3.9 to 10.1) years.
CONCLUSIONS
Considering the heterogeneity that exists in individual CD4 cell trajectories in a population, we presented a methodology that only relies on routinely collected HIV-related data, which can be further extended to estimate other epidemic measures.
Introduction
In Canada, as in other high-resource countries, people living with HIV (PLWH) are living longer than ever before due to the success of antiretroviral treatment (ART) [1,2]. In addition to preventing morbidity and mortality due to HIV, ART has also been shown to stop HIV transmission [3][4][5]. In order to assess whether countries are moving towards HIV epidemic control [6], it is important to find reliable methods that can be used to estimate important epidemic measures of morbidity, including the time that it takes for an individual infected with HIV to be diagnosed. This information is key to enhancing HIV testing programs and linkage to care, and, therefore, to decreasing health disparities across population subgroups.
The method that we propose in this study is based on CD4 cell count (hereafter referred to as CD4) depletion [7][8][9][10][11]. This methodology is flexible enough to consider inherent heterogeneities that exist in a population and it can be further applied to estimate other epidemic measures such as HIV incidence and prevalence. This model is based on longitudinal individual-level information on biomarkers (e.g., CD4, HIV viral load) and demographic factors (e.g., age, sex), and it can be extended to include information on determinants of health (e.g., biological, behavioural and environmental factors) and other factors that are known to influence the natural history of HIV [11]. Therefore, we propose to estimate the duration of infection from HIV seroconversion until diagnosis, age at seroconversion, and year of seroconversion in a population-based cohort of PLWH in the province of British Columbia (BC), Canada using a CD4 depletion model while considering different demographic, clinical and behavioural variables.
Data source and study population
In BC, through the provincial Seek and Treat for Optimal Prevention of HIV/AIDS (STOP HIV/AIDS) population-based retrospective cohort, we have access to longitudinal individual-level data on all PLWH since their date of HIV diagnosis [12,13]. The STOP HIV/AIDS cohort is based on a data linkage from the BC Centre for Excellence in HIV/AIDS Drug Treatment Program (DTP) clinical registry and several administrative databases containing health information on all diagnosed PLWH (regardless of whether they are accessing ART in BC or not) [13][14][15][16][17][18]. Since 1992, BC residents living with HIV have had access to centralized and publicly funded ART (through the DTP) and specialized HIV laboratory monitoring, in accordance with the BC Centre for Excellence in HIV/AIDS HIV therapeutic guidelines [19]. Data captured in the STOP HIV/AIDS cohort include socio-demographic (e.g., sex, age, ethnicity, geographic location of residence), clinical (e.g., CD4, plasma HIV viral load, AIDS-defining illness, mortality), healthcare utilization (e.g., hospitalization, non-ART prescriptions, physician visits) and treatment variables (e.g., antiretroviral regimen information, date of ART initiation). The databases included in the STOP HIV/AIDS cohort, along with their corresponding data capture, are comprehensively detailed in the Supplement.
In our study, eligible individuals were aged ≥18 years at HIV diagnosis, which happened between 1989 and 2013, and they were followed until they started ART treatment between 1996 and 2015, the last contact date with the provincial healthcare system (e.g., a physician visit, hospitalization, laboratory test), the date of death, study end, or the date on which they moved out of BC. Additionally, individuals were required to have at least two measurements of CD4 and viral load during follow-up. All viral load measurements in BC are centrally done at the St. Paul's Hospital virology laboratory. Since the quantification range of viral load assays has evolved over time, for analytical purposes, we truncated our measurements to range from <500 (coded as 499) to >100,000 (coded as 100,010) copies/mL [20][21][22][23]. CD4 is measured by flow cytometry, followed by fluorescent monoclonal antibody analysis (Beckman Coulter, Inc., Mississauga, Ontario, Canada). The CD4 data are measured at different laboratories across BC; however, we capture >85% of all CD4 tests done in BC in our database. In addition, we removed CD4 values that were outside the normal range for this biomarker (i.e., >1500 cells/mm3) [24].
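As a rough illustration of the laboratory-data preparation described above, the snippet below applies the stated truncation and filtering rules; the file name, column names and data layout are hypothetical, only the thresholds come from the text.

```python
import pandas as pd

# Hypothetical extract of the linked laboratory data; column names are illustrative.
labs = pd.read_csv("stop_hiv_labs.csv")

# Truncate viral load to the comparable assay range described in the text:
# values <500 copies/mL coded as 499, values >100,000 coded as 100,010.
labs.loc[labs["vl_copies_ml"] < 500, "vl_copies_ml"] = 499
labs.loc[labs["vl_copies_ml"] > 100_000, "vl_copies_ml"] = 100_010

# Remove CD4 values outside the plausible biological range (>1500 cells/mm^3).
labs = labs[labs["cd4_cells_mm3"] <= 1500]

# Keep individuals with at least two CD4 and two viral load measurements.
counts = labs.groupby("patient_id")[["cd4_cells_mm3", "vl_copies_ml"]].count()
eligible = counts[(counts["cd4_cells_mm3"] >= 2) & (counts["vl_copies_ml"] >= 2)].index
labs = labs[labs["patient_id"].isin(eligible)]
```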
Statistical analysis
CD4 and viral load measurements were obtained every six months from the time since HIV diagnosis until ART initiation, in order to model the CD4 depletion trajectory in our study. Longitudinal CD4, square root transformed, was modeled via a non-linear mixed effects model; follow-up time was modeled using an exponential decay function [25], assuming a Gaussian distribution (identity link), an AR(1) correlation structure, and a random intercept and slope for the longitudinal viral load measurements:

√(CD4_{i,t}) = α_{0,i} + α_{1,i}·VL_{i,t} + β_1·x_{1,i,t} + ⋯ + β_n·x_{n,i,t} + γe^{−R·t} + ε_{i,t},

where t represents each of the 6-month intervals; i represents each individual in the study; ε_i is the random error distributed as N(0, K_i), where K is the covariance matrix and independent of α_{0,i} and α_{1,i}, which are the random intercept and slope that vary across individuals; they follow a bivariate normal distribution with means (a_0, a_1), standard deviations (σ_{a_0}, σ_{a_1}) and covariance σ_{a_0,a_1}, with Σ denoting the covariance matrix. We also assumed an AR(1) correlation structure for the error term, and an unstructured variance-covariance matrix for the bivariate normal distribution. The coefficients β_1, …, β_n are for the fixed explanatory variables x_1, …, x_n. The exponential decay function was modeled via the term γe^{−R·t}, where γ and R are coefficients in this function. If the exponential decay function was not justified based on model selection, we fitted instead a linear mixed effects model. Model selection was based on a published method by our group, based on the Akaike Information Criterion and significance level [26,27]. Goodness-of-fit was based on residual diagnostic plots (Supplement). Analyses were performed in R version 3.6.3 using the libraries nlme, mgcv, and ggplot2. As the CD4 trajectories are expected to be different across population subgroups, the models were stratified by age (<50 versus ≥50 years) and stage of HIV infection at diagnosis (Stage 1: CD4 ≥500, Stage 2a: CD4 350-499, Stages 2b&3: CD4 <350 cells/mm3) [11]. In the HIV field, we usually define older PLWH as those ≥50 years [28][29][30]. In addition, studies have shown that age is an important factor when estimating HIV disease progression using CD4 depletion models [11,31]. Thus, based on the heterogeneity in CD4 trajectories associated with age, we decided to conduct an age-stratified analysis (<50 versus ≥50 years). Note that we did not model individuals whose laboratory criteria indicated acute or recent HIV infection [32]. This decision was made "a priori" based on the established fact that CD4 during acute HIV infection experiences a sharp temporary decline, and estimating time of infection using a methodology based on CD4 would not yield valid results [11,33]. Explanatory variables included longitudinal viral load measurements (in log10), sex (female/male), year of HIV diagnosis (<1996, 1996-1999, 2000-2003, 2004-2007, 2008-2013), ethnicity (White, non-White, unknown), HIV transmission risk group (gay, bisexual and other men who have sex with men [gbMSM], people who have ever injected drugs [PWID], gbMSM/PWID, heterosexual, other/unknown), AIDS at HIV diagnosis, and follow-up time (years) from HIV diagnosis until ART initiation. Note that we do not have individual-level information on HIV subtype in the STOP HIV/AIDS cohort, but a recent publication using the DTP data estimated that 86% of PLWH in BC have HIV subtype B [34].
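The model itself was fitted in R with nlme; purely as an illustration of the model structure, a simplified linear mixed-effects analogue could be written in Python as below. This is a sketch under stated assumptions: it omits the exponential decay term and the AR(1) error structure used in the paper, and all variable names are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical analysis table: one row per individual per 6-month interval.
df = pd.read_csv("cd4_intervals.csv")
df["sqrt_cd4"] = np.sqrt(df["cd4_cells_mm3"])

# Linear mixed model with a random intercept and a random slope on log10 viral load.
# The paper's non-linear term gamma * exp(-R * t) and AR(1) errors are not included,
# so this only mirrors the fixed/random-effect structure described above.
model = smf.mixedlm(
    "sqrt_cd4 ~ log10_vl + C(sex) + C(ethnicity) + C(risk_group) + aids_at_dx + fu_years",
    data=df,
    groups=df["patient_id"],
    re_formula="~log10_vl",
)
result = model.fit()
print(result.summary())
```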
The main outcomes from this analysis were the duration of infection from HIV seroconversion until diagnosis as well as age at and year of seroconversion. These outcomes were estimated for each individual based on the statistical model for each age group and stage of HIV infection (six models in total) as follows:
i. For each model, we estimated the CD4 cell loss per six months from HIV diagnosis until the start of ART. We calculated summary statistics, stratified by key variables, including medians and quartiles.
ii. Due to the population variation in CD4 cell count among HIV-negative individuals, we assumed 500 to 1500 cells/mm3 as the normal range [24,35]. Thus, based on the CD4 value at diagnosis and for each CD4 value in the normal range, we calculated the duration of infection from HIV seroconversion until diagnosis as illustrated in Fig 1.
iii. Estimates for the year of and age at HIV seroconversion below the lower boundaries ((i) 1980, which we assumed to be the year of the first possible infection case in Canada [36, 37]; or (ii) a minimum age of infection of 16 years [38,39], as very few infections are acquired perinatally in BC and this is the minimum age of consent to sexual activity in Canada) were discarded.
iv. Given that we had a different estimate for each of our outcomes for each value in the range 500 to 1500 cells/mm 3 , the final results were summarized by medians and quartiles.
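The exact formula in Fig 1 is not reproduced in the extracted text; the sketch below shows one way to read steps ii-iv, dividing the CD4 deficit at diagnosis by the stratum-specific loss per six months and summarizing over the assumed normal range. The function, its arguments, and the example numbers are illustrative, not the authors' implementation.

```python
import numpy as np

def seroconversion_to_diagnosis_years(cd4_at_diagnosis, cd4_loss_per_6mo,
                                      normal_range=range(500, 1501)):
    """For each plausible pre-infection CD4 value, estimate the time from
    seroconversion to diagnosis as (CD4 deficit) / (loss per 6 months),
    converted from 6-month intervals to years, then summarize by quartiles."""
    years = []
    for cd4_normal in normal_range:
        intervals = (cd4_normal - cd4_at_diagnosis) / cd4_loss_per_6mo
        if intervals > 0:
            years.append(0.5 * intervals)
    q1, median, q3 = np.percentile(years, [25, 50, 75])
    return q1, median, q3

# Illustrative example: diagnosis at CD4 = 420 cells/mm^3 in a stratum losing
# about 26.1 cells/mm^3 per six months.
print(seroconversion_to_diagnosis_years(420, 26.1))
```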
Ethics approval
This study was approved by the University of British Columbia ethics review committee at the St. Paul's Hospital, Providence Health Care site (H18-02208 and H05-50123). As per the Research Ethics Board approval for this study and in compliance with relevant local legislation, informed consent was not required for this analysis, which was an approved secondary use of the administrative data involved.
Results
Longitudinal data on 1,253 individuals were analysed. At HIV diagnosis, the interquartile range of CD4 was 320 to 610 cells/mm3, viral load was 4.6 (Q1-Q3, 4.0 to 5.0), and these individuals were followed for a median of 2.1 (Q1-Q3, 1.1 to 4.0) years until ART initiation.
The fitted models and goodness-of-fit assessments can be found in the Supplement. The estimated statistics for CD4 cell depletion per six months are shown in Fig 2. We observed that the median CD4 cell loss for each stratum was 44.2 (Q1-Q3, 30.5 to 56.5), 49.1 (Q1-Q3, 19.0 to 2.3), 26.1 (Q1-Q3, 21.5 to 29.6), 33.4 (Q1-Q3, 21.3 to 46.6), 15.0 (Q1-Q3, 13.6 to 16.3) and 12.8 (Q1-Q3, 11.4 to 14.1) cells/mm3, respectively. In Stages 1 and 2a, individuals aged ≥50 years lost CD4 cell counts faster than those <50 years, and this association was reversed in the last stages (2b & 3). Please note that the values for mean and associated 95% confidence interval were very similar.
Thus, using the median CD4 depletion in each stratum, applying the formula in Fig 1, and assuming the normal CD4 range of 500 to 1500 cells/mm3, the results for duration of infection from HIV seroconversion until diagnosis as well as age at and year of HIV seroconversion are presented in Fig 3. Note that we could not estimate these outcomes for 13 individuals since their estimated values fell below the stated lower boundary. Based on the estimated outcomes for each CD4 value in the normal range, we obtained that the median year of seroconversion was 1995 (Q1-Q3, 1991 to 2000), age at seroconversion was 31 (Q1-Q3, 25 to 39) years and duration of infection from seroconversion until HIV diagnosis was 6.9 (Q1-Q3, 3.9 to 10.1) years. Those in Stages 1, 2a and 2b-3 were diagnosed a median of 4.6 (Q1-Q3, 3.8 to 5.2), 6.9 (Q1-Q3, 6.1 to 7.7), and 11.0 (Q1-Q3, 9.3 to 13.4) years after the estimated HIV seroconversion date, respectively. These estimates are shown in further detail in Fig 4.
Discussion
In this study, we demonstrated that a model for CD4 cell depletion could be used to estimate the duration of infection from HIV seroconversion until diagnosis, as well as age at and year of seroconversion. We also showed that in order to estimate these parameters, it is important to take into consideration the stage of HIV infection at diagnosis and different demographic, clinical and behavioural variables. The proposed methodology and parameters that we have estimated can be further used to estimate the date of seroconversion of PLWH, in similar settings and demographic profiles as the one in this study, without having to fit this model again. Another advantage of this method is the possibility of establishing uncertainty around the estimates. However, in order to use this methodology, we stress the importance of obtaining specific information at the time of an HIV diagnosis, especially CD4 cell count and age. Based on our study, individuals aged ≥50 years in Stages 1 and 2a (CD4 ≥350 cells/mm3) lost CD4 faster than those <50 years, and this association was reversed in Stages 2b & 3 (CD4 <350 cells/mm3). We believe this result was due to the smaller sample size for those aged ≥50 years in Stages 2b & 3. We also showed that the time to diagnosis varied significantly by year of estimated seroconversion, with a median ranging from 5.0 to 7.3 years before 2008 and 5.3 years thereafter. It is important to remember that during this time, there were key changes in HIV treatment guidelines, shortening the period of time an individual needs to wait between diagnosis and treatment initiation, which could explain these estimates. For example, the STOP HIV/AIDS program, which aims to expand HIV testing, treatment, and support services to all PLWH in BC, was launched in 2010 [2,40].
As mentioned, our methodology and outputs can be further extrapolated to assist in the estimation of HIV prevalence, which is crucial to monitor the United Nations 90-90-90 Targets [41,42]. These targets propose that, by 2020, at least 90% of PLWH should be diagnosed and aware of their HIV status; at least 90% of those diagnosed be on ART; and at least 90% of those on ART be virologically suppressed. If this target is reached, a 73% virologic suppression coverage will be achieved among all PLWH [41]. Estimating HIV prevalence, which depends on HIV incidence, is not trivial and different methods, depending on data availability, have been proposed. These methods include back-calculations [43][44][45], next-generation sequencing [46], prevalence surveys [47][48][49], mathematical modeling [50,51], and CD4 cell count depletion-based approaches [7][8][9][10][11].
The Public Health Agency of Canada generates biennial national estimates of HIV incidence and prevalence, for the country and for each province and territory, utilizing sophisticated methodologies based on a back-calculation method [43]. Their method relies heavily on routine HIV/AIDS surveillance data (i.e., information on HIV testing, mortality, and recency of HIV infection). Unfortunately, this methodology does not consider the demographic, clinical and behavioural heterogeneity that exists among individuals in a population. In addition, this methodology is not very sensitive to changes in policies regarding HIV testing and treatment initiation or the effect of ART in prolonging survival. Consequently, this method can yield estimates with substantial uncertainty, especially in more recent years [52,53]. Thus, based on a broader population than the one analyzed in our study, we can use the estimated parameters to reconstruct the HIV epidemic curve, and, ultimately, estimate the HIV prevalence and incidence over time. It would also be possible to estimate the percent of undiagnosed infections for the Canadian population (as well as for provinces and territories) as shown by Song et al. for the United States [7].
There are some potential limitations in this study. First, our estimates relied on individuals with at least two measurements of CD4 and viral load. Thus, we did not include individuals with one measurement nor did we use imputation methods to overcome this issue. However, by using the methodology in this study, which considers both intra- and inter-individual variation in the CD4 depletion trajectories (while also adjusting for key explanatory variables and providing goodness-of-fit assessments), we were able to fit robust models for CD4 cell depletion. Second, an important and complicated challenge in this type of analysis is the presence of right truncation of the CD4 data since we stopped following individuals when they initiated treatment. Thus, some individuals would have a shorter follow-up than others, and therefore, fewer CD4 cell count measurements. While mixed effects models largely address this issue, by applying a methodology such as the one by Liu et al. or Wu et al., we can further assess the robustness of our findings by examining biases that we may have in our analysis [54,55]. Third, when estimating our outcomes, we discarded information below the lower boundary established for our estimates. This included assuming a minimum age of infection of 16 years, which meant that 1% of the data were not used to estimate the outcomes. In BC, between 1993 and 2017 there have been fewer than 40 perinatally acquired HIV infections [38]. In addition, although the minimum age at HIV diagnosis is between 15 and 19 years old, most new HIV diagnoses occurred after the age of 20 years [38]. Thus, although this restriction could have biased our results, we believe that this bias was minimal. Finally, our estimates for CD4 cell depletion considered ethnicity as a variable in the models, despite a large number of individuals in our study being classified as ethnicity unknown. Klein et al. showed that individuals of Black ethnicity had a slower rate of CD4 cell depletion than other ethnicities, even after controlling for HIV viral subtype [56]. In our database, only 23 (1.8%) individuals reported having Black ethnicity, and we do not expect that this number will be much higher due to the historical data available during the study period [38]. Although we could have left this variable out of our model, we let the variable selection method determine whether it should be included in the final model.
Conclusions
Considering the heterogeneity that exists in individual CD4 cell trajectories in a population, we presented a methodology that only relies on routinely collected HIV-related data. This methodology yielded robust estimates that can be used in the future to retrospectively estimate other epidemic measures of morbidity, including the proportion of undiagnosed infections, that can be used to assess our progress towards HIV epidemic control in BC.
|
v3-fos-license
|
2022-06-06T23:23:34.314Z
|
2013-01-01T00:00:00.000
|
7675717
|
{
"extfieldsofstudy": [
"Biology",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://amb-express.springeropen.com/track/pdf/10.1186/2191-0855-3-66",
"pdf_hash": "1673306b61caee206f97171354785a8cde183441",
"pdf_src": "ScienceParsePlus",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46631",
"s2fieldsofstudy": [
"Biology",
"Engineering"
],
"sha1": "1673306b61caee206f97171354785a8cde183441",
"year": 2013
}
|
pes2o/s2orc
|
Optimisation of engineered Escherichia coli biofilms for enzymatic biosynthesis of L-halotryptophans
Engineered biofilms comprising a single recombinant species have demonstrated remarkable activity as novel biocatalysts for a range of applications. In this work, we focused on the biotransformation of 5-haloindole into 5-halotryptophan, a pharmaceutical intermediate, using Escherichia coli expressing a recombinant tryptophan synthase enzyme encoded by plasmid pSTB7. To optimise the reaction we compared two E. coli K-12 strains (MC4100 and MG1655) and their ompR234 mutants, which overproduce the adhesin curli (PHL644 and PHL628). The ompR234 mutation increased the quantity of biofilm in both MG1655 and MC4100 backgrounds. In all cases, no conversion of 5-haloindoles was observed using cells without the pSTB7 plasmid. Engineered biofilms of strains PHL628 pSTB7 and PHL644 pSTB7 generated more 5-halotryptophan than their corresponding planktonic cells. Flow cytometry revealed that the vast majority of cells were alive after 24 hour biotransformation reactions, both in planktonic and biofilm forms, suggesting that cell viability was not a major factor in the greater performance of biofilm reactions. Monitoring 5-haloindole depletion, 5-halotryptophan synthesis and the percentage conversion of the biotransformation reaction suggested that there were inherent differences between strains MG1655 and MC4100, and between planktonic and biofilm cells, in terms of tryptophan and indole metabolism and transport. The study has reinforced the need to thoroughly investigate bacterial physiology and make informed strain selections when developing biotransformation reactions.
Introduction
Bacterial biofilms are renowned for their enhanced resistance to environmental and chemical stresses such as antibiotics, metal ions and organic solvents when compared to planktonic bacteria. This property of biofilms is a cause of clinical concern, especially with implantable medical devices (such as catheters), since biofilm-mediated infections are frequently harder to treat than those caused by planktonic bacteria (Smith and Hunter, 2008). However, the increased robustness of biofilms can be exploited in bioprocesses where cells are exposed to harsh reaction conditions. Biofilms, generally multispecies, have been used for waste water treatment (biofilters) (Purswani et al., 2011; Iwamoto and Nasu, 2001; Cortes-Lorenzo et al., 2012), air filters (Rene et al., 2009) and in soil bioremediation (Zhang et al., 1995; Singh and Cameotra, 2004). Most recently, single species biofilms have found applications in microbial fuel cells (Yuan et al., 2011a; Yuan et al., 2011b) and for specific biocatalytic reactions (Tsoligkas et al., 2011; Gross et al., 2010; Kunduru and Pometto, 1996). Recent examples of biotransformations catalysed by single-species biofilms include the conversion of benzaldehyde to benzyl alcohol (Zymomonas mobilis; Li et al., 2006), ethanol production (Z. mobilis and Saccharomyces cerevisiae; Kunduru and Pometto, 1996), production of (S)-styrene oxide (Pseudomonas sp.; Halan et al., 2011; Halan et al., 2010) and dihydroxyacetone production (Gluconobacter oxydans; Hekmat et al., 2007; Hu et al., 2011).
When compared to biotransformation reactions catalysed by purified enzymes, whole cell biocatalysis permits protection of the enzyme within the cell and also production of new enzyme molecules. Furthermore, it does not require the extraction, purification and immobilisation involved in the use of enzymes, often making it a more cost-effective approach, particularly upon scale-up. Biofilm-mediated reactions extend these benefits by increasing protection of enzymes against harsh reaction conditions (such as extremes of pH or organic solvents) and offering simplified downstream processing since the bacteria are immobilised and do not require separating from reaction products. These factors often result in higher conversions when biotransformations are carried out using biofilms when compared to purified enzymes (Halan et al., 2012; Gross et al., 2012).
To generate a biofilm biocatalyst, bacteria must be deposited on a substrate, either by natural or artificial means, then allowed to mature into a biofilm. Deposition and maturation determine the structure of the biofilm and thus the mass transfer of chemical species through the biofilm extracellular matrix, therefore defining its overall performance as a biocatalyst (Tsoligkas et al., 2011). We have recently developed methods to generate engineered biofilms, utilising centrifugation of recombinant E. coli onto poly-L-lysine coated glass supports instead of waiting for natural attachment to occur (Tsoligkas et al., 2011). These biofilms were used to catalyse the biotransformation of 5-haloindole plus serine to 5-halotryptophan (Figure 1a), an important class of pharmaceutical intermediates; this reaction is catalysed by a recombinant tryptophan synthase TrpBA expressed constitutively from plasmid pSTB7 (Tsoligkas et al., 2011; Kawasaki et al., 1987). We previously demonstrated that these engineered biofilms are more efficient in converting 5-haloindole to 5-halotryptophan than either immobilised TrpBA enzyme or planktonic cells expressing recombinant TrpBA (Tsoligkas et al., 2011).
In this study, we further optimised this biotransformation system by investigating the effect of using different strains to generate engineered biofilms and perform the biotransformation of 5-haloindoles to 5-halotryptophans. Engineered biofilm generation was tested for four E. coli strains: wild type K-12 strains MG1655 and MC4100; and their isogenic ompR234 mutants, which overproduce curli (adhesive protein filaments) and thus accelerate biofilm formation (Vidal et al. 1998). Biofilms were generated using each strain with and without pSTB7 to assess whether the plasmid is required for these biotransformations as E. coli naturally produces a tryptophan synthase. The viability of bacteria during biotransformation reactions was monitored using flow cytometry. We also studied the biotransformation reaction with regard to substrate utilisation, product synthesis and conversion efficiency to allow optimisation of conversion and yield. This constitutes an essential step forward which will provide knowledge to future practitioners wishing to scale up this reaction.
Biotransformations
Biotransformation reactions were carried out as previously described (Tsoligkas et al., 2011; full details in Additional file 1) using either planktonic cells or engineered biofilms in a potassium phosphate reaction buffer (0.1 M KH2PO4, 7 mM serine, 0.1 mM pyridoxal 5′-phosphate (PLP), adjusted to pH 7.0) supplemented with 5% (v/v) DMSO and either 2 mM 5-fluoroindole (270 mg L-1), 2 mM 5-chloroindole (303 mg L-1), or 2 mM 5-bromoindole (392 mg L-1). 5-chloroindole and 5-bromoindole are less soluble than 5-fluoroindole, so lower concentrations were present in the reaction buffer; around 0.7 mM for 5-chloroindole and 0.4 mM for 5-bromoindole (Additional file 1: Table S1). In each case, reaction buffer was made with an initial quantity of haloindole equivalent to 2 mM and decanted into biotransformation vessels, preventing any undissolved haloindole from entering the biotransformation. No attempt was made to carry out the reactions at the same starting concentrations since an in-depth kinetic analysis was not the focus of this study. All biotransformations, irrespective of the cells' physiological state, were conducted on two or three independent cultures. Since 5-fluoroindole biotransformations were the most active, biotransformations were performed with all strain combinations. Biotransformations with 5-chloroindole and 5-bromoindole were performed with selected strains to generate indicative data.
HPLC analysis
Haloindole and halotryptophan concentrations were measured in biotransformation samples by HPLC using a Shimadzu HPLC with a ZORBAX (SB-C18 4.6 mm × 15 cm) column resolved with methanol versus water at a rate of 0.7 mL min-1; a UV detector at 280 nm was used throughout the analysis (Additional file 1: Figure S1). Both solvents were acidified with 0.1% formic acid and run using the gradient described in the supplementary data. Linear standard curves (Additional file 1: Figure S2; peak area versus concentration) were generated for 5-fluoro-, 5-chloro- and 5-bromoindole and each corresponding 5-halotryptophan using standards of known concentration (0.125 mM to 2 mM) in triplicate and used to correlate sample peak area to concentration. Biotransformation data are presented as three percentages for each timepoint: halotryptophan yield (Y), haloindole depletion (D) and selectivity of conversion (S), defined in equations 1-3.
Quantification of the dry cell biomass and Crystal Violet staining
The total biofilm biomass was determined for 5 slides that had been coated with E. coli biofilms and matured for 7 days. The glass slides were washed twice in phosphate buffer. In a pre-weighed centrifuge tube kept at 100°C overnight, the biofilm was disrupted in sterile water using a vortex mixer for 30 minutes; the glass slide was removed and the cells centrifuged at 1851 g for 10 minutes. The supernatant was removed and the biomass dried at 100°C for at least 24 hrs. The dry biomass was determined when the mass stopped decreasing.
The quantification of dry cell biomass of planktonic cells was performed directly on 10 mL of three independent cell suspensions in pre-weighed centrifuge tubes kept at 100°C overnight. Following centrifugation (1851 g for 10 minutes) and washing in sterile water, the cells were centrifuged again (1851 g for 10 minutes) and, after removing the liquid, allowed to dry at 100°C for at least 24 hours until a constant mass was reached.
Biofilms on glass slides were also quantified using Crystal Violet staining; after washing in sterile phosphate buffer the slides were coated with 1 mL of Crystal Violet solution (0.1% (w/v) for 15 min). The slides were washed in water three times and placed in Duran bottles with 20 mL of ethanol. The crystal violet on the glass slides was allowed to dissolve for 1 hour and the optical density of the ethanol solution determined at 570 nm using a UV-vis spectrophotometer.
Flow cytometry
Cell membrane potential and membrane integrity were analysed by flow cytometry after 2 and 24 hours in each reaction condition using staining with 5 μg mL -1 propidium iodide (PI, which enters cells with compromised membrane integrity) and 0.1 mg mL -1 Bis (1,3-dibarbituric acid) trimethine oxanol (BOX, which enters cells with depolarised membranes) as previously described by Whitehead et al. (2011). Cells were analysed using an Accuri C6 flow cytometer (BD, UK) as described in the Additional file 1.
Biofilm formation by different E. coli strains
Crystal Violet staining was used to compare the biomass within biofilms generated using the spin-down method with four E. coli strains: MG1655 and MC4100; and their ompR234 derivatives PHL628 and PHL644 ( Figure 2). MG1655 generated more biofilm than MC4100, and the ompR234 mutation increased the amount of biofilm formed by both strains. The presence of pSTB7 decreased biofilm formation by PHL628 but did not significantly affect biofilm formation by the other strains. The corresponding dry mass of each biofilm was 1.5 ± 0.2 mg for PHL644 pSTB7 and 2.3 ± 0.3 mg for PHL628 pSTB7.
Biotransformation by planktonic cells
The ability of planktonic cells to convert 5-haloindoles to 5-halotryptophans was assessed by measuring 5-haloindole depletion, 5-halotryptophan synthesis and the selectivity of conversion of 5-haloindole to 5-halotryptophan as defined in equations 1-3. These three measurements are required since, although the conversion of haloindole plus serine to halotryptophan is catalyzed by the TrpBA enzyme, halotryptophan is a potential substrate for tryptophanase (TnaA) which would convert it to haloindole, pyruvate and ammonium (Figure 1b). Alternatively, halotryptophans could be sequestered for protein synthesis (Crowley et al., 2012). Thus, selectivity of conversion to halotryptophan is a critical parameter for the reaction to be considered as a viable route for production of these compounds.
Figure 2 Crystal Violet staining of E. coli engineered biofilms. Biofilms were generated from strains MG1655 and PHL628 (a) or MC4100 and PHL644 (b) with and without pSTB7 using the spin-down method, matured for 7 days in M63 medium and biomass was estimated using crystal violet staining.
Neither depletion of haloindole nor production of halotryptophan was detected when biotransformations were performed using bacteria without the pSTB7 plasmid, either planktonically or in biofilms, confirming that the constitutively expressed recombinant tryptophan synthase is required for the reaction (data not shown). Figure 3a shows that the concentrations of 5-fluorotryptophan increased over the reaction period with the rate of generation decreasing as the reaction proceeded. No significant difference was noticed in synthesis rate or overall yield between MG1655 pSTB7 and PHL628 pSTB7; the rate and yield were higher for MC4100 pSTB7, and higher still for PHL644 pSTB7. The profile of 5-fluoroindole depletion (Figure 3b) appeared similar to that of 5-fluorotryptophan generation in strains MG1655 pSTB7 and PHL628 pSTB7, but displayed a rapid increase (to nearly 20%) in MC4100 pSTB7 and PHL644 pSTB7 in the first hour of the reaction. This suggests that indole influx is much more rapid in MC4100 than in MG1655, and reflects an inherent difference between the strains. Selectivity of conversion of 5-fluoroindole to 5-fluorotryptophan increased rapidly in PHL628 pSTB7, PHL644 pSTB7 and MG1655 pSTB7, although MG1655 pSTB7 selectivity was highest after 8 hours (Figure 3c). Planktonic biotransformation reactions (in 10 mL of culture volume) contained a dry mass of 1.1 ± 0.1 mg for PHL644 pSTB7 and 1.2 ± 0.2 mg for PHL628 pSTB7.
The same parameters are shown for the biotransformation of 5-chloroindole to 5-chlorotryptophan in Figure 4. Unlike the 5-fluoroindole reaction, strains PHL628, PHL644 and MG1655 showed similar overall percentage chlorotryptophan yields. As with the fluoroindole reactions (Figure 3), strains MC4100 pSTB7 and PHL644 pSTB7 both showed rapid chloroindole depletion in the first hour of the reaction whereas MG1655 pSTB7 and PHL628 pSTB7 displayed more gradual depletion. As a result, the selectivity of the reaction was initially higher in MG1655 pSTB7 and PHL628 pSTB7, peaking at around 75% at 4 hours, although the selectivity of these two strains decreased to around 50% over the course of the reaction. PHL644 pSTB7 selectivity increased over time to around 50% after 25 hours. As with the fluoroindole reaction, the selectivity of MC4100 pSTB7 was lowest throughout. Planktonic biotransformations yielded extremely low production of 5-bromotryptophan (<10%; Additional file 1: Figure S3). 5-bromoindole was depleted in these biotransformation reactions (although not to the same extent as fluoroindole and chloroindole), but the rate of conversion to 5-bromotryptophan was very low. As with the 5-fluoroindole and 5-chloroindole reactions, 5-bromoindole was rapidly taken up by strains PHL644 and MC4100.
Biofilm-mediated biotransformation
Results for the biotransformation of 5-fluoroindole to 5-fluorotryptophan using engineered biofilms that had been matured for 7 days in M63 medium are shown in Figure 5. Biofilm-mediated reactions were dramatically different from planktonic reactions, both in terms of each strain's relative activity and in overall reaction kinetics. The rapid import of haloindole observed in planktonic MC4100 strains (Figures 3 and 4) was not observed in biofilm reactions, probably a consequence of the changes in indole transport and metabolism upon biofilm formation (Lee & Lee, 2010). Strains containing the ompR234 mutation were all more catalytically active than their wild type counterparts; this is probably due in part to the lower entrapment of wild type cells (Figure 1). Unlike reactions performed with the cells in the planktonic state, the PHL628 pSTB7 biofilm outperformed PHL644 pSTB7 in terms of overall fluorotryptophan yield, rate of conversion and selectivity. MG1655 pSTB7 and MC4100 pSTB7 displayed minimal conversion of metabolised fluoroindole to fluorotryptophan until after 24 hours incubation (Figure 5c). For the biofilm-mediated conversion of 5-chloroindole to 5-chlorotryptophan (Figure 6), PHL628 pSTB7 displayed rapid 5-chloroindole import (similar to MC4100 planktonic cells). Conversion was higher in PHL644 pSTB7 than PHL628 pSTB7, probably a consequence of the earlier exhaustion of 5-chloroindole in the latter strain. As with the planktonic 5-bromotryptophan reactions, the yields of biofilm-catalysed 5-bromotryptophan biotransformations were very low; 5-bromoindole was taken up by cells, but converted to 5-bromotryptophan at a very low rate (Additional file 1: Figure S4).
In order to compare the biotransformation reaction on an equivalent basis between different strains and haloindoles, initial reaction rate data normalised by cell dry mass (expressed in units of μmol halotryptophan (mg dry cells)-1 h-1) are presented in Table 1. As previously observed (Tsoligkas et al., 2011), reaction rates followed the trend fluoroindole > chloroindole > bromoindole. Biofilms and planktonic cells had very similar initial reaction rates, except for MG1655 pSTB7 and PHL628 pSTB7 with fluoroindole, where the initial conversion rate using biofilms was three to four times that of planktonic cells. It should be noted that initial rates do not necessarily relate to overall reaction yields, and these data should be consulted in conjunction with Figures 3, 4, 5 and 6.
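Equations 1-3 and the dry-mass normalization are not written out in the extracted text; a plausible reading, with yield and depletion expressed relative to the starting haloindole and selectivity as product formed per substrate consumed, is sketched below. Function and variable names are ours, and the example numbers are purely illustrative.

```python
def yield_depletion_selectivity(halotrp_mM, haloind_mM, haloind0_mM):
    """Assumed reading of equations 1-3: percentages of halotryptophan yield (Y),
    haloindole depletion (D) and selectivity of conversion (S) at one timepoint."""
    consumed = haloind0_mM - haloind_mM
    Y = 100.0 * halotrp_mM / haloind0_mM
    D = 100.0 * consumed / haloind0_mM
    S = 100.0 * halotrp_mM / consumed if consumed > 0 else 0.0
    return Y, D, S

def initial_specific_rate(halotrp_mM, t_hours, volume_mL, dry_mass_mg):
    """Initial rate in umol halotryptophan (mg dry cells)^-1 h^-1, assuming the
    product concentration is measured in the stated reaction volume
    (1 mM in 1 mL corresponds to 1 umol)."""
    return (halotrp_mM * volume_mL) / (dry_mass_mg * t_hours)

# Example: 0.4 mM product after 1 h in 10 mL of reaction with 1.1 mg dry cells.
print(yield_depletion_selectivity(halotrp_mM=0.4, haloind_mM=1.4, haloind0_mM=2.0))
print(initial_specific_rate(halotrp_mM=0.4, t_hours=1.0, volume_mL=10.0, dry_mass_mg=1.1))
```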
Cell physiology during biotransformation reactions
To eliminate the possibility that differences in biotransformation yields were due to changes in bacterial viability or physiology, flow cytometry was used to determine the proportion of PHL644 pSTB7 cells with membrane potential and membrane integrity (i.e. live cells) after 2 and 24 hours of biotransformation reactions (Table 2). In all conditions, the vast majority of the cell population were live cells. Neither the presence of DMSO nor any 5-haloindole had any detrimental effect on cell viability in planktonic biotransformations, even after 24 hours (p < 0.05). The presence of 5-haloindoles did not have a statistically significant effect on the percentage of biofilm cells alive after either 2 or 24 hours (p < 0.05); however, the proportion of live biofilm cells decreased between 2 and 24 hours (p < 0.05). Examples of plots obtained through flow cytometry are shown in Additional file 1: Figure S5.
Biofilm formation
Biofilm formation is a complex process governed by many environmental cues, detected and coordinated through a complex regulatory network (Beloin et al., 2008). The osmolarity-sensing two component regulatory system EnvZ-OmpR is crucial to the regulation of biofilm formation in E. coli (Shala et al., 2011; Vidal et al., 1998). OmpR transcriptionally activates the csgDEFG operon; CsgD in turn activates transcription of the csgBAC operon, encoding the curli structural proteins which enable initial attachment of bacteria to surfaces (Prigent-Combaret et al., 2001; Ogasawara et al., 2010; Brombacher et al., 2003). In addition, CsgD also activates transcription of adrA, encoding a putative diguanylate cyclase which is predicted to generate c-di-GMP and thus activate cellulose production (Bhowmick et al., 2011). The ompR234 mutation carried in strains PHL628 and PHL644 comprises a point mutation (L43R) located within the receiver domain, which enhances activation of csgDEFG (Prigent-Combaret et al., 2001; Prigent-Combaret et al., 1999; Vidal et al. 1998). It was, therefore, expected that the ompR234 strains would form biofilm more readily than MC4100 and MG1655 (Figure 2). Indole has previously been shown capable of enhancing biofilm formation (Chu et al., 2012; Pinero-Fernandez et al., 2011), whereas tryptophan has been shown to decrease biofilm formation (Shimazaki et al., 2012). Therefore the presence of pSTB7 could result in decreased biofilm formation since tryptophan concentrations (both intracellular and extracellular) could be predicted to be higher in cells containing pSTB7. E. coli MC4100 and MG1655 did not form substantial biofilms, hence the presence of pSTB7 did not have a significant effect on these strains (Figure 2).
pSTB7 decreased the biomass of PHL628 biofilms, although it did not decrease biofilm formation in PHL644. This was possibly a consequence of the higher activity of tryptophan synthase in biofilms of PHL628 pSTB7 compared to PHL644 pSTB7 (Table 1), which would deplete intracellular indole.
Figure 6 Biotransformation of 5-chloroindole to 5-chlorotryptophan using engineered biofilms comprising two strains. Concentrations of 5-chlorotryptophan and 5-chloroindole were measured using HPLC and percentage 5-chlorotryptophan accumulation (a), percentage 5-chloroindole depletion (b) and the selectivity of the 5-chloroindole to 5-chlorotryptophan reaction (c) were plotted against time. All cells contained pSTB7.
Biotransformation rates and efficiencies
As previously noted (Tsoligkas et al., 2011), the initial rate of biotransformation reactions followed the trend 5-fluorotryptophan > 5-chlorotryptophan > 5-bromotryptophan, irrespective of strain (Table 1); this has been ascribed to steric hindrance of the TrpBA enzyme by bulky halogen adducts (Goss and Newill, 2006). The selectivity of the haloindole to halotryptophan reaction was not 100% in any of the cases studied. In most cases, the reaction stopped due to haloindole depletion. Since, in the absence of pSTB7, haloindole concentrations did not decrease over the course of 30-hour biotransformation reactions, it can be concluded that all haloindole consumed by pSTB7 transformants was initially converted to halotryptophan by the recombinant TrpBA, and that haloindole influx into cells was driven by this conversion. Indole is thought to predominantly enter bacteria via diffusion through the membrane, a process which would probably be aided by the presence of DMSO in the reaction buffer (Pinero-Fernandez et al., 2011). Haloindole utilisation data (Figures 3b and 4b) reveal that MC4100 and its ompR234 derivative PHL644 display an extremely rapid initial influx of haloindole within the first hour of planktonic reactions. This is not observed in planktonic reactions with MG1655 or PHL628, where indole influx is steadier. Initial halotryptophan production rates reflect these data (Table 1). Biofilm reactions display a different trend; rapid indole influx is only seen in PHL628 chloroindole reactions (Figure 6b), and indole influx is slower in PHL644 than PHL628. Again, this is probably due to the higher rate of halotryptophan production in biofilms of PHL628 than PHL644 (Table 1), driving haloindole influx via diffusion.
Since halotryptophan concentrations were measured here by HPLC in the cell-free extracellular buffer, all measured halotryptophan must have been released from the bacteria, either by active or passive processes. Therefore, conversion ratios of less than 100% must derive either from failure of halotryptophan to leave bacteria or alternative halotryptophan utilisation; the latter could be due to incorporation into proteins (Crowley et al., 2012) or degradation to haloindole, pyruvate and ammonia mediated by tryptophanase TnaA (Figure 1). Although regenerating haloindole, allowing the TrpBA-catalysed reaction to proceed again, this reaction would effectively deplete serine in the reaction buffer and so potentially limit total conversion. The concentration of serine could not be monitored and it was not possible to determine the influence of this reverse reaction. Deletion of tnaA would remove the reverse reaction, but since TnaA is required for biofilm production (Shimazaki et al., 2012) this would unfortunately also eliminate biofilm formation so is not a remedy in this system. Synthesis of TnaA is induced by tryptophan, which could explain the decrease in conversion selectivity over time observed in planktonic MG1655 and PHL628 chlorotryptophan reactions ( Figure 4c); chlorotryptophan synthesis could potentially induce TnaA production and thus increase the rate of the reverse reaction. In other reactions, selectivity gradually increased over time to a plateau, suggesting that initial rates of halotryptophan synthesis and export were slower than that of conversion back to haloindole. Taken together, these observations are likely due to underlying differences between strains MG1655 and MC4100 and between planktonic and biofilm cells in terms of: indole and tryptophan metabolism, mediated by TrpBA and TnaA; cell wall permeability to indole; and transport of tryptophan, which is imported and exported from the cell by means of transport proteins whose expression is regulated by several environmental stimuli. They underline the requirement to assess biotransformation effectiveness, both in terms of substrate utilisation and product formation, in multiple strains, in order that the optimal strain might be selected.
We had previously hypothesised that biofilms were better catalysts than planktonic cells for this reaction due to their enhanced viability in these reaction conditions, allowing the reaction to proceed for longer; however, flow cytometry reveals this to be untrue. Therefore, the reasons for extended reaction times in biofilms as compared to planktonic cells must be more complicated. A second possible reason for such behaviour could be the higher plasmid retention of biofilm cells (O'Connell et al., 2007), which could allow greater trpBA expression and thus more enzyme in biofilm cells. However, the initial rate of halotryptophan production per mass of dry cells was very similar in most of the cases, apart from PHL628 pSTB7 and MG1655 pSTB7 for fluoroindole; therefore it appears that this hypothesis can be disregarded. Furthermore, the similarity of the initial conversion rates between the two physiological states (biofilms and planktonic) suggests that mass transfer of haloindole through the biofilm was not the limiting step in the biotransformation because, if this was the case, lower initial conversion rates would have been found for biofilm reactions. Future studies will focus on the increased longevity of the reaction in biofilms when compared to planktonic cells, and the differences in tryptophan and indole metabolism in biofilms and planktonic cells.
In conclusion, in order to be used as engineered biofilms E. coli strains need to be able to readily generate biofilms, which can be achieved through the use of ompR234 mutants. Despite the presence of native tryptophan synthase in E. coli, a plasmid carrying the trpBA genes under the control of a non tryptophan-repressed promoter was required to achieve detectable conversions of 5-haloindole to 5-halotryptophan. PHL644 pSTB7 returned the highest conversion when planktonic cells were employed in biotransformations but PHL628 pSTB7 gave the highest production of fluorotryptophan when biofilms were used.
Higher viability is not the reason for biofilms' greater performance than planktonic cells; complex differences in indole and tryptophan metabolism and halotryptophan transport in biofilm and planktonic cells probably determine reaction efficiency. The results underline that biotransformation reactions need to be optimised in terms of host strain choice, recombinant enzyme production and method of growth for the chosen biocatalyst.
|
v3-fos-license
|
2018-11-15T18:38:58.456Z
|
2018-11-01T00:00:00.000
|
53249292
|
{
"extfieldsofstudy": [
"Computer Science",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://doi.org/10.3390/s18113837",
"pdf_hash": "5c4fcbe9533d2308e97d289ad2e9e1f441acfe27",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46633",
"s2fieldsofstudy": [
"Computer Science"
],
"sha1": "5c4fcbe9533d2308e97d289ad2e9e1f441acfe27",
"year": 2018
}
|
pes2o/s2orc
|
Robust Powerline Equipment Inspection System Based on a Convolutional Neural Network
Electric power line equipment such as insulators, cut-out-switches, and lightning-arresters play important roles in ensuring a safe and uninterrupted power supply. Unfortunately, their continuous exposure to rugged environmental conditions may cause physical or electrical defects in them, which may lead to the failure of the electrical system. In this paper, we present an automatic real-time electrical equipment detection and defect analysis system. Unlike previous handcrafted feature-based approaches, the proposed system utilizes a Convolutional Neural Network (CNN)-based equipment detection framework, making it possible to detect 17 different types of powerline insulators in a highly cluttered environment. We also propose a novel rotation normalization and ellipse detection method that play vital roles in the defect analysis process. Finally, we present a novel defect analyzer that is capable of detecting gunshot defects occurring in electrical equipment. The proposed system uses two cameras: a low-resolution camera that detects insulators from long-shot images, and a high-resolution camera which captures close-shot images of the equipment that help with effective defect analysis. We demonstrate the performance of the proposed real-time equipment detection, with up to 93% recall and 92% precision, and of the defect analysis system, with up to 98% accuracy, on a large evaluation dataset. Experimental results show that the proposed system achieves state-of-the-art performance in automatic powerline equipment inspection.
Introduction
Powerline equipment such as insulators, cut-out-switches, and lightning arresters plays vital roles, on either the power generation or the distribution side, for the safe delivery of electricity to the end user [1]. Since this equipment has to bear severe weather conditions, high mechanical tension, and extreme voltage power, it is easily damaged, in which case it must be replaced or repaired before its electrical life ends [2]. Due to the uncertainty of the life of these insulators, electric companies must take preventive measures to monitor them. Traditional monitoring methods require electric repairmen to climb the pole and visually or electrically analyze the defects. Earlier automated approaches [27,28] attempted to distinguish defective insulators from good ones; however, the insulator detection mechanisms in both studies were not clearly described. Moreover, the pole detection method in [28] is based on simple edge template matching, which only works well when the poles are located against a clear background.
Oberweger et al. [30] presented a novel approach to detect insulators in aerial images based on discriminative training of local gradient-based descriptors and a RANSAC-based voting scheme. However, their scheme cannot detect multiple insulators from a single image. Jabid and Uddin [29] used a classical sliding window-based detection method with local directional pattern (LDP) feature and SVM classifier. Their method not only scales the input image to multiple sizes but also rotates the input image at multiple orientations in order to address the size and rotation variations, which slows down the detection process.
Developing a successful insulator monitoring system is a challenging problem due to the large variations in the appearances of insulators caused by scale, viewpoint, color and occlusion [22]. Cluttered backgrounds also increase the complexity of the problem and often increase the computational cost and decrease the detection rate. Most of the existing insulator detection methods address only a subset of the variations without having the capability of handling all of them. In contrast, features extracted from pre-trained CNNs such as OverFeat have been successfully used in computer vision tasks such as scene recognition and object attribute detection, and achieve improved results compared to handcrafted features [31]. This is the major motivation for using CNN for the detection of electrical equipment in our work. Given the power of CNN features, our work aims to automatically select the best features for various types of equipment in the CNN training routine.
In this paper, we propose a complete real-time electrical equipment monitoring system which can detect various types of insulators from live video, taken from a dual camera system mounted on the rooftop of a ground vehicle, and subsequently analyze the images for potential defects. Since most of the electrical equipment discussed in this paper are insulators, we will use the term insulator to represent all other electrical equipment throughout this paper. The camera system contains two cameras inside its casing: (1) a low-resolution (LR) camera that takes a long shot image covering many objects of interest (i.e., insulators), and (2) a high-resolution (HR) camera that focuses on these objects of interest by panning, tilting and zooming. In order to assist the camera system to detect insulators in low-resolution and high-resolution images, we trained multiple CNN-based insulator detectors. To the best of our knowledge, there has been no study where CNN was effectively used to detect and classify different types of powerline insulators (reference [32] used CNN for defect analysis, not insulator detection). The CNN-based insulator detector first detects many insulators from the low-resolution images by reading frames from the memory of the LR-camera and then passes the locations of those insulators to the HR-camera. The HR-camera then zooms into those locations and takes high-resolution images of the insulators. Another CNN-based insulator detector helps to accurately crop the high-resolution images of the insulator and passes those images to the defect analysis module. The proposed insulator detection and defect analysis system is highly robust against the cluttered background, occlusion, arbitrary orientations, diverse lighting conditions, and viewpoint changes.
To contrast the proposed scheme, the pros and cons of the previous work are summarized in Table 1. We list the number of electrical equipment types that can be detected, the variations and the amount of complexity in the background of the images that can be handled, and the major drawbacks of the previous work.
One of the most salient features of the proposed scheme is the ability to detect 17 different types of electrical equipment. Moreover, most of the existing methods present naïve approaches of thresholding the color or intensity image using a single threshold [1,[14][15][16][17]25,26,[33][34][35][36], and hence these methods are sensitive to color and lighting variations. In order to show the robustness of our proposed method, we evaluated our method on a large dataset of 644 cluttered insulator images, while the methods described by [17,22,24,26,33,36] used small datasets of 2, 3, 4, 5, 10, and 74 images, respectively, and the methods in [2,21] tested their algorithm on a single image. Although [23] used 100 images for training their model, they tested their system on the same 100 images, hence the robustness of these algorithms cannot be guaranteed. The drawbacks listed in Table 1 (reference, year, number of equipment types detected) include the following:
• Saliency only works when the main object is close to the camera; a single image was used for training and testing, which cannot prove the robustness of the system.
• [14,15] ('11, 1): High dependency upon color pixel values.
• [16] ('10, 1): Simple template matching based approach.
• [17] ('10, 1): Works only when there is a low-intensity background and a high-intensity foreground.
• [25] ('12, 1): Works with close-shot images only; the whole technique depends upon a naïve approach of binarizing the intensity image.
• [26] ('16, 1): The whole technique depends upon color thresholding; a single image for testing and training cannot prove the robustness of the system.
• [21] ('06, 1): A single image is used to train and test, so the robustness of the scheme cannot be proved.
• [38] ('15, 1): The active contour model only works when the main object is very close to the camera and the background color is discriminant with respect to the foreground; the testing dataset contains only 12 images.
• [22] ('12, 2): The algorithm completely depends upon structural symmetry, which is lost when the viewing angle changes; detection results are shown on only two images.
• [34] ('18, 1): Saliency only works when the main object is close to the camera and the background is far from the main object.
Lastly, by looking at Table 1, we can observe that only the proposed scheme provides robustness against all four types of variations, while the schemes in [29,30,37] guarantee partial robustness against out-of-plane rotation. Out of these four, the proposed scheme is the one that provides the highest precision and recall. Moreover, the scheme in [30] cannot detect multiple insulators in one image, while the proposed method can detect 17 different types of insulators in a single image. The scheme in [29] can only detect one type of insulator, and also suffers from low detection speed as discussed earlier. The scheme in [37] requires a camera with LED illuminators to illuminate extra light on the insulators so that they can be easily distinguished from the background. Moreover, their setup is only feasible for railway systems, where the distance between camera and insulators is small as compared with the distance between ground vehicle and powerline insulators.
After the detailed review of the related work and the introduction of the proposed method, we highlight the main contributions of this study as follows: (1) Unlike earlier studies that use handcrafted features, we explore the robustness of CNN features and use them for the task of multi-type insulator detection in a highly cluttered environment. (2) We present an ellipse detection method specifically designed for segmenting the caps from various types of insulators. (3) We propose a novel insulator rotation normalization method that normalizes the in-plane rotation of insulators, irrespective of their types. (4) We propose a novel defect analysis method that can detect gunshot defects in polymer insulators. (5) We present a complete automatic real-time multi-type insulator detection and defect analysis system.
The rest of this paper is organized as follows: in Section 2, we provide a detailed overview of the proposed system and discuss the multi-type insulator detection mechanism and the experimental setups used to train those detectors. We also present a novel rotation normalization method for various types of insulators, whereas in Section 3 we describe the proposed defect analysis system and its components, along with the ellipse detector algorithm. In Section 4, we provide the experimental results and performance analysis of the complete insulator detection and defect analysis system, along with a comparison to the state of the art. Finally, we draw conclusions and outline future directions in Section 5.
Overview of the Proposed System
This section presents a detailed overview of the proposed system. A brief introduction to the types of insulators our system can detect is given in Figure 1. In every insulator type, the repeating circular-shaped part is called the "Cap" and the rod passing through the center of these caps is called the "Core" or "Sheath." As shown in Figure 1, the proposed system can detect 17 different types of electrical equipment, having different shapes, sizes, and colors. Depending upon the availability of image data, the proposed system can be trained to detect more equipment with minor modifications in the training and detection routines.
Figure 1. Example images of various types of electrical equipment that can be detected by our proposed system. Insulators of (a) polymer and (b) porcelain; Lightning-Arrester (LA) of (c-1) porcelain with uniform cap sizes, (c-2) polymer, (c-3) porcelain with non-uniform cap sizes; Cut-out-Switches (COS) of (d-1) porcelain, (d-2) polymer with uniform cap sizes, (d-3) polymer with non-uniform cap sizes; Line-Post (LP) of (e-1) porcelain with uniform cap sizes, (e-2) porcelain with non-uniform cap sizes, (e-3) polymer, (e-4) porcelain of white color; and COS or LA add-ons of (f-1) polymer, (f-2) porcelain with four caps, (f-3) square shaped, (f-4) porcelain with two caps and (f-5) polymer with two caps. For training the CNN, we grouped the equipment into 6 base classes, i.e., (a) polymer insulator, (b) porcelain insulator, (c) LA, (d) COS, (e) LP, and (f) add-on.
The overall system diagram of the proposed real-time multi-type insulator detection and defect analysis system is shown in Figure 2. In the first stage, the video acquisition module (ground vehicle with the proposed camera system mounted on its rooftop) captures the video of the top part of the poles (where the insulators are installed) on the streets. This video is captured by the low-resolution camera. In the LR detection stage, the frames of this video are processed by a CNN-based rotation invariant multi-type insulator detector. As the insulator images captured by the fixed camera are of low-resolution, hence we refer to this detector as LR (low-resolution) detector throughout this paper. The LR detector passes the coordinates of the bounding boxes of the detected insulators to the HR-camera module, which takes the high-resolution close shot of all the insulators. As the vehicle is continuously moving, and the camera system has some delay in taking pictures, the HR-camera covers some extra area around the detected insulator to compensate for vehicle movement and capturing delay. For defect analysis, we need a tighter window around the insulators, so we pass the high-resolution images to another CNN-based rotation-invariant multi-type insulator detector named HR_1 (high-resolution) detector which provides more precise bounding box around the insulator body, as shown in Figure 2. The detected insulators can appear in any arbitrary orientation, hence in order to normalize the rotation of the insulators, we apply a rotation normalization method, which rotates (all types of) insulators such that the line passing through the core/shed of the insulator becomes parallel to the horizontal axis.
The rotated high-resolution images still contain extra space around the insulators; therefore, another CNN-based multi-type insulator detector named HR_2 is applied to the rotated high-resolution images, which finally gives the precise bounding boxes around different types of insulators, as shown in Figure 2. The tightly cropped insulator is passed to the ellipse detector for segmenting each cap from the insulator, as shown in Figure 2. Finally, the cropped caps are passed to the defect analysis module which computes the percentage of defects in each cap. In the later subsections, we give the implementation details of the various steps shown in the system diagram of Figure 2.
CNN-Based Robust Insulator Detector
In contrast with the handcraft features used in the previous studies, the proposed system uses CNN for the detection of different types of powerline insulators. We believe that the lack of availability of the annotated insulator images is one of the reasons why CNN was not used in previous studies. In our work, we acquired and annotated a large and diverse set of insulator images (i.e., both, long-shot low-resolution and close-shot high-resolution images) to train the CNN-based multi-type insulator detectors.
We used Darknet's open source neural network framework and object detection system, You Only Look Once (YOLO) version 2 [39], which is a state-of-the-art, real-time object detection method, written in the C and CUDA (Compute Unified Device Architecture) programming languages. The high detection speed of YOLO reaches 50-60 frames per second on an NVIDIA GeForce 1080 GPU (NVIDIA Corporation, Santa Clara, CA, USA), which makes it a suitable candidate for developing real-time applications. The high detection speed comes from the fact that YOLO divides the whole image into fixed-sized regions and predicts bounding boxes and probabilities or confidence scores for each region, rather than generating region proposals first and then applying the detection network to those regions separately [40]. As described earlier, the proposed system contains three CNN-based multi-type insulator detectors, LR, HR_1, and HR_2. In the following subsections, we explain the training and detection processes of each detector.
Rotation Invariant Multi-Type LR Insulator Detector
The LR detector is trained to detect different types of insulators from long-shot low-resolution images taken by the LR camera. The detector divides the entire image into fixed-sized B × B regions to detect objects. We clustered the ground-truth bounding box sizes into 5 groups and set the size of B = 7 based on the average size of the bounding boxes in the major clusters. Different types of COS insulators (i.e., porcelain type, polymer type, etc.) appear very similar in the long-shot images of low resolution, as do the different types of LP, LA, and add-on insulators. Moreover, the class information is ignored by the next step in the pipeline (as shown in Figure 2), because the class information is only used at the defect analysis step. Therefore, we combined the sub-classes shown in Figure 1 into their respective base classes. Finally, the LR detector is trained with 6 classes, i.e., polymer insulator, porcelain insulator, COS, LA, LP, and add-on, denoted by Ins_L, C_Ins_L, COS_L, LA_L, LP_L, and COS_Ins_L, respectively, in Figure 3. We divide the entire image set into training and test sets. We kept a steady learning rate of 0.001 throughout the 45,000 training epochs. Figure 3 shows some of the example detection results in long-shot low-resolution images.
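To make the grid-size choice above concrete, the following minimal Python sketch (the box data are synthetic and the mapping from the dominant cluster to a grid granularity is our own illustrative assumption, not the authors' exact rule) clusters normalised ground-truth box sizes into 5 groups with k-means:

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# (width, height) of annotated boxes as fractions of the image size (synthetic data)
boxes = np.abs(rng.normal([0.15, 0.25], [0.05, 0.08], size=(200, 2)))

km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(boxes)
dominant = km.cluster_centers_[np.bincount(km.labels_).argmax()]  # largest cluster
B = int(round(1.0 / dominant.max()))  # one grid cell roughly one typical box
print("dominant box size:", np.round(dominant, 3), "-> grid B =", B)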
As can be seen in Figure 3, the CNN-based rotation invariant multi-type LR insulator detector is robust against various viewing angles, scales, aspect ratios, partial occlusions, lighting variations, and cluttered environments. The background becomes more complex as we deal with close-shot high-resolution images due to the presence of other types of insulators in the background, which is discussed next.
Rotation Invariant Multi-Type HR_1 Insulator Detector
The image resolutions of the insulators detected by the LR detector are not sufficiently high to be used for defect analysis; hence, the bounding boxes detected by the LR detector are passed to the HR camera module, which in turn guides the HR camera to pan and/or tilt and/or zoom in order to take high-resolution images of the different types of insulators. Although the HR_1 detector is trained to detect all 17 different types of insulators shown in Figure 1, we combine the detection information (i.e., class labels) of each insulator into its base class (similar to what we did for training the LR detector) because the class information is ignored by the next step in the pipeline (see Figure 2). The HR_1 detector is trained with a faster learning rate for the first half of the network training process so that the network can quickly learn to distinguish between different types of insulators. In the second half of the training process, the learning rate is slowed down by a factor of 10 in order for the network to slowly learn the details of the shape, color, and context of the different types of insulators. The HR_1 detector is also robust against lighting conditions, viewpoint variations, partial occlusion, rotation, insulator sizes, and cluttered backgrounds, which is evident from the example detection results shown in Figure 4.
Multi-Type HR_2 Insulator Detector
As can be seen in Figure 4, the bounding boxes around the detected insulators do not tightly enclose the insulators because the insulators appear at arbitrary orientations. The proposed ellipse detection method (see Figure 2) requires that (a) the insulator bodies lie horizontally, and (b) the bounding boxes around the insulator bodies are as tight as possible for accurate ellipse detection. Hence, we apply a novel rotation normalization method (covered in Section 2.2), capable of estimating and normalizing the in-plane rotations of insulators of all types. Once the in-plane rotations of the insulators are normalized, the HR_2 detector detects the bounding boxes that tightly enclose the insulators. In order to train HR_2, we use rotation-normalized insulator images. During annotation of the training images, we rotate and tag the bounding boxes around the insulators. The amount of rotation is used to generate the rotation-normalized images for training the HR_2 detector, whereas the same training images without rotation normalization are utilized for training the HR_1 detector. We kept the value of parameter B = 5 for training the HR_2 detector and trained the network with the same method as we used to train the HR_1 detector. Each type of CNN-based detector returns a confidence score ∈ [0, 1] along with the bounding box coordinates of the detected equipment. In order to reject detection results with a low confidence score, we applied a detection threshold for each type of CNN-based detector, i.e., LR, HR_1, and HR_2. The detection results of the HR_2 detector are shown in Figure 5. The average detection time of the HR_2 detector (20 ms) is negligible, so it does not negatively affect the real-time performance of the proposed system. The novel rotation normalization approach is explained in the following subsection.
Figure 5. Detection results of the CNN-based multi-type HR_2 detector. The orientation of the insulator image is normalized before feeding it to the HR_2 detector. The tight and precise detection of the insulators helps in better cap segmentation by our novel ellipse detector, which is a robust alternative to color-based segmentation [26].
Insulator Rotation Normalization Method
Rotation normalization of insulator images is an essential process in our proposed system. The novel rotation normalization algorithm is designed in such a way that it can estimate the orientation of the insulators regardless of their types. We exploited the appearance symmetry property of the insulators' shape and the spatial context information to design a robust algorithm for estimating the orientation of the insulators. The algorithm comprises the following steps: (1) In order to find the best rotation angle, the edge map of the detected insulator in the high-resolution image is computed using the Canny edge detector. The edge map is rotated to all possible angles between 1 and 180 degrees. (2) Low-level visual features of all the edge maps are analyzed exhaustively; feature points are extracted and clustered by their appearance similarities. (3) The appearance similarity between any two feature points is computed by folding the edge map with respect to the center of the two feature points and then applying convolution. (4) A high convolution score implies that the feature points in the two folds have high appearance similarity. Hence, we perform max voting on the convolution scores to find the candidate point cluster that is consistent with the geometrical relationship of the insulator shape. (5) Finally, a tight bounding box encapsulating the maximum-voted point cluster is returned, whose longest side represents the final orientation of the insulator.
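The fold-and-convolve voting described above is fairly involved; the following much-simplified Python stand-in (our own heuristic, not the authors' implementation) only illustrates the outer search over candidate angles: the Canny edge map is rotated through 0-179 degrees and the angle at which edge pixels concentrate onto the fewest image rows is kept, since a horizontally lying insulator produces strongly row-aligned edges:

import cv2
import numpy as np

def estimate_orientation(gray):
    edges = cv2.Canny(gray, 50, 150)
    h, w = edges.shape
    best_angle, best_score = 0, -1.0
    for angle in range(180):
        M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
        rot = cv2.warpAffine(edges, M, (w, h))
        rows = rot.sum(axis=1).astype(np.float64)
        if rows.sum() == 0:
            continue
        p = rows / rows.sum()
        score = np.square(p).sum()  # large when edges collapse onto few rows
        if score > best_score:
            best_score, best_angle = score, angle
    return best_angle

img = np.zeros((200, 200), np.uint8)
cv2.ellipse(img, (100, 100), (80, 20), 30, 0, 360, 255, 2)  # synthetic tilted cap
print("estimated in-plane rotation:", estimate_orientation(img))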
The rotation normalized images are fed to HR_2 detector, which returns a tighter bounding box around the insulator body. The tightly cropped insulator image is then passed to the novel ellipse detector module for individual cap segmentation, which is covered in the next section.
The Ellipse Detector
In order to find the defects on insulator caps, we need to analyze each cap separately, for which the caps of the insulator are segmented. In contrast with the color-based segmentation schemes [26], which are sensitive to color and lighting variations, we propose segmenting the caps of the insulators using an ellipse detector. Ellipse-detection-based cap segmentation is not only robust against color and lighting variations but is also able to detect caps under major occlusion, which is not handled by the scheme in [26]. However, traditional ellipse detection methods tend to suffer from high false-positive detections due to the noisy edge map of the insulator images, and they do not take into account the appearance-symmetry prior (same-sized, equally spaced caps/sheds) inherited by the different insulators, which can be utilized for more accurate ellipse detection. We present an ellipse detection method which takes into account the structural symmetry, the number of caps, the shape, and other prior information for cap detection. The overall algorithm can be broken down into three steps, i.e., (i) pre-processing the edge map and labeling the arcs, (ii) selecting proper arcs and fitting ellipses onto them, and finally, (iii) some post-processing steps to refine the detection results. Even though these three basic steps are similar to those proposed in [41,42], we made significant changes to the underlying algorithm. The detailed steps involved in the ellipse detection algorithm are described in the following subsections.
Adaptive Thresholding
Our ellipse detection algorithm starts with finding the edge map of the insulator image. We use the Canny edge detector with adaptive thresholding. In contrast with the automatic thresholding used by [41,42], we formulate the adaptive threshold as a function of two terms, i.e., the total number of edges remaining in the edge map and the minimum allowed length of the edges. Let the total number of edges in the edge map be N, l_i be the length of the ith edge, th_el be the minimum allowed edge length, th_canny be the threshold in Canny edge detection, and Ñ be the number of remaining edges when the edge-length threshold is applied to the edge map; then

Ñ = |{ i : l_i ≥ th_el, i = 1, ..., N }|.    (1)

It is obvious that Ñ depends on th_canny and th_el. We fixed the value of th_el = 30 pixels based on the prior information about the sizes of caps in various insulators; hence Ñ depends only on th_canny. For ellipse detection in insulator images, we must have a minimum number of remaining edges (N_min) in the edge map for better ellipse detection. Hence, in order to adaptively adjust th_canny, we iteratively search for the proper value of th_canny in a gradient-descent manner until Ñ ≈ N_min. Figure 6a shows the intermediate results of the iteration process. A different automatic thresholding method is used in [41,42], which first removes 20% of the edge pixels and then finds the bin index of maximum count from the histogram of edge gradients.
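A minimal sketch of this search is given below (assuming OpenCV 4; N_min, the step size, and the use of contour length as a proxy for edge length are illustrative assumptions, and the threshold is adjusted with a coarse step search rather than a true gradient step):

import cv2
import numpy as np

def count_long_edges(gray, th_canny, th_el=30):
    edges = cv2.Canny(gray, th_canny, 2 * th_canny)
    contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)
    return sum(1 for c in contours if len(c) >= th_el)  # edges longer than th_el

def adaptive_canny_threshold(gray, n_min=40, th_canny=100, step=5, max_iter=50):
    for _ in range(max_iter):
        n = count_long_edges(gray, th_canny)
        if abs(n - n_min) <= 2:  # close enough to the target edge count
            break
        th_canny += step if n > n_min else -step  # move the count towards N_min
        th_canny = int(np.clip(th_canny, 10, 250))
    return th_canny

gray = cv2.GaussianBlur(np.random.randint(0, 255, (240, 320), np.uint8), (5, 5), 0)
print("selected Canny threshold:", adaptive_canny_threshold(gray))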
Insulator Core Removal
One piece of prior information about all types of insulators is that a core passes through the center of the caps. The core has horizontal edges in the edge map that are sometimes connected with the edges of the caps, as shown in Figure 6a, which negatively affects the ellipse/cap detection. Hence, we remove the center part of the insulator that contains the core from the edge map, which consequently removes the noisy edges at the center, as shown in Figure 6b. The prior information about the sizes of insulators decides the amount of area to be removed from the center part of the edge map. Neither [41] nor [42] removes parts of the object for noise removal before ellipse detection.
Edge Refinement
Next, we apply edge-refinement steps to obtain arcs of good quality for robust ellipse detection. Let e_i be the ith edge point in the edge map, characterized by its position and edge gradient as e_i = (x_i, y_i; θ_i). Also, let I_x, I_y, and I_xy represent the first-order derivatives in the x, y, and xy directions, respectively; then D_x(e_i) = sign(I_x) represents the direction of the edge gradient. First, we remove the edges that are large in size but whose contour is flat, i.e., D_xy = |constant|. This type of edge normally shows up when an insulator is partly occluded by a powerline. Second, we remove the edge points that lie on horizontal (D_y = 0) or vertical (D_x = 0) edge gradients. Third, we remove the edge points that form a local minimum or a local maximum, as shown in Figure 6c,d, respectively. Finally, we remove the edge points that form a junction between three or more edges, as shown in Figure 6e. The edge-refinement method given in [41,42] only removes smaller edges and edges forming horizontal and/or vertical lines.
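The short sketch below implements only the second rule (discarding edge points whose gradient is purely horizontal or purely vertical, estimated from Sobel derivatives); the flat-contour, extremum, and junction tests are omitted for brevity:

import cv2
import numpy as np

def refine_edge_points(gray, eps=1e-3):
    edges = cv2.Canny(gray, 80, 160)
    ix = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)  # I_x
    iy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)  # I_y
    ys, xs = np.nonzero(edges)
    keep = (np.abs(ix[ys, xs]) > eps) & (np.abs(iy[ys, xs]) > eps)  # D_x != 0 and D_y != 0
    return np.column_stack([xs[keep], ys[keep]])

gray = np.zeros((120, 120), np.uint8)
cv2.circle(gray, (60, 60), 40, 200, -1)  # synthetic test shape
print("edge points kept after refinement:", len(refine_edge_points(gray)))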
Arc Labeling
Next, we compute the convexities and directions of all the arcs according to [41] and label all the arcs with the quadrants they belong to, based on the values of their convexities and edge-gradient directions. Let C : α_k → (+, −); then C(α_k) = + denotes that the kth arc, represented by α_k, is upper convex, and vice versa. Let the quadrant labels be represented by Q_I, Q_II, Q_III and Q_IV, and consider the edge-gradient directions to be computed in an anti-clockwise manner; then the quadrant labels are assigned using Equation (2), as shown in Figure 6f, where Q is a function that maps α_k to one of the four quadrants, i.e., Q : α_k → {Q_I, Q_II, Q_III, Q_IV}.
Arc Triplet Selection-Constraint 1
It is clear from Figure 6f that an ellipse can be formed if and only if we combine arcs having non-overlapping quadrant labels. Let us define the arc triplet as ζ_abc = (α_a, α_b, α_c); then the arc triplet ζ_abc is selected if and only if (α_a, α_b, α_c) maps to one of the triplets (Q_I, Q_II, Q_III), (Q_I, Q_II, Q_IV), (Q_I, Q_III, Q_IV), or (Q_II, Q_III, Q_IV).
Arc Triplet Selection-Constraint 2
The tightly cropped insulator image returned by the HR_2 detector helps in the implementation of the second constraint, which allows the selection of only those arc triplets that lie within a plausible geometrical proximity. As shown in Figure 6g, depending upon the viewpoint of the image, the length of the minor axis of the caps can take values in a certain range, which gives an upper limit on the possible geometrical proximity of the arc triplet. As the methods in [41,42] are proposed for ellipse detection in general cases, they do not consider constraint 2 as in our case. Let e_a and e_b denote the lengths of the major and minor axes of an ellipse. Due to the tight cropping of the insulator image by the HR_2 detector, h_i = e_a, where h_i represents the height of the image. Also, the maximum possible e_b, denoted e_b^max, is equal to h_i or e_a. Let P_a(x_a, y_a), P_b(x_b, y_b), and P_c(x_c, y_c) be the center points of the arcs α_a, α_b, and α_c, respectively; then, according to the second constraint, we select the candidate arc triplet ζ_abc if and only if the pairwise distances between P_a, P_b, and P_c remain within the proximity bound set by e_b^max. The two constraints imposed by the algorithm not only enable better arc selection but also cut down tremendous processing time.
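A compact sketch of the two selection constraints follows; the quadrant labels and arc centres are assumed to come from the earlier steps, and the proximity test collapses the set of conditions into a single pairwise-distance bound for illustration:

import itertools
import numpy as np

VALID_LABEL_SETS = [{"QI", "QII", "QIII"}, {"QI", "QII", "QIV"},
                    {"QI", "QIII", "QIV"}, {"QII", "QIII", "QIV"}]

def candidate_triplets(arcs, e_b_max):
    # arcs: list of dicts {'label': quadrant label, 'center': (x, y)}
    triplets = []
    for a, b, c in itertools.combinations(range(len(arcs)), 3):
        labels = {arcs[a]["label"], arcs[b]["label"], arcs[c]["label"]}
        if labels not in VALID_LABEL_SETS:  # constraint 1: non-overlapping quadrants
            continue
        pts = np.array([arcs[i]["center"] for i in (a, b, c)], float)
        dists = [np.linalg.norm(pts[i] - pts[j]) for i, j in ((0, 1), (0, 2), (1, 2))]
        if max(dists) <= e_b_max:  # constraint 2: geometrical proximity
            triplets.append((a, b, c))
    return triplets

arcs = [{"label": "QI", "center": (50, 10)}, {"label": "QII", "center": (20, 12)},
        {"label": "QIII", "center": (22, 40)}, {"label": "QIV", "center": (200, 200)}]
print(candidate_triplets(arcs, e_b_max=60.0))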
Ellipse Parameter Estimation and Candidate Ellipse Selection
Next, we fit an ellipse with the direct least-squares ellipse fitting method [43] using a set of points P from the three selected arcs. Let us represent a general conic by a second-order polynomial

F(α; x) = α · x = a x² + b xy + c y² + d x + e y + f = 0,

where α = [a b c d e f]^T and x = [x² xy y² x y 1]^T. F(α; x_i) is called the algebraic distance of a point (x, y) to the conic F(α; x) = 0. The fitting of the general conic may be realized by minimizing the sum of the squared distances of the M data points x_i to the curve (Equation (3)). Once the proposed algorithm selects an arc triplet, we pick five equally spaced points from each of the three arcs. Let S_p and T_P be the sampled and true points of an ellipse, respectively, with S_p ⊆ T_P, where p = 15 (selected points) and P is the total number of points in all three arcs, and compute an ellipse using Equation (3). Readers are encouraged to refer to [43] for more details about solving Equation (3).
If the set S_p does not form a single ellipse or includes non-arc parts (e.g., noisy edges), the ellipse will not fit all the points T_P on the three arcs, as shown with red markers in Figure 6h. This gives us a final constraint to validate the ellipse. In order to find the overlap between the predicted ellipse E and the points T_P, we compute the goodness-of-fit. Let T_i^n be the ith point on the nth arc triplet and E_o represent the oth predicted ellipse; then we define a function f(T_i^n, E_o) that returns the point (x_i, y_i) on the predicted ellipse E_o with the minimum distance from T_i^n, such that the Euclidean distance ‖f(T_i^n, E_o) − T_i^n‖₂ ≤ th_dist. The goodness-of-fit is then the fraction of the P arc points whose distance to the predicted ellipse is within th_dist:

GoF = (1/P) Σ_{i=1}^{P} 1(‖f(T_i^n, E_o) − T_i^n‖₂ ≤ th_dist).

We empirically set th_dist = 5 pixels. We reject the ellipse if GoF ≤ th_GoF, where th_GoF is the threshold for the goodness-of-fit. We fixed the value of th_GoF = 0.7. This step is repeated for all the candidate arc triplets.
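An illustrative sketch of the fit-and-validate step is given below; cv2.fitEllipse is used as a stand-in for the direct least-squares fit of [43], the goodness-of-fit is approximated by sampling points on the predicted ellipse, and the synthetic arc points are assumptions made for the example:

import cv2
import numpy as np

def ellipse_points(ellipse, n=360):
    (cx, cy), (major, minor), angle = ellipse
    t = np.linspace(0, 2 * np.pi, n)
    a, b, th = major / 2.0, minor / 2.0, np.deg2rad(angle)
    x = cx + a * np.cos(t) * np.cos(th) - b * np.sin(t) * np.sin(th)
    y = cy + a * np.cos(t) * np.sin(th) + b * np.sin(t) * np.cos(th)
    return np.column_stack([x, y])

def goodness_of_fit(arc_pts, ellipse, th_dist=5.0):
    boundary = ellipse_points(ellipse)
    d = np.linalg.norm(arc_pts[:, None, :] - boundary[None, :, :], axis=2).min(axis=1)
    return float((d <= th_dist).mean())  # fraction of arc points near the ellipse

gt = ((100.0, 80.0), (120.0, 60.0), 20.0)  # known ellipse used to fake arc points
pts = ellipse_points(gt, n=45)[:30] + np.random.normal(0, 1.0, (30, 2))
fit = cv2.fitEllipse(pts.astype(np.float32))
gof = goodness_of_fit(pts, fit)
print("GoF: %.2f ->" % gof, "accept" if gof > 0.7 else "reject")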
Combining Multiple Detections
Next, we combine multiple detections by computing the similarity between candidate ellipses and averaging the parameters of ellipses judged to be similar (Equation (5)), where Avg denotes the averaging operation. Two candidate ellipses n1 and n2 are judged similar when

η(n1, n2) = 1 ⇔ |e_a^n1 − e_a^n2| ≤ th_e_a ∧ |e_b^n1 − e_b^n2| ≤ th_e_b ∧ ‖e_center^n1 − e_center^n2‖ ≤ th_center,    (6)

where η represents the measure of similarity between candidate ellipses n1 and n2, and th_e_a, th_e_b, and th_center represent the thresholds for the differences between the lengths of the major axes (e_a), the lengths of the minor axes (e_b), and the centers of the two ellipses (e_center), respectively.
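A minimal sketch of this merging rule, with candidate ellipses represented as (centre, major axis, minor axis) and purely illustrative threshold values:

import numpy as np

def similar(e1, e2, th_a=10.0, th_b=10.0, th_center=8.0):
    (c1, a1, b1), (c2, a2, b2) = e1, e2
    return (abs(a1 - a2) <= th_a and abs(b1 - b2) <= th_b
            and np.linalg.norm(np.subtract(c1, c2)) <= th_center)

def merge_detections(ellipses):
    merged = []
    for e in ellipses:  # e = (center, major axis, minor axis)
        for i, m in enumerate(merged):
            if similar(e, m):  # same cap detected twice: average the parameters
                c = tuple(np.mean([e[0], m[0]], axis=0))
                merged[i] = (c, (e[1] + m[1]) / 2.0, (e[2] + m[2]) / 2.0)
                break
        else:
            merged.append(e)
    return merged

dets = [((100, 50), 80.0, 30.0), ((103, 52), 82.0, 31.0), ((220, 50), 79.0, 30.0)]
print(merge_detections(dets))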
Clustering Detected Ellipses
The final step again exploits the symmetric structure of insulators. As same sized caps are regularly spaced to form an insulator, we cluster the candidate ellipses into a number of groups based on their similarity intervals. The cluster with the largest number of candidate ellipses is selected as the final cluster, and candidate ellipses in the cluster are considered as the detected ellipses. Again, the methods in [41,42] do not cluster the ellipses based on their physical arrangement.
In Figure 7, we show some of the example ellipse detection results that prove the robustness of the proposed ellipse detection algorithm against the color and lighting variations, viewpoint changes, shadows, occlusion, and cluttered environment. The detected caps are segmented and then passed to the defect analyzer, which is presented next.
The Novel Bullet Shot Defect Analyzer
Defects in powerline insulators can occur due to inferior design, use of low-quality materials in manufacturing, improper manufacturing processes, misapplication of the insulator, extreme stresses from weather (e.g., rain, storms, snow, hail, humidity, extreme cold or hot temperatures, UV rays, etc.), vandalism, wildlife, extreme electrical activity, or mishandling. These defects can alter the appearance of insulators, such as their color, shape, and texture. The proposed defect analyzer provides quantitative measures of defects in insulators. Since we use optical images for the defect analysis, we only address the defects that alter the appearance of the insulators. To be more specific, defects occurring internally without changing the outer appearance are out of the scope of this paper. Figure 8 shows example images of insulators with bullet shot defects. As the name suggests, this type of defect is caused when an insulator is hit by a bullet from aerial gunshots. This defect reduces the electrical properties of the insulator and may lead to flashovers. Reduced electrical strength and concerns about potential reduction of the mechanical strength are justifications for insulator replacement [44]. With the help of Figure 9, we illustrate the proposed bullet shot defect analyzer.
The defect analyzer performs the following steps to detect gunshot damage: (1) Ellipse detection to segment caps from insulators.
(2) Masking to remove the extra background.
(3) Edge detection followed by some morphological operations to remove noisy edges.
(4) Labeling all the edges inside the cap. (5) Connected component analysis for blob detection. (6) Bullet shots normally create circular-shaped holes on insulator caps. Hence, in order to differentiate between noisy edges and edges caused by gunshots, we compute the circularity [46] of every closed contour as ρ = 4π A_contour / Ψ², where A_contour and Ψ are the area and perimeter of the closed contour, respectively. The circularity of a perfect circle is equal to 1. The perimeter Ψ is a scalar quantity defined as the distance around the boundary of the contour, computed by summing the positive distances between each adjoining pair of pixels around the border of the contour; if there are in total I points on the curve, then Ψ = Σ_{i=1}^{I} ‖p_{i+1} − p_i‖, with p_{I+1} = p_1. (7) A priori information related to the color intensity of the hole created by a bullet shot puts a final constraint on classifying the detected contour as a bullet shot or not. We observed that in every insulator image with bullet shot damage, the color intensity of the area inside the bullet shot hole is lower than that of the surroundings. (8) Let µ_{I_hole} and µ_{I_boundary} represent the mean intensities of the hole and boundary pixels, respectively; then the final detected contour is classified as a bullet shot if µ_{I_hole} < µ_{I_boundary}, A_contour/A_cap ≥ th_Area, and ρ ≥ th_ρ, where th_Area represents the minimum ratio of the area of the bullet shot hole divided by the size of the cap in pixels, and th_ρ denotes the minimum circularity of the detected hole to be considered a bullet shot defect.
(9) If there are in total G contours classified as gunshot damage, then the final percentage of damage per cap due to the gunshot is computed as D = 100 · (Σ_{i=1}^{G} A^i_contour) / A_cap, where A^i_contour represents the area of the ith gunshot contour and A_cap is the cap area in pixels. A minimal sketch of steps (6)-(9) is given after this list.
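A hedged Python sketch of steps (6)-(9) is given below (assuming OpenCV 4; the threshold values, the ring used to sample boundary intensity, and the synthetic cap image are illustrative assumptions rather than the authors' exact settings):

import cv2
import numpy as np

def gunshot_damage_percent(cap_gray, cap_mask, th_area=0.002, th_rho=0.6):
    cap_area = float(cv2.countNonZero(cap_mask))
    edges = cv2.Canny(cap_gray, 60, 120)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    damaged = 0.0
    for c in contours:
        area, perim = cv2.contourArea(c), cv2.arcLength(c, True)
        if perim == 0:
            continue
        rho = 4.0 * np.pi * area / (perim ** 2)  # circularity, 1 for a perfect circle
        hole = np.zeros_like(cap_mask)
        cv2.drawContours(hole, [c], -1, 255, -1)  # filled candidate hole
        ring = cv2.dilate(hole, np.ones((7, 7), np.uint8)) - hole  # surrounding band
        if (area / cap_area >= th_area and rho >= th_rho
                and cap_gray[hole > 0].mean() < cap_gray[ring > 0].mean()):
            damaged += area  # dark, round, sufficiently large: counted as a bullet hole
    return 100.0 * damaged / cap_area

cap = np.full((120, 200), 180, np.uint8)  # bright synthetic cap
cv2.circle(cap, (60, 60), 12, 40, -1)  # dark synthetic "bullet hole"
mask = np.full_like(cap, 255)
print("damage per cap: %.2f%%" % gunshot_damage_percent(cap, mask))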
Experimental Results and Discussion
In this section, we present experimental results of the proposed insulator detection and defect analyzer system.
Database Acquisition
There is no publicly available insulator image dataset, and only a few studies have presented quantitative evaluation results of system performance using self-acquired small databases. In our work, we gathered a large, unconstrained dataset of insulator images and present our evaluation results based on it. We believe that the dataset is unbiased, unconstrained, and sufficiently large to validate the reliability and effectiveness of the proposed system. We acquired a large dataset of long-shot low-resolution and close-shot high-resolution images of equipment. Among them, we manually annotated 667 low-resolution and 5533 high-resolution images (6200 images in total). Detailed numbers of each insulator type used for training and testing are provided in Table 2.
Insulator Detection Results
We used the well-known Pascal score [47] to evaluate the performance of the three CNN-based multi-type insulator detectors. The Pascal score is calculated by taking the Intersection-over-Union (IoU) of the detected bounding box BB_detected and the ground-truth bounding box BB_gt as

P(BB_detected, BB_gt) = area(BB_detected ∩ BB_gt) / area(BB_detected ∪ BB_gt).

According to the criteria in [47], an object is considered correctly detected if P(BB_detected, BB_gt) ≥ th_IoU, where th_IoU is typically set to 0.5. In the following subsections, we present the detection accuracies of the different CNN-based detectors and the proposed rotation normalizer.
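A minimal Pascal-score (IoU) computation for axis-aligned boxes given as (x1, y1, x2, y2), shown here for reference:

def pascal_score(bb_det, bb_gt):
    ix1, iy1 = max(bb_det[0], bb_gt[0]), max(bb_det[1], bb_gt[1])
    ix2, iy2 = min(bb_det[2], bb_gt[2]), min(bb_det[3], bb_gt[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda b: (b[2] - b[0]) * (b[3] - b[1])
    union = area(bb_det) + area(bb_gt) - inter
    return inter / union if union > 0 else 0.0

print(pascal_score((10, 10, 60, 60), (20, 15, 70, 65)))  # ~0.56, counted as correct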
Rotation Invariant Multi-Type Low Resolution Insulator Detector (LR)
For long-shot low-resolution images, our objective is to detect as many insulators as possible. We keep the detection threshold lower (= 0.10) to allow some false positive detections because the false positives are removed at later stages by the high-resolution detectors. Moreover, even if the detection returns a wrong classification label, the detected insulator has a high chance to be correctly classified by the high-resolution classifiers. Since the high-resolution classifiers are trained with high-resolution images, they learn a better shape model as compared with low-resolution classifiers. We found that most of the false positives detected by LR detector are corrected by HR_1 detector, and only 0.001% of the false positives detected by LR detector remain after HR_2 detector. We used 70% of images of each type of equipment for the training and the rest for the test. Table 2 summarizes the detection performance of the CNN-based low-resolution (LR) detector. As can be seen in Table 2, the LR detector performs best for polymer insulator. The performances of other types of equipment appear relatively lower, possibly due to the smaller number of training samples. As LR detector is trained with low-resolution images, it requires more training data to develop a better shape model for different types of the insulators.
Moreover, we found that the P(BB_detected, BB_gt) > 0.5 criterion is very strict for the LR detector, as the trained CNN model struggles with the localization of smaller objects in low-resolution images [40]. Considering the aforementioned aspects, the results in Table 2 can be considered promising steps toward the ultimate goal.
Rotation Invariant Multi-Type High-Resolution Insulator Detector 1 (HR_1)
The purpose of the rotation invariant HR_1 detector is to detect insulators in the close-shot high-resolution images taken by the camera module. Similar to the LR detector, we keep the detection threshold low (= 0.30) to allow more detections. As our rotation normalization method utilizes the symmetry property of the shape of insulators, false positives that contain asymmetric shapes, such as trees, pole edges, wires, insulator covers, etc., can also be suppressed in the rotation normalization step. We verified that 95% of the false positives detected by the HR_1 detector are suppressed by our rotation normalization step, and none of the false positives detected by the HR_1 detector is passed to the defect analyzer. Table 2 also summarizes the statistics of the detection results of the HR_1 detector. Compared with the performance of the LR detector, HR_1 performs much better even with a smaller number of training samples. We believe this is due to the fact that the HR_1 detector is trained with high-resolution images and hence learns the shape of the insulators in more detail than the LR detector.
Multi-Type High-Resolution Insulator Detector 2 (HR_2)
The final CNN-based detector is responsible for detecting insulators in rotation-normalized images. The performance of the HR_2 detector is summarized in Table 2. The training images used to train the HR_1 detector are reutilized to train the HR_2 detector, with their orientations normalized. The average performance of the HR_2 detector is slightly lower than that of the HR_1 detector because the Pascal criterion is very strict for rotated bounding boxes, for which it was not originally intended [47]. As can be observed from Figure 5, the HR_2 detector returns much tighter bounding boxes (intended to be used in the post-processing steps), which sometimes fail to fulfill the P(BB_detected, BB_gt) > 0.5 criterion. Regardless of this limitation, the average precision and recall values of the HR_2 detector are sufficiently high to be used in the insulator inspection system.
In order to find the accuracy of the rotation normalizer, we used the ground-truth orientation information saved during the annotation process of the close-shot high-resolution images. We allow a tolerance of ±5 degrees, within which we consider the estimated orientation to be correct. We evaluated our insulator normalizer on all the samples shown in Table 2 and found an average accuracy of 88.20%. The average time to compute the rotation is 18.56 milliseconds.
Comparison with the State-of-the-Art
The unavailability of a publicly available dataset restricts the comparison of our results with those of state-of-the-art methods. Among the published studies, only [29,30] presented their detection performance on polymer insulators using the same standard metric as used in this paper, i.e., the Pascal score. The precision-recall curves (PR curves) given in [29,30] represent the detection evaluation of only one type of insulator, i.e., the polymer insulator; hence, for a fair comparison, we also compute the PR curve by evaluating the proposed CNN-based detectors on polymer insulators only.
As mentioned in Section 3, all three types of CNN-based detectors return confidence values ∈ [0, 1] along with the bounding box coordinates of the detected insulator; hence, we generate the precision and recall values of the three types of detectors by sliding a threshold over all confidence values to obtain different true positive/false negative rates. In addition, we also compared the performance of the published insulator detection algorithms (wherever reported in terms of precision and recall) with the proposed CNN-based detectors and summarized them in Table 3.
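The sketch below shows the standard way such PR points are generated: detections are sorted by confidence, and cumulative true/false positive counts yield one precision-recall pair per threshold (the correctness flags are hypothetical stand-ins for the IoU test):

import numpy as np

def pr_curve(confidences, is_true_positive, n_gt):
    order = np.argsort(-np.asarray(confidences))  # sweep threshold from high to low
    flags = np.asarray(is_true_positive, float)[order]
    tp, fp = np.cumsum(flags), np.cumsum(1.0 - flags)
    precision = tp / np.maximum(tp + fp, 1e-9)
    recall = tp / float(n_gt)
    return precision, recall

conf = [0.95, 0.9, 0.8, 0.6, 0.4]
tp_flags = [1, 1, 0, 1, 0]  # assumed per-detection correctness from the Pascal score
p, r = pr_curve(conf, tp_flags, n_gt=4)
print(list(zip(np.round(p, 2), np.round(r, 2))))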
The PR curve shown in Figure 10 and the performance statistics in Table 3 clearly depict the superior performance of all three proposed CNN-based insulator detectors. Although the evaluation dataset we used (high-resolution images) is bigger than the evaluation datasets used by [3,23,29,30], the robust CNN-based detectors still outperform the handcraft features used in [3,23,29,30].
Figure 10. The precision-recall curve for insulator detection. The comparison is drawn between Uddin [29], Oberweger et al. [30], and the three types of CNN-based insulator detectors proposed in this paper.
Defect Analyzer Results
We present the performance of the defect analyzer in Table 4. The evaluation dataset consists of 70 polymer insulator images, including four images of defective insulators. After ellipse detection, 547 caps were retrieved, including 24 caps with bullet shot defects. Table 4 shows the values of precision and recall at different values of th_Area and th_ρ. The two thresholds also prove to be a strong geometrical constraint for the detection of true bullet holes and the significant rejection of false positives (noisy edges), because bullet shots usually leave a circular hole in the caps, and the size of that hole is relatively small compared with the size of the cap. If bullet shots break or tear off a major portion of the cap, we consider this case as a separate defect, called broken shed/cap, which is not covered in this manuscript. As shown in Table 4, reducing the values of these thresholds allows more detections at the cost of reduced precision. In general, a high recall value is desired in defect-analyzing systems designed for electrical safety to prevent electric failure. Subsequent visual verification can be carried out to remove false alarms manually. In either case, the sensitivity of the defect analyzer can be tweaked with the help of the two thresholds, as mentioned in Table 4. Table 4 shows that the proposed bullet shot defect analyzer is robust against noisy edges due to shadows, notches on cap surfaces, diverse illumination conditions, and improper cap detections.
Discussion about the Results and Challenges of the Proposed System
Even though the proposed CNN-based electrical equipment inspection system presents state-of-the-art detection performance, it still contains room for improvement. Hence, in the following subsections, we discuss the presented results along with the challenges and limitations faced by the proposed system.
Availability of Annotated Data
CNN-based detection frameworks (and machine learning frameworks in general) usually require a large amount of training data to properly learn the convolution filters. We observed an overall performance improvement of 3% in precision and 0.7% in recall when using 50% of the augmented data (generated by horizontally flipping the images) [48] in our experiments. The amount of required training data also depends upon the complexity of the problem, e.g., the shape, color, and size of the object to detect, the number of classes, the relationships between the classes, etc. As shown in Figures 1 and 3-5, the powerline equipment exhibits extreme shape, color, size, aspect-ratio, lighting, and viewpoint variations, so we feel the need to gather more annotated training and test data to further improve the results.
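For reference, the horizontal-flip augmentation mentioned above amounts to mirroring the image and mirroring the annotated box x-coordinates with it, as in this small sketch:

import numpy as np

def hflip(image, boxes):
    # boxes: array of (x1, y1, x2, y2) in pixel coordinates
    w = image.shape[1]
    flipped = image[:, ::-1].copy()
    fb = boxes.astype(float).copy()
    fb[:, [0, 2]] = w - boxes[:, [2, 0]]  # swap and mirror x1/x2
    return flipped, fb

img = np.zeros((100, 160, 3), np.uint8)
print(hflip(img, np.array([[10, 20, 60, 80]]))[1])  # -> [[100. 20. 150. 80.]]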
Train Data Class Imbalance
The likely cause of the lower precision and recall values for the ceramic insulator, LP, COS, LA and add-on in Table 2 is the high class imbalance between the number of samples of polymer insulators and other equipment. In fact, polymer insulator images make up about 82% of the total training data; hence, when the training optimizer computes the overall training loss, it appears very small, because the validation process can easily achieve 82% accuracy by simply predicting polymer insulators. This is one of the reasons why the precision and recall of the polymer insulator are better than those of the other equipment. We believe that if we train the network with more training samples for equipment other than polymer insulators, or build a cost-sensitive loss function, we can achieve better overall detection results.
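One common way to build such a cost-sensitive loss is to weight each class inversely to its frequency. The sketch below is a generic illustration, not the loss used by the detector in this paper, and the class counts are hypothetical, chosen only to mimic the roughly 82% share of polymer insulators.

```python
import numpy as np

def inverse_frequency_weights(class_counts):
    """Per-class weights inversely proportional to sample frequency.

    Rare classes (e.g. LP, COS, LA) receive larger weights than the dominant
    polymer-insulator class, so their errors contribute more to the loss.
    """
    counts = np.asarray(class_counts, dtype=float)
    freqs = counts / counts.sum()
    weights = 1.0 / (freqs + 1e-8)
    return weights / weights.mean()   # normalize so the average weight is 1

# Hypothetical class counts: polymer insulators dominate the training data.
weights = inverse_frequency_weights([8200, 400, 300, 250, 250, 600])
print(weights)
```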
Training Time
The CNN framework used in this paper does not contain a region proposal network (RPN, [40], p. 6518) but instead tries to find objects in a fixed-size grid, so its detection time is very fast compared with CNN frameworks containing an RPN. However, it takes longer to learn the possible sizes and positions of objects in the dataset, so the training time of our CNN-based detector exceeds that of an RPN-based network. Our CNN-based detector takes an average of 48 h to train, but shows a detection time of 0.023 s per frame (~43 frames per second), while the network with an RPN takes roughly 16 h to train but offers a detection time of seven frames per second [40] (p. 6521), which is far from meeting the needs of a real-time inspection system.
Valid False Positives
Our training routine automatically extracts negative regions from the input image by treating regions other than the ground truth bounding boxes as negatives. To avoid detecting equipment whose body is occluded by more than 50% (because highly occluded equipment cannot be effectively tested for defects), we marked such occluded equipment as "ignore" during the annotation process (not used as positive or negative samples during training). However, the trained CNN network is highly robust against occlusion and still detects occluded objects with high confidence. Although these detections are valid (i.e., they detect the right object), our testing routine considers them false positives, which reduces the overall reported performance of the system. Figure 11 shows some of these detection results.
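The sketch below illustrates how "ignore"-marked regions could be excluded when sampling negatives or scoring detections; the IoU threshold of 0.5 and the helper names are assumptions for illustration, not the exact rules used by our training and testing routines.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two [x_min, y_min, x_max, y_max] boxes."""
    ix_min, iy_min = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix_max, iy_max = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix_max - ix_min) * max(0.0, iy_max - iy_min)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-8)

def is_valid_negative(candidate, gt_boxes, ignore_boxes, thr=0.5):
    """A candidate region counts as a negative only if it overlaps neither a
    ground-truth box nor an 'ignore'-marked (heavily occluded) box."""
    return all(iou(candidate, b) < thr for b in gt_boxes + ignore_boxes)
```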
Limitations in the Ellipse Detection Algorithm
As discussed earlier, our ellipse detector algorithm requires that the rotation of the input insulator image be normalized and the insulator body tightly cropped, which is why we use a rotation normalizer and the HR_2 detector. Hence, in the proposed system, there is a trade-off between accurate ellipse detection and the computational resources used by the additional rotation normalizer and HR_2 detector.
Furthermore, even though the proposed ellipse detector is robust against noisy edges, the algorithm still relies on the quality of the edges. As the ground vehicle taking the videos of the electrical equipment is continuously moving, there are cases where the detected electrical equipment is blurred or occluded in such a way that its important edges are not retrieved. In such cases, the ellipse detector struggles to detect the caps, as shown with some example cases in Figure 12.
Availability of Defected Cap Images
Images of caps damaged by gunshots or other causes are scarce; for this reason, our testing set is highly imbalanced (95% of the segmented caps have no gunshot defect). Consequently, the precision drops more drastically as we try to improve the recall. The 100% precision and 75% recall shown by the gunshot defect analyzer mean that the system is able to reject 100% of false positives among 523 healthy caps, at the cost of rejecting only six faulty caps as false negatives, which is one false negative per 87 images (still quite a good ratio).
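The reported figures follow directly from the confusion counts; the short calculation below reproduces them (18 true positives and 6 false negatives among the 24 defective caps, and no false positives among the 523 healthy caps).

```python
# Counts reported for the bullet-shot defect analyzer on the evaluation set.
tp = 18      # defective caps correctly flagged (75% of the 24 defective caps)
fn = 6       # defective caps missed
fp = 0       # healthy caps wrongly flagged (100% precision)

precision = tp / (tp + fp)   # 1.00
recall = tp / (tp + fn)      # 0.75
miss_interval = 523 / fn     # ~87, matching the "one false negative per 87" figure
print(precision, recall, round(miss_interval))
```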
Conclusions and Future Directions
In this paper, a novel automatic electrical equipment detection and inspection system has been proposed. The proposed system can detect 17 different types of electric power line equipment from videos taken by a ground vehicle and analyze one type of defect in polymer insulators. Compared to previous methods, the proposed system is superior in many aspects: (a) powerful CNN-based insulator detection versus handcrafted features, (b) capability to detect 17 types of insulators, (c) use of close-shot high-resolution images for defect analysis, enabling more precise defect analysis of insulators, (d) evaluation of detection performance on a large diverse dataset of long-shot (204 test samples) and close-shot images (807 test samples), which proves the robustness of the proposed system against occlusion, lighting conditions, in- and out-of-plane rotations, viewpoint changes, color, shape, texture and cluttered environments.
The proposed insulator rotation normalization and ellipse detection methods enable the system to recognize cap level defects. In order to support real-time operations, the proposed system is accelerated by a GTX1080 GPU (NVIDIA Corporation, Santa Clara, CA, USA) and uses OpenCV, CUDA and cuDNN libraries along with Darknet's CNN framework. The proposed system is tested on a large, unbiased, unconstrained evaluation dataset of insulator images and state-of-the-art results are presented. Comparisons with previous studies verified the superior performance of the proposed system.
The system pipeline presented in this work provides a natural guide for future research, which includes pushing existing CNN models to learn different types of insulators at deeper levels and formalizing the detection of additional types of insulator defects. Future research will also consider studying a specific and unified CNN architecture for insulator detection and insulator defect analysis.
Conflicts of Interest:
The authors declare no conflict of interest.
|
v3-fos-license
|
2021-08-21T05:24:42.012Z
|
2021-08-19T00:00:00.000
|
237241207
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0256435&type=printable",
"pdf_hash": "7b23dae2b8e8b4dedbc8a37e1af91ecfb2a72d34",
"pdf_src": "ScienceParsePlus",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46635",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"sha1": "7f3d33f6e4c521b96b582c587a0605467ece5ad7",
"year": 2021
}
|
pes2o/s2orc
|
“Donor milk banking: Improving the future”. A survey on the operation of the European donor human milk banks
Background Provision of donor human milk is handled by established human milk banks that implement all required measures to ensure its safety and quality. Detailed human milk banking guidelines on a European level are currently lacking, while the information available on the actual practices followed by the European human milk banks, remains limited. The aim of this study was to collect detailed data on the actual milk banking practices across Europe with particular emphasis on the practices affecting the safety and quality of donor human milk. Materials and methods A web-based questionnaire was developed by the European Milk Bank Association (EMBA) Survey Group, for distribution to the European human milk banks. The questionnaire included 35 questions covering every step from donor recruitment to provision of donor human milk to each recipient. To assess the variation in practices, all responses were then analyzed for each country individually and for all human milk banks together. Results A total of 123 human milk banks completed the questionnaire, representing 85% of the European countries that have a milk bank. Both inter- and intra-country variation was documented for most milk banking practices. The highest variability was observed in pasteurization practices, storage and milk screening, both pre- and post-pasteurization. Conclusion We show that there is a wide variability in milk banking practices across Europe, including practices that could further improve the efficacy of donor human milk banking. The findings of this study could serve as a tool for a global discussion on the efficacy and development of additional evidence-based guidelines that could further improve those practices.
Introduction
Human milk banks (HMBs) select, collect, screen, store, process and distribute donor human milk (DHM) that is intended for high-risk infants [1][2][3]. Since operational safety and quality assurance is considered as a key priority for all HMBs, each practice should be well monitored, and a quality control system should be implemented [1,4,5]. Donor recruitment and screening, milk expression, handling and storage (conditions, temperature, duration) both at donors' homes and in HMBs, transportation to the milk bank (if applicable), bacteriological testing, quality control, pooling, thawing and pasteurization of DHM are included in those practices.
According to the European Milk Bank Association (EMBA), there are currently 248 HMBs located in 26 European countries [6]. Most HMBs operate based on locally implemented standards, nationally or internationally published guidelines. Guidelines published or translated in English are available from the UK, France, Italy, Spain and Sweden. Other countries with nationally recognized guidelines include Germany, Austria, Norway, Slovakia, and Switzerland [1]. HMBs in Poland and Estonia follow internal procedures of conduct that are not subjected to legislation nor are they monitored on a national level. Currently, DHM is not under EU legislation. In Austria, the existing recommendations are legally binding and only in France and Italy federal authorities are closely regulating DHM services [7,8]. Differences among existing guidelines are mainly due to variations in practices, organization and regulation of HMBs throughout Europe. Those differences include DHM legal classification, location and distribution area of each HMB, and lack of evidence for standardization of some operational points [1,2,7].
As no European-wide published guidelines were available, EMBA's Guideline Working Group was convened in 2015 to undertake this task. Group members from 13 countries (Austria, France, Germany, Italy, Norway, Poland, Portugal, Serbia, Slovenia, Slovakia, Spain, Switzerland, and the UK) completed a detailed survey on the practices followed by their national HMBs. The group investigated whether a consensus on practices was apparent and whether published evidence was available to support recommendations. The EMBA Recommendations for the establishment and operation of human milk banks in Europe became available in 2019 [1]. Notwithstanding the foregoing, and studies on actual procedures in some European countries, a pan-European overview of milk banking practices is lacking and may differ from these recommendations, even among HMBs within individual countries. The aim of the present study was to collect detailed data on the human milk banking practices in Europe, with particular focus on human milk donation, storage, handling, screening and treatment. The outcomes of this study will be used to further strengthen human milk banking guidelines and recommendations.
Materials and methods
The EMBA Survey Working Group developed a structured web-based questionnaire on milk banking practices, to subsequently distribute to all HMBs that were actively operating in Europe at that time (n = 226, April 2019, EMBA [6]). A list with names and locations of 194 active HMBs in 26 European countries was created, with the joint effort of EMBA and the NGO PATH. Email addresses of 152 HMBs were initially available. The list was then updated and a total number of 215 HMBs with available contact details was finally obtained. Due to a lack of contact details, HMBs in Slovakia and Hungary (n = 11) could not be included in the final list. National coordinators from all 26 countries were appointed, to assist with survey distribution and completion. Their role included updating the number of active HMBs in their own countries, encouraging participation of those HMBs and lastly, minimizing linguistic barriers by offering a native language version of the questionnaire when required (Fig 1).
A general data protection regulation compliant online platform (SurveyMonkey, Portland, USA) was used to facilitate data collection. The selected questions (n = 35) targeted the most critical aspects of the standardized procedures followed in HMBs: donor screening, handling, storage, processing, and microbiological testing of DHM. HMBs had to answer all questions, with the exception of HMBs that do not pasteurize DHM. In that case, HMBs could skip the group of questions regarding pasteurization (n = 7). The Bioethics Committee at Warsaw Medical University reviewed the current study and declared no objection on its conduction (KB/O/23/2021).
A survey invitation email with a web-link to the questionnaire was first sent in April 2019, along with a cover letter explaining the purpose of the study. The letter additionally included detailed information on confidentiality, survey conduction, and contact details of the head of the working group, in case further explanation was needed. Reminders were sent to all participating HMBs in July and August 2019. Next, the authors further contacted all HMBs with incomplete or unclear responses as well as all HMBs with contact details received after July 2019. The survey link remained active until November 2019.
Once the survey was completed, all responses were screened and categorized (per country, per question, per HMB, and as a whole) using Microsoft Excel (2010). GraphPad Prism software 8.0 (GraphPad Inc., La Jolla, CA) was then used for data analysis and visualization. To assess the variation in milk banking practices, adherence to guidelines and extent of milk banking activity, responses for each question were analyzed both for each country separately and for all HMBs together. All calculated percentages were rounded up to the nearest integer. The questionnaire and the list with the responses received per country are available as S1 and S2 Files.
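The per-country and overall summaries described above were produced with Excel and GraphPad Prism; the pandas sketch below only illustrates an equivalent aggregation, and the column names and example rows are hypothetical.

```python
import pandas as pd

# Hypothetical survey export: one row per HMB response to one question.
responses = pd.DataFrame({
    "country":  ["IT", "IT", "FR", "FR", "PL"],
    "question": ["pasteurize_dhm"] * 5,
    "response": ["yes", "yes", "yes", "no", "yes"],
})

# Percentages per country and overall, rounded to the nearest integer
# as described in the study.
per_country = (
    responses.groupby(["country", "question"])["response"]
    .value_counts(normalize=True)
    .mul(100)
    .round(0)
)
overall = (
    responses.groupby("question")["response"]
    .value_counts(normalize=True)
    .mul(100)
    .round(0)
)
print(per_country, overall, sep="\n")
```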
Results
A total of 123 replies (response rate = 57%) from 22 out of the 26 European countries (85%) were received (S1 File).
Quality assurance
Most guidelines advise HMBs to implement DHM tracking and tracing systems and to conduct all operational processes based on Hazard Analysis and Critical Control Points (HACCP) and good manufacturing process (GMP) principles [1,4,5]. All HMBs implement at least one of the three systems; Approximately 40% of HMBs implement all three aforementioned systems, 30% implement two of the three systems, (HACCP & track and trace 7%, GMP & HACCP 2%, GMP & track and trace 21%) and another 30% only one of the three systems (HACCP 10%, GMP 9%, track and trace 11%).
Donor screening
The EMBA recommendations state that both verbal interviews and written health questionnaires should be performed as initial donor screening steps [1]. As a second step, all donors should undergo serological screening for a certain panel of diseases [1]. All HMBs indicated that donor selection was based on specific eligibility requirements, although with variation in requirements among the HMBs (Table 1). Some requirements showed very little variation; Lifestyle criteria such as smoking, alcohol, drugs of abuse and medicines, serological screening for human immunodeficiency virus (HIV), hepatitis B and C and the possibility of a donor being HIV infected within a specific period preceding the donation, are included in the donor screening processes of the majority of HMBs (>94%). Nonetheless, extensive inter-and intra-country variation was observed for cytomegalovirus (CMV) and human T-lymphotropic virus (HTLV) serological screening, screening for restricted diets (e.g. vegans), aspartate aminotransferase / alanine aminotransferase (ALAT/ASAT) ratio and testing after travelling to specific regions with increased risk of disease transmission.
Out of the 123 participating HMBs, seven HMBs dispense raw DHM only. For that reason, all donors are screened extensively. Serological screening for CMV is performed in six out of those seven HMBs, whereas five perform a serological HTLV screening. However, when adequate pasteurization is performed, CMV screening is not considered necessary [4,5].
Start and duration of donation
In 75% of the HMBs, donors are allowed to donate milk from birth onwards while the remaining 25% allows donation only from a specific postnatal week onwards. The maximum duration of milk donation after delivery is specified in 63% of HMBs. A maximum duration of 6 months is set in 26% of HMBs, while 20% of HMBs allow donation for more than 6 months and up to one year (S3 File).
Expression and storage of human milk at home
Almost all HMBs (99%) provide donors with instructions on how to express, store and handle the milk. Most HMBs (76%) supply the donors with breast pumps for DHM expression (85% electrical, 15% manual).
EMBA's recently published recommendations state that HMBs should request their donors to freeze DHM as soon as possible, but at least within 24h (48h if collected and stored in a hospital refrigerator) [1]. Our data suggest that 75% of HMBs follow these recommendations. The maximum storage duration of DHM in a domestic freezer before transportation to HMBs varies from 1 week up to 6 months ( Table 2).
Table 2. Maximum DHM storage duration at home and at the HMB, before and after pasteurization (n = 123).
Donor human milk handling at human milk banks
Upon arrival at the HMB, DHM should be checked for proper labeling (time of expression and donor identification should be clear) and whether it has remained frozen during transportation [1,4,5]. Our data show that about half of the HMBs (52%) have a home collection service, to ensure safe transportation. At the same time, 82% of HMBs check that DHM arriving at the HMB is both frozen and properly labelled. However, 18% of HMBs either accept DHM that arrives already partially thawed or they do not examine the milk's temperature at all. In total, 62% of HMBs reported that unpasteurized DHM is kept in a refrigerator for up to 24h awaiting pasteurization or directly stored in a freezer, whereas 22% accept storage in the refrigerator up to 48h and 10% up to 72h. In total, 59% of HMBs set either 3 or 6 months as the maximum storage duration of unpasteurized DHM in the freezer ( Table 2).
Half of the HMBs (50%) reported that more than one thawing method for DHM is performed. Different methods could be combined due to practical reasons, such as time constraints or variations of the preferred equipment used (e.g. refrigerator, water bath, heating blocks, air bottle warmers). Thawing DHM in a refrigerator is performed in 73% of HMBs, but only half of those HMBs use this method alone. (Fig 2). Of the HMBs, 26% do not pool DHM, while 54% pool from a single donor and 20% pool from multiple donors (pools of 2-3 donors, n = 11, pools of 4-8 donors, n = 11 and no maximum number of donors specified, n = 2).
Pre-and post-pasteurization donor human milk screening
There is a large variation in the microbiological screening practices of unpasteurized DHM among HMBs. EMBA recommendations suggest that all pools of milk should be tested before pasteurization, while every batch (referring to the bottles in a single pasteurization cycle) should be tested after pasteurization [1]. Our data suggest that before pasteurization, 23% of HMBs test microbiologically every single container of DHM while 33% test every sample of pooled DHM. Only 2% screen microbiologically both all single and pooled samples of DHM (S1 Fig). A wide variation was observed in the microbiological criteria defining DHM acceptability before pasteurization (Table 3). In our study, 15% of the HMBs reported either not screening DHM microbiologically before pasteurization or that they are unaware of the criteria applied. DHM with more than 10⁶ Colony-Forming Units (CFU)/ml for total viable bacteria counts (TVC) and 10⁴ CFU/ml for Staphylococcus aureus is discarded in 13% of the HMBs, although this number refers to HMBs located in one country only. DHM with TVC >10⁴ CFU/ml is discarded in 9% of HMBs, while in 8% of the HMBs, DHM is discarded when TVC >10⁵ CFU/ml. The NICE guidelines specify that DHM should be discarded if TVC >10⁵ CFU/ml or >10⁴ CFU/ml for Enterobacteriaceae or S. aureus, which is followed by 8% of HMBs [5]. The EMBA recommendations suggest accepting DHM containing ≤10⁵ CFU/ml non-pathogenic organisms and no pathogens for each DHM pool tested before pasteurization [1], which is done by 7% of HMBs. The applied criteria varied greatly, not only between but also within countries. HMBs from only two countries (out of the eight countries that were represented by n >3 HMBs in this study) follow a specific guideline with adherence ≥60% per country. Microbiological testing after pasteurization is always performed in 56% of HMBs and regularly in 27%, where regularly includes once a month, every 10 pasteurization cycles, only when there are concerns about the processing, or when new equipment or employees are introduced. Microbiological testing after pasteurization is never performed in 11% of HMBs, while 6% do not pasteurize DHM.
Table 3. Microbiological criteria defining DHM acceptability before pasteurization (n = 123).
After pasteurization, 62% of HMBs accept only DHM with no detected microbial growth. Pasteurized DHM with TVC ≤10 CFU/ml is accepted in 13% of HMBs, while 8% accept DHM with counts ≤100 CFU/ml or have no defined thresholds. The remaining 17% either do not pasteurize DHM (6%) or do not perform microbiological testing after pasteurization (11%).
Donor human milk treatment
Holder pasteurization (62.5˚C for 30 minutes) is recommended for DHM treatment. The ideal process should consist of a rapid heating phase, followed by a phase where the temperature remains constant, and finally a rapid cooling phase [1,4,5,9]. Our findings show that DHM is heat treated in 94% of HMBs. Four HMBs in Norway, two HMBs in Germany and one HMB in Sweden represent the remaining 6% (n = 7) that do not pasteurize DHM. DHM is heated at 62.5˚C for 30 minutes in 95% of the HMBs that pasteurize DHM, while slightly different parameters (60-64˚C for 30-65min, n = 5 and 75˚C for 15sec, n = 1) are applied by the remaining 5%. The majority of HMBs (70%) reported using standard pasteurizers, with water as the heating medium. Shaking water baths and dry heating pasteurizers are lesser used (11% and 11%, respectively) and 8% did not specify pasteurizer design.
The same volume of DHM is included in every bottle within a pasteurization cycle by 66% of the HMBs. Of the remaining 34% of HMBs that pasteurize different DHM volumes within the same cycle, 6% answered that volumes depend on their needs, on available bottle sizes or that they are not aware of the volumes used. Differences in DHM volume ranging from 40ml to 90ml within the same pasteurization cycle were reported by 16% of HMBs and from 100ml to 210ml by 12% of HMBs.
The time required to raise the temperature of DHM to the pasteurization temperature (heating up time) and the cooling down time, which are important factors in processing efficacy, showed large differences among HMBs; Reported durations ranged from 10 to 120 minutes and from 5 to 110 minutes respectively, while the total processing time, which corresponds to the sum of the heating up time, the holding time and the cooling down time, ranged from 20 to 200 minutes (Fig 3). This could be attributed to the combination of different pasteurizer designs, DHM volumes and variations in the execution of the cooling phase. Lastly, 12% of HMBs do not monitor the temperature/time progression during the pasteurization process.
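For clarity, the total processing time referred to above is simply the sum of the three phases; the values in the example below are hypothetical and only show how a single cycle length is obtained.

```python
def total_processing_time(heat_up_min, holding_min, cool_down_min):
    """Total holder-pasteurization cycle length as the sum of its three phases."""
    return heat_up_min + holding_min + cool_down_min

# Illustrative cycle: 25 min heating up, 30 min holding at 62.5 C, 15 min cooling down.
example_cycle = total_processing_time(25, 30, 15)   # 70 minutes
print(example_cycle)
```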
Post-pasteurization storage
Pasteurized DHM is stored at -18˚C to -30˚C in 88% of HMBs. Almost all (96%) of those HMBs store pasteurized DHM for 3 to 6 months, while 3% exceed this storage period. Only 5% keep pasteurized DHM for 1-3 days at a refrigeration temperature (Table 2).
The overall storage duration of DHM in a freezer (in a domestic freezer and in a HMB before and after pasteurization) was largely different among HMBs. The different storage durations applied are shown in Table 2.
Discussion
Our findings showed a huge variation in the practices currently applied across European HMBs. Diversity of practices was observed not only between but also within countries, indicating that even when national guidelines existed, actual practices differed.
One of those practices was the maximum storage time in a freezer before pasteurization. This reflects the variation in the published recommendations which ranges from 1-12 months [4]. Similarly, on a global level, the regulations established by the National Sanitary Surveillance Agency (ANVISA) in Brazil, indicate 15 days as the recommended maximum DHM storage time at a temperature of -3˚C, while the recommendation from the Human Milk Banking Association of North America (HMBANA) is a maximum of 3 months, at -20˚C [10,11]. Prolonged storage duration (>3 months) could enable HMBs to secure adequate DHM supplies and reduce the disposal of expired DHM. However prolonged storage could also impact the quality of DHM. Studies investigating the effect of frozen storage (1-9 months) on specific proteins, report contradictory results (S4 File) [12][13][14][15][16][17][18][19][20][21][22][23]; Freezing DHM for 3 months at -20˚C has been found to cause a minimal loss of its biological activity [12], but a significant decrease in lactoferrin levels has been also reported [13,18]. On the contrary, one study found no effect
on lactoferrin and SIgA levels after 9 months at -20˚C [19]. Freezing pasteurized DHM at -20˚C for 8 months did not decrease the macronutrient or energy content [20].
In conclusion, storage of DHM at -20˚C for a maximum of 3 months seems to be safe without substantial loss of quality of the DHM. Probably a longer storage time can be applied, although more data are needed to make such a recommendation.
After storage of frozen DHM, thawing methods vary among HMBs. This is consistent with the existing recommendations, as not one specific thawing method is currently recommended. Thawing DHM in a refrigerator, in a water bath, at room temperature, under running lukewarm water or with special thawing devices are all methods described in published guidelines (S5 File), thus including both slow and quick thawing methods [1,5,[24][25][26][27][28][29]. The Brazilian regulations additionally allow thawing DHM in a microwave, but only when the exposure time for specific DHM volumes has been calculated based on the equipment specifications, size and shape of the bottles, so that DHM temperature does not exceed 5˚C. According to the HMBANA guidelines, the DHM temperature while thawing should remain below 7.2˚C, while EMBA recommendations specify that DHM temperature should not exceed 8˚C [10,11]. A considerable risk when thawing DHM at room temperature or higher is bacterial growth [4]. When thawing DHM in a water bath or under running water, additional precautions should be taken to avoid submersion and cross-contamination through ingress of water in the event of the containers not being properly sealed [4]. Therefore, we propose that guidelines allowing such methods should extensively describe the monitoring procedure as well as all potential hazards.
Overall, since thawing can affect both the quality and the safety of DHM, certain practices should be preferred. Refrigeration overnight is considered as optimal, as no significant increase in bacterial counts for 24h has been reported [4,30,31]. Thawing DHM with waterless defrosting devices could be another option, as the risk of cross-contamination due to contact with water is eliminated while at the same time quicker thawing times are achieved [32]. As such devices can be conveniently used in HMBs, further research is needed in order to conclude on their effects on DHM quality.
Most guidelines recommend pooling of unpasteurized DHM from a single donor only [1,5,25,29]. However, some guidelines also mention that multi-donor pools may be acceptable, but only from a limited number of donors (S5 File) [4,26,28]. Multi-donor pools are also allowed in other non-European published guidelines such as the Brazilian and the HMBANA guidelines ( [10,11]). In our study, 25 HMBs from various countries use multi-donor pools. One reason for using multi-donor pool could be the compensation for possible nutritional differences among donors, although nowadays, both nutrient analyses using human milk analyzers and individualized fortification can be performed. Pooling also enables smaller volumes of DHM to be used sooner, thus reducing pre-pasteurization storage times. To avoid microbial contamination and to ensure donor traceability, future guidelines should extensively describe the practices that should be followed if pooling is applied.
For DHM treatment, holder pasteurization is performed in almost all participating HMBs, with the exception of a few HMBs in Germany and Sweden, and the majority of HMBs in Norway. This method effectively inactivates DHM microbial contaminants, but the specific time-temperature combination used may negatively affect the activity of several DHM components [9]. Ensuring rapid heating up and cooling down is also of crucial importance; since DHM bioactive components start to be significantly damaged from 58˚C, the time DHM is heated above this temperature should be limited [9,33]. In addition, optimized pasteurizers with shorter plateau duration and better temperature control during a cycle have been shown to better preserve SIgA, lactoferrin and lysozyme in DHM [34]. However, no recommendations are currently available regarding the maximum heating up time. Only the Brazilian regulations include detailed information on how to calculate the heating up time, based on the DHM volume, type and number of bottles used. The regulations additionally specify that all bottles should contain the same volume of DHM and the starting temperature should be stable and around 5˚C. A table of the calculated heating up times for all different DHM volumes used in the HMB should then be created [11].
In addition, a rapid cooling down would minimize spore germination. To avoid bacterial proliferation, a temperature drop from 62.5˚C to 25˚C in 10 minutes is suggested [4]. Moreover, a total of 20 minutes to reach a final DHM temperature ≤8˚C has been recommended [26]. Although temperatures <10˚C are mostly suggested [1,4,5,26,35], no consensus currently exists over time and temperature.
Our data show that DHM is at present exposed to slow heating up and cooling down phases, which is in contrast with the recommended rapid pasteurization performance. The wide range of reported heating up and cooling down times could be due to the different pasteurizer designs, the final cooling temperature, and the differences in DHM volume within one pasteurization cycle. Dry heating pasteurizers seemed to expose DHM to longer total processing times, but as the majority of those pasteurizers do not include an automated cooling down phase, this is mostly dependent on how the cooling phase is performed (S1 Data).
Due to the various practices applied, recommending a single practice would be challenging. However, additional recommendations on pasteurization efficacy can be added to the existing guidelines. A recommendation on the optimal duration of both phases could facilitate the standardization of pasteurization.
Bacteriological screening practices of DHM were quite variable both between and within countries in our study. This is in line with the EMBA's Guideline Working Group findings, where no consensus could be derived for either the defined criteria or for the frequency of testing [1]. More than half of the HMBs reported testing DHM only regularly (e.g. once a month). Interestingly, stricter practices were not applied even in HMBs performing multi-donor pools, thus increasing the risk of administering DHM that does not meet the acceptance criteria. EMBA's recommendations (test all DHM pools before pasteurization and accept DHM with ≤10⁵ CFU/ml of non-pathogens; test each batch after pasteurization and accept only DHM with no detected microbial growth) could be further adopted in order to increase the safety of the recipients. Regarding donor screening, the recruiting criteria should be flexible and adaptable to country-specific infectious disease risk factors and the distribution of health-related events worldwide.
Conclusions
This study investigated actual human milk banking practices among European HMBs, with a high number of participants. Our findings highlight the wide variability covering most human milk banking practices in Europe, especially with regards to the DHM processing and bacteriological screening. When practices were evaluated based on both national and international guidelines, adherence was low, specifically with respect to the application of specific control systems, DHM storage, thawing, processing and screening. However, since variation in certain practices can exist without posing any safety risk, concluding on whether the observed variations have a negative impact on actual DHM quality and safety, remains a high priority. Risk assessment strategies may further assist in evaluating the effect of this variability, while future research may also focus on further analyzing the causes of these variations. More extensive guidelines should therefore become available, while the need for developing guidelines covering all essential steps in DHM handling with large variations in execution such as DHM processing and storage, is of particular importance.
|
v3-fos-license
|
2021-01-15T06:16:22.624Z
|
2020-02-26T00:00:00.000
|
231605930
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GREEN",
"oa_url": "http://old.scielo.br/pdf/anp/v79n1/1678-4227-anp-0004-282X20200105.pdf",
"pdf_hash": "15e2fde1de416c82666039cd38af08db9ae7e63c",
"pdf_src": "Thieme",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46639",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "e90aa0f3566d3775317af763cb7d499746851887",
"year": 2020
}
|
pes2o/s2orc
|
Simultaneous electrocardiogram during routine electroencephalogram: arrhythmia rates through the eyes of the cardiologist
ABSTRACT Background: The importance of simultaneous 2-lead electrocardiogram (ECG) recording during routine electroencephalogram (EEG) has been reported several times on clinical grounds. Objective: To investigate arrhythmia rates detected by simultaneous 2-lead ECG in our patient sample undergoing routine EEG. Remarkably, we sought to assess the possible expansion of results with a more experienced interpretation of simultaneous ECG. Methods: Simultaneous 2-lead ECG recordings during routine EEG, performed between January and March, 2016, have been retrospectively analyzed by a cardiology specialist. In addition, EEG reports were screened with the keywords ‘arrhythmia, tachycardia, bradycardia, atrial fibrillation, extrasystole’ to evaluate the neurologist interpretation. Results: Overall, 478 routine EEG recordings were scanned. The mean age of the patients was 42.8±19.8 (16–95), with a sex ratio of 264/214 (F/M). In 80 (17%) patients, findings compatible with arrhythmia were identified on simultaneous ECG after a cardiologist's evaluation. The detected arrhythmia subtypes were: ventricular extrasystole (n=27; 5.6%), supraventricular extrasystole (n=23; 4.8%), tachycardia (n=9; 1.8%), prolonged QRS duration (n=7; 8.7%), atrial fibrillation (n=6; 1.2%), and block (n=6; 1.2%). On the other hand, keywords related to arrhythmia were present in 45 (9.4%) of EEG reports. The reported statements were tachycardia (3.3%), arrhythmia (2.5%), bradycardia (2.1%), and extrasystole (1.5%). Conclusions: A considerably high rate of arrhythmia cases was determined on simultaneous ECG during routine EEG after being interpreted by a cardiologist. However, the screening results of EEG reports revealed relatively low arrhythmia rates. These results suggest that the detection rates of ECG abnormalities during routine EEG may be potentially improved.
According to the definition of the International League Against Epilepsy in 2014, epilepsy is a brain disease determined by any of the following conditions: (1) at least two unprovoked (or reflex) seizures occurring >24 h apart; (2) one unprovoked (or reflex) seizure and a probability of further seizures similar to the general recurrence risk (at least 60%) after two unprovoked seizures, occurring over the next 10 years; (3) diagnosis of an epilepsy syndrome 1 . It is one of the most common neurological disorders, with an incidence of 50/100,000 2 . However, a large number of seizures is triggered by other etiologies, and the differential diagnosis of epilepsy and these provoked attacks may be extremely difficult to handle among physicians 3 . A crucial report has revealed that 20% of followed patients diagnosed with epilepsy have been misdiagnosed 4 . Rates of misdiagnosed epilepsy have been described to be between 23 and 26%, with cardiac syncope standing out in these reports 5,6 . Thus, evaluating the cardiac condition might be critical for the differential diagnosis. On the other hand, another related topic of interest in this field may be the sudden unexpected death in epilepsy (SUDEP), which accounts for half of the causes of death related to seizures and has an annual frequency of 1.2/1000 7,8 . Although the exact mechanisms underlying SUDEP remain unknown, cardiac arrhythmia originating from central nervous system control constitutes the foreground hypothesis of focus 9,10 . Identifying epilepsy patients with high risk for SUDEP has become a significant subject of interest. As a result of their study, Nashef et al. have suggested investigating interictal cardiac abnormalities to determine SUDEP risk 7 . When taking these study results into account, we can understand the need to evaluate the cardiac status in epilepsy patients. In addition, by considering the arrhythmogenic effects of some particular antiepileptic drugs AEDs (phenytoin, carbamazepine, etc.), we can perceive that the cardiac status of patients with epilepsy should be kept in mind while making appropriate treatment regulations.
Up to now, a huge number of studies on cardiac monitoring has been published in the literature, addressing issues such as differential diagnosis between epilepsy and cardiac syncope; determination of patients at risk for SUDEP; and the association of arrhythmia subtypes with seizure onset sites. These studies have used several methods to evaluate cardiac status, including continuous electrocardiogram (ECG) monitoring 11,12 , simultaneous ECG recording during video electroencephalogram monitoring (VEEGM) 13 , implantable loop recorder, 9 and myocardial perfusion scintigraphy 14 . Moreover, studies addressing the results of ECG recordings performed concurrently with routine electroencephalogram (EEG) may provide crucial data, considering that EEG is a vital test in the evaluation process of patients with epilepsy, performed in all individuals prediagnosed with epilepsy. However, at present, only a limited number of studies have focused on this point 15,16,17 . In two of these studies 15,16 , neurologists analyzed the ECG recordings, whereas evaluation by cardiologist was only conducted in a single report by Kendirli et al 17 . In the present study, we aimed to investigate the cardiac arrhythmia burden in patients undergoing EEG, based on a method that analyzes simultaneous ECG recordings in routine EEG by both cardiologist evaluations and EEG reports. Remarkably, we sought to draw attention to the potential usefulness of simultaneous ECG recordings during routine EEG with a more experienced interpretation of ECG. In addition, we will discuss the co-occurrence of cardiac arrhythmia in patients referred to the EEG laboratory.
METHODS
In our center, simultaneous ECG recording with two electrodes has been carried out in every routine EEG since 2007. In our practice, one electrode is placed over the precordium and the other over the left 3rd-4th intercostal space. One channel is used to evaluate ECG in the monitor. In this retrospective study, ECG recordings during routine EEG, which were performed between January 7 and March 2, 2016, have been analyzed by a cardiology specialist. Our laboratory performs a significant part of EEG monitoring analyses of out-patients (approximately 80-90%); however, a minor proportion belongs to in-patients. All records performed between the specified dates have been included in the study. To avoid overdiagnosis due to artifacts, evaluation processes have also been repeated (and confirmed) by the cardiology specialist under the supervision of a neurologist. Arrhythmia subtypes have been investigated. In addition, for patients with arrhythmia, data from the electronic patient record information system of the hospital related to demographic characteristics (age, gender), survival rates, and provisional diagnoses during EEG recordings were analyzed in the first week of April 2016. Among these patients, the subgroup of individuals who had been further investigated in our hospital as to their cardiac status (such as Holter monitoring, echocardiography) was identified by searching the hospital information system. EEG reports of the arrhythmia group were re-evaluated to detect epileptiform abnormalities, paroxysmal disorders, and slowing of background activity. AED use data were collected from EEG reports or the Hospital Nucleus Recording System. Patients with suspected long QT intervals were informed via telephone interview, and a 12-channel ECG was performed on those who were available.
In addition, EEG reports were scanned for the keywords ' arrhythmia, tachycardia, bradycardia, atrial fibrillation, extrasystole' to evaluate the interpretation of neurologists. The reports were investigated regarding diagnoses and preliminary diagnoses of the patients.
RESULTS
A total of 478 routine EEG recordings were scanned. All recordings were interictal. The mean age of the patients was 42.8±19.8 (16–95), with a gender ratio of 264/214 (F/M). In 80 (17%) patients, findings compatible with arrhythmia were identified on simultaneous ECG after a cardiologist's evaluation. Of note, 284 of the 478 patients undergoing routine EEG had a confirmed diagnosis of epilepsy. Among these patients, 44 (15.4%) presented arrhythmia, and arrhythmia rates showed no difference between the patient group with a confirmed diagnosis of epilepsy and other individuals (p=0.37).
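The group comparison reported here can be reproduced from the published counts with a standard chi-square test on a 2x2 table. The snippet below is only an equivalent check (the original analysis software and exact test options are not stated); without Yates' correction it yields a p-value close to the reported 0.37.

```python
from scipy.stats import chi2_contingency

# 2x2 table from the reported counts: 44 of 284 patients with confirmed epilepsy
# showed arrhythmia, vs. 36 of the remaining 194 patients (80 - 44 and 478 - 284).
table = [[44, 284 - 44],
         [36, 194 - 36]]
chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(round(p, 2))  # ~0.38, consistent with the reported p = 0.37
```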
Four of the five patients with suspected long QT intervals were available. However, only two of them showed prolonged QT interval in the 12-channel ECG, which were evaluated as drug-induced prolonged QT. For further medication adjustments, a cardiologist was consulted. Moreover, when the EEG reports of patients with arrhythmia were scanned, they showed a slowing of background activity in 19 individuals, epileptiform abnormalities in 12, and paroxysmal disorders in 12 (Figure 1).
The detailed investigation of the preliminary diagnosis and the diagnosis of patients with suspected arrhythmia revealed that 40 (50%) of them were on follow-up with the diagnosis of epilepsy. Besides, seizure was the preliminary diagnosis in 30 (37.5%) patients and syncope in 13 (16.2%). EEG was requested for 18 (22.5%) patients with other preliminary diagnoses (impaired consciousness, dementia, psychogenic seizure, headache, etc. Some patients were referred to EEG laboratories with multiple preliminary diagnoses). Patient follow-up data revealed that 28 (35%) of them were taking at least one AED, while 41 (51%) were not on AED. Data on drug use could not be determined in 11 (14%) patients.
We found no difference as to gender in the group with suspected arrhythmia (F/M: 45/35; p=0.96). The mean age was higher in the arrhythmia group compared to the group without arrhythmia [A (+): 51.8±23.3; A (-): 38.8±16.0 (p=0.000; Student's t-test)]. The mean age of the subgroups of patients according to their arrhythmia subtype was 58.2±6.6 for SVES, 58.1±3.5 for VES, and 42.4±4.2 for tachycardia. The assessment of the survival rates of all patients revealed that seven individuals died, and, remarkably, six of them were in the group with suspected arrhythmia (Table 2). However, further investigation about the causes of death in these patients did not show any death etiology suggesting SUDEP. All patients who died during the follow-up were hospitalized patients. The cause of death was cardiac arrest following septic shock in all individuals ( five of them were intubated during EEG). Finally, EEG reports were scanned for keywords related to arrhythmia to compare the interpretation of cardiologists and neurologists. We found keywords related to arrhythmia in 45 (9.4%) EEG reports. The reported statements were tachycardia (n=16; 3.3%), arrhythmia (n=12; 2.5%), bradycardia (n=10; 2.1%), and extrasystole (n=7; 1.5%) ( Table 3). We noted that most of the assessments performed by the neurologists were confirmed by the cardiologist. Yet, a significant proportion of arrhythmia subtypes, consisting of VES, SVES, prolonged QRS duration, and AF, could not be reported by the neurologists.
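The age comparison can likewise be reproduced from the reported summary statistics with a two-sample Student's t-test; the snippet below is illustrative and assumes group sizes of 80 (arrhythmia) and 398 (478 minus 80).

```python
from scipy.stats import ttest_ind_from_stats

# Summary statistics reported above: arrhythmia group n = 80, 51.8 +/- 23.3 years;
# group without arrhythmia n = 398, 38.8 +/- 16.0 years.
t_stat, p = ttest_ind_from_stats(mean1=51.8, std1=23.3, nobs1=80,
                                 mean2=38.8, std2=16.0, nobs2=398)
print(t_stat, p)  # p << 0.001, in line with the reported p = 0.000
```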
The investigation of diagnoses and preliminary diagnoses showed that 16 (35.5%) patients were on follow-up with the diagnosis of epilepsy. Nevertheless, EEG was requested with a preliminary diagnosis of seizure for 11 patients and syncope for 5.
DISCUSSION
This study revealed that 17% of EEG data presented findings compatible with arrhythmia on ECG following the cardiologist's evaluation. Remarkably, arrhythmia rates were similar in the subgroup of patients with a confirmed diagnosis of epilepsy. Among the limited number of previous reports, arrhythmia rates in routine EEG ranged from 2 to 18% 15,16,17 . In light of these study results, the arrhythmia rate in our study was considerably high. However, we believe that the method of interpreting EEG data (which was performed by a cardiologist) might be a crucial factor in these high rates.
Accordingly, ECG abnormalities were detected at a similar rate of 18% in a previous unique report by Kendirli et al., in which the interpretations were also performed by a cardiologist 17 . We underline that other related reports had their evaluations carried out only by neurologists 15,16 . In this context, the scan of EEG reports for keywords revealed that results related to arrhythmia were present in only 9.4% of them.
The most common arrhythmia subtypes detected by the cardiologist were VES (5.6%), SVES (4.8%), tachycardia (1.8%), AF (1.2%), and block (1.2%). In the study by Kendirli et al., the ECG abnormalities identified were tachycardia, extrasystole, and bradycardia 17 . On the other hand, in a previous report from our center conducted retrospectively on 2,136 patients, the most common arrhythmia subtypes were extrasystole and tachycardia 16 . Of note, EEG reports were scanned for keywords, allowing us to measure the extent of the neurologists' interpretations. The most common arrhythmia subtype in our cohort, VES, has proven to be remarkably associated with cardiac syncope 18,19 . In the report by Kendirli et al., extrasystole was also a common arrhythmia subtype (13/376; 3.4%). Nonetheless, this report did not divide the subtypes into VES and SVES. Moreover, extrasystole was also the most common keyword in the previous report from our center 16 . Considering that cardiac syncope is a crucial differential diagnosis of epilepsy 20,21 , we believe that the identification of this finding, 'VES', in simultaneous ECG during routine EEG may potentially provide a crucial perspective to neurologists. In another aspect, some studies have reported that the use of carbamazepine may induce VES arrhythmia 22,23 , which could also define VES as a potential critical sign in the clinical practice of neurologists. We detected a prolonged QRS duration in 5% (n=6) of patients, which has also been previously associated with cardiac syncope 24 . Retrospective scanning revealed that 28 patients with suspected arrhythmia were examined by echocardiography or rhythm Holter monitoring, and 16 had abnormal results (hypokinetic segments, valve regurgitation, effusion, VES, bradycardia, etc.). However, due to the retrospective study design, we could not determine how many of these patients had a cardiology evaluation performed by the neurologist based on ECG data during routine EEG. Further followup of these patients was unavailable. Therefore, we cannot comment on the real significance and clinical impact of the evaluation of simultaneous ECG during routine EEG based on our study method. A significant result of our study was determining a high arrhythmia rate (17%) as a result of cardiologist evaluations, given that the detection rate was nearly half of this number (9.4%) in neurologist reports. Thus, the high arrhythmia results in our initial analyses may not reflect actual clinical values. Remarkably, a significant proportion of arrhythmia subtypes, consisting of VES, SVES, prolonged QRS duration, and AF, could not be reported by the neurologists. Indeed, no EEG report described long QRS duration or differentiated between extrasystole subtypes. Nevertheless, these results suggest that the efficacy of prolonged ECG monitoring during EEG may increase with a more ideal and experienced interpretation. We also draw attention to the need to raise awareness and expand ECG knowledge of arrhythmias among neurologists to avoid underdiagnosis (or insufficient cardiology consultation) for patients.
AF has been detected in 1.2% of the recordings. Similarly, AF showed a rate of 2% in the study by Kendirli and colleagues 17 . That being said, AF is the most common cardiac arrhythmia in the community, and its reported prevalence in Europe and North America ranges between 2 and 3% 25 . We found no increased co-incidence of AF in our group of patients with epilepsy or suspected epilepsy.
Besides, no significant difference was identified with respect to gender among arrhythmia patients. Yet, as expected, the mean age of patients with ECG abnormalities was higher than that of patients with normal ECG (p=0.00). This finding may be related to the increasing rate of cardiac comorbidities with aging. We did not include data on patient comorbidities due to the insufficient record-keeping system. On the other hand, we can hypothesize that longer duration of epilepsy, as well as AED use by aging, may also be associated with this increasing arrhythmia rate. Moreover, seven patients died in the overall group, and, remarkably, six of them were in the group with ECG abnormalities. However, further investigations about the causes of death among these patients did not show any death etiology suggesting SUDEP, and the mean age of the arrhythmia group was higher. Ergo, we believe that the increased death rates may be related to aging and a higher level of comorbidities. Finally, the most common EEG abnormality in the arrhythmia group was slowing of background activity (n=19; 24%).
Five patients with long QT syndrome were detected in the records. A 12-lead ECG was performed on the four available patients, and only two of them had long QT syndrome confirmed. Long QT syndrome in these patients was evaluated as drug-induced, and medical therapy was readjusted under the supervision of a cardiologist. Long QT syndrome is a disorder characterized by a prolonged QT interval in the ECG analysis and a propensity for ventricular tachyarrhythmias, which may lead to syncope, cardiac arrest, or sudden death 26 . Remarkably, a prolonged QT interval is usually present in epilepsy patients, and its significance and relationship with the risk of sudden cardiac death have been suggested as crucial topics for further discussions 27,28,29 . Nevertheless, its co-incidence with epilepsy is still controversial 27 , as several studies found no increased incidence of prolonged QT in patients with epilepsy 30,31 . Lambert et al. hypothesized that the differences in epilepsy severity and drug use among the group of patients included in these studies might be responsible for these conflicting results 27 . In our study, patients with prolonged QT intervals did not differ as to epilepsy diagnosis and drug use compared to the overall patients (one of them was diagnosed with epilepsy and on monotherapy).
The usefulness of evaluating the QT interval in simultaneous ECG during routine EEG has been mentioned in a few previous reports 32, 33 . In a study of pediatric patients, Jha et al. detected prolonged QT intervals in 2% of patients with seizures, whereas the incidence rate in patients diagnosed with syncope was 14%. In conclusion, they emphasized the importance of evaluating QT intervals in simultaneous ECG during routine EEG for neurologists 33 . In the previous study of our group, we detected 5 patients with prolonged QT interval (evaluated by a neurologist) in a large cohort of 2,136 patients 16 . Surprisingly, these studies did not confirm the long QT interval in these patients, which may represent a major limitation. Also, Kendirli et al. did not report patients with long QT syndrome in the evaluations performed by the cardiologist 17 . Thus, we believe that future prospective reports on a larger patient group may provide a substantial perspective on the usefulness of determining the suspicion of long QT interval in simultaneous ECG during routine EEG.
The main limitation of our study may be its retrospective design. We understand that prospective studies, including patient follow-up data with ECG abnormalities in routine EEG and 12-lead ECG data, as well as results of cardiology consultation in patient subgroups, may provide a significant contribution in this regard. The National Institute for
Health and Care Excellence (NICE) guidelines recommend performing 12-lead ECG in every individual with suspected epilepsy; however, the American Academy of Neurology (AAN) guidelines do not mention it 34,35. Hence, further investigations addressing the association between epilepsy and arrhythmia, as well as evaluation procedures for these patients, are still necessary. A major concern may be that, although we have adopted a method of retrospective evaluation of long-term 2-lead ECG data by a cardiologist and held a committed discussion on the results, we cannot claim that this method is practical and ideal for cardiac monitoring of patients undergoing EEG. Nevertheless, this method has provided a substantial contribution to the potentially greater usefulness of simultaneous ECG when evaluated in a more experienced manner, which was the main hypothesis of this study. Another limitation may be the need for a larger group of patients for a more rational interpretation of this critical issue. However, considering that studies focusing on the importance of simultaneous ECG during routine EEG are extremely rare in the literature, we trust that the results of our study may provide a pivotal contribution.
In conclusion, we identified a considerably high arrhythmia rate in simultaneous ECG during routine EEG based on a cardiologist's interpretation. These results suggest that the efficacy of prolonged ECG monitoring during routine EEG may increase with a more ideal and experienced interpretation. In addition, the significant arrhythmia rates in patients undergoing EEG can be interpreted as reflecting a high coincidence of arrhythmia in patients with suspected epilepsy. Yet, these results cannot provide detailed conclusions regarding the breakdown of arrhythmia rates in epilepsy patients, the high rates of cardiac syncope in the differential diagnosis of epilepsy, or the causal relationship between AEDs and arrhythmias. We believe that these subtopics need to be investigated separately in future prospective studies.
Cardiac metabolism in a new rat model of type 2 diabetes using high-fat diet with low dose streptozotocin
Background To study the pathogenesis of diabetic cardiomyopathy, reliable animal models of type 2 diabetes are required. Physiologically relevant rodent models are needed, which not only replicate the human pathology but also mimic the disease process. Here we characterised cardiac metabolic abnormalities, and investigated the optimal experimental approach for inducing disease, in a new model of type 2 diabetes. Methods and results Male Wistar rats were fed a high-fat diet for three weeks, with a single intraperitoneal injection of low dose streptozotocin (STZ) after fourteen days at 15, 20, 25 or 30 mg/kg body weight. Compared with chow-fed or high-fat diet-fed control rats, a high-fat diet in combination with doses of 15–25 mg/kg STZ did not change insulin concentrations and rats maintained body weight. In contrast, 30 mg/kg STZ induced hypoinsulinaemia, hyperketonaemia and weight loss. There was a dose-dependent increase in blood glucose and plasma lipids with increasing concentrations of STZ. Cardiac and hepatic triglycerides were increased by all doses of STZ; in contrast, cardiac glycogen concentrations increased in a dose-dependent manner with increasing STZ concentrations. Cardiac glucose transporter 4 protein levels were decreased, whereas fatty acid metabolism-regulated proteins, including uncoupling protein 3 and pyruvate dehydrogenase (PDH) kinase 4, were increased with increasing doses of STZ. Cardiac PDH activity displayed a dose-dependent relationship between enzyme activity and STZ concentration. Cardiac insulin-stimulated glycolytic rates were decreased by 17% in 15 mg/kg STZ high-fat fed diabetic rats compared with control rats, with no effect on cardiac contractile function. Conclusions High-fat feeding in combination with a low dose of STZ induced cardiac metabolic changes that mirror the decrease in glucose metabolism and increase in fat metabolism in diabetic patients. While low doses of 15–25 mg/kg STZ induced a type 2 diabetic phenotype, higher doses more closely recapitulated type 1 diabetes, demonstrating that the severity of diabetes can be modified according to the requirements of the study.
Introduction
The incidence of type 2 diabetes continues to increase, despite the best current therapies and educational programs available. Cardiovascular disease is the leading cause of mortality in type 2 diabetic patients in the United Kingdom [1]. Metabolic changes in the heart have been implicated in the increased incidence of myocardial infarction [2], with diabetic patients having decreased cardiac glucose metabolism and increased cardiac fatty acid metabolism [3][4][5][6]. Therefore, a greater understanding of how type 2 diabetes affects the heart, the role of abnormal cardiac metabolism, and how novel interventions could circumvent this is needed.
Animal models of type 2 diabetes are currently the first line for investigating disease mechanisms and pharmacological therapies. For relevance to humans, animal models must replicate the phenotype seen in patients as closely as possible, but it is also desirable that they mimic the developmental process of the disease. From a practical perspective, models that are easy to generate, cheap and develop in a timely manner will be favoured over expensive and time-consuming models.
No animal models are perfect, and current rodent models of type 2 diabetes have been associated with a number of drawbacks (comprehensively reviewed by Bugger and Abel [7]). For example, the db/db mouse, ob/ob mouse and Zucker fatty rat have been extensively studied in the literature [8][9][10][11], and are generated by genetic abnormalities in the leptin signalling pathway, whereas, in patients, type 2 diabetes usually results as a consequence of multiple gene polymorphisms in combination with environmental factors. Similarly, the Goto-Kakizaki GK rat is insulin resistant but remains lean [12,13], making comparisons to the human condition and its association with obesity difficult. Another drawback of spontaneously diabetic and transgenic animals is their high purchase cost [14]. In relation to cardiac research, one of the limitations of current rat models is the extended periods taken to develop cardiac phenotypes, despite the presence of abnormal circulating metabolites from an early age. For example, Zucker diabetic rats only show cardiac metabolic dysfunction after 12 weeks of age [15], and Zucker fatty rats only after 12 months [16]. Similarly, high-fat diet alone is not effective at modifying cardiac and systemic metabolism unless fed over an extended period [17].
A relatively new rat model was proposed first by Reed et al. [18], with modifications by Srinivasan et al. [19], which aimed to induce type 2 diabetes by using high-fat feeding to induce peripheral insulin resistance, followed by a low dose of the pancreatic β-cell toxin, streptozotocin (STZ). STZ is traditionally used at high doses to induce type 1 diabetes, as it results in impaired insulin secretion from the β-cell [20,21]. Reed et al. proposed that if a low dose of STZ was used after high-fat feeding, the function of the β-cell mass would be modestly impaired without completely compromising insulin secretion, resulting in a moderate impairment in glucose tolerance [18,19]. This would mimic the human disease process resulting in a metabolic phenotype similar to that in late stage type 2 diabetic patients. This model has become increasingly popular in recent years, both for investigating the mechanisms involved in type 2 diabetes and for testing potential therapies [22][23][24][25][26]. However, the degree of diabetes induced, the amount of STZ used, background strain and starting body weight vary considerably between these studies. As examples, Reed et al. administered 50 mg/kg STZ via an intravenous route following anaesthesia, Srinivasan et al. used 35 mg/kg STZ administered intraperitoneally but using relatively juvenile rats, whereas Zhang et al. fed rats on a high-fat diet for 2 months prior to STZ [18,19,27]. Therefore, a better understanding of the cardiac phenotype of this model, the metabolic changes associated with this method of inducing diabetes and determining the optimal protocol would be desirable prior to use in large scale studies.
Therefore, we set out to determine whether this high-fat feeding/low dose STZ model of type 2 diabetes modified cardiac metabolism in a similar manner to the human disease. In addition, we aimed to determine the optimal experimental approach to induce the disease by testing the cardiac effects of a variety of STZ doses in mature adult Wistar rats. Rats were fed a high-fat diet for three weeks, with a single intraperitoneal injection of STZ of either 15, 20, 25 or 30 mg/kg body weight after two weeks. Our results showed that inducing type 2 diabetes, using a combination of high-fat feeding with a low dose of STZ, mimics the human condition. We also demonstrate that while low doses of STZ induced type 2 diabetes and cardiac metabolic changes, a dose of 30 mg/kg induced overt and severe systemic alterations that more closely resembled type 1 diabetes.
Rat model of type 2 diabetes
Male Wistar rats (n = 55, 260 ± 7 g) were obtained from a commercial breeder (Harlan, UK). All procedures were in accordance with Home Office (UK) guidelines under The Animals (Scientific Procedures) Act, 1986 and with institutional guidelines. Control rats were fed for three weeks on a standard chow diet (Harlan Laboratories), with an Atwater Fuel Energy of 3.0 kcal/g, comprising 66% calories from carbohydrate, 22% from protein and 12% from fat (Additional file 1: Table S1). To induce diabetes, rats were fed a high-fat diet (Special Diet Services) for three weeks, with an Atwater Fuel Energy of 5.3 kcal/g, comprising 60% calories from fat, 35% from protein and 5% from carbohydrate, according to a modification of the protocols of Reed et al. and Srinivasan et al. [18,19]. On day 13, rats were fasted overnight and given a single intraperitoneal injection of streptozotocin (STZ in citrate buffer, pH 4) the following morning, and the high-fat diet feeding was continued for a further week (or chow diet for controls). Different doses of STZ (0, 15, 20, 25 and 30 mg/kg bodyweight w/w) in combination with high-fat diet were investigated to determine the optimal dose to induce a type 2 diabetic phenotype with modified cardiac metabolism. We started our study with a dose of 30 mg/kg, to closely replicate that used by others [19], then included additional groups on lower doses of STZ until hyperglycaemia was no longer induced; mortality was not observed with any dose of STZ. After three weeks on their designated diet, rats in the fed state were terminally anaesthetised with sodium pentobarbital; hearts and livers were rapidly excised, freeze-clamped and stored at −80°C for subsequent analysis. Following excision of the heart, blood was collected from the chest cavity, plasma separated and analysed for metabolites using a Pentra analyser (ABX, UK) and an insulin ELISA (Mercodia, Sweden). Both left and right epididymal fat pads were excised, trimmed and weighed, for assessment of adiposity.
Tissue analysis
Cardiac and hepatic glycogen content were determined by the breakdown of glycogen to glucose units, using amyloglucosidase. Triglyceride content was measured in cardiac and hepatic tissue, following Folch extraction, using a kit from Randox, UK. The active fraction of pyruvate dehydrogenase was assayed in cardiac homogenates according to the protocol of Seymour et al. [28]. Medium chain acyl-coenzyme A dehydrogenase activity in cardiac homogenates was measured by following the decrease in ferricinium ion absorbance, as described by Lehman et al. [29]. Citrate synthase activity was measured in cardiac homogenates according to the method of Srere [30].
Western blotting
Cardiac lysates were prepared from frozen tissue and equal concentrations of protein were loaded and separated on 12.5% SDS-PAGE gels, and transferred onto Immobilon-P membranes (Millipore, UK) [31]. FAT/CD36 was detected with an antibody kindly donated by Dr Narendra Tandon (Otsuka Maryland Medicinal Laboratories, USA) [32]. Prof. Geoff Holman (University of Bath, UK) kindly donated the GLUT4 antibody [33,34]. PDK4 was detected using an antibody kindly donated by Prof. Mary Sugden (Queen Mary's, University of London, UK) [35]. Antibodies against GLUT1 and UCP3 were purchased from Abcam, UK, and against monocarboxylate transporter (MCT) 1 from Santa Cruz. Even protein loading and transfer were confirmed by Ponceau staining.
Isolated heart perfusion
A second group of rats were treated with the lowest dose of STZ (15 mg/kg) in combination with high-fat diet, to investigate the effect on cardiac glycolytic flux in the isolated perfused heart. Hearts were isolated into ice-cold Krebs-Henseleit (KH) buffer, cannulated via the aorta and perfused in Langendorff mode at a constant perfusion pressure of 100 mmHg at 37°C [31]. To measure functional changes during the perfusion protocol, a fluid-filled PVC balloon was inserted into the left ventricle, inflated to achieve an end-diastolic pressure of 4 mmHg, and attached via a polyethylene tube to a bridge amplifier and PowerLab data acquisition system (ADInstruments, Oxfordshire, UK). Left ventricular developed pressure was determined as systolic pressure minus end-diastolic pressure. Rate pressure product was calculated as the product of developed pressure and heart rate. To measure glycolytic rates, the KH buffer was supplemented with 0.2 μCi·ml−1 [5-3H]glucose and timed aliquots of perfusate were collected during the perfusion protocol. Glycolytic rates were determined from the conversion of 3H-glucose to 3H2O in the aliquots.
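As a minimal illustration of the derived functional measures defined above, the following sketch computes developed pressure and rate pressure product; all numeric values are placeholders for illustration, not measured data from this study.

```python
# Minimal sketch of the derived functional measures defined above.
# All numeric values are placeholders for illustration, not measured data.

systolic_pressure = 186.0       # mmHg, peak left ventricular pressure
end_diastolic_pressure = 4.0    # mmHg, balloon inflation target
heart_rate = 280.0              # beats per minute

# Developed pressure = systolic pressure minus end-diastolic pressure
developed_pressure = systolic_pressure - end_diastolic_pressure

# Rate pressure product = developed pressure x heart rate
rate_pressure_product = developed_pressure * heart_rate

print(f"Developed pressure: {developed_pressure:.0f} mmHg")
print(f"Rate pressure product: {rate_pressure_product / 1e3:.0f} x 10^3 mmHg/min")
```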
Statistics
Results are presented as means ± SEM, and differences were considered significant at p < 0.05 (SPSS Statistics 18). Differences between all six groups (control, 0, 15, 20, 25 and 30 mg/kg STZ) were investigated using a one-way ANOVA with Tukey post-hoc correction for multiple comparisons. To investigate STZ dose-dependent effects, a two-tailed regression analysis was performed across the five doses of STZ.
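The analysis described above was performed in SPSS; the following Python sketch is only meant to make the workflow concrete. All group values are hypothetical placeholders, not the study's data.

```python
# Minimal sketch of the statistical workflow described above.
# All group values are hypothetical placeholders, not the study's data.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical measurements (e.g., blood glucose, mM) for the six groups
groups = {
    "control": [5.8, 6.1, 5.9, 6.0],
    "HF_0":    [6.0, 6.2, 5.9, 6.1],
    "HF_15":   [6.3, 6.5, 6.4, 6.6],
    "HF_20":   [7.8, 8.1, 7.9, 8.3],
    "HF_25":   [8.9, 9.2, 9.0, 9.4],
    "HF_30":   [14.5, 15.2, 16.0, 15.4],
}

# One-way ANOVA across all six groups
f_stat, p_anova = stats.f_oneway(*groups.values())

# Tukey post-hoc correction for pairwise comparisons
values = np.concatenate([np.asarray(v, float) for v in groups.values()])
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
tukey = pairwise_tukeyhsd(values, labels, alpha=0.05)

# Dose-dependence: regression over the five high-fat/STZ groups only
stz_doses = np.repeat([0, 15, 20, 25, 30], 4)
stz_values = np.concatenate([groups[k] for k in ("HF_0", "HF_15", "HF_20", "HF_25", "HF_30")])
slope, intercept, r, p_reg, se = stats.linregress(stz_doses, stz_values)

print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")
print(tukey.summary())
print(f"Regression vs. STZ dose: r^2 = {r**2:.2f}, p = {p_reg:.4f}")
```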
Physical parameters
Control rats gained 69 ± 3 g in body weight over the three-week protocol, averaging 23 ± 4 g per week (Table 1 and Additional file 2: Figure S1). Following the injection of STZ at day 14, the lower doses of STZ (0, 15, 20 and 25 mg/kg) did not alter total body weight gain or body weight gain in the final week compared with the control group. In contrast, the 30 mg/kg STZ dose induced weight loss in the final week despite the continuation of the high-fat diet, compared with the control group and other lower doses of STZ. There was a significant correlation between body weight gain in the final week and STZ doses (r² = 0.33, p < 0.05). Epididymal fat pad weight, an indicator of total body adiposity, and fat pad to body weight ratio were significantly increased with higher doses of STZ compared with controls. Heart weight and heart weight to body weight ratio were not significantly different between any groups, demonstrating no significant cardiac hypertrophy.
Table 1. Physical parameters from control and diabetic rats, induced using high-fat feeding in combination with low dose STZ.
Plasma metabolites in the fed state
Blood glucose concentrations were not increased by high-fat diet alone or in combination with 15 mg/kg STZ, compared with control chow-fed rats (Table 2). In contrast, blood glucose levels were increased by 20, 25 and 30 mg/kg STZ in combination with a high-fat diet, with the 30 mg/kg STZ dose increasing blood glucose to significantly higher levels than all other groups. Plasma insulin concentrations were not significantly different from control animals at lower doses of STZ, but were significantly decreased by 30 mg/kg STZ. Non-esterified fatty acids (NEFA), triglyceride, and β-hydroxybutyrate (β-OHB) concentrations were increased only with the highest dose of STZ (30 mg/kg), compared with controls and the other doses of STZ investigated. Cholesterol concentrations were elevated by administration of 25 mg/kg STZ in combination with high-fat feeding compared with controls and lower doses of STZ. Regression analysis demonstrated significant positive relationships between STZ dose and plasma glucose (r² = 0.37), triglycerides (r² = 0.30), β-OHB (r² = 0.41) and cholesterol (r² = 0.20), and a significant negative relationship between STZ dose and plasma insulin (r² = 0.25) (p < 0.05 for all). Thus, while lower doses of 20-25 mg/kg STZ were sufficient to elevate blood glucose levels without impairing insulin secretion, the higher dose of STZ caused severe diabetes, as shown by increased glucose, β-OHB, triglycerides and decreased insulin concentrations.
Hepatic intracellular substrate stores
Hepatic triglyceride and glycogen stores were measured as an indicator of changes in liver metabolism. Three weeks of high-fat diet alone did not change hepatic triglyceride or glycogen concentrations compared with controls (Figure 1). Combining high-fat diet with all doses of STZ tested increased hepatic triglyceride concentrations, compared with control rat livers. In contrast, hepatic glycogen concentrations were decreased following induction of diabetes using high-fat feeding in combination with STZ at doses of 15, 20 and 30 mg/kg STZ. Both measurements correlated with STZ dose (triglycerides r² = 0.39, glycogen r² = 0.23, p < 0.05).
Cardiac intracellular substrate stores
One of the main aims of this study was to characterise the cardiac metabolic phenotype of this new model of type 2 diabetes. Cardiac triglyceride concentrations were increased by all doses of STZ tested, compared with control rats, but not by high-fat diet alone, displaying a significant positive correlation with STZ dose (Figure 2). Cardiac glycogen concentrations were decreased by high-fat diet and lower doses of STZ (15 and 20 mg/kg), but showed a dose-dependent increase with STZ concentration. The highest dose of STZ tested (30 mg/kg) did not change cardiac glycogen concentrations compared with controls, and was significantly higher than lower doses of STZ (15 and 20 mg/kg).
Cardiac enzyme assays
Cardiac pyruvate dehydrogenase (PDH) activity, a heavily regulated enzyme of mitochondrial glucose metabolism, was significantly decreased in diabetic hearts at a dose of 30 mg/kg STZ, compared with controls (Figure 3). There was a significant negative correlation between PDH activity and STZ dose. In contrast, medium chain acyl-coenzyme A dehydrogenase (MCAD) activity, an enzyme involved in fatty acid β-oxidation, was significantly increased in 30 mg/kg STZ diabetic rats, compared with controls. These changes in PDH and MCAD activity were independent of a change in the Krebs cycle enzyme citrate synthase, used as a marker of overall mitochondrial content.
Cardiac metabolic proteins
As the changes in metabolic enzymes displayed dose-dependent relationships between STZ dose and activities, we therefore investigated which doses of STZ induced changes in metabolic protein expression. Proteins involved in cardiac glucose metabolism were investigated in our model of type 2 diabetes, to determine if this pathway was downregulated in diabetic hearts (Figure 4). Protein levels of the PDH inhibitor pyruvate dehydrogenase kinase 4 (PDK4) were increased by all doses of STZ compared with controls. In contrast, protein levels of the insulin-responsive glucose transporter GLUT4 decreased with all doses of STZ, compared with control hearts. Both PDK4 and GLUT4 displayed dose-dependent relationships with STZ concentrations (r² = 0.53 and r² = 0.22, respectively, p < 0.05). GLUT1 protein levels showed no significant differences between groups. Markers of cardiac fatty acid metabolism were also assessed, to determine if this pathway was upregulated in our diabetic hearts (Figure 5). Uncoupling protein 3 (UCP3), a fatty acid-regulated protein, was increased by all doses of STZ compared with control hearts, increasing in a dose-dependent manner with STZ concentration (r² = 0.34, p < 0.05). FAT/CD36, a fatty acid transporter, was not significantly different between groups when assessed by one-way ANOVA. MCT1, responsible for ketone body and monocarboxylic acid uptake, was measured, to determine if changes in this transporter mirrored changes in plasma ketone bodies. There were no significant differences between groups in cardiac MCT1 protein levels, although there was a general trend for lower levels in diabetic hearts compared with control hearts.
Cardiac glycolytic rates from 15 mg/kg STZ high-fat fed diabetic rats
All doses of STZ investigated showed a downregulation of glucose metabolism proteins; we therefore questioned whether this was sufficient to affect flux through the glycolytic pathway in the perfused, contracting heart. Given that a number of metabolic changes in these hearts displayed a dose-dependent relationship, we rationalised that if we saw a change in glycolysis with the lowest dose (15 mg/kg STZ), then this would likely indicate that overall the model was sufficient to inhibit cardiac glucose metabolic flux (Figure 6).
Hearts from 15 mg/kg STZ high-fat fed diabetic rats did not show any defects in contractile function. Heart rates (284 ± 13 and 264 ± 5 bpm in control and diabetic hearts, respectively), developed pressures (165 ± 10 and 182 ± 10 mmHg in control and diabetic hearts, respectively), and rate pressure products (47 ± 4 and 48 ± 3 × 10³ mmHg/min in control and diabetic hearts, respectively) were not significantly different between control and 15 mg/kg STZ diabetic hearts. In contrast, glycolytic rates in the presence of insulin were significantly decreased by 17% in 15 mg/kg type 2 diabetic rat hearts compared with control hearts. Thus, even at the lowest dose of STZ, glycolytic rates were suppressed in the hearts of these type 2 diabetic rats.
Discussion
This study has demonstrated that high-fat feeding in combination with a low dose of STZ is sufficient to induce the cardiac metabolic phenotype present in type 2 diabetes. In general, this model displayed hyperglycaemia, normoinsulinaemia and hepatic lipid deposition. Diabetic hearts had decreased proteins involved in glucose metabolism with a concomitant increase in proteins involved in fat metabolism. Many metabolic parameters displayed a dose-dependent relationship with STZ, with the highest dose of STZ inducing a metabolic profile that more closely resembled type 1 diabetes. Therefore, this model of type 2 diabetes would appear to mirror the human condition, but care must be taken when determining the dose of STZ to use and the subsequent degree of diabetes induced.
Figure 3. Cardiac pyruvate dehydrogenase, medium chain acyl-coenzyme A dehydrogenase (MCAD) and citrate synthase activities in control and diabetic rats following high-fat feeding in combination with low dose STZ. * p < 0.05 vs. control, # p < 0.05 vs. high-fat only, † p < 0.05 vs. 15 and 20 mg/kg STZ; n = 4-5 per group.
The highest dose of STZ tested (30 mg/kg) induced systemic changes that more closely resembled the type 1 diabetic phenotype. Only this high dose of STZ induced weight loss, and produced a plasma metabolite profile that included hyperketonaemia, hyperlipidaemia, and hypoinsulinaemia. This is in agreement with other studies that have used high doses of STZ in isolation or in combination with high-fat feeding [19,[36][37][38][39], with these high doses also causing abnormalities in liver morphology and function [25]. In contrast, the lower doses of STZ investigated in the present study avoided these extreme phenotypes, instead presenting with hyperglycaemia in the absence of ketosis, normoinsulinaemia and with maintenance of body weight. Increased adiposity relative to body weight was observed with doses of 20, 25 and 30 mg/kg STZ, and was not present with high-fat diet only, demonstrating a combined effect of the dietary manipulation and STZ administration. The increased adiposity and elevated cholesterol induced by 25 mg/kg STZ are features present in other models of type 2 diabetes [40][41][42], suggesting that, from a systemic point of view, the middle-range dose of STZ tested was the most desirable for future work.
Figure 6. Glycolytic rates (µmol/gww/min) in isolated perfused hearts from control rats and from 15 mg/kg STZ in combination with high-fat fed diabetic rats. * p < 0.05 vs. control; n = 11 for control, n = 6 for 15 mg/kg diabetics.
In patients with type 2 diabetes, non-invasive imaging studies have demonstrated a metabolic shift in cardiac substrate metabolism [3][4][5][6], with glucose metabolism suppressed and fatty acid metabolism elevated, characteristic of the Randle cycle [43]. This metabolic shift has been demonstrated in a number of animal models of diabetic cardiomyopathy [44,45], and has been implicated in the increased incidence of, and decreased recovery following, myocardial infarction [2]. In our diabetic hearts, we found markers of fatty acid oxidation, such as UCP3, were upregulated. UCP3, MCAD and PDK4 are targets of peroxisome proliferator-activated receptor α (PPARα), a transcription factor that is activated by fatty acid ligands to upregulate fat metabolism, and is increased in diabetes [46]. The activation of PPARα in diabetes is due both to the increased intake of dietary fatty acids and to the increased adipose lipolysis associated with adipose insulin resistance [47]. In contrast to fatty acid metabolism, proteins involved in cardiac glucose metabolism, such as GLUT4, were downregulated in our diabetic hearts. Certainly, the large increase in PDK4 with 30 mg/kg STZ would account for the decrease in PDH activity in these hearts via inhibitory phosphorylation of this complex. Thus, the changes in protein and enzyme activity in our diabetic hearts would fully support the shift away from glucose metabolism towards fat metabolism, reported in other models and in patient studies [48]. Future studies to measure the effects on fatty acid and oxidative metabolism in this model will confirm the link between the changes in mitochondrial proteins and flux through these pathways.
Perfused heart studies allow the simultaneous measurement of metabolic flux and contractile function in the isolated organ. Even at the lowest dose of STZ, glycolytic rates were decreased, confirming that our changes in proteins were sufficient to impact on overall flux through the pathway, in agreement with studies on db/db and ob/ob mice [44,45,49,50]. The decrease in glycolysis was independent of impaired cardiac systolic function or loss of mitochondria, suggesting that the glycolytic changes were not secondary to adverse cardiac remodelling and mitophagy. In a clever study by Marsh et al., the interaction between diet and diabetes on cardiovascular function was investigated, using a similar model to our current study [51]. They demonstrated that a combination of high-fat feeding and low dose STZ increased diastolic wall stress and arterial stiffness, as occurs in patients with type 2 diabetes, but that modifying only diet or using only STZ did not produce this effect, despite increased blood glucose and abnormal insulin tolerance tests, respectively [51]. Thus, inducing type 2 diabetes using high-fat feeding and low dose STZ, not only mimics the cardiac metabolic phenotype but also replicates the diastolic dysfunction and vascular complications associated with the human disease.
Overall, these data support the use of the high-fat diet/low dose STZ approach in the development of a type 2 diabetic model for future cardiac studies. The advantages of this model are that the disease is induced over a relatively short time and without high costs. In addition, the dose of STZ can be manipulated to match the degree of diabetes required for the study. In a study by Watts et al., this model of high-fat diet and low dose STZ was used and directly compared to the ZDF rat, with both models demonstrating the same hepatic and adipose effects, but the errors associated with the ZDF animals were much greater than with the high-fat/STZ model, suggesting that the reproducibility may be improved by using this new model [22]. Similarly, in a study by Islam and Choi, the high-fat diet in combination with low dose STZ model was identified as a better model of type 2 diabetes than an alternative chemically induced model that utilised an injection of nicotinamide prior to administration of STZ [37]. We found it was essential to fast the rats prior to STZ injection, as preliminary experiments demonstrated a much lower success rate for inducing diabetes if injections were carried out in the fed state (data not shown). This is likely related to the mode of action of STZ, a glucosamine-nitrosourea antibiotic that competes with blood glucose for the pancreatic β-cell GLUT2 transporter [20].
Hepatic and cardiac triglyceride concentrations were elevated by all doses of STZ tested but not by high-fat diet alone, indicating that this was due to the combination of STZ and high-fat. Hepatic and cardiac glycogen concentrations showed dose-dependent relationships with STZ concentrations, but in opposite directions in the two organs; decreasing in liver but increasing in heart. Interestingly, hepatic glycogen was not affected by high-fat diet alone, in contrast, cardiac glycogen was significantly decreased just by the presence of high-fat diet, demonstrating different regulation of glycogen deposition in these two organs.
From our data, doses of 30 mg/kg STZ (and potentially above) would be less desirable than lower doses for modelling type 2 diabetes, due to the extreme systemic phenotype induced. However, at no time did we see an increase in fed insulin concentrations, which has been observed in a number of other models. It has been suggested that this is due to STZ causing a small degree of β-cell damage, which is sufficient to limit the upregulation of insulin secretion in response to the systemic insulin resistance [19]. Srinivasan et al. demonstrated that 35 mg/kg STZ had no effect on insulin concentrations in chow-fed rats, whereas high-fat feeding in isolation increased insulin, with the combination of high-fat and STZ bringing insulin concentrations back to control levels [19]. Using a glucose tolerance test, they demonstrated systemic insulin resistance with high-fat diet alone [19]. Thus, it could be that our model using 30 mg/kg STZ mimics a later stage in the type 2 diabetes/insulin resistance disease progression, when β-cell function starts to become compromised and no longer matches the increased demand for insulin.
In conclusion, a combination of high-fat feeding with a low dose of STZ provides a model of type 2 diabetes that mimics the metabolic phenotype present in patients. This model also has the added advantages of being relatively inexpensive and easy to induce, and it can be modified for different severities of diabetes according to the requirements of the study. For future studies we would use a dose of 25 mg/kg STZ in combination with high-fat feeding, as this induced adiposity, hypercholesterolemia and mild hyperglycaemia without compromising insulin secretion, and exhibited cardiac metabolic changes that mirrored the well-characterised shift from glucose to fatty acid metabolism in type 2 diabetes.
Additional files
Additional file 1: Table S1. Composition of chow and high-fat diet.
Additional file 2: Figure S1. Body weight gain in control and diabetic rats. Rats were fed a chow or high-fat diet for 21 days, with STZ injected at varying doses at day 14. n = 4-11 per group.
A review of clinical use of surface-enhanced Raman scattering-based biosensing for glioma
Glioma is the most common malignant tumor of the nervous system, and its incidence rate is increasing year by year. Its invasive growth and malignant biological behaviors make it one of the most challenging malignant tumors. Maximizing the extent of resection (EOR) while minimizing the impact on normal brain tissue is crucial for patient prognosis. Changes in the metabolites produced by tumor cells and their microenvironments might be important indicators. As a powerful spectroscopic technique, surface-enhanced Raman scattering (SERS) has many advantages, including ultra-high sensitivity, high specificity, and non-invasive features, which allow SERS technology to be widely applied in biomedicine, especially in the differential diagnosis of malignant tumor tissues. This review first introduces the clinical use of responsive SERS probes. Next, the sensing mechanisms of microenvironment-responsive SERS probes are summarized. The biomedical applications of these responsive SERS probes are then presented in four sections: detecting tumor boundaries with pH-responsive SERS probes, SERS probes to guide tumor resection, SERS-based liquid biopsy for the early diagnosis of tumors, and label-free SERS for the analysis of fresh glioma specimens. Finally, the challenges and prospects of responsive SERS detection for clinical use are summarized.
Introduction
Cancer has become the most significant disease burdening human society. Glioma is one of the most common malignant tumors of the nervous system worldwide, and its incidence rate is increasing year by year (1). When normal cells transform into malignant tumor cells, they acquire special abilities, such as immune escape, unlimited proliferation, invasive growth, anaerobic glycolysis, and the promotion of vascular proliferation (Figure 1).
In the process of tumor development, changes in the metabolites produced by tumor cells and their microenvironments often precede changes in imaging, and they also offer some guidance on the mechanisms driving tumor development. For example, glioma cells share the proliferative characteristics of malignant cells in general and can perform glycolysis in an oxygen-free environment, so that the local microenvironment of the tumor cells becomes acidic. At the same time, matrix metalloproteinases, one of the main causes of the invasive damage inflicted by glioma cells, and the cytokines interleukin-1β (IL-1β) and tumor necrosis factor-α (TNF-α) are all overexpressed in glioma (2). Therefore, the proliferation, migration, and invasion of cancer cells are accompanied by significant changes in tumor-related metabolites and microenvironments (3,4). Monitoring the metabolites and microenvironment of tumor tissue can therefore serve as a primary method for diagnosing and treating cancer, and it has received widespread attention in biomedical applications.
At present, surgical resection is still the main treatment for malignant tumors. Due to the invasive nature of glioma, the main challenge of glioma surgery is to preserve normal brain tissue while resecting as much glioma tissue as possible. This is of great significance for prolonging survival and improving patients' quality of life, and it can minimize the occurrence of postoperative complications (5). However, because of the invasive nature of malignant tumors, identifying their boundaries is particularly difficult. Many imaging technologies have been used to guide the diagnosis and treatment of glioma and to determine glioma boundaries in clinical practice, for instance, pre-operative and intraoperative magnetic resonance imaging (MRI), fluorescence, intraoperative ultrasound, and intraoperative neuroelectrophysiological testing. Pre-operative plain and contrast-enhanced MRI can effectively locate and qualitatively identify tumors and accurately delineate their relationship with functional areas. However, there are significant differences in functional and anatomical features: inter-individual variability, the impact of brain tumors, and their associated mass effects may distort common anatomical markers, making anatomy-based functional localization inaccurate (6). In recent years, intraoperative MRI and intraoperative ultrasound have been widely used in glioma resection (7). It has been well demonstrated that intraoperative MRI significantly improves the surgical success rate and prognosis of glioma patients. However, it cannot be denied that intraoperative MRI and intraoperative ultrasound imaging have some limitations: artifacts often appear in intraoperative ultrasound imaging, and intraoperative MRI requires the surgical process to be interrupted, which is a major challenge for the surgeon (7). Fluorescence imaging related to tumor metabolites has been widely used in clinical practice in recent years (8), especially with dyes emitting in the near-infrared spectroscopic range (9). So far, the clinically approved auxiliary imaging agents mainly include fluorescein sodium (FLS), indocyanine green (ICG), and 5-aminolevulinic acid (5-ALA) (10)(11)(12). Under normal conditions, fluorescein sodium has a large molecular weight and cannot penetrate the intact blood-brain barrier. However, because the invasive growth of glioma cells damages vascular endothelial cells, fluorescein can enter tumor tissue through the blood-brain barrier, giving a unique yellow-green fluorescence. However, the inherent drawbacks of fluorescence, such as rapid photobleaching and short blood circulation, hinder its clinical development (13,14). Intraoperative neuroelectrophysiological monitoring is another technique, mainly used for the removal of gliomas located in functional areas. It can effectively avoid damage to the main functional nerves during resection, preserving neural function while maximizing tumor removal. However, intraoperative electrophysiological monitoring cannot identify the boundaries of gliomas (15). Among the techniques commonly used today, an examination method that can quickly, sensitively, and accurately determine glioma boundaries is still highly needed. The advantages and disadvantages of these technologies are summarized in Table 1.
Raman scattering originates from the inelastic scattering of light and can directly reflect the vibrational/rotational-vibrational information of molecules and materials (16). Because Raman scattering produces specific spectral signatures for specific biological molecules, it can be used for imaging tissues and cells (17). In addition, Raman scattering requires minimal sample preparation, suffers little interference from water molecules, and can monitor multiple molecules simultaneously, making it an ideal method for detecting tumor-related metabolites (18). However, normal Raman scattering is usually a very weak process, since only about one out of approximately 10⁸ incident photons undergoes spontaneous Raman scattering (19). This inherent weakness limits the strength of the available Raman signal. It has been found that molecules adsorbed on rough noble metal surfaces can produce a Raman signal enhanced by many orders of magnitude (20), a phenomenon termed surface-enhanced Raman scattering (SERS) (21). SERS overcomes the weakness of normal Raman scattering signals, making it an applicable tool for biomedical applications. SERS mainly has the following advantages for biomedical purposes: (1) High specificity. Because different molecules generate different SERS spectral characteristics, the SERS spectrum reflects the intrinsic characteristic structures of different molecules in the form of fingerprints. (2) High sensitivity. Molecules adsorbed on rough precious metal surfaces can enhance a Raman signal by a factor of about 10⁶, facilitating the extraction of subtle spectral changes (21). (3) In situ detection. Molecules can be measured from their original positions, whether in tumor cell tissues, their microenvironments, or interstitial fluids (22,23). (4) No interference from water. Since water gives signals in many spectral ranges (infrared, terahertz, microwave, etc.), those methods are hampered in aqueous systems. For Raman and SERS approaches, most tissues and cells give signals in the range of 400-2,000 cm−1, which does not overlap with water bands. SERS technology has already become a promising detection method for biomedical testing, liquid biopsy, and in vitro diagnostics (IVD) (24,25).
In this review, we focus on SERS applications in glioma-related systems. First, SERS nanoprobes are introduced, followed by a summary of their responsive sensing mechanisms and biomedical applications.
FIGURE 1 The hallmarks of glioma tumor.
SERS probes for clinical use
SERS is strongly dependent on SERS substrates. SERS substrates mainly take two forms, solid-supporting substrates and colloidal nanoparticles, and colloidal SERS-active nanoparticles are predominantly used in biomedical fields. In most sensing strategies, indirect SERS is adopted: SERS tags are fabricated from a noble metal nanoparticle decorated with Raman reporter molecules (sometimes a protective layer is also needed). Plasmonic nanomaterials, including gold and silver, are the first choice of most clinical SERS studies. Only a limited number of publications have utilized SERS-active semiconductor materials, e.g., metal oxides, silver halides, single-element semiconductors, and semiconductor sulfides/arsenides (26). The material of the plasmonic nanoparticles has a significant impact on the SERS intensity. Gold nanospheres, nanorods, and nanostars are highly stable and not easily oxidized and have been chosen in many studies. Although silver nanoparticles are prone to oxidation, the Raman signals generated on Ag are much stronger than those of gold nanoparticles (27). Optimization of size and shape allows passive enrichment of nanoparticles at the tumor location. Nanoprobes of specific sizes can pass into tumor tissue but not normal tissue (28), which can be explained by the enhanced permeability caused by the disruption of blood vessels around the tumor tissue and the retention effect due to the disruption of the surrounding lymphatic channels, which reduces the reflux of the nanoprobe. Thus, the edge of the tumor can be delineated according to where the nanoprobe resides. To track immune information on tissue samples, immuno-SERS tags (Figure 2) have been employed (29); these can report immune information on the surface of tissue samples, similar to immunohistochemistry. Such SERS tags are decorated with antibodies to endow them with highly specific recognition functions, and they use the fingerprint characteristics of the reporter molecules to realize multiplex detection of antigens and targets.
A highly sensitive and responsive SERS nanoprobe is preferred, as it can respond quickly to the tumor microenvironment, which serves as an indicator of malignant lesions. A stable, responsive SERS probe typically consists of three parts: a noble metal substrate, responsive Raman reporter molecules, and a protective layer (30). A typical pH-responsive probe is fabricated by attaching pH-sensitive Raman reporters to plasmonic nanoparticles. With changes in the environmental pH, the reporter molecule undergoes structural changes due to protonation/deprotonation, and its vibrational modes change correspondingly (31). Thus, different Raman signals indicate different pH values, pointing to the intracellular microenvironment (32). Commonly used pH-responsive Raman reporters include 4-mercaptopyridine (4-MPY), p-aminophenylthiol (p-ATP), 3-amino-5-mercapto-1,2,4-triazole (AMT), and 2-aminophenylthiol (2-ABT) (33-39).
In addition, to avoid nonspecific binding from other matrices, a protective layer is needed to protect the reporters from damage and replacement. At present, protective layers include bovine serum albumin (BSA), SiO2, MnO2, TiO2, and organic polymers (40,41). Sometimes, organic polymer (PEGylated) (42) or carbon (graphitic) shells (43) are also used. Nowadays, SiO2 is a common protective layer, usually formed by the decomposition of Na2SiO3 or tetraethyl orthosilicate (44,45). The protective layer can also shield SERS tags from the influence of the working surroundings. For example, a metal-organic framework (MOF) shell can protect Au nanoparticles from aggregation, and such probes have been used to indicate the tumor edge under SERS imaging (46). Endowing SERS tags with specific target recognition for the cell membrane and organelle surfaces has also been well developed in recent years. Kircher et al. (47) injected Au@SiO2 nanoprobes coated with 1,4,7,10-tetraazacyclododecane-1,4,7,10-tetraacetic acid (DOTA) into the tail vein of mice for SERS imaging. The aggregation of nanoprobes in tumor tissue allows the tumor area to be clearly observed, thus revealing tumor tissue that cannot be detected by the naked eye. In addition, Vendrell et al. (48) developed an efficient tumor-targeting nanostructure based on single-walled carbon nanotubes (SWNTs), which provide a strong and fixed Raman peak at 1,593 cm−1. The SWNTs were decorated with an RGD peptide (arginyl-glycyl-aspartic acid) to increase the cancer cell internalization efficiency. This nanoprobe was injected through the tail vein of mice to identify tumor boundaries by tracking the location of the probes.
TABLE 1 Advantages and disadvantages of the techniques used to determine glioma boundaries.
Preoperative MRI — Advantages: locates and qualitatively identifies tumors and accurately identifies their relationship with functional areas. Disadvantages: there are significant differences in functional and anatomical features; inter-individual variability, the impact of brain tumors, and their associated mass effects may distort common anatomical markers, making anatomy-based functional localization inaccurate.
Intraoperative MRI — Advantages: avoids the anatomical displacement caused by tissue traction that affects preoperative MRI. Disadvantages: the surgical process needs to be interrupted.
Intraoperative fluorescence imaging — Advantages: fluorescein can enter tumor tissue through the damaged blood-brain barrier, giving a unique yellow-green fluorescence that can be used to detect tumor boundaries. Disadvantages: rapid bleaching and short blood circulation.
Intraoperative neuroelectrophysiological monitoring — Advantages: avoids damaging the main functional nerves during the removal of gliomas located in functional areas, preserving neural function while maximizing tumor resection. Disadvantages: cannot identify the boundaries of gliomas.
Intraoperative SERS — Advantages: (1) high specificity, since the SERS spectrum reflects the intrinsic characteristic structures of different molecules in the form of fingerprints; (2) high sensitivity and signal strength; (3) in situ detection of tumor tissue, its microenvironment, or interstitial fluid; (4) no interference from water. Disadvantages: errors can occur in the spectral acquisition process.

Application of responsive SERS probes in biomedical fields
Tumor cell microenvironments revealed by SERS
The metabolic growth and development of cells are often accompanied by acidification of the extracellular fluid, which accompanies cell aging, apoptosis, and proliferation, especially in tumor cells. The extracellular fluid of tumors typically shows a change in extracellular pH (49). Therefore, detecting an acidic pH in extracellular fluid can be a sign of a tumor. Research shows that acidification of the extracellular fluid is often related to the invasiveness of malignant tumors. Therefore, exploring pH-responsive SERS probes becomes a feasible way to distinguish tumor boundaries.
Li et al. (50) reported an intelligent SERS navigation system for delineating the acidic margin of glioma in a nondestructive way (Figure 3). They utilized water droplet extraction to transfer the acidic microenvironment of the tumor cutting edge into a drop of water. The drop was then placed on a pH-sensitive SERS chip modified with IR7p, which undergoes protonation and deprotonation according to the environment, producing color changes and SERS signal variation. Based on this sensing method, the acidic range of the environment was determined. The approach was applied to a tumor-bearing rat model and to the intraoperative resection of glioma. The results showed that the recurrence time of the tumor in the group of rats whose glioma resection was guided by SERS technology was significantly later than that of the other groups. They further applied this technology to human glioma tissue. The detection results of the pH-guided SERS technology were consistent with those of hematoxylin-eosin (HE) staining, and the pH map of the tumor resection bed could be depicted quickly. Acidity-related cancer cell density and proliferation levels were shown in animal models and in tumor margin tissues excised from glioma patients. Compared with conventional strategies used in clinical practice, the overall postoperative survival rate of animal models guided by the SERS system significantly increased. This technology is expected to accelerate the clinical translation of acidic margin-guided surgery.
Zhang and Xu developed a similar SERS strategy for the rapid diagnosis of glioma boundaries using an ultrasensitive SERS substrate and a portable Raman spectrometer (Figure 4). They prepared a SERS substrate from a self-assembled silver nanoparticle monolayer bridged by a polyelectrolyte, followed by an assembled layer of 4-MPY. They constructed pure water droplet arrays on different regions of tumor tissue, allowing the interstitial fluid of the tumor tissue to diffuse into the water. By monitoring the peak intensity ratio of 4-MPY (1,091 cm−1/1,580 cm−1) recorded with a portable Raman spectrometer, the acidification of tumor regions was revealed, showing a different pH relative to normal tissue and thereby accurately distinguishing the tumor boundaries. The detection results were consistent with those obtained with a microelectrochemical pH electrode. This method does no harm to the surgical tissue and is expected to replace rapid intraoperative pathological examination during glioma surgery and become a feasible technology for intraoperative navigation (23).
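A minimal sketch of this kind of ratiometric pH readout is given below; the band positions follow the 4-MPY ratio described above, while the calibration values and the synthetic spectrum are hypothetical placeholders, not data from the cited work.

```python
# Minimal sketch of a ratiometric 4-MPY pH readout from a SERS spectrum.
# The calibration values and the synthetic spectrum are hypothetical placeholders.
import numpy as np

def peak_intensity(wavenumbers, intensities, center, window=10.0):
    """Maximum intensity within +/- `window` cm^-1 of the given band centre."""
    mask = np.abs(wavenumbers - center) <= window
    return intensities[mask].max()

def mpy_ratio(wavenumbers, intensities):
    """Intensity ratio of the 4-MPY bands near 1091 and 1580 cm^-1."""
    return (peak_intensity(wavenumbers, intensities, 1091.0)
            / peak_intensity(wavenumbers, intensities, 1580.0))

def ratio_to_ph(ratio, calib_ratios, calib_phs):
    """Interpolate pH from a calibration curve measured on buffers of known pH."""
    order = np.argsort(calib_ratios)
    return float(np.interp(ratio, np.asarray(calib_ratios, float)[order],
                           np.asarray(calib_phs, float)[order]))

# Synthetic droplet spectrum: two Gaussian bands on a flat baseline
wn = np.linspace(400, 2000, 1600)
spectrum = (0.9 * np.exp(-((wn - 1091) ** 2) / (2 * 8 ** 2))
            + 1.0 * np.exp(-((wn - 1580) ** 2) / (2 * 8 ** 2)) + 0.05)

# Hypothetical calibration measured on buffers of known pH (placeholder values)
calib_ratios = [0.8, 1.0, 1.3, 1.6]
calib_phs = [6.4, 6.8, 7.2, 7.6]

r = mpy_ratio(wn, spectrum)
print(f"Band ratio I(1091)/I(1580) = {r:.2f}, estimated pH = {ratio_to_ph(r, calib_ratios, calib_phs):.2f}")
```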
Illuminating glioma in the living body by SERS
By injecting SERS nanoprobes into tissue and delivering them through the circulatory system, SERS technology can determine tumor boundaries in vivo. One study (51) implanted tumors in mice to simulate human glioblastoma. After Au@SiO2 nanoparticles were injected through the tail vein and allowed to circulate in the body for 24 h, brain tissue was taken and fixed in formaldehyde. Mouse glioma tissue was examined under white light, with a static Raman instrument, and with a handheld Raman spectrometer (Figure 5). When glioma tissue was removed, residual nanoprobe signals from tumor tissue could still be detected using a static Raman instrument oriented perpendicular to the tissue. Once these residual-signal tissues were removed, no residual tumor tissue was observed at that angle. However, when the angle was changed, residual glioma tissue was still found in the brain tissue around the tumor. Subsequently, after sectioning the tissue area and conducting a pathological examination, it was confirmed to be residual tumor tissue. This study shows that the detection of glioma tissue can be achieved through SERS technology, and that the portable Raman analyzer offers a more convenient and sensitive detection approach. Han et al. (52) developed an AuS-IR7 probe and injected it intravenously into mice (Figure 6). After the probe reached the tumor tissue, they measured the area with the strongest surface-enhanced resonance Raman scattering (SERRS) of AuS-IR7 to guide the margin resection. After resection, MRI was used to evaluate the postoperative prognosis of the SERS-guided resection in comparison with the white light-guided resection. The MRI images showed that the tumor bed excised under white light showed an enhanced MRI signal on the fifth day. On the 12th day, the recurrent tumor tissue reached 14 mm³, and it almost occupied the cerebral hemisphere 15 days later. Interestingly, the tumor tissue resected under SERS guidance did not exhibit noticeable MRI signal enhancement, indicating that SERS-guided resection provided a better prognosis.
Diaz et al. (53) also achieved delivery of SERS-active gold nanoparticles across the blood-brain barrier using focused ultrasound, so that the nanoparticles could be accurately delivered into tumor tissue and tumor boundaries could be accurately identified by SERS measurement to guide resection.
SERS techniques in liquid biopsy for early diagnosis of tumors
The preventive measures currently taken against malignant tumors in clinical practice still amount to secondary prevention, namely early detection and early treatment. Early detection and early treatment can maximally relieve patient suffering while obtaining the best therapeutic effect. Pathological biopsy is used as the gold standard in clinical practice to determine tumor type and staging (54). Because it is an invasive operation that causes considerable harm to patients, a new non-invasive method for detecting tumor cells is urgently needed. Some biogenic substances in blood have also been used as biomarkers, such as alpha-fetoprotein (AFP), carcinoembryonic antigen (CEA), carbohydrate antigen 153 (CA153), carbohydrate antigen 199 (CA199), carbohydrate antigen 125 (CA125), and prostate-specific antigen (PSA) (55,56). However, these biomarkers are often used for diagnosing recurrence and are not sensitive enough for early diagnosis.
Liquid biopsy can achieve non-invasive detection while minimizing patient discomfort. Research shows that endogenous substances are present in the internal environment of the human body, such as blood, interstitial fluid, urine, saliva, and cerebrospinal fluid, and these substances can be revealed by SERS. The endogenous analytes identified in recent years mainly include circulating tumor cells (CTCs), circulating tumor DNA (ctDNA), microRNAs (miRNAs), and substances secreted in exosomes (57,58). Especially for glioma, early detection and timely surgical resection can preserve functional tissue to the maximum extent while resecting tumor tissue as completely as possible, which can greatly improve the prognosis and quality of life of patients. Because the invasive growth of glioma damages the blood-brain barrier, extracellular vesicles (EVs) that would not normally appear in biological fluids can enter the blood. At the same time, the cerebrospinal fluid greatly increases the chance of detecting such substances.
Jalal et al. (59) developed a nanorobot-shaped antenna-decorated microfluidic device to identify EVs (Figure 7). They separated EVs from a non-cancer cell line and two different glioma cell lines (U373 and U87) and measured their SERS spectra. The SERS data displayed characteristic peaks located near 1,250, 1,325, and 1,580 cm−1. Liposomes, U373 and U87 EVs could be accurately distinguished by the covariance PCA algorithm, and these peaks can be considered special fingerprints of U373 and U87 EVs. This study shows that SERS integrated with microfluidic devices has potential for the diagnosis and treatment of glioma.
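A minimal sketch of this type of PCA-based spectral separation is shown below; the spectra are synthetic stand-ins generated from Gaussian bands, not the EV data of the cited study, and the band positions are chosen only to echo the peaks mentioned above.

```python
# Minimal sketch of PCA-based separation of SERS spectra.
# Synthetic spectra stand in for the liposome/U373/U87 EV data of the cited study.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
wavenumbers = np.linspace(400, 2000, 800)

def synthetic_spectra(peak_centers, n=20, noise=0.02):
    """Generate n noisy spectra built from Gaussian bands at the given centers."""
    bands = sum(np.exp(-((wavenumbers - c) ** 2) / (2 * 15.0 ** 2)) for c in peak_centers)
    return bands + noise * rng.standard_normal((n, wavenumbers.size))

# Three classes with slightly different band patterns (near 1250/1325/1580 cm^-1)
spectra = np.vstack([
    synthetic_spectra([1000, 1450]),          # "liposome"-like
    synthetic_spectra([1250, 1325, 1580]),    # "U373"-like
    synthetic_spectra([1250, 1580, 1620]),    # "U87"-like
])
labels = np.repeat(["liposome", "U373", "U87"], 20)

# Project the mean-centred spectra onto the first two principal components
scores = PCA(n_components=2).fit_transform(spectra - spectra.mean(axis=0))
for name in ("liposome", "U373", "U87"):
    centroid = scores[labels == name].mean(axis=0)
    print(f"{name:8s} centroid in PC space: {centroid.round(2)}")
```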
In recent years, ctDNA has become an emerging liquid biopsy biomarker, mainly derived from cell apoptosis, necrosis, and secretion processes. The content of single-base mutated ctDNA sequences is increased in diffuse intrinsic pontine gliomas (DIPGs). Miao et al. (60) reported a method combining cyclic enzymatic DNA amplification with a gold nanoparticles@silicon (AuNPs@Si)-assisted SERS technique, as shown in Figure 8. They designed an oligonucleotide probe folded into a stem-loop hairpin, labeled with a cyanine dye (Cy5) at the 5′ end. The stem-loop structure of the oligonucleotide probe could be opened by hybridization with the target sequence of the mutant ctDNA, forming a new double helix. In this duplex, the protruding 3′ end can be specifically recognized by the Exo III enzyme and gradually cleaved into nucleotides. After the cleavage process was completed, the ctDNA target sequence was released into the solution and recycled for the next round of enzymatic cleavage of the oligonucleotide probes. In this way, the residual DNA sequences generated by enzymatic cleavage of the oligonucleotide probes accumulated to large amounts through this cyclic reaction. AuNPs@Si were then added for hybridization, bringing the Cy5 tag closer to the substrate and efficiently generating strong SERS signals. ctDNA thus initiates the cyclic generation of residual DNA sequences, allowing SERS detection of ctDNA based on the amount of Cy5. The results indicated that the SERS intensity at 1,366 cm−1 showed a linear relationship with ctDNA concentration over the range from 0 fM to 1 pM. Therefore, early diagnosis of glioma can be achieved based on the changes in SERS intensity produced by changes in the blood ctDNA content.
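The readout of such an assay ultimately reduces to a calibration curve relating SERS intensity to ctDNA concentration; the sketch below illustrates this step with invented intensity values and a hypothetical log-linear calibration, not the data of the cited work.

```python
# Minimal sketch of a calibration-curve readout for a ctDNA SERS assay.
# The intensity values and the log-linear form are hypothetical placeholders.
import numpy as np
from scipy import stats

# Hypothetical calibration points: ctDNA concentration (fM) vs. SERS intensity at 1366 cm^-1
conc_fm = np.array([1.0, 10.0, 100.0, 1000.0])       # 1 fM ... 1 pM
intensity = np.array([120.0, 310.0, 520.0, 700.0])   # arbitrary units

# Fit intensity against log10(concentration)
slope, intercept, r, p, se = stats.linregress(np.log10(conc_fm), intensity)

# Invert the calibration for an unknown sample
unknown_intensity = 450.0
estimated_conc = 10 ** ((unknown_intensity - intercept) / slope)
print(f"Calibration r^2 = {r**2:.3f}; estimated ctDNA ~ {estimated_conc:.0f} fM")
```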
Chemical information revealed by label-free SERS
Using the specific spectrum generated by a given molecular species to identify that species has become popular in recent years and has grown quickly with the progress of deep learning and artificial intelligence (AI) techniques. Label-free SERS can reveal the chemical information of analytes, which supplies richer compositional information and avoids the false-positive results that can occur with SERS-labeling methods. SERS utilizes the hotspot enhancement effect to amplify the spectrum of the molecular species, making it easier to identify the difference between tumor tissue and normal brain tissue.
Sun et al. (62) compared the Raman spectra of glioma tissue, normal tissue, and 2-hydroxyglutarate (2HG). They selected 24 normal brain tissue samples and 23 AC/DC tissue samples (Figure 10). After biopsy, each tissue was cut into 1 mm-thick sections, and physiological saline was then added. The supernatant was dripped onto a PEGylated SERS substrate and the corresponding SERS spectra were measured. Compared with normal tissue, stronger Raman peaks around 500-800, 1,000, and 1,600 cm−1 were found, indicating that SERS enables the differential diagnosis of glioma and normal brain tissue. The applications of SERS in glioma are summarized in Table 2.
Summary and outlook
Since the birth of SERS technology, it has been widely used for detecting biological samples owing to its high sensitivity, specificity, non-invasiveness, and efficiency. This article briefly introduces the structure and responsive mechanisms of SERS nanoprobes and reviews their applications in glioma-related studies, including the detection of the tumor cell microenvironment using SERS technology, SERS imaging of glioma tissue in the living body, SERS technology for glioma-related liquid biopsy, and the application of label-free SERS technology to fresh glioma specimens for the qualitative diagnosis of glioma.
These studies provide broad prospects for the application of SERS technology in biomedicine. However, it is undeniable that SERS technology also has many shortcomings that hinder its application in clinical practice (Figure 11). All this indicates that, with the help of various effective Raman approaches, our understanding of brain glioma is constantly advancing. We believe that in the near future these spectral technologies will continue to develop and serve biomedicine deeply.
The content recovered for Table 1 (advantages and disadvantages of glioma detection technologies) is as follows. Anatomy-based localization: variability between individuals and the mass effect associated with brain tumors may distort common anatomical markers, making anatomy-based functional localization inaccurate. Intraoperative magnetic resonance imaging (MRI): avoids the anatomical displacement caused by tissue traction that limits preoperative MRI, but the surgical process must be interrupted for imaging. Intraoperative fluorescence imaging: fluorescein can enter tumor tissue through the blood-brain barrier and gives a unique yellow-green fluorescence, so it can be used to detect tumor boundaries; however, it bleaches rapidly and has a short blood-circulation time. Intraoperative functional mapping: helps avoid damaging the main functional nerves during removal of gliomas located in functional areas, preserving some neural function while maximizing tumor resection, but it cannot identify glioma boundaries. Intraoperative surface-enhanced Raman scattering (SERS): (1) high specificity, because the SERS spectrum reflects the intrinsic characteristic structures of different molecules in the form of fingerprints; (2) high sensitivity and signal strength; (3) in situ detection, so that molecules can be measured in their original positions, whether in tumor cells and tissues, their microenvironment, or interstitial fluid; (4) no interference from water; its main limitation is the error introduced during spectral acquisition.
Nowadays, SiO2 is a common protective layer, usually formed by the decomposition of Na2SiO3 or tetraethyl orthosilicate (44, 45). The protective layer also shields SERS tags from the influence of the working environment. For example, a metal-organic framework (MOF) shell can protect Au nanoparticles from aggregation and has been used to indicate the tumor edge under SERS imaging (46). Endowing SERS tags with specific recognition of the cell membrane and organelle surfaces has also been well explored in recent years. Kircher et al. (47) injected Au@SiO2 nanoprobes coated with 1,4,7,10-tetraazacyclododecane-1,4,7,10-tetraacetic acid (DOTA) into the tail vein of mice for SERS imaging; the aggregation of nanoprobes in tumor tissue clearly delineated the tumor area, revealing tumor tissue that cannot be detected by the naked eye. In addition, Vendrell et al. (48) developed an efficient tumor-targeting nanostructure based on single-walled carbon nanotubes (SWNTs), which provide a strong and stable Raman peak at 1,593 cm−1. The SWNTs were decorated with an RGD peptide (arginyl-glycyl-aspartic acid) to increase cancer-cell internalization efficiency, and the probes were injected through the tail vein of mice to identify tumor boundaries by tracking the probe location.
FIGURE 3
FIGURE 3 Schematic diagram of the SERS navigation system intraoperatively delineating acidic margin of glioma.A trace amount of pure water (≈0.4 μL) in the pipette tip contacts suspicious tissue at the tumor cutting edge for 2~4 s.Then, the water droplet is sucked back and dripped onto a pH-sensitive SERS chip.The Raman spectra of the aqueous sample on the SERS chip were acquired by a handheld Raman scanner equipped with a 785 nm laser.The pH map of tumor cutting edges was intraoperatively delineated with the assistance of a deep learning model by automatically analyzing the Raman spectra.With the guidance of the pH map, acidic tissues with pH values less than 7.0 were excised (50).Copyright 2022 John Wiley and Sons Ltd.
FIGURE 4
FIGURE 4 Preparation of a tumor-bearing model in a nude mouse, the extraction of the interstitial fluid of a glioma sample, and its SERS measurement by a portable optical-fiber Raman spectrometer equipped with a 532 nm laser (23). Copyright 2022 Elsevier Science.
FIGURE 5
FIGURE 5 Glioblastoma (GBM) resection with the guidance of a Raman microscope. (A) Photographs of the intact brain before (A1) and after (A2,A3) successive tumor resections guided by the Raman microscope (fixed 90° angle). When the hand-held Raman scanner was used at variable angles after these resection steps, additional microscopic tumor tissue was detected (location depicted by arrowhead in A4). (B) SERS images acquired with the Raman microscope before (B1) and after (B2,B3) successive tumor resections. (C) The hand-held Raman scanner was used for verification of the signal (C1-C3) observed with the Raman microscope (B1-B3). (C4) Angulated scanning of the lateral wall of the resection bed with the hand-held Raman scanner detected microscopic tumor, which had been missed by the Raman microscope. Tissue was left in place for histological verification in situ (red Raman spectra = nanoparticles detected in brain tissue; blue Raman spectra = SERS nanoparticle standard as control) (51). Copyright 2014 ACS.
FIGURE 6
FIGURE 6 AuS-IR7 intraoperatively guiding glioma resection in live mouse models.(A) Pre-operative T1W and T2W MR images of rat brain bearing orthotopic glioma xenograft.Tumor was located in the cortex (left, rat1) or corpus striatum (right, rat2).(B) Drawing up a surgical plan before the craniotomy.(C) Sequential glioma resection pictures of a rat bearing orthotopic glioblastoma xenograft.Yellow dashes marked the areas with detectable Raman signals.Star symbols present the points with the highest Raman signal.(D) In vivo Raman spectra at the sequential steps during the SERRS-guided tumor resection.The characteristic twin peaks of AuS-IR7 were highlighted by dash squares (52).Copyright 2019 ACS.
FIGURE 7 Ultrasensitive
FIGURE 7 Ultrasensitive SERS detection of EVs from non-cancerous (NHA) and cancerous (U373) glial cells as well as liposomes with the nanobowtie microfluidic chip.(A) SERS characterization for investigating the specific Raman scattering signals of EVs derived from non-cancerous glial cells (NHA), cancerous glioma cells (U373) and liposomes.Each spectrum is the mean value of the spectra and the SD is demonstrated with lighter color.For each sample, a minimum of 15 data points were used after the normalization process and elimination of the out of range data points.(B) Unique peaks existing in EV spectra that did not appear for liposomes or were considerably weak.(C) PC1 and PC2 loading Raman bands based on which the (D) PCA score plot of the SERS data, demonstrating the distinct position of the spectra from each sample, and that each type is defined.Each point is related to one experiment.In the same color, the 95% confidence ellipses are demonstrated.(E) Comparison analyses of lipid membrane properties (Chol amount) based on the R = I 2,880 cm−1 /I 2,930 cm−1 intensity ratio distribution.Each point is related to one trial.(F) The histogram and correlated fit of R-values for liposomes and EVs, demonstrating the composition of Dioleyl phosphatidylcholine (DOPC): Chol while showing the heterogeneity (59).Copyright 2021 RSC.
FIGURE 9
FIGURE 9 Normalized mean spectra with standard deviation for healthy (blue) and tumor (red) patients. Arrows mark the newly identified Raman peaks (61). Copyright 2021 MDPI.
FIGURE 8 (
FIGURE 8 (A) The integration of cycled enzymatic DNA cleavage/amplification and SERS for sensitive detection of ctDNA.(B) The scanning electron microscopic image of the AuNPs@Si substrate.(C) SERS spectra of testing Cy5 collected from 50 random spots on the AuNPs@Si substrate in a single assay.(D) Averaged SERS intensities at three peaks from 50 random spots, respectively (60).Copyright 2021 Frontiers Media SA.
FIGURE 10 Raman
FIGURE 10 Raman spectrum of solid 2-hydroxyglutarate (2HG), the mean Raman spectrum of 23 glioma supernatants, and the mean Raman spectrum of 24 normal supernatants (62). Copyright 2019 John Wiley and Sons Ltd.
FIGURE 11 SRS
FIGURE 11 SRS images of frozen human GBM xenograft.(A) High-magnification view of normal to minimally hypercellular cortex.(B) Infiltrating glioma with normal white matter bundles (asterisk), tumor-infiltrated bundles (arrow), and dense tumor cells (arrowhead).(C) Bright-field microscopy appears grossly normal, whereas SRS microscopy within the same field of view demonstrates distinctions between tumor-infiltrated areas and non-infiltrated brain (normal), with a normal brain-tumor interface (dashed line) (66).Copyright 2013 American Association for the Advancement of Science.
TABLE 1
Advantages and disadvantages of glioma detection technology.
Firstly (64, 65), sample collection: most Raman detection devices require the transfer of intraoperative tissue to the detection equipment. During this transfer it is difficult to avoid sample denaturation, which alters the signal. At the same time, during SERS measurement the signal is highly susceptible to the influence of the surrounding environment and the probe concentration (63). Secondly, more convincing long-term tracking data are needed on the cytotoxicity caused by intravenous injection of nanoparticles. If applied to human tissues, long-term uncertainty and biocompatibility issues may arise; the commonly used SERS tags should show lower toxicity to the human body, a safe and unambiguous metabolic pathway, and more obvious signal enhancement (64, 65). Thirdly, because SERS requires the collection and preprocessing of a large amount of spectral information, the large sample size and complex processing are difficult to achieve in the clinic. It is therefore necessary to explore fast methods for identifying and analyzing spectral information to simplify the processing pipeline. The development of high-speed imaging technology may be a good solution. Coherent anti-Stokes Raman scattering (CARS) and stimulated Raman scattering (SRS) have developed rapidly in the last two decades. CARS can detect lipid content: Evans et al. (67) used a CARS microscope to detect lipid content and observed a significant decrease in the tumor tissue signal, proving that CARS can delineate tumor cell boundaries. SRS microspectroscopy generates different signals for different protein and lipid contents and can likewise display different regions. Ji et al. (66) implanted glioblastoma cells into mice, allowed them to infiltrate and grow into tumors, and subjected slices to SRS imaging; the spectral information accurately distinguished the protein-rich tumor-infiltrating areas from normal brain tissue, indicating that SRS is a promising technology for clinical practice (Figure 11).
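As a rough illustration of the spectral preprocessing mentioned above, the following sketch applies a common chain of smoothing, iterative polynomial baseline removal, and normalization to a single spectrum; the synthetic spectrum, window sizes, and polynomial orders are arbitrary assumptions rather than settings used in any of the cited works.

```python
import numpy as np
from scipy.signal import savgol_filter

def preprocess_spectrum(wavenumber, intensity,
                        smooth_window=11, smooth_order=3, baseline_order=5):
    """Smooth, baseline-correct, and vector-normalize a single Raman/SERS spectrum."""
    # Scaled axis keeps the polynomial fits numerically well conditioned.
    x = (wavenumber - np.mean(wavenumber)) / np.std(wavenumber)

    # 1) Savitzky-Golay smoothing to suppress high-frequency noise.
    smoothed = savgol_filter(intensity, smooth_window, smooth_order)

    # 2) Iteratively clipped low-order polynomial as a crude fluorescence baseline.
    baseline = smoothed.copy()
    for _ in range(10):
        fit = np.polyval(np.polyfit(x, baseline, baseline_order), x)
        baseline = np.minimum(baseline, fit)
    corrected = smoothed - np.polyval(np.polyfit(x, baseline, baseline_order), x)

    # 3) L2 normalization so spectra from different spots/samples are comparable.
    return corrected / np.linalg.norm(corrected)

# Example with a synthetic spectrum: one Gaussian peak on a sloping background.
wn = np.linspace(400, 1800, 1401)
raw = 0.002 * wn + 5.0 * np.exp(-0.5 * ((wn - 1366) / 12) ** 2) \
      + np.random.default_rng(1).normal(0, 0.2, wn.size)
print(preprocess_spectrum(wn, raw).shape)
```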
TABLE 2
Summary of the application of SERS in glioma. By using SERS to detect glioma and normal brain tissue, stronger Raman peaks were found in glioma tissue around 500-800, 1,000, and 1,600 cm−1 compared to normal brain tissue.
|
v3-fos-license
|
2020-05-22T14:59:13.321Z
|
2020-05-22T00:00:00.000
|
218772214
|
{
"extfieldsofstudy": [
"Materials Science",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.nature.com/articles/s41598-020-65552-6.pdf",
"pdf_hash": "9ff0ce71748b14769261442e7f273af38946daf2",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46643",
"s2fieldsofstudy": [
"Materials Science",
"Engineering",
"Chemistry"
],
"sha1": "9ff0ce71748b14769261442e7f273af38946daf2",
"year": 2020
}
|
pes2o/s2orc
|
Solid-electrolyte interphase nucleation and growth on carbonaceous negative electrodes for Li-ion batteries visualized with in situ atomic force microscopy
Li-ion battery performance and life cycle strongly depend on a passivation layer called solid-electrolyte interphase (SEI). Its structure and composition are studied in great details, while its formation process remains elusive due to difficulty of in situ measurements of battery electrodes. Here we provide a facile methodology for in situ atomic force microscopy (AFM) measurements of SEI formation on cross-sectioned composite battery electrodes allowing for direct observations of SEI formation on various types of carbonaceous negative electrode materials for Li-ion batteries. Using this approach, we observed SEI nucleation and growth on highly oriented pyrolytic graphite (HOPG), MesoCarbon MicroBeads (MCMB) graphite, and non-graphitizable amorphous carbon (hard carbon). Besides the details of the formation mechanism, the electrical and mechanical properties of the SEI layers were assessed. The comparative observations revealed that the electrode potentials for SEI formation differ depending on the nature of the electrode material, whereas the adhesion of SEI to the electrode surface clearly correlates with the surface roughness of the electrode. Finally, the same approach applied to a positive LiNi1/3Mn1/3Co1/3O2 electrode did not reveal any signature of cathodic SEI thus demonstrating fundamental differences in the stabilization mechanisms of the negative and positive electrodes in Li-ion batteries.
which determines the edge to basal plane ratio. Earlier salt reduction at the edge plane favors larger content of inorganic components in a SEI 1,16,21,22 . Preferential solvent reduction on the basal plane at lower potential vs. Li + /Li determines larger organic content in the SEI 1 . Besides, the SEI growth and structure depend on binder material 23,24 . To the best of our knowledge, only two in situ AFM measurement of SEI formation were reported on a composite electrode comprised of graphite powder mixed with polyvinylidene difluoride (PVDF) binder and a conductive additive: (1) in 1997 25 authors reported difficulties of such measurements, and (2) in 2017 26 the imaging quality was not enough to observe SEI formation due to rough surface and AFM tip contamination.
Not only the anodic SEI but also the cathodic electrolyte interface (CEI) is under ongoing investigation, which is especially relevant for emerging high-voltage materials, where the 4.7 V vs. Li+/Li oxidation potential of common organic electrolytes may be surpassed. Different mechanisms underlying the first-cycle irreversibility in layered oxides have been discussed, including formation of a CEI [27][28][29][30][31][32][33][34], side reactions 35, and structural transformations [36][37][38]. The experimental results are still sparse due to the absence of model samples such as HOPG and the difficulty of in situ AFM measurements of powder samples.
Here we provide a facile methodology for in situ atomic force microscopy (AFM) measurements of SEI formation on cross-sectioned composite battery electrodes allowing for direct observations of SEI formation on various types of carbonaceous negative electrode materials for Li-ion batteries. Using this approach, we observed SEI nucleation and growth on highly oriented pyrolytic graphite (HOPG), MesoCarbon MicroBeads (MCMB) graphite, and non-graphitizable amorphous carbon (hard carbon). Besides the details of the formation mechanism, the electrochemical and mechanical properties of the SEI layers were assessed. The comparative observations revealed that the electrode potentials for SEI formation differ depending on the nature of the electrode material, whereas the adhesion of SEI to the electrode surface clearly correlates with the surface roughness of the electrode. Finally, the same approach applied to a positive LiNi 1/3 Mn 1/3 Co 1/3 O 2 electrode did not reveal any signature of cathodic SEI thus demonstrating fundamental differences in the stabilization mechanisms of the negative and positive electrodes in Li-ion batteries.
Electrochemical cell
In order to measure SEI formation on cross-sections of composite battery electrodes in situ in the AFM, we designed a new electrochemical cell on the basis of a liquid perfusion cell capable of measuring bulky samples. Figure 1 schematically illustrates a standard AFM electrochemical cell and the new one. In the standard cell (Fig. 1(a)) a flat sample is clamped at the bottom of the cell body, sealed by an o-ring, and connected as a working electrode (WE) to an external potentiostat/galvanostat. The cell body with counter (CE) and reference (RE) electrodes is filled with an electrolyte, and a cantilever is immersed into the electrolyte bath for scanning. The cell body is typically made of polyether ether ketone (PEEK) and polytetrafluoroethylene (PTFE), which possess high chemical resistance to a wide range of chemical compounds. This cell configuration allows measurements only on flat samples such as HOPG.
In the new cell ( Fig. 1(b)) measurements are performed in the electrolyte meniscus formed between the sample and the cantilever holder. Thus the sample may be bulky and does not require sealing by an o-ring, which allowed us to use cross-sections of composite battery electrodes embedded in epoxy resin. The epoxy resin fixes the composite electrode sample, and its polished surface additionally serves as a support for the electrolyte meniscus and the reference and counter electrodes. The meniscus is formed by injecting the electrolyte through the tubing fixed from the top in the window of the cantilever holder.
Influence of sample preparation on the surface state
After preparation of the cross-sections, described in detail in the Methods section, we analyzed to what extent the polished surface is equivalent to the untreated surface of the original powder. Figure 2 illustrates normalized Raman spectra for HOPG, MCMB, and HC. The fresh HOPG basal plane gave an intense G-band and no D-band. The Ar ion beam polished edge surface of the HOPG gave an additional D-band, with a G- to D-band area ratio of 1.2. The D and G bands from HC were similar for the pristine powder, the ball-milled powder, the mechanically polished surface after the final polishing with the active oxide polishing suspension (OP-S, Struers), and the ion beam polished surface. For the pristine MCMB powder the G- to D-band ratio was 8.4. After ball milling the ratio dropped to 2. After mechanical polishing it dropped to 0.8, and after ion beam polishing it slightly increased to 1.5. Both the HOPG and the MCMB possessed 2D bands, while the HC did not.
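The band-area ratios quoted here are typically obtained by fitting the two bands and integrating them. A minimal sketch of such an analysis is shown below; the two-Lorentzian model, the synthetic spectrum, and the fitting parameters are illustrative assumptions, not the exact procedure used in this work.

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(x, amp, center, width):
    return amp * width**2 / ((x - center)**2 + width**2)

def two_band_model(x, a_d, c_d, w_d, a_g, c_g, w_g, offset):
    return lorentzian(x, a_d, c_d, w_d) + lorentzian(x, a_g, c_g, w_g) + offset

def g_to_d_area_ratio(wavenumber, intensity):
    """Fit D (~1350 cm^-1) and G (~1580 cm^-1) bands and return the G/D area ratio."""
    p0 = [intensity.max(), 1350, 40, intensity.max(), 1580, 30, intensity.min()]
    popt, _ = curve_fit(two_band_model, wavenumber, intensity, p0=p0, maxfev=20000)
    a_d, _, w_d, a_g, _, w_g, _ = popt
    area_d = np.pi * a_d * w_d   # analytic area of a Lorentzian: pi * amplitude * HWHM
    area_g = np.pi * a_g * w_g
    return area_g / area_d

# Synthetic test spectrum constructed with a known 1.2 : 1 G-to-D area ratio.
wn = np.linspace(1100, 1800, 700)
spec = lorentzian(wn, 1.0, 1350, 40) + lorentzian(wn, 1.6, 1580, 30) + 0.05
print(f"recovered G/D area ratio: {g_to_d_area_ratio(wn, spec):.2f}")
```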
The results show that the HC surface after ball milling, mechanical polishing, and final ion beam polishing closely resembles the pristine powder surface. On the contrary, the ratio of basal to non-basal planes on the MCMB surface (which correlates with the G- to D-band ratio 19) drops strongly already after ball milling, which is an inherent step of battery electrode production. Additional mechanical polishing slightly reduces this ratio further, and the subsequent ion beam polishing slightly increases it, making it similar to the ion beam polished HOPG edge plane. Overall, the polished MCMB surface resembles the ball-milled one that is utilized in a commercial battery. Taking into account that the SEI composition on the HOPG edge plane, hard carbon, and soft carbon is similar, with a somewhat smaller content of salt reduction products in the soft carbon SEI 2, the SEI formed on the cross-sections must be more representative of a real battery SEI than the SEI formed on the HOPG basal plane.
Due to the higher roughness of the ion beam polished samples, the OP-S polished samples were used for further study. Figure 3 shows comparative cyclic voltammetry (CV) curves and corresponding in situ AFM images of the HOPG, MCMB, and HC surfaces before, during, and after cycling. The HOPG was used as a reference sample. Its freshly cleaved surface shown in Fig. 3(a) is the graphite basal plane with a small fraction of edge sites along step edges. The MCMB and HC samples shown in Fig. 3(d,g), respectively, are cross-sections of composite electrodes made of powder mixed with polyvinylidene difluoride (PVDF) binder and Super P carbon black, embedded in epoxy resin. Although rougher than the HOPG, such cross-sections are sufficiently flat for AFM imaging.
In situ SEI formation on HOPG, MesoCarbon MicroBeads (MCMB) graphite, and hard carbon (HC)
During the first CV cycle on HOPG, SEI was detected along step edges at 0.8 V and on the basal plane at 0.5 V (Fig. 3(c-4)). This process is associated with the first current peak at 0.8-0.3 V. At about 0.4 V the SEI merged into a uniform layer and its topography remained stable during further cycling. Figure 3(b) shows the formed SEI after the first CV. Apart from the SEI we observed blisters with lateral size up to 2 µm and height up to 35 nm. During the second CV (dashed line in Fig. 3(c)) the SEI morphology did not change and the 0.8-0.3 V current peak was 3 times smaller.
The observed earlier SEI nucleation at the edge carbon sites is in agreement with previous studies showing that the edge plane possesses higher electrocatalytic activity than the basal plane 21,22, which facilitates earlier salt reduction and favors a larger content of inorganic components such as LiF, Li2O, and Li2CO3 in the SEI in LiPF6-containing electrolyte 1,16, as well as higher thickness 2. Preferential solvent reduction on the basal plane at lower potential vs. Li+/Li determines the larger organic content in the SEI there.
Likewise, blistering is a typical phenomenon in ethylene carbonate (EC)-based electrolytes [39][40][41] . It is caused by cointercalation of electrolyte molecules and trace water into graphite structure and consequent gas evolution on the cathodic scan 42 . Indeed when we used water contaminated electrolyte and observed a water reduction current peak at 1.3 V 13,43-45 , blistering was more intense and particularly active on the cathodic cycle ( Supplementary Fig. 1).
On the MCMB sample SEI nucleation was detected at about 0.9 V of the first CV cycle (Fig. 3(f-4)). At 0.6 V the SEI formed a uniform layer, which stabilized at about 0.3 V. The process is associated with the current peak at 1.0-0.4 V. The formed SEI morphology remained stable during further cycling. During the second CV (dashed line in Fig. 3(f)) the SEI morphology did not change and the current peak was almost 5 times smaller. Because an MCMB graphite particle is a mixture of nanosized regions with different orientations, the edge to basal sites ratio on the surface is much larger than on HOPG, which naturally explains the 0.9 V SEI nucleation potential - close to the step edge SEI nucleation on the HOPG. Consequently, we can expect a larger inorganic fraction in the SEI. On the HC sample a distinct but loosely bound surface deposit appeared at about 1 V. It was permanently scraped off by the cantilever until a complete SEI layer was formed at about 0.4 V (Fig. 3(i-4)). Its topography further remained unchanged. The process was associated with a gradual current increase at stage 4 (Fig. 3(i)) without a distinct peak. A larger area scan after the first CV (Fig. 3(h)) revealed partial SEI delamination on the particles on the left-hand side. The thickness of the delaminated SEI was about 70 nm. After the second CV the SEI morphology remained unchanged but became rougher atop the delaminated regions (Supplementary Figure 2).
Similarly to the edge plane of graphite, the disordered structure of hard carbon enhances its electrocatalytic activity and results in a SEI composition similar to that on the edge plane of HOPG and graphite 2. However, we reproducibly observed SEI formation at about 0.4 V vs. Li+/Li. We suggest that, due to the flat surface of hard carbon (Rms = 1.4 nm) and weak adsorption, the inorganic products of salt reduction were removed from the surface by the AFM tip even in the gentle tapping mode, which manifested as the loosely bound surface deposit in the 1.0-0.4 V range. The appearance of organic products of electrolyte reduction below 0.5 V must have enhanced adhesion and allowed SEI anchoring on the HC surface.
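The nucleation potentials quoted in this section correspond to the point at which the cathodic current departs from the baseline during the first scan. A simple way to automate that reading is sketched below; the threshold criterion, baseline window, and synthetic data are assumptions for illustration only.

```python
import numpy as np

def sei_onset_potential(potential_v, current_a, baseline_window=(1.5, 2.0), n_sigma=5.0):
    """Estimate the potential (V vs. Li+/Li) at which reduction current departs
    from the baseline noise band during the first cathodic scan."""
    potential_v = np.asarray(potential_v)
    current_a = np.asarray(current_a)

    # Baseline statistics from a potential window where no reaction is expected.
    mask = (potential_v >= baseline_window[0]) & (potential_v <= baseline_window[1])
    mu, sigma = current_a[mask].mean(), current_a[mask].std()

    # Walk from high to low potential; onset = first point where the cathodic
    # (negative) current exceeds the noise band.
    for idx in np.argsort(potential_v)[::-1]:
        if current_a[idx] < mu - n_sigma * sigma:
            return potential_v[idx]
    return None

# Synthetic cathodic scan: flat baseline, then a reduction wave growing below ~0.85 V.
e = np.linspace(2.0, 0.1, 400)
i = np.random.default_rng(2).normal(0, 1e-9, e.size)
i[e < 0.85] -= 2e-7 * (0.85 - e[e < 0.85])
print(f"estimated SEI onset: {sei_onset_potential(e, i):.2f} V")
```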
The samples with formed SEI were gently washed in dimethyl carbonate in order to remove remaining salt and dried in the Ar-filled glovebox. The SEI was then scratched by a stiff (98 N/m spring constant) diamond-coated conductive cantilever with 100 nm tip radius in a conductive AFM mode. The moment of reaching carbon through the SEI was detected by the onset of electric current at a 0.5 V bias applied between the sample and the cantilever. This approach avoids misinterpreting the dense lower SEI layer as the electrode surface. Figure 4 illustrates topography and current maps of the samples during scratching, with the applied force increasing from top to bottom. The SEI was scraped off when the force reached 1.9 µN on HOPG and 4.1 µN on MCMB and on HC. This difference is naturally explained by the fact that the SEI formed on the edge plane sites of MCMB and on the disordered sites of HC is rich in inorganic salt reduction products, while the SEI on the HOPG basal plane is rich in polymer products 1,16. Moreover, when the force exceeded the threshold value, the SEI readily peeled off from the HC (Fig. 4(a,b)) and from the HOPG (Fig. 4(e,f)), showing a smooth edge in the current maps, while on the MCMB graphite the edge was rough (Fig. 4(c,d)). This suggests that the SEI is better bound to the rough surface of MCMB than to the smooth surface of the HC and the basal plane of HOPG.
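A minimal sketch of how the breakthrough force could be extracted from such scratching data is given below, assuming simultaneous force and current records; the arrays, noise floor, and threshold are hypothetical and not taken from the measurements described here.

```python
import numpy as np

def breakthrough_force(force_n, current_a, noise_floor_a=1e-11):
    """Return the applied force at which tip-sample current first rises above
    the noise floor, i.e. the force needed to scratch through the SEI."""
    force_n = np.asarray(force_n)
    current_a = np.asarray(current_a)
    above = np.flatnonzero(np.abs(current_a) > noise_floor_a)
    return force_n[above[0]] if above.size else None

# Hypothetical scratch experiment: force ramped from 0.5 to 5 uN,
# current stays at noise level until the SEI is penetrated near 4.1 uN.
force = np.linspace(0.5e-6, 5.0e-6, 200)
current = np.where(force < 4.1e-6,
                   np.random.default_rng(3).normal(0, 2e-12, force.size),
                   5e-9)
print(f"SEI breakthrough at ~{breakthrough_force(force, current) * 1e6:.1f} uN")
```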
SEM characterization of SEI cross-sections
The washed samples with SEI were transferred in a sealed vial to another N2-filled glovebox with a physical vapor deposition chamber installed inside, where a 100 nm thick Al coating was deposited on the samples in order to protect the SEI from exposure to the atmosphere during transfer into the Helios PFIB G4 UXe dual beam system. During the deposition the samples' temperature did not exceed 60 °C. Figure 5 illustrates cross-sections of the samples exposing the host electrode material (HOPG, HC, and MCMB), the SEI layer, the protective Al coating, and the Pt layer deposited prior to focused ion beam milling. On the HOPG we can clearly distinguish a blister (Fig. 5(a,b)) and delaminated SEI (Fig. 5(a-c)). The delamination was probably caused by vertical tensile stress developed during blistering. The SEI is continuous, with a thickness of about 45 nm. On the HC the SEI is about 90 nm thick - twice as thick as on the HOPG basal plane - and possesses a smooth interface with the HC. Partial SEI delamination is observed in Fig. 5(g). On the MCMB the SEI is about 90 nm thick - similar to the HC. However, the SEI/MCMB interface is drastically different: the SEI fills pits in the porous graphite surface and is thus pinned to the surface. Such an interface structure must reinforce the SEI/MCMB contact and reduce the possibility of delamination. It is also consistent with the AFM scratching results - Fig. 4(d) - and correlates with surface roughness: the root mean squared (Rms) roughness of the fresh in situ scanned regions of 4 × 4 µm2 of HOPG, HC, and MCMB is 0.1 nm, 1.4 nm, and 3.2 nm, respectively. Thus, a sufficiently rough particle surface benefits a stronger contact with the SEI.
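For reference, the Rms roughness values quoted above correspond to the standard deviation of the height map after plane levelling. A short sketch of that calculation with a synthetic height map is given below; it follows common AFM practice under stated assumptions rather than reproducing the exact processing used in this study.

```python
import numpy as np

def rms_roughness(height_map_nm):
    """Root-mean-squared roughness of an AFM height map after first-order plane levelling."""
    z = np.asarray(height_map_nm, dtype=float)
    ny, nx = z.shape
    x, y = np.meshgrid(np.arange(nx), np.arange(ny))
    # First-order plane fit (tilt removal), as done in typical AFM processing.
    A = np.column_stack([x.ravel(), y.ravel(), np.ones(z.size)])
    coeffs, *_ = np.linalg.lstsq(A, z.ravel(), rcond=None)
    residual = z.ravel() - A @ coeffs
    return np.sqrt(np.mean(residual ** 2))

# Synthetic 512 x 512 px map with ~3 nm random roughness plus an instrumental tilt.
rng = np.random.default_rng(4)
tilted = rng.normal(0.0, 3.2, (512, 512)) + np.linspace(0, 20, 512)[None, :]
print(f"Rms roughness: {rms_roughness(tilted):.1f} nm")
```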
In situ AFM investigation of NMC 111 surface during cycling
Using our approach, we performed in situ electrochemical AFM measurements on a cross-section of a composite cathode made of LiCo1/3Ni1/3Mn1/3O2 (NMC 111) powder mixed with PVDF binder and Super P carbon black. Figure 6 shows a cyclic voltammetry (CV) curve and corresponding AFM images of the NMC 111 surface before, during, and after cycling. We did not observe SEI formation or any deposit on the surface in the whole 3.0-4.5 V potential range. Instead, comparison of Fig. 6(a,b) revealed changes in the morphology of secondary particles, associated with anisotropic lattice contraction caused by Li deintercalation from NMC 111 particles (more details in Supplementary Fig. 3). The CV curve demonstrates a partial current peak associated with Li deintercalation from NMC 111 and a low and broad intercalation current, different from the macroscopic CV. The latter may be due to the flat electrode geometry and a size effect 46.
Conclusion
In this work we proposed a new methodology for in situ atomic force microscopy (AFM) measurements of SEI formation on cross-sectioned composite battery electrodes, allowing for direct observations of SEI formation on various types of carbonaceous negative electrode materials for Li-ion batteries. Using this approach, we observed and compared SEI nucleation and growth on highly oriented pyrolytic graphite (HOPG), MesoCarbon MicroBeads (MCMB) graphite, and non-graphitizable amorphous carbon (HC). We found that under the given experimental conditions the SEI on the edge sites of HOPG nucleated at about 0.8 V and on the basal plane sites at about 0.5 V. On the MCMB graphite the SEI appeared at 0.8-0.9 V and on HC at about 0.4 V, with a preliminary weakly bound deposit at about 1 V. The SEI on both MCMB and HC was twice as thick - 90 nm vs 45 nm - and mechanically stronger - 4.1 µN vs 1.9 µN - than the SEI on HOPG. These findings are in good agreement with previous studies showing that SEI on the edge plane is rich in inorganic salt reduction products, while SEI on the basal plane is rich in polymer products. Moreover, we found that the smooth SEI/HOPG and SEI/HC interfaces are prone to delamination, while the rough SEI/MCMB interface is less so due to SEI penetration into the surface porosity. Finally, the same approach applied to a positive LiNi1/3Mn1/3Co1/3O2 electrode did not reveal any signature of a cathodic SEI, thus demonstrating fundamental differences in the stabilization mechanisms of the negative and positive electrodes in Li-ion batteries.
Methods
Materials synthesis. The layered cathode material LiNi 1/3 Co 1/3 Mn 1/3 O 2 (NMC 111) was synthesized by calcination of the precursor prepared using a co-precipitation method. First, a 2 M aqueous solution of Mn 2+ , Ni 2+ , and Co 2+ was prepared from NiSO 4 ·6H 2 O (RusKhim), MnSO 4 ·H 2 O (RusKhim), and CoSO 4 ·7H 2 O (RusKhim) in a 1:1:1:stoichiometric ratio. The solution was pumped into a Batch reactor (20 L) under N 2 atmosphere. At the same time, an alkali solution with 2 M Na 2 CO 3 (RusKhim) and 0.3 M NH 4 OH was also dropped into the reactor. The pH value, temperature, and stirring speed were carefully controlled. Then, the co-precipitated particles were obtained after filtering, washing with deionized water, and drying at 90-110 °C in a vacuum oven. Finally, LiNi 1/3 Co 1/3 Mn 1/3 O 2 was prepared by annealing the dried precursor with 6% excess of LiOH·H 2 O (RusKhim) at 500 °C for 5 h in air, and then at 850 °C for 12 h in air.
The hard carbon (HC) powder was prepared by hydrothermal synthesis from D-glucose (Sigma Aldrich, >99.5%) followed by pyrolysis. 9 g of D-glucose was mixed with 0.1 g of pectin (Souzopttorg, ARA104) and dissolved in 25 ml of deionized water. The mixture was placed in a Teflon-lined stainless steel autoclave reactor with addition of 2 ml of polytetrafluoroethylene solution (PTFE; Sigma Aldrich, 60 wt.% dispersion in H2O) and the synthesis was performed at 180 °C for 8 hours. The obtained powder was centrifuged, washed in deionized water, dried in air, and finally annealed in a tubular furnace under Ar flow at 1200 °C for 5 h.
Materials characterization. The LiNi 1/3 Co 1/3 Mn 1/3 O 2 powder was characterized by X-ray diffraction using a Huber G670 Guinier diffractometer (CoKα1 radiation (λ = 1.78892 Å), curved Ge(111) monochromator, image plate detector). To determine the lattice parameters, the Le Bail decomposition was carried out using the JANA2006 software. The XRD spectra is presented in Supplementary Fig. 4. The hard carbon powder was characterized by scanning electron microscopy using Quattro S ESEM (FEI) and Raman spectroscopy using DXRxi Raman Imaging Microscope (Thermo Fisher Scientific). The results presented in Supplementary Fig. 5 illustrate a characteristic round particle of hard carbon and its Raman spectra with D-band (defect-induced) and G-band (crystalline graphite) peaks at ~1353 cm −1 and ~1585 cm −1 respectively.
Raman spectroscopy on the HOPG, pristine powders, and cross-sections was performed using DXRxi Raman Imaging Microscope (Thermo Fisher Scientific) using the 532 nm laser.
Sample preparation. MCMB, HC, and NMC 111 powders were separately mixed with PVDF and Super P carbon black conductive additive in an 80:10:10 mass ratio in N-methyl-2-pyrrolidone (NMP) solvent and homogenized in a ball mill (Spex 8000 M) for 20 minutes. The slurry was deposited onto a polyimide tape (Kapton) and dried in a vacuum oven at 50 °C overnight. Small pieces (up to 5 × 5 mm) of the dried composite electrodes were delaminated from the polyimide substrate with tweezers and placed in a mold, which was made by cutting a piece of silicone tubing with 10 mm internal diameter. The mold with the electrode was filled with a bisphenol A/F epoxy resin HT2 with hardener HT2 (R&G Faserverbundwerkstoffe GmbH, Germany), placed in a vacuum oven for vacuum infusion in order to fill the porosity of the electrodes, and cured under ambient conditions according to specification. The diameter of the mold was chosen considering the size of the sample stage of the AFM.
After curing and hardening the samples were mechanically polished on a SiC sand paper (25 µm and 10 µm particle size), diamond suspension (3 µm and 0.25 µm particle size), and OP-S silica suspension (Struers, 40 nm particle size) consequently. The last step provides high quality surface comparable with one obtained after chemical etching 47 . (Additional cross-sectional samples for Raman imaging were further polished by an Ar ion beam (Leica EM RES102) with a 10 min cleaning step at 10° and a 15 min polishing step at 4°.) The polished samples were carefully washed in deionized water, dried in a stream of N 2 , and fixed from the bottom side on a steel substrate with conductive silver paint. The perimeter of the samples on the substrate was additionally sealed with bisphenol A/F or TorrSeal epoxy resin in order to prevent accidental electrolyte leak to the contact and dissolution of the silver paint. After transfer to the glove box, the samples' surface was additionally washed with DMC. The HOPG ZYA sample was fixed on a steel substrate in the same way. Fresh surface was exposed before measurements by peeling off the top layer by a scotch tape. The prepared samples are shown in Supplementary Fig. 6.
Bisphenol A/F epoxy resin was chosen considering its electrochemical stability in the 1 M LiPF 6 in EC/ DMC = 50/50 (v/v) electrolyte solution. Bisphenol A is used for sealing microelectrodes for electrochemical applications 48,49 . Additional Bisphenol F reduces viscosity of the epoxy resin. The epoxy resin was thoroughly tested before measurements. First, the embedded samples were stored for 2 weeks in the closed vial filled with the electrolyte solution without visible changes. Second, 4 coin cells with the NMC 111 cathode and Li anode were assembled: 2 with and 2 without pieces of epoxy inside (weight of the epoxy pieces (≈3 mg) was comparable to the weight of the active NMC 111 powder (4.1 mg)). The cells were cycled between 2.8 V and 4.2 V at 0.3C-rate for 30 cycles. Resulting potential profiles and capacity were similar for cells with and without epoxy ( Supplementary Fig. 7).
In situ AFM measurements. In situ AFM measurements were performed in tapping mode using a Cypher ES microscope (Asylum Research, Oxford Instruments) installed inside an Ar-filled glove box (MBraun) with O2 < 0.1 ppm and H2O < 0.1 ppm. A Si cantilever with 140 kHz resonance frequency and 0.6 N/m spring constant was mounted in a liquid perfusion cantilever holder and installed in an environmental sample cell. Before measurements the cantilever was washed with acetone and deionized water. An external potentiostat/galvanostat (BioLogic SP 150) was connected to the microscope. The samples were connected as working electrodes and a Li foil was connected as reference and counter electrode in a two-electrode configuration. The cantilever was brought to a distance of 100 µm from the sample surface. Commercial battery-grade electrolyte solution (1 M LiPF6 in EC/DMC = 50/50 (v/v)) (Sigma Aldrich) was injected between the sample and the cantilever holder by a syringe via polyethylene tubing until it formed a meniscus between the sample and the fused silica window of the cantilever holder. After that the cantilever was landed on the sample surface and the measurements were performed with 512 × 512 pixel resolution. The detailed setup is illustrated in Supplementary Fig. 8.
AFM images were processed using Gwyddion software.
Scanning electron microscopy. Scanning electron microscopy (SEM) images were obtained using Thermo Scientific Helios PFIB G4 UXe dual beam system in secondary electrons (SE) mode. The accelerating voltage was 5 kV and 20 kV, electron beam current was 0.1 nA. A sample tilt of cross-section images was 52°. Sample cross-sections were obtained by focused ion beam (FIB) under high vacuum. First, a Pt protective layer was deposited at 12 kV and 1 nA on top of the Al cover layer. Then, cross-sections were milled at 30 kV and 4 nA. Finally, the cross-sections were cleaned at 30 kV and 0.3 nA in order to obtain a smooth surface.
Data availability
The data are available from the corresponding author upon reasonable request.
|
v3-fos-license
|
2017-03-30T22:34:18.805Z
|
2013-09-02T00:00:00.000
|
7118547
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0072715&type=printable",
"pdf_hash": "7bb53ae9141efa566d6ac7beae77f67e85239204",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46646",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"sha1": "5fc0d8d38dcc5c9d0a3ff0566bcadb09c1afbf90",
"year": 2013
}
|
pes2o/s2orc
|
Consumption of Dairy Products and Colorectal Cancer in the European Prospective Investigation into Cancer and Nutrition (EPIC)
Background Prospective studies have consistently reported lower colorectal cancer risks associated with higher intakes of total dairy products, total milk and dietary calcium. However, less is known about whether the inverse associations vary for individual dairy products with differing fat contents. Materials and Methods In the European Prospective Investigation into Cancer and Nutrition (EPIC), we investigated the associations between intakes of total milk and milk subtypes (whole-fat, semi-skimmed and skimmed), yoghurt, cheese, and dietary calcium with colorectal cancer risk amongst 477,122 men and women. Dietary questionnaires were administered at baseline. Multivariable hazard ratios (HRs) and 95% confidence intervals (CIs) were estimated using Cox proportional hazards models, adjusted for relevant confounding variables. Results During the mean 11 years of follow-up, 4,513 incident cases of colorectal cancer occurred. After multivariable adjustments, total milk consumption was inversely associated with colorectal cancer risk (HR per 200 g/day 0.93, 95% CI: 0.89–0.98). Similar inverse associations were observed for whole-fat (HR per 200 g/day 0.90, 95% CI: 0.82–0.99) and skimmed milk (HR per 200 g/day 0.90, 95% CI: 0.79–1.02) in the multivariable models. Inverse associations were observed for cheese and yoghurt in the categorical models; although in the linear models, these associations were non-significant. Dietary calcium was inversely associated with colorectal cancer risk (HR per 200 mg/day 0.95, 95% CI: 0.91–0.99); this association was limited to dairy sources of calcium only (HR per 200 mg/day 0.95, 95% CI: 0.91–0.99), with no association observed for non-dairy calcium sources (HR per 200 mg/day 1.00, 95% CI: 0.81–1.24). Conclusions Our results strengthen the evidence for a possible protective role of dairy products on colorectal cancer risk. The inverse associations we observed did not differ by the fat content of the dairy products considered.
Introduction
Colorectal cancer is the third most common cancer worldwide, with over 1.2 million new diagnoses estimated to have occurred in 2008 [1]. Variation in international incidence rates [2,3] coupled with findings from migrant studies [4,5] suggests that colorectal cancer etiology is influenced by modifiable lifestyle factors, such as diet. In the recent WCRF/AICR Continuous Update Project, alcoholic drinks and red and processed meat were judged to be ''convincing'' factors associated with increased colorectal cancer risk; whilst foods containing dietary fibre were similarly judged but associated with reduced risk [6]. For total dairy products, an updated meta-analysis (the WCRF Continuous Update Project) recently reported a 17% lower colorectal cancer risk per 400 g/ day increased intake, [7] but indicated that evidence for individual products was lacking and/or uncertain.
Although an inverse association between consumption of total milk with colorectal cancer risk has been consistently observed, [7,8] whether the fat content of milk offsets a potential anticarcinogenic role is unclear. Animal models have shown that highfat consumption results in bile acid production, which in turn promotes colorectal cancer, [9] but associations between milk subtypes, with different fat contents, and colorectal cancer have rarely been examined in prospective studies [10]. Similarly, how other high-fat dairy products, such as cheese and yoghurt, are associated with colorectal cancer risk is unclear, as mixed results have been reported from the handful of previous prospective studies. For cheese consumption, four prospective studies reported null associations [8,[11][12][13] and one study reported an inverse association [14]. For yoghurt, three cohort studies have not found any association, [8,11,12] but a recent analysis within the European Prospective Investigation into Cancer and Nutrition (EPIC)-Italy cohorts reported reduced risks amongst those with higher consumption, even after adjustment for calcium intake [15].
The principal anti-carcinogenic component contained within dairy products is believed to be calcium. Most, [8,11,12,16,17] but not all [18] cohort studies that have investigated calcium intake in relation to colorectal cancer have reported inverse associations. Previously within EPIC, a nested case-control study based on 1,248 colorectal cancer cases reported higher intakes of dietary calcium were associated with lower colorectal cancer risk [19]. Although, whether this association differed according to dairy and non-dairy sources of calcium was not investigated, nor was a potential non-linear relationship that has been observed in other cohorts [8,11].
In this present analysis, we investigated how intakes of milk with different fat content (total, whole-fat, semi-skimmed, and skimmed), cheese, yoghurt, and dietary calcium (total, dairy and non-dairy sources) relate to colorectal cancer risk in the EPIC study. The EPIC is a large prospective cohort from 10 European countries with a wide range of dietary intakes. The large number of participants and colorectal cancer cases accrued provided high statistical power to investigate relationships according to individual dairy products and across cancer sub-sites.
Methods
Outline EPIC is an on-going multicentre prospective cohort study designed to investigate the associations between diet, lifestyle, genetic and environmental factors and various types of cancer. A detailed description of the methods has previously been published [20,21]. In summary, 521,448 participants (~70% women) mostly aged 35 years or above were recruited between 1992 and 2000. Participants were recruited from 23 study centres in ten European countries: Denmark, France, Germany, Greece, Italy, the Netherlands, Norway, Spain, Sweden, and United Kingdom (UK). Participants were recruited from the general population, with the following exceptions: the French cohort were teacher health insurance programme members; the Italian and Spanish cohorts included members of blood donor associations and the general population; the Utrecht (the Netherlands) and Florence (Italy) cohorts contained participants from mammographic screening programs; the Oxford (UK) cohort included a large proportion of vegetarians, vegans, and low meat eaters; finally, only women participated in the cohorts of France, Norway, Naples (Italy) and Utrecht (the Netherlands). Written informed consent was provided by all study participants. Ethical approval for the EPIC study was obtained from the review boards of the International Agency for Research on Cancer (IARC) and local participating centres. Exclusions prior to the onset of the analyses included: participants with prevalent cancer at enrolment (n = 28,283); participants with missing dietary, lifestyle, and anthropometric data (n = 6,253); participants in the highest and lowest 1% of the distribution for the ratio between energy intake to estimated energy requirement (n = 9,600); and finally participants with extreme total dairy intakes above 2000 g/day (n = 190). Our study therefore included 477,122 participants (334,981 women and 142,141 men).
Diet and lifestyle questionnaires
Dietary information over the previous 12 months was obtained at study baseline using validated country/centre specific dietary questionnaires. In Malmö (Sweden), a dietary questionnaire was combined with a 7-day food registration and interview. In Greece, two Italian centres, and Spain, interviewers administered the dietary questionnaires. In all other centres/countries, the questionnaires were self-administered. In Spain, France, and Ragusa (Italy) questions were structured by meals, while in other countries the structure was by food groups. Also at baseline, standardized computer-based single 24-hour dietary recalls (24-hdr) were collected from 36,994 study participants. This additional dietary assessment was used to calibrate for differences in questionnaires across countries [22]. Individual dairy products were categorized as milk, cheeses, and yoghurts. Due to relatively low intakes and incomplete measurements across centres, other individual dairy products such as ice cream, cream desserts and milk-based puddings, milk beverages, dairy creams and creamers for milk and coffee were not analysed individually. Total milk was assessed as the sum of all types of milk consumed (whole-fat, skimmed, semi-skimmed, and not specified). Semi-skimmed milk was defined as milk containing 0.5-2.5% fat, and skimmed milk was defined as having <0.5% fat content. Milk subtype information was unavailable in Norway, and only partially available in Germany, Greece (both whole-fat milk only), and three Italian centres (Florence, Varese, Turin; whole-fat and semi-skimmed milks only). Cheese included all kinds of fresh, fermented, and matured cheese. Yoghurt included natural and flavoured in all cohorts, and additionally fermented milk in Sweden, Norway, and Denmark. Intakes of calcium were obtained from the EPIC Nutrient Data Base (ENDB), in which the nutritional composition of foods across the different countries has been standardized [23].
Lifestyle questionnaires were used to obtain information on education (used as a proxy for socioeconomic status), smoking status and intensity, alcohol consumption, and physical activity levels. Height and weight were measured at the baseline examination in all centres apart from part of Oxford, and all of the Norway and France sub-cohorts, where measurements were self-reported via the lifestyle questionnaire [20].
Ascertainment of colorectal cancer incidence
Population cancer registries were used in Denmark, Italy, the Netherlands, Norway, Spain, Sweden and the United Kingdom to identify incident cancer diagnoses. In France, Germany and Greece cancer cases during follow-up were identified by a combination of methods including: health insurance records, cancer and pathology registries, and by active follow-up directly through study participants or through next-of-kin. Complete follow-up censoring dates varied amongst centres, ranging between 2005 and 2010.
Cancer incidence data were coded using the 10 th Revision of the International Classification of Diseases (ICD-10) and the second revision of the International Classification of Disease for Oncology (ICDO-2). Proximal colon cancer included those within the caecum, appendix, ascending colon, hepatic flexure, transverse colon, and splenic flexure (C18.0-18.5). Distal colon cancer included those within the descending (C18.6) and sigmoid (C18.7) colon. Overlapping (C18.8) and unspecified (C18.9) lesions of the colon were grouped among colon cancers only. Cancer of the rectum included cancer occurring at the recto sigmoid junction (C19) and rectum (C20).
Statistical analysis
Hazard ratios (HRs) and 95% confidence intervals (CIs) were estimated using Cox proportional hazards models. Age was the primary time variable in all models. Time at entry was age at recruitment. Exit time was age at whichever of the following came first: colorectal cancer diagnosis, death, or the last date at which follow-up was considered complete in each centre. To control for differing follow-up procedures, questionnaire design, and other differences across centres, models were stratified by study centre. Models were also stratified by sex and age at recruitment in 1-year categories. Possible non-proportionality was assessed using an analysis of Schoenfeld residuals, [24] with no evidence of nonproportionality being detected.
Dietary intakes were modelled using either quintiles defined across cohort participants (total milk, total dairy and calcium); predefined categories (whole-fat, semi-skimmed, and skimmed milks: non-consumers, <100, 100-199, 200-299, ≥300 g/day); and a predefined low intake reference category and quartiles defined across the remaining participants (cheese reference category = <5 g/day; yoghurt reference category = non-consumers). Intakes were also modelled as continuous variables, with HR expressed per increments of: 200 g/day for milk; 100 g/day for yoghurt; 50 g/day for cheese; 400 g/day for total dairy intake, and 200 mg/day for calcium. Trend tests across intake categories were calculated by assigning the median value of each intake quintile/category and modelling as continuous terms into Cox regression models.
Analyses for colorectal, colon, proximal colon, distal colon, and rectal cancers were conducted for both sexes combined as no interactions by sex were observed for intakes of total dairy products (P = 0.26), milk (P = 0.28), cheese (P = 0.58), yoghurt (P = 0.51), and dietary calcium (P = 0.11). The results by sex are in Tables S1, S2, S3, and S4 in File S1. All models were adjusted for total energy intake, using the standard model, to obtain isocaloric risk estimates and partly control for measurement error of dairy products and calcium intake estimates. All models were additionally adjusted for: body mass index (BMI; kg/m2; continuous); physical activity (inactive, moderately inactive, moderately active, active, or missing); smoking status and intensity (never; current, 1-15 cigarettes per day; current, 16-25 cigarettes per day; current, 25+ cigarettes per day; former, quit ≤10 years; former, quit 11-20 years; former, quit 20+ years; current, pipe/cigar/occasional; current/former, missing; or unknown); education level (none/primary school completed, technical/professional school, secondary school, longer education - including university, or unknown); menopausal status (premenopausal, postmenopausal, perimenopausal/unknown menopausal status, or surgical postmenopausal); ever use of oral contraceptive (yes, no, or unknown); ever use of menopausal hormone therapy (yes, no, or unknown); and intakes of alcohol (yes or no; continuous, g/day), red and processed meats, and fibre (both continuous, g/day). Finer adjustment for body shape was attempted by also controlling for waist circumference in a subset of the cohort for which measurements were available. When included in the multivariable models, instead of, or with BMI, the risk estimates were virtually unchanged; and accordingly, we adjusted solely for BMI. In the analyses for whole-fat, semi-skimmed, and skimmed milk, the models included the covariates as detailed above, plus additional adjustment for the other milk subtypes. Similarly, the dairy and non-dairy calcium analyses were mutually adjusted for one another.
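A simplified sketch of this kind of model specification is shown below, using the Python lifelines package on simulated data; it uses follow-up time rather than age as the time axis, includes only a handful of covariates, and is not the SAS/Stata code used in the actual analysis.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(5)
n = 5000

# Simulated analysis table: follow-up time, event indicator, exposure, covariates,
# and a centre identifier used only to stratify the baseline hazard.
df = pd.DataFrame({
    "followup_years": rng.exponential(11.0, n).clip(0.1, 15.0),
    "crc_event": rng.binomial(1, 0.02, n),
    "milk_200g_units": rng.gamma(2.0, 0.8, n),   # total milk intake / 200 g/day
    "energy_kcal": rng.normal(2100, 450, n),
    "bmi": rng.normal(26, 4, n),
    "fibre_g": rng.normal(22, 7, n),
    "centre": rng.integers(0, 8, n),
})

cph = CoxPHFitter()
cph.fit(df,
        duration_col="followup_years",
        event_col="crc_event",
        strata=["centre"])          # separate baseline hazard for each centre
cph.print_summary()                 # hazard ratios = exp(coef) per unit of each covariate
```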
To determine whether the dietary calcium-colorectal cancer association differed according to anthropometric, lifestyle, and dietary characteristics, we included interaction terms (multiplicative scale) in separate models. The statistical significance of the cross-product terms were evaluated using the likelihood ratio test.
Cox proportional hazards restricted cubic spline models were used to explore possible deviation from linearity in the calcium-colorectal cancer relationship, with five knots specified at the median of each quintile of intake [25]. Heterogeneity of associations across anatomical cancer sub-sites was assessed by calculating χ2 statistics. The heterogeneity across countries was explored by taking a meta-analytic approach [26]. To evaluate possible reverse causality, cases diagnosed within the first 2 and 5 years of follow-up were excluded from the analyses.
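The restricted cubic spline design can be written down directly from the knot locations. The sketch below builds a Harrell-type spline basis with five knots; the normalisation convention and the simulated calcium intakes are assumptions for illustration, not the exact specification used in the study.

```python
import numpy as np

def restricted_cubic_spline_basis(x, knots):
    """Harrell-style restricted cubic spline basis (linear tails).

    Returns an (n, k-1) design matrix: x itself plus k-2 non-linear terms,
    where k is the number of knots."""
    x = np.asarray(x, dtype=float)
    t = np.asarray(knots, dtype=float)
    k = t.size
    scale = (t[-1] - t[0]) ** 2          # common normalisation of the cubic terms

    def pos3(u):
        return np.clip(u, 0.0, None) ** 3

    cols = [x]
    for j in range(k - 2):
        term = (pos3(x - t[j])
                - pos3(x - t[k - 2]) * (t[k - 1] - t[j]) / (t[k - 1] - t[k - 2])
                + pos3(x - t[k - 1]) * (t[k - 2] - t[j]) / (t[k - 1] - t[k - 2]))
        cols.append(term / scale)
    return np.column_stack(cols)

# Example: simulated calcium intake (mg/day) with 5 knots at the quintile medians.
rng = np.random.default_rng(6)
calcium = rng.gamma(6.0, 160.0, 2000)
knots = np.quantile(calcium, [0.1, 0.3, 0.5, 0.7, 0.9])
basis = restricted_cubic_spline_basis(calcium, knots)
print(basis.shape)   # (2000, 4): x plus 3 non-linear terms for 5 knots
```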
To improve comparability of data across study centres and to partially correct the relative risk estimates for the measurement error of dietary intakes, a linear regression calibration model was used utilizing the 24-hdr taken at baseline from a subset of the cohort (n = 34,426 in this analysis) [27,28]. The 24-hdr were regressed on dietary questionnaire values, with adjustment for the same list of covariates detailed above, and further control for the week day and season of recall measurements. Country and sexspecific calibration models were used to obtain individual calibrated values of dietary exposure for all participants. Cox proportional hazards regression models were then applied using the calibrated values for each participant on a continuous scale. The standard error of the de-attenuated coefficients was corrected through bootstrap sampling. The P-value for the trend of the deattenuated coefficients was calculated by dividing the de-attenuated coefficient by the bootstrap-derived standard error and approximating the standardized normal distribution. (29).
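A stripped-down illustration of the regression calibration idea is given below; it uses a linear outcome model in place of the Cox model, simulates both the questionnaire and the 24-hour recall measurements, and bootstraps only the analysis sample, so it is a conceptual sketch rather than the country- and sex-specific procedure applied in EPIC.

```python
import numpy as np

rng = np.random.default_rng(7)
n, n_cal = 20000, 1500

# True (unobservable) intake, an error-prone questionnaire measure (FFQ),
# and a less biased 24-hour recall available only in a calibration subsample.
true_intake = rng.gamma(4.0, 60.0, n)
ffq = 0.7 * true_intake + rng.normal(0, 150, n)          # systematic + random error
recall = true_intake + rng.normal(0, 40, n)              # reference-type measurement
outcome = 0.01 * true_intake + rng.normal(0, 3, n)       # toy continuous outcome

cal_idx = rng.choice(n, n_cal, replace=False)

def calibrated_coefficient(sample_idx):
    # 1) Calibration model: regress the 24-h recall on the FFQ in the subsample.
    a, b = np.polyfit(ffq[cal_idx], recall[cal_idx], 1)
    # 2) Predict calibrated intake for everyone in the analysis sample.
    calibrated = a * ffq[sample_idx] + b
    # 3) Outcome model on the calibrated values (a linear stand-in for the Cox model).
    slope, _ = np.polyfit(calibrated, outcome[sample_idx], 1)
    return slope

naive_slope, _ = np.polyfit(ffq, outcome, 1)
deattenuated = calibrated_coefficient(np.arange(n))

# Bootstrap the analysis sample to obtain a standard error for the corrected slope.
boot = [calibrated_coefficient(rng.choice(n, n, replace=True)) for _ in range(200)]
print(f"naive slope: {naive_slope:.4f}")
print(f"calibrated slope: {deattenuated:.4f} +/- {np.std(boot):.4f}")
```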
Statistical tests used in the analysis were all two-sided and a P-value of <0.05 was considered statistically significant. Analyses were conducted using SAS version 9.1 and Stata version 11.0.
Results
After a mean (SD) follow-up of 11.0 (2.8) years, 4,513 colorectal cancer cases were documented amongst the 477,122 participants. Of the 4,513 colorectal cancer cases, 2,868 were colon tumours (1,298 proximal; 1,266 distal and 304 overlapping or unspecified), and 1,645 were rectal tumours. The total person-years and distribution of colorectal cancer cases by country are shown in Table 1. The crude colorectal cancer incidence rates for men and women were 12 and 7 cases per 10,000 person-years respectively. Intakes of total dairy products were relatively low in Greece and Germany and higher in Spain, the Netherlands, and Sweden (men) cohorts. The lowest calcium intakes were reported in the Italian cohort, with the highest in the Netherlands, UK (men), and Germany (women). A higher proportion of current smokers were observed amongst men and women in the lowest intake quintiles of dairy products; whilst a greater proportion of physically active participants were observed amongst men and women in the highest intake quintiles ( Table 2). Compared to those in the lower intake quintiles, men and women with higher reported dairy intakes tended to have lower BMIs, higher education level, and reported lower intakes of alcohol, and higher intakes of dietary fibre ( Table 2).
Total milk and milk subtypes by fat content
Total milk was similarly inversely related to the cancer risk across all locations of the bowel (colon vs. rectal P Heterogeneity = 0.83; distal colon vs. proximal colon P Heterogeneity = 0.76) ( Table 3). In calibrated models, colorectal cancer risk was 7% lower for each 200 g/day higher intake of total milk. Over 17% of participants reported consuming more than one milk subtype. The linear inverse associations for colorectal, colon, and rectal cancers were of similar strength for whole-fat and skimmed milk, but there were no significant associations for semi-skimmed milk (Table 4). However, in sensitivity analyses, when the models included only sole consumers of each milk subtype, identical inverse colorectal cancer risk estimates were observed for whole-fat (HR per 200 g/ day 0.87, 95% CI: 0.79-0.95), semi-skimmed (HR per 200 g/day 0.87, 95% CI: 0.78-0.97) and skimmed milks (HR per 200 g/day 0.87, 95% CI: 0.76-0.99) (data not tabulated).
Cheese
Cheese consumption was inversely associated with colorectal cancer in the categorical model (Table 3). The association was significant for colon (≥56 g/day vs. <5 g/day: HR 0.83, 95% CI: 0.71-0.97; P-trend = 0.047) but not rectal cancer, although this difference was not significant (P-heterogeneity = 0.39). In the linear calibrated models, non-significant inverse associations were observed for colorectal, colon and rectal cancers. For proximal colon cancer, the highest consumers (>56 g/day) had a 27% reduced risk (95% CI: 0.58-0.93) compared to those consuming <5 g/day, but in the calibrated model, this association was not significant. No association was observed for tumours in the distal region of the colon, and the heterogeneity in association by colonic region was not statistically significant (P-heterogeneity = 0.82).
Yoghurt
Yoghurt intake was significantly inversely related to colorectal cancer risk in categorical models (≥109 g/day vs. non-consumers, HR 0.90, 95% CI: 0.81-0.99; P-trend = 0.043) (Table 3). The inverse association was restricted to the colon and not observed for tumours in the rectum, although the difference was not statistically significant (P-heterogeneity = 0.79). Within the colon the difference in association across the distal and proximal regions was non-significant (P-heterogeneity = 0.29). No associations were observed in the linear calibrated models for cancers across all bowel locations. After adjustment for dietary calcium intake the inverse association for colorectal cancer using the categorical model was no longer significant (≥109 g/day vs. non-consumers, HR 0.94, 95% CI: 0.85-1.04; P-trend = 0.33; data not tabulated).
Total dairy intake
Total dairy intake was significantly inversely associated with colorectal cancer risk (≥490 g/day vs. <134 g/day, HR 0.77, 95% CI: 0.70-0.86; P-trend <0.001) (Table 3). In calibrated models, each 400 g/day higher intake of total dairy products was associated with a 14% lower risk. The inverse association was of similar magnitude for colon and rectal cancer (P-heterogeneity = 0.72); and within the colon, there was no evidence of heterogeneity across distal and proximal regions (P-heterogeneity = 0.66).
Dietary calcium
For dietary calcium, similar strength inverse associations were observed across all locations of the colorectum (colon vs. rectal P-heterogeneity = 0.56; distal colon vs. proximal colon P-heterogeneity = 1.00) (Table 5). There was no deviation from linearity for the relationship between dietary calcium and colorectal cancer in the restricted cubic spline model (P = 0.43) (data not shown). Calcium intake from dairy foods was inversely associated with cancer risk across all locations of the bowel. When calcium and milk were included in the same models, the inverse associations for milk weakened and became non-significant, but the significant inverse associations for calcium remained (data not shown). Dietary calcium from non-dairy sources was not inversely associated with colorectal cancer risk. The association between dietary calcium intake and risk of colorectal cancer did not differ by BMI (P = 0.56), waist circumference (men P = 0.74; women P = 0.64), physical activity (P = 0.26), smoking status (P = 0.37), alcohol consumption (P = 0.75), or intakes of red and processed meat (P = 0.50) and fibre (P = 0.65) (data not tabulated).
Between country heterogeneity and inclusion of preclinical disease
There was evidence of significant heterogeneity by country for total dairy products (P = 0.034) (Figure S1 in File S1), although risk estimates ≤1 were observed in most countries. No associations were observed in the Sweden and Denmark cohorts. Non-significant between-country heterogeneity was observed for intakes of dietary calcium (P = 0.60; Figure S2 in File S1), total milk (P = 0.13), cheese (P = 0.64), and yoghurt (P = 0.12).
Excluding the participants with less than 2 and 5 years of follow-up (including 502 and 1,483 colorectal cancer cases respectively) from the total dairy, total milk, cheese, yoghurt, and calcium intake analyses resulted in negligible differences in the colorectal cancer associations (data not shown).
Discussion
In this analysis of the EPIC cohort, after a mean follow-up of 11 years where 4,513 cases accrued, higher intakes of all subtypes of milk, cheese, yoghurt, total dairy products and dietary calcium from dairy sources were associated with reduced colorectal cancer risk. Overall, our results provided no evidence for divergent relationships for high and low-fat dairy products with colorectal cancer risk. The inverse association we observed for total milk consumption was similar to what was reported by both the Pooling Project of cohort studies, and a recent systematic review [7,8]. Few prospective studies have previously investigated the associations for milk by fat content. In the Adventist Health Study, a stronger inverse association was reported for non-fat milk consumers compared to consumers of milks containing higher fat [10]. In our larger analysis, similar strength inverse associations were observed for all milk subtypes, refuting the notion that the milk-colorectal cancer association differs according to fat content, at least in the range of intakes recorded within EPIC.
The inverse cheese-colorectal cancer association observed in the categorical models provides further evidence that the fat content of dairy products does not impair any possible anti-carcinogenic role. However, this inverse association was not replicated in the linear calibrated model, which yielded a non-significant lower risk [11-13]. For yoghurt, an inverse colorectal cancer association in the categorical model was also not replicated in the linear calibrated model. Some evidence suggests that lactic acid bacteria contained within yoghurt products may protect against colorectal cancer [29]. Recently, an analysis of the EPIC-Italy cohorts reported a 35% reduced colorectal cancer risk - after adjustment for calcium intake - amongst participants who consumed more than 25 g/day of yoghurt compared to non-consumption (less than 1 g/day) [15]. When we additionally adjusted for calcium intake, the inverse colorectal association in the categorical model disappeared. However, our results do not rule out the lactic acid hypothesis, as the types of yoghurt consumed across EPIC countries may differ in lactic acid content, and this information may not have been captured within our study.
[Table footnotes: Basic model - Cox regression using total energy intake (continuous), stratified by age (1-year categories), sex, and centre. Multivariable model - additionally adjusted for body mass index, physical activity index, smoking status and intensity, education status, ever use of contraceptive pill, ever use of menopausal hormone therapy, menopausal status, alcohol consumption, and intakes of red and processed meat and fibre. Some analyses exclude Norway; Norway, Germany, and Greece; or additionally Florence, Varese, and Turin (Italy). *Total number of colorectal cancer cases across intake categories.]
We observed consistent inverse associations across all cancer sub-sites for dietary calcium intake, in line with the majority of published cohort studies [8,11,12,16,17]. Some studies have reported a threshold level for calcium intake (approximately 1,000 to 1,400 mg/day), above which reductions in colorectal cancer risk are not observed [8,11]. In our analysis we did not observe a threshold association at any level of intake, or any departure from linearity. Our inverse associations were limited to dairy sources of calcium, as we observed either null or weak non-statistically significant associations in the non-dairy calcium models. Other prospective studies have reported no association [18,30] or increased risk [16] for non-dairy calcium intake with colorectal and colon cancers. A possible explanation for the non-inverse associations for non-dairy calcium could be that plant sources of calcium - the main contributors to non-dairy calcium intake amongst EPIC participants - contain oxalate and phytate, which have been shown to inhibit calcium absorption [31]. Across all EPIC centres, milk contributes most to the consumption of total dairy products [32]. Lactose and casein in milk may increase the bioavailability of calcium, which could also explain the inverse associations we observed for dairy calcium [33]. The primary anticarcinogenic component contained within dairy foods is believed to be calcium [29]. Laboratory studies have shown that calcium can induce apoptosis in colonic epithelial cells [34] and alter colonic K-ras gene mutations [35]. Animal and human intervention studies have shown that calcium impacts upon colonic cell differentiation: indirectly, by binding to available bile acids and fatty acids, suppressing their ability to modify colonic cell proliferation [36,37]; and directly, by suppressing colonic epithelial cell proliferation and inducing terminal differentiation [38]. Evidence from clinical trials suggests that calcium supplementation reduces the recurrence of colorectal adenoma [39]. Beyond the calcium content of dairy products, other constituents contained within these products may explain the inverse associations observed. For instance, lactoferrin, vitamin D in fortified dairy products, and certain fatty acids, such as butyric acid, have been linked with possible beneficial roles against colorectal cancer [29]. However, isolating the influence of individual food components present simultaneously in the same foods is difficult.
The public health implications of our results are complicated by the contrasting associations between calcium intake and prostate cancer. Dietary calcium has been consistently associated with increased prostate cancer risk, and the WCRF/AICR 2007 Expert Report judged it a "probable" cause of the disease [40]. Within EPIC, a 300 mg/day intake of dietary calcium was previously associated with a 9% increased risk of prostate cancer [41]. In our analysis, an equivalent daily intake amongst men and women would be associated with a statistically significant 7% reduction in colorectal cancer risk. At present, the available evidence for the divergent associations between cancer sites has not been considered convincing enough to justify potential sex-specific calcium and dairy product intake recommendations.
Strengths of our study include its large-scale prospective design, the large number of colorectal cancer cases, and the possibility of controlling for the main potential confounders. However, information on past bowel cancer screening and previous endoscopy procedures was unknown; although previous studies have observed unchanged inverse calcium-colon cancer relationships when the multivariable models were additionally adjusted for endoscopy history [30]. A further limitation was that intake of calcium supplements could not be included in our analysis; although other large cohort studies have observed only minor differences in associations between total calcium intakes (supplements plus diet) compared solely to dietary sources [8,17]. In our study, diet was assessed through dietary questionnaires, which are subject to measurement error. Random misclassification may have thus caused an attenuation of the estimates of the diet-disease association; however, we partially corrected our estimates through regression calibration using 24-hdr data [28]. Another study limitation was that changes in diet during follow-up could not be taken into account; however, this does not appear to have influenced our conclusions since our results are consistent with those of other cohort studies, some of which used cumulative estimates of diet over time [8,30].
In conclusion, our study supports potential beneficial roles for dietary intakes of dairy products and calcium on colorectal cancer prevention. Inverse associations were observed for low-fat and high-fat dairy products; indicating that the fat content contained within dairy products does not influence this relationship.
Supporting Information
File S1 Supporting information. Figure S1. Multivariable hazard ratios and 95% confidence intervals of colorectal cancer risk by country, per 400 g/day increase in total dairy intake. Hazard ratios estimated by Cox proportional hazards models adjusting for total energy intake (continuous), body mass index (continuous), physical activity index (inactive, moderately inactive, moderately active, active, or missing), smoking status and intensity (never; current, 1-15 cigarettes per day; current, 16-25 cigarettes per day; current, 16+ cigarettes per day; former, quit #10 years; former, quit 11-20 years; former, quit 20+ years; current, pipe/ cigar/occasional; current/former, missing; unknown), education status (none, primary school completed, technical/professional school, secondary school, longer education including university, or not specified), ever use of contraceptive pill (yes, no, or unknown), ever use of menopausal hormone therapy (yes, no, or unknown), menopausal status (premenopausal, postmenopausal, perimenopausal/unknown menopausal status, or surgical postmenopausal), alcohol consumption (yes or no; and continuous) and intakes of red and processed meat and fibre (both continuous), and stratified by age (1-year categories), sex, and centre. Figure S2. Multivariable hazard ratios and 95% confidence intervals of colorectal cancer risk by country, per 200 mg/day increase in total dietary calcium (B). Hazard ratios estimated by Cox proportional hazards models adjusting for total energy intake (continuous), body mass index (continuous), physical activity index (inactive, moderately inactive, moderately active, active, or missing), smoking status and intensity (never; current, 1-15 cigarettes per day; current, 16-25 cigarettes per day; current, 16+ cigarettes per day; former, quit #10 years; former, quit 11-20 years; former, quit 20+ years; current, pipe/ cigar/occasional; current/former, missing; unknown), education status (none, primary school completed, technical/professional school, secondary school, longer education including university, or not specified), ever use of contraceptive pill (yes, no, or unknown), ever use of menopausal hormone therapy (yes, no, or unknown), menopausal status (premenopausal, postmenopausal, perimenopausal/unknown menopausal status, or surgical postmenopausal), alcohol consumption (yes or no; and continuous) and intakes of red and processed meat and fibre (both continuous), and stratified by age (1-year categories), sex, and centre. Table S1. Multivariable hazard ratios (95% confidence intervals) of colorectal cancer risk in men by dairy product consumption categories. Table S2. Multivariable hazard ratios (95% confidence intervals) of colorectal cancer risk in women by dairy product consumption categories. Table S3. Multivariable hazard ratios (95% confidence intervals) of colorectal cancer risk in men by dietary calcium intake categories. Table S4. Multivariable hazard ratios (95% confidence intervals) of colorectal cancer risk in women by dietary calcium intake categories. (DOCX)
|
v3-fos-license
|
2020-01-23T09:21:14.172Z
|
2019-01-01T00:00:00.000
|
212626200
|
{
"extfieldsofstudy": [
"Chemistry"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://newprairiepress.org/cgi/viewcontent.cgi?article=7874&context=kaesrr",
"pdf_hash": "163e26b17bde657dc1cca17f572b4e3675c6c315",
"pdf_src": "ScienceParsePlus",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46648",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"sha1": "6aae4d007f919b6135943af647b7e1a60bffb98f",
"year": 2019
}
|
pes2o/s2orc
|
Beta-Hydroxybutyrate Alters the mRNA Cytokine Profile from Mouse Macrophages Challenged with Streptococcus uberis
Summary
The objective of this study was to determine if β-hydroxybutyrate (BHB) altered inflammatory responses in macrophages challenged with a common mastitis pathogen, Streptococcus uberis. Mouse macrophages (RAW 264.7 line) were cultured either in the presence or absence of BHB for 24 h, and then challenged or not with S. uberis. Relative transcript abundance of cell membrane receptors (TLR2 and GPR109a), cytokines (IL-1β, IL-10, TNFα, and TGFβ), and chemokines (CXCL2 and CCL5) was determined using quantitative real-time polymerase chain reaction (qPCR) and normalized against the geometric mean of HPRT and B2M. Streptococcus uberis activated the macrophages, as noted by greater transcript abundance of the analyzed genes. Intriguingly, S. uberis increased GPR109a mRNA abundance, a receptor that is activated by BHB. Consequently, BHB dose-dependently increased transcript abundance of the pro-inflammatory cytokine IL-1β and the anti-inflammatory cytokine IL-10 but had no effect on TNFα or TGFβ. Moreover, BHB increased mRNA abundance of the chemokines CXCL2 and CCL5. These data suggest a dysregulated immune response toward S. uberis due to BHB treatment, similar to what is seen in transition dairy cows. Future studies should be conducted in vivo to test the effect of BHB on immune function during an intramammary challenge.
Introduction
Mastitis is the most common and costly disease in the dairy industry, impairing animal welfare and decreasing milk production. The incidence of clinical mastitis is dramatically greater during the first few weeks after calving than in the rest of the lactation. At the beginning of lactation, a depression of feed intake occurs simultaneously with an increase in energy demand, resulting in metabolic stress and negative energy balance. Consequently, dairy cattle mobilize fat reserves, liberating non-esterified fatty acids (NEFA). These fatty acids are transported to the liver for energy production. However, not all NEFA are completely oxidized, resulting in the production of β-hydroxybutyrate (BHB), a major ketone body. This ketone body has long been associated with disease in early lactation, but it is important to recognize that association does not discriminate between causative disease mediators and adaptive responses that help resolve the disease. The association between ketosis and mastitis is likely due to the decreased function of host innate immune cells exposed to BHB. When neutrophils were cultured with BHB at various concentrations (0.1 to 8.0 mM), there was a stepwise reduction in extracellular killing of bacteria. Additionally, leukocytes from ketotic cows had a reduced ability to migrate toward an inflammatory response relative to those isolated from non-ketotic cows.
Streptococcus uberis is a common environmental mastitis pathogen that is responsible for a large proportion of mastitis during the first month of lactation, when negative energy balance is exacerbated. Therefore, it seems likely that BHB may be impairing immune responses toward this pathogen in early lactation dairy cattle. Hence, the objective of this experiment was to examine the effect of BHB on inflammatory mediators from macrophages during a Streptococcus uberis challenge. We hypothesized that BHB would attenuate inflammatory responses in a dose-dependent manner.
Bacterial Strain and Conditions
Streptococcus uberis (kindly provided by Dr. Petersson-Wolfe, Department of Dairy Science, Virginia Tech) was originally isolated from a dairy cow with mastitis and stored in 10% skim milk at -80°C. Bacteria were streaked on an esculin blood agar plate and incubated for 24 hours. Five colonies were then cultured in Todd-Hewitt broth and incubated for 7 hours at 37°C on an orbital shaker. Bacterial suspension was pelleted with centrifugation, washed with sterile phosphate-buffered saline, and resuspended in Dulbecco's Modified Eagle Medium (DMEM) containing 1% L-glutamine and 10% heat-inactivated fetal bovine serum. Serial dilutions were used to achieve the desired concentration of colony-forming units for challenge, and challenge inoculum concentration was verified using drop plating onto esculin blood agar.
Cell Culture Conditions and Treatments
Mouse macrophages (RAW 264.7 line) were cultured in DMEM supplemented with 1% L-glutamine, 10% heat-inactivated fetal bovine serum, and 0.2% penicillin-streptomycin. Twenty-four-well plates (n = 8 wells per treatment group) were seeded with 1 × 10^5 cells and incubated for 24 hours at 37°C and 5% CO2. Cells were then either treated with β-hydroxybutyrate (Sigma Aldrich) at various concentrations (0 mM, 0.6 mM, 1.2 mM, or 1.8 mM) or not for 24 hours to mimic ketosis. To maintain a neutral pH in culture media, β-hydroxybutyrate was added as a sodium salt, and a treatment group with 1.8 mM added NaCl was included as an osmotic control. After the 24-hour incubation step, the medium was removed and fresh medium without antibiotics, containing BHB at various concentrations (or not) and with or without 5 × 10^5 CFU/mL of S. uberis, was added for 6 hours. Cells were then lysed and stored at -80°C.
RNA Isolation and qPCR
Total RNA was isolated from cell lysates using the RNeasy kit (Qiagen) and was quantified using spectroscopy (NanoDrop Technologies Inc., Wilmington, DE). One microgram of total RNA was used as template for the reverse transcriptase reaction using random primers. Quantitative real-time PCR was performed in duplicate with 200 nM gene-specific forward and reverse primers with real-time SYBR green fluorescent detection (7500 Fast Real-Time PCR System, Applied Biosystems). Primers were designed from mouse GenBank sequences to amplify an intron-spanning region of each gene. Relative mRNA abundance was quantified by the 2^-ΔCt method with the geometric mean of HPRT and B2M used to normalize values.
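As a minimal sketch of that quantification step (assuming duplicate-averaged Ct values are already in hand; the numbers below are hypothetical), the 2^-ΔCt calculation with a two-gene reference works as follows. Note that the geometric mean of the reference genes' expression corresponds to the arithmetic mean of their Ct values:

def relative_abundance(ct_target, ct_hprt, ct_b2m):
    """Relative mRNA abundance by the 2^-dCt method, normalizing the target Ct
    against the geometric mean of the two reference genes (HPRT and B2M).
    On the Ct scale, that geometric mean equals the arithmetic mean of the Ct's."""
    ct_reference = (ct_hprt + ct_b2m) / 2.0
    delta_ct = ct_target - ct_reference
    return 2.0 ** (-delta_ct)

# Hypothetical duplicate-averaged Ct values for one well
print(relative_abundance(ct_target=24.8, ct_hprt=21.3, ct_b2m=19.9))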
Statistical Analyses
Statistical analyses were conducted using PROC GLIMMIX in SAS v. 9.4 (SAS Inst., Cary, NC). Orthogonal contrasts were used to test the effect of S. uberis and the overall effect of BHB within the S. uberis-challenged treatment groups, and linear and quadratic contrasts were used to test BHB dose responses. To meet the assumption of normality (PROC UNIVARIATE), all response variables required natural logarithmic transformation; least square means and standard errors were back-transformed. Observations with a studentized residual greater than 3 in absolute value were classified as outliers and removed from the analysis. Significance was declared at P ≤ 0.05.
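The following is a minimal sketch of linear and quadratic dose-response contrasts across the four equally spaced BHB doses (0, 0.6, 1.2, and 1.8 mM) within the challenged groups, using standard orthogonal polynomial coefficients and a pooled error term. It is a generic illustration in Python with simulated log-transformed data, not a reproduction of the PROC GLIMMIX analysis:

import numpy as np
from scipy import stats

def contrast_test(groups, coeffs):
    """t-test for a linear combination of group means using a pooled error
    variance (one-way layout); groups is a list of 1-D arrays."""
    means = np.array([g.mean() for g in groups])
    ns = np.array([len(g) for g in groups])
    sse = sum(((g - g.mean()) ** 2).sum() for g in groups)  # pooled within-group SS
    df = int(ns.sum() - len(groups))
    mse = sse / df
    estimate = float(np.dot(coeffs, means))
    se = float(np.sqrt(mse * np.sum(np.array(coeffs) ** 2 / ns)))
    t = estimate / se
    p = 2 * stats.t.sf(abs(t), df)
    return estimate, t, p

# Orthogonal polynomial coefficients for 4 equally spaced doses (0, 0.6, 1.2, 1.8 mM)
linear = [-3, -1, 1, 3]
quadratic = [1, -1, -1, 1]

rng = np.random.default_rng(0)
# Hypothetical log-transformed relative abundances, n = 8 wells per group
groups = [rng.normal(loc=m, scale=0.3, size=8) for m in (0.0, 0.2, 0.4, 0.6)]
for name, c in (("linear", linear), ("quadratic", quadratic)):
    est, t, p = contrast_test(groups, c)
    print(f"{name}: estimate={est:.2f}, t={t:.2f}, p={p:.3f}")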
Streptococcus uberis Effects
As expected, Streptococcus uberis induced immune activation in macrophages. This was evident through increased mRNA abundance of a pathogen recognition receptor, toll-like receptor 2 (TLR2, P = 0.03), with downstream increases in both pro- and anti-inflammatory cytokine mRNA abundance. In particular, S. uberis increased mRNA abundance (all P < 0.01) of pro-inflammatory cytokines interleukin (IL)-1β and tumor necrosis factor α (TNFα), as well as anti-inflammatory cytokines IL-10 and transforming growth factor β (TGFβ). Moreover, S. uberis increased mRNA abundance of two chemokines (CXCL2 and CCL5), which are proteins used to attract immune cells into inflamed tissue. Lastly, S. uberis increased mRNA abundance of GPR109a (P < 0.01), a receptor for BHB that has known anti-inflammatory effects when activated. This is intriguing, as these data could imply that S. uberis promotes immunological tolerance, thus impairing the immune system's ability to kill this pathogen. Regardless, an increase in GPR109a should result in macrophages that are more responsive to BHB ligation.
Beta-Hydroxybutyrate Effects
Beta-Hydroxybutyrate increased mRNA abundance of both pro- and anti-inflammatory cytokines, although these data should be interpreted with caution. First, BHB dose-dependently increased mRNA abundance of the potent anti-inflammatory cytokine IL-10 (overall BHB effect, P = 0.01; linear BHB effect, P < 0.01) when compared to S. uberis-challenged cells. Yet, BHB also dose-dependently increased mRNA abundance of the pro-inflammatory cytokine IL-1β (overall BHB effect, P < 0.01; linear BHB effect, P < 0.01). Interleukin-1β data should be interpreted cautiously; this cytokine is post-translationally regulated more so than other cytokines, so a future study must confirm whether the active, secreted protein form of this cytokine was also increased by BHB. An increase in both IL-1β and IL-10 is likely due to increased abundance of the receptor that controls their expression, TLR2 (overall BHB effect, P < 0.01; linear BHB effect, P = 0.01). Lastly, BHB increased mRNA abundance of the chemokines CXCL2 (overall BHB effect, P = 0.01; linear BHB effect, P = 0.04) and CCL5 (overall BHB effect, P = 0.03; linear BHB effect, P = 0.07). Again, these data could imply a more robust immune response, as a greater abundance of chemokines should result in more immune cells migrating into inflamed tissue in vivo. However, in a similar study, BHB also increased chemokine mRNA abundance, yet this did not result in an increase in immune cells migrating into tissue. Thus, an increase in chemokine mRNA abundance may simply be a compensatory response in an attempt to overcome a reduction in their efficacy. Future studies should examine protein abundance of these cytokines to ensure the transcriptional effects from BHB treatment indeed alter cytokine and chemokine protein profiles.
Conclusions
Streptococcus uberis is responsible for a large proportion of mastitis during the first month of lactation. In the present study, S. uberis increased GPR109a mRNA abundance, a receptor that is ligated by BHB. This resulted in a dose-dependent increase in mRNA abundance of both pro- and anti-inflammatory cytokines, including IL-1β and IL-10. An increase in the abundance of these cytokines could be indicative of the immune dysfunction that is typically seen in transition dairy cows. Future studies should be conducted in vivo to test the effect of BHB on immune function during an intramammary challenge.
Figure 1. Effect of S. uberis and BHB on receptor (TLR2, A; GPR109a, B) transcript abundance in RAW 264.7 mouse macrophages. Streptococcus uberis challenge increased mRNA of the receptor for identification of Gram-positive pathogens, TLR2 (P = 0.03), as well as mRNA abundance of the receptor for BHB, GPR109a (P < 0.01). Beta-Hydroxybutyrate treatment linearly increased TLR2 mRNA abundance (P = 0.01) when compared to cells treated with just S. uberis. Treatment groups include: control (CON), OC (osmotic control), OC + S. uberis, 0.6 mM BHB + S. uberis, 1.2 mM BHB + S. uberis, and 1.8 mM BHB + S. uberis.
Figure 2. Effect of S. uberis and BHB on pro-inflammatory (IL-1β, A; TNFα, B) and anti-inflammatory (IL-10, C; TGFβ, D) cytokine transcript abundance in RAW 264.7 mouse macrophages. Streptococcus uberis challenge activated the macrophages, increasing both pro- and anti-inflammatory cytokine mRNA abundance (all P < 0.01). Beta-Hydroxybutyrate treatment linearly increased IL-1β (P < 0.01) and IL-10 (P < 0.01) mRNA abundance when compared to cells treated with just S. uberis; however, no effect of BHB was found on either TNFα or TGFβ. Treatment groups include: control (CON), OC (osmotic control), OC + S. uberis, 0.6 mM BHB + S. uberis, 1.2 mM BHB + S. uberis, and 1.8 mM BHB + S. uberis.
Figure 3. Effect of S. uberis and BHB on chemokine (CXCL2, A; CCL5, B) transcript abundance in RAW 264.7 mouse macrophages. Streptococcus uberis challenge activated the macrophages, increasing both CXCL2 and CCL5 mRNA abundance (both P < 0.01). Beta-Hydroxybutyrate treatment linearly increased CXCL2 (P = 0.04) as well as increased CCL5 (P = 0.03) mRNA abundance when compared to cells treated with just S. uberis. Treatment groups include: control (CON), OC (osmotic control), OC + S. uberis, 0.6 mM BHB + S. uberis, 1.2 mM BHB + S. uberis, and 1.8 mM BHB + S. uberis.
|
v3-fos-license
|
2017-08-15T05:32:09.625Z
|
2017-07-21T00:00:00.000
|
22073292
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fendo.2017.00164/pdf",
"pdf_hash": "e75d03c4009a741ea7cf8c2ccd785e1ad8c51641",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46650",
"s2fieldsofstudy": [
"Biology",
"Psychology"
],
"sha1": "e75d03c4009a741ea7cf8c2ccd785e1ad8c51641",
"year": 2017
}
|
pes2o/s2orc
|
Exposure to a High-Fat Diet during Early Development Programs Behavior and Impairs the Central Serotonergic System in Juvenile Non-Human Primates
Perinatal exposure to maternal obesity and high-fat diet (HFD) consumption not only poses metabolic risks to offspring but also impacts brain development and mental health. Using a non-human primate model, we observed a persistent increase in anxiety in juvenile offspring exposed to a maternal HFD. Postweaning HFD consumption also increased anxiety and independently increased stereotypic behaviors. These behavioral changes were associated with modified cortisol stress response and impairments in the development of the central serotonin synthesis, with altered tryptophan hydroxylase-2 mRNA expression in the dorsal and median raphe. Postweaning HFD consumption decreased serotonergic immunoreactivity in area 10 of the prefrontal cortex. These results suggest that perinatal exposure to HFD consumption programs development of the brain and endocrine system, leading to behavioral impairments associated with mental health and neurodevelopmental disorders. Also, an early nutritional intervention (consumption of the control diet at weaning) was not sufficient to ameliorate many of the behavioral changes, such as increased anxiety, that were induced by maternal HFD consumption. Given the level of dietary fat consumption and maternal obesity in developed nations these findings have important implications for the mental health of future generations.
Keywords: maternal, high-fat diet, obesity, anxiety, stereotypy, mental health, neurodevelopmental, cortisol, serotonin
Introduction
Developed nations have experienced a surge in the prevalence of both maternal obesity and pediatric neurodevelopmental disorders. In the United States, 64% of women of reproductive age are overweight, 35% are obese (1), and the majority of the population consumes excess dietary fat. The prevalence of obesity in pregnant women is particularly concerning, as the intrauterine and early postnatal environments are known to have a long-term impact on both the physiology and behavior of offspring. Early development is a sensitive period in which epigenetic changes can result in permanent alterations of behavioral and physiological processes. Given the prominent role that maternal nutrition and energy status play in regulating reproductive physiology, factors such as food availability, diet quality, and body weight are prime candidates for initiating epigenetic influences on offspring behavior and physiology. In epidemiologic studies, maternal obesity is associated with future risk of increased offspring Body Mass Index (BMI) (2), adiposity (3,4), and metabolic disorders (3). Furthermore, maternal obesity and consumption of a high-fat diet (HFD) are associated with increased future risk of mental health and neurodevelopmental disorders (5,6), such as attention-deficit hyperactivity disorder (ADHD) (5,7,8) and autism spectrum disorders (ASD) (9)(10)(11). Maternal obesity is also associated with childhood affective problems, such as increased risk of fear, sadness, and internalizing behavior (8,12), and is correlated with low or high birth weight which are linked to anxiety and depression during adolescence (13). Both non-human primate (NHP) and rodent studies demonstrate that chronic maternal HFD consumption produces long-term alterations in offspring's anxiety-related behaviors (6,14). It is challenging to study maternal diet in human participants due to difficulty in accurately monitoring food intake, and ethical issues related to manipulating the diet of pregnant women. Animal models have the advantage of investigator control over diet composition and elimination of many confounding genetic and environmental factors. Therefore, well-controlled animal studies are essential for exploring the specific effects of maternal diet and metabolic status on offspring behavior. NHP models are advantageous as they have complex social and mental health-related behaviors, similar developmental ontogeny of the brain and placental structure, and develop the full spectrum of metabolic disease consistent with humans. Using a NHP model of maternal HFD consumption, our group has shown that intrauterine overnutrition negatively impacts fetal development, resulting in increased activation of inflammatory cytokines in the placenta (15) and hypothalamus (16), decreased pancreatic α-cell plasticity (17), and abnormal development of the melanocortin system (16). We also documented changes in histone deacetylase activity in the liver of HFD offspring leading to decreased SIRT1 protein levels (18), suggesting NHP offspring are vulnerable to inflammation-induced epigenetic changes.
Our group further demonstrated perturbations of the NHP serotonergic system in the fetal brain and increased anxiety in infant female offspring from mothers consuming a HFD compared to infants exposed to a control diet (6). The purpose of the current study was to determine if these early alterations in behavior and brain development persist later in life. A second goal was to determine if an early intervention to a healthy diet at weaning would ameliorate maternal HFD-induced changes in behavior and brain development. In this study, we demonstrate that maternal and postweaning HFD consumption results in behavioral changes, including an increase in anxiety, that persist into the juvenile time period. These behavioral changes were associated with alterations in plasma and hair cortisol and impaired development of the central serotonin system, such as altered tryptophan hydroxylase-2 (TPH2) mRNA expression in the dorsal (DR) and median raphe (MnR) and reduced serotonin positive fibers in the medial prefrontal cortex (PFC). Importantly, this is the first study to examine the long-term impact of HFD exposure during development on behaviors related to psychopathology in a NHP species.
Materials and Methods
Animals
All animal procedures were in accordance with National Institutes of Health guidelines on the ethical use of animals and were approved by the Oregon National Primate Research Center (ONPRC) Institutional Animal Care and Use Committee. An in-depth characterization of the maternal (19) and juvenile (17,20) phenotype has been previously reported.
Dietary Information
Macronutrient composition of the control diet (CTR) (Monkey Diet no. 5000; Purina Mills) and the HFD (TAD Primate Diet no. 5LOP, Test Diet, Purina Mills) is provided in Table 1. The diets have also been described comprehensively in previous publications (21). Monkeys on the HFD were also provided with calorically dense treats (35.7% of calories from fat, 56.2% of calories from carbohydrates, and 8.1% of calories from protein) once daily. The HFD represents a typical Western diet with respect to the percent of calories provided by fat and saturated fat content.
Adult Females
Adult female Japanese macaques (Macaca fuscata) were housed in groups of 4-12 individuals (male/female ratio of 1-2/3-10) in indoor/outdoor pens and were given ad libitum access to water and either the CTR or the HFD. Mothers in our study were aged 4.1-16.1 years and weighed 6.35-17.7 kg prepregnancy. Maternal body fat was determined using dual-energy X-ray absorptiometry prior to each pregnancy. Animals were sedated with Telazol (3-8 mg/kg; i.m., Fort Dodge Animal Health, Fort Dodge, IA, USA), supplemented with Ketamine HCl (5-10 mg/kg, i.m.; Ketaset, Fort Dodge Animal Health, Fort Dodge, IA, USA), and then positioned prone on the bed of a Hologic QDR Discovery scanner (Hologic, Bedford, MA, USA). Total body scans were done in the "Adult Whole Body" scan mode. Hologic QDR software version 12.6.1 was used to calculate body composition. Maternal body weight was assessed before pregnancy and during the third trimester of pregnancy. Percent weight gain during pregnancy was calculated by subtracting prepregnancy body weight from pregnancy body weight, dividing by prepregnancy body weight and multiplying by 100. Demographic classification of maternal metabolic profiles is provided in Table 2. As expected, chronic consumption of a HFD produced elevations in percent body fat and body weight in our adult females. We noted an increase in fasting insulin and a normal fasting glucose in HFD dams. The high fasting insulin accompanied by normal fasting glucose indicates that our HFD dams are in the early stages of insulin resistance, but are not diabetic (22). Likewise, we observed an increase in insulin area under the curve (IAUC) and lower glucose area under the curve (GAUC) in HFD dams providing additional support that the HFD dams are becoming insulin resistant as they released more insulin to clear the glucose load administered during the glucose tolerance test (GTT). Several studies indicate that changes in insulin secretion occur prior to changes in blood glucose in patients that later develop type two diabetes (23,24). It is also possible that the hyperinsulinemia is a response to the high circulating fatty acids due to HFD consumption, as insulin also regulates blood fatty acid levels (25).
Juvenile Offspring
Offspring were born naturally from CTR or HFD mothers who had been consuming the diet for 1.2-8.5 years at time of parturition. The 135 offspring included in the study were born from 65 mothers, with no more than six offspring from the same mother. By 4 months of age infants began independent ingestion of the maternal diet and were consuming this diet as their primary food source by 6 months of age. Offspring were maintained with their mothers until the time of weaning at a mean of 7.99 months of age (SEM = 0.09). The offspring were then placed into group housing with 6-10 similarly aged juveniles and 1-2 adult females. Half of the offspring were maintained on their mothers' diet and the other half switched diets, creating four diet groups (CTR/CTR, CTR/HFD, HFD/CTR, and HFD/HFD). The same animals were used for the majority of experimental measures; however, the actual sample size varied between measures. Only a subset of animals was euthanized at the 13-month time point for tissue collection, and as the data in this study were collected over 9 years some measures were added to the protocol in later years. The sample sizes for each group for the various measures are described in the figure legends.
Behavioral Testing
At 11 months of age (average age 10.84 months, SEM = 0.025), juveniles were removed from their pens and placed in a cage located in an adjacent room between 0800 hours and 0830 hours. Individuals were then transported in a covered transfer box to the behavioral testing suite where they were placed in a standard primate cage. The juvenile was videotaped from an adjoining room through a one-way mirror for the duration of the test. The behavior tests were initiated between 0830 hours and 1100 hours.
Human Intruder Test
This test reliably evaluates individual differences in anxiety and stress response in NHPs (26,27). The test began with a 10-min acclimation period, followed by a 2-min control period, and then a 2-min profile period, in which a human intruder (a woman unfamiliar to the monkey) entered the test room, stood 0.3 m from the cage, and presented her facial profile (a non-threatening stimulus) to the monkey. This was followed by another 2-min control period in which the stranger exited and the juvenile was alone. The human intruder then returned to the room for the 2-min stare period, stood in the same position 0.3 m from the cage, but made continuous direct eye contact (a threatening social stimulus) with the monkey. Another 2-min control period followed; the stranger reentered the room and made continuous direct eye contact while simultaneously offering a piece of apple (a desirable familiar food) to the monkey for the 2-min apple offer period.
Novel Objects Test
Anxiety-like behavior has been assessed in a variety of species, including NHPs, using novel object tests (28,29). The test began after a 2-min control period following the human intruder test. For the first novel object, the human intruder entered, avoiding direct eye contact with the monkey, and placed a potentially threatening toy on the tray attached to the cage. A rubber ball with large eyes painted on it was used the first year the tests were done and a plastic toy bunny with large eyes for the following years. The toy was placed with the eyes facing the monkey. After 5 min, the human intruder reentered, removed the toy, and placed a pretzel (a novel food) on the tray attached to the cage. The monkey was left with the pretzel for 2 min, at which time the test concluded. The monkey was then hand caught to collect a blood sample for measurement of cortisol, before being placed in the transfer box and returned to their normal social housing.
Video Scoring
The videos of the behavior tests were scored using the Observer XT software Version 11 (Noldus Information Technology) via the continuous sampling method by two observers blind to maternal and postweaning diet. Behaviors such as locomotive state, exploration, vocalizations, anxious or abnormal behaviors, responses to stranger, and response to objects were scored. Actions which did not clearly fit a preestablished behavioral definition were coded as other and were submitted to a second blind observer for consideration. If agreement was not reached on the categorization of an individual's behavior, the behavior was not included in video scoring. Behaviors were coded as one of two mutually exclusive categories: point events, behavioral events of no quantifiable duration, and state events, behavioral events of determined duration with a distinct beginning and ending.
Composite Variable Determination
Stress responses elicited in reaction to the behavioral tests vary widely between individuals; however, our subjects exhibited deviations from typical manifestations of anxiety, including the near-absence of displacement behaviors. This prompted the creation of more inclusive anxiety measures derived from video scoring results. In order to form cohesive anxiety composites, the frequency, distribution, and co-morbidity of functionally similar behaviors determined to be the result of stress were examined. Resulting behavioral composites are unweighted sums of percent duration and total number of component behaviors and are detailed in Table 3.
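A minimal sketch of how such an unweighted composite could be assembled from scored behaviors is shown below; the component behavior names are placeholders, since the actual components of each composite are those listed in Table 3 of the report:

def composite_score(durations, counts, duration_components, count_components):
    """Unweighted composite: sum of percent durations for state-event components
    plus sum of raw occurrence counts for point-event components."""
    total_duration = sum(durations.get(b, 0.0) for b in duration_components)
    total_count = sum(counts.get(b, 0) for b in count_components)
    return total_duration, total_count

# Placeholder component lists -- the real composites are defined in Table 3
active_anxiety_states = ["pace", "scratch"]            # hypothetical state events (% duration)
active_anxiety_points = ["cage_shake", "tooth_grind"]  # hypothetical point events (counts)

durations = {"pace": 12.5, "scratch": 3.1}    # percent of test duration, from video scoring
counts = {"cage_shake": 4, "tooth_grind": 2}  # number of occurrences, from video scoring
print(composite_score(durations, counts, active_anxiety_states, active_anxiety_points))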
Activity Measurements
Physical activity levels were measured using accelerometers (Actical, MiniMitter, Bend, OR, USA) mounted on plastic collars (Primate Products, Miami, FL, USA) worn continuously by juveniles after weaning. This method for measuring physical activity in NHPs is well established and previously reported in detail (33). Activity levels were collected during the behavior test and in normal social housing the day prior to and following behavioral testing. This allowed examination of the effect of behavioral testing on activity during the testing interval and longer term alteration in activity.
Glucose Tolerance Tests
An intravenous GTT was performed on the dam at the third trimester of each offspring's gestational period (average 47.3 days before parturition, SEM = 0.72), and on the offspring at 13 months (average age 12.98 months, SEM = 0.069). Animals were fasted overnight prior to the GTT. The morning of the procedure the animal was removed from their pen and placed in a cage in an adjacent room between 0800 hours and 0830 hours, receiving no food once removed from the group. The animal was then sedated with Telazol (5 mg/kg i.m.), and after 10 min of deep sedation, a baseline sample of 3-5 ml of blood was collected from a catheter placed in the right saphenous vein. Of this sample, 0.5 ml was used to saturate a glucose test strip placed in a OneTouch Ultra2 Blood Glucose Monitor (LifeScan, Milpitas, CA, USA) to record the baseline glucose level. The remainder of the blood was kept in heparinized tubes on ice for insulin, glucagon, and leptin measurements. A glucose bolus (50% dextrose solution) at a dose of 0.6 g/kg was administered intravenously via the right saphenous catheter. Further glucose measures were recorded from 0.5 ml blood samples collected from the left saphenous vein at 1, 3, 5, 10, 20, 40, and 60 min after infusion. The remainder of the blood was kept in heparinized tubes on ice for insulin measurements. After the GTT, samples were centrifuged, and plasma was stored at −80°C until assayed. All glucose and insulin measures from baseline until 60 min post-infusion were then used to calculate the area under curve (GAUC and IAUC, respectively) from 0 using GraphPad Prism Version 6 software (GraphPad Software, Inc., La Jolla, CA, USA).
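As an illustration of the AUC calculation (a trapezoidal area over the sampled time points, measured from 0), a small sketch follows; the glucose values are hypothetical and only show the arithmetic, not the GraphPad computation itself:

import numpy as np

def auc_from_zero(times_min, values):
    """Trapezoidal area under the curve from 0 across the sampled time points
    (baseline through 60 min post-infusion)."""
    return float(np.trapz(values, times_min))

# Hypothetical juvenile GTT glucose profile (mg/dL) at the sampling times used
times = [0, 1, 3, 5, 10, 20, 40, 60]             # minutes relative to the glucose bolus
glucose = [65, 230, 215, 200, 170, 130, 95, 75]  # illustrative values only

print(f"GAUC = {auc_from_zero(times, glucose):.0f} mg/dL x min")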
Blood Sample Collection
Juvenile blood samples were collected from the femoral or saphenous vein. Blood was collected into a heparin tube, which was placed on ice and centrifuged at 1,125 × g at 4°C for 20 min. Plasma was removed and stored at −80°C until assay.
Preweaning Samples
At 3 months of age (average age 90.27 days, SEM = 0.223) dam and infant pairs were removed from their pens and placed in a cage in an adjacent room between 0800 hours and 0830 hours. The infant was then immediately separated from the dam for a 30-min period. Following this period, the infant was sedated with Ketamine (5-10 mg/kg i.m.) and a 1-2 ml blood sample was collected from the femoral or saphenous vein. At 4 months of age (average age 129.48 days, SEM = 0.398) dam and infant pairs were removed from their pens and placed in a cage in an adjacent room between 0800 hours and 0830 hours. The infant was separated from the dam and transferred in a covered transport box to the behavioral testing suite where they were placed in a standard primate cage. The infant received a behavioral test consisting of a human intruder test and novel objects test, which has been previously detailed (6). Immediately following the behavioral test the infant was hand caught from the cage and restrained while a 2-ml blood sample was collected from the femoral vein. All samples were collected within 5 min following the conclusion of the behavioral test.
Prior to weaning (average age 180.22 days, SEM = 0.512) dam and infant pairs were removed from their pens and placed in a cage in an adjacent room between 0800 hours and 0830 hours. The infant was separated from the dam until 1200 hours. The infant was then sedated with Telazol (3 mg/kg i.m.), and 2 ml of blood was collected from the saphenous vein.
Postweaning Samples
Immediately following the aforementioned 11-month behavioral test, the juvenile was hand caught from the cage in the behavioral testing suite. The juvenile was then restrained while a 2 ml blood sample was collected from the femoral vein. All samples were collected within 5 min following the conclusion of the behavioral test.
Hair Sample Collection
Hair was collected prior to weaning (average age 7.06 months, SEM = 0.130) and at 13 months (average age 13.10 months, SEM = 0.089). The animals were sedated with either Ketamine (5-10 mg/kg i.m.) or Telazol (5 mg/kg i.m.) and a hair sample was collected from the right subscapular region. The hair sample was placed in an aluminum foil packet and frozen at −80°C until the time of assay.
Tissue Collection and Processing
Offspring were necropsied at 13 months of age, and brain tissue was collected as previously described (6,34,35). Euthanasia was performed by ONPRC Necropsy staff and adhered to AVMA Guidelines on Euthanasia in Animals and ONPRC standard operating procedures and guidelines. Animals were sedated with Ketamine (15-25 mg/kg i.m.) and transported to the necropsy room in a covered transport box. The animals were deeply anesthetized with a surgical dose of sodium pentobarbital (25-35 mg/kg i.v.). Anesthetic depth was monitored by assessing the loss of palpebral, corneal, pain, and pharyngeal reflexes. After an adequate plane of anesthesia was reached, the abdomen was incised and terminal blood samples were collected from the aorta or caudal vena cava. The aorta was then severed and the animal exsanguinated. Perfusion of the brain occurred via the carotid artery by flushing with 0.9% heparinized saline (0.5-1 l) followed by 4% paraformaldehyde (PF, approximately 1-2 l) buffered with sodium phosphate (NaPO4, pH 7.4) until fixed. The brain was then partitioned into specific areas and placed in 4% PF for 24 h at 4°C, transferred to 10% glycerol buffered with NaPO4 for 24 h, and finally transferred to 20% glycerol solution for 72 h. Tissue blocks were frozen in −50°C 2-methylbutane and then stored at −80°C until sectioning.
Plasma Assays
The Endocrine Technologies Support Core (ETSC) at the ONPRC performed assays for cortisol (17α-hydroxycorticosterone) and insulin using a chemiluminescence-based automatic clinical platform (Roche cobas e411, Roche Diagnostics, Indianapolis, IN, USA) validated for NHP serum and plasma (36). Company-provided calibrators and quality control samples were analyzed before each use. The intra- and inter-assay variation of the assay for cortisol was less than 7% and the assay range was 0.36-63.40 ng/ml. The intra- and inter-assay variation of the assay for insulin was less than 7% and the assay range was 0.2-1,000 μIU/ml. Fasting glucagon was assayed by radioimmunoassay (RIA) (Catalog no. GL-32K; Millipore). The intra-assay variations were less than 8% and the assay range was 20-4,000 pg/ml. As all samples were analyzed in a single assay for each target, no specific inter-assay variations for this study were calculated. Leptin levels were measured using an RIA kit directed against human leptin (Catalog no. HL-81K; Millipore). The intra-assay variations were less than 17% and the assay range was 0.78-100 ng/ml. Overall inter-assay variations for the leptin and glucagon RIAs in the ETSC are less than 20%.
Hair Assays
Cortisol was measured in a hair sample to measure chronic stress. Hair cortisol reflects the mean cortisol over the past several weeks to months (37). The ETSC at ONPRC analyzed cortisol in hair samples using a modification of an existing protocol (37). Hair was washed with isopropanol (5 ml), filtered with P8 filter paper (Fisher Brand cat. No.: 09-795D), and minced manually with a specially designed multi-blade cutter with blade distance at 2 mm. Next, the cortisol was extracted by gentle shaking in methanol (50 mg/ml) for approximately 22 h. Hair and methanol were then separated by centrifugation and the supernatant was collected and dried under a forced air stream at 45-50°C. Finally, the dried contents were reconstituted in assay buffer and cortisol levels determined by ELISA (Salimetrics, State College, PA, USA).
Recoveries were determined at the same time as sample analysis and used to adjust final sample cortisol values. Intra-assay variation was less than 10% and inter-assay variation was less than 15% (n = 5). The assay range was 0.33-30.00 ng/ml.
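A small sketch of the back-calculation implied here (converting the ELISA readout on the reconstituted extract to ng cortisol per g hair and applying the recovery correction) is shown below. The reconstitution volume, hair mass, and recovery value are assumptions for illustration only:

def hair_cortisol_ng_per_g(elisa_ng_per_ml, reconstitution_ml, hair_mass_mg, recovery_fraction):
    """Convert an ELISA concentration (ng/ml of reconstituted extract) to
    ng cortisol per g hair, then adjust for extraction recovery."""
    total_ng = elisa_ng_per_ml * reconstitution_ml   # cortisol recovered in the extract
    per_gram = total_ng / (hair_mass_mg / 1000.0)    # normalize to hair mass in grams
    return per_gram / recovery_fraction              # correct for incomplete recovery

# Illustrative, assumed numbers: 0.25 ml assay buffer, 50 mg hair, 85% recovery
print(hair_cortisol_ng_per_g(2.4, reconstitution_ml=0.25, hair_mass_mg=50.0,
                             recovery_fraction=0.85))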
Immunohistochemistry for Serotonin
Coronal sections (25 µm) of the right PFC were collected in 1:24 series using a freezing microtome, as previously described (34). Briefly, sections were washed in KPBS and then blocked in 2% donkey serum in 0.4% Triton-X in KPBS for 30 min. Rabbit anti-5-HT (Lot #082M4831, #S5545; 1:5,000; Sigma-Aldrich) antibody was diluted in 2% donkey serum in 0.4% Triton-X in KPBS and applied to tissue sections, which were incubated at room temperature for 1 h and then at 4°C for 48 h. Tissues were then washed in KPBS, and the secondary antibody (Donkey-Anti-Rabbit Alexa Fluor 488; Lot #1531671 Life Technologies Corporation, Carlsbad, CA, USA) was applied (38). We imaged six fields of view per section throughout area 10 of the right PFC, two fields of view each of the dorsal, medial, and ventral regions as defined by the Paxinos Stereotaxic Atlas (39, Figures 1-6) in anatomically matched sections for each animal. The observer was blind to maternal and postweaning diet when imaging slides. Images were taken at a format of 1,024 × 1,024, zoom factor 1, 400 Hz, in 2 µm increments along the z-axis of the tissue using a 10× (NA 0.40) objective. The 405-nm line of a blue diode laser and a 488-nm argon laser were used sequentially to avoid bleed-through of individual fluorophores into the nearby detection channels. ImageJ software (Wayne Rasband, National Institutes of Health, Bethesda, MD, USA) was used to measure the total fluorescent intensity, the percent area and the integrated density of 5-HT by an individual blind to maternal and postweaning diet. DAPI staining was used to identify the layers and three measurements using an oval region of interest were taken of each layer (1-6) for each animal. These three regions were averaged for each layer of each image and were used to calculate the overall average fluorescent intensity, percentage area and integrated density of each layer for each animal.
Statistical Analysis
Statistical tests were run using SPSS Version 22 (SPSS Inc., Chicago, IL, USA). For all variables, Kolmogorov-Smirnov tests of normality were run for the three pre-determined factors (maternal diet, postweaning diet, and gender) with p < 0.05 indicating a significant deviation from normality. If data were nonparametric a square root or log10 transformation was applied to obtain normally distributed data. Remaining non-parametric measures were rank transformed to achieve normality, with mean rank assigned to ties. Data are presented as mean ± SEM. Alpha values of p < 0.05 were considered statistically significant. All graphs were made with GraphPad Prism Version 6 software (GraphPad Software, Inc., La Jolla, CA, USA).
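A rough Python sketch of this screening step (test for deviation from normality, then try a square-root or log10 transformation, falling back to ranks) is given below; it uses a one-sample Kolmogorov-Smirnov test with the sample mean and SD as a stand-in for the SPSS procedure and is illustrative only:

import numpy as np
from scipy import stats

def normalize_if_needed(x, alpha=0.05):
    """Test deviation from normality and, if significant, try square-root then
    log10 transformations; fall back to a rank transformation otherwise."""
    x = np.asarray(x, dtype=float)
    for name, transform in (("raw", lambda v: v),
                            ("sqrt", np.sqrt),
                            ("log10", np.log10)):
        y = transform(x)
        # One-sample K-S against a normal with the sample's mean and SD
        stat, p = stats.kstest(y, "norm", args=(y.mean(), y.std(ddof=1)))
        if p >= alpha:
            return name, y
    return "rank", stats.rankdata(x)  # mean rank assigned to ties by default

# Hypothetical positively skewed behavioral durations
rng = np.random.default_rng(1)
label, transformed = normalize_if_needed(rng.lognormal(mean=1.0, sigma=0.8, size=40))
print("transformation used:", label)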
Parametric Analysis
Physical activity, cortisol, TPH2, SERT, and 5-HT1A mRNA expression, 5-HT immunohistochemistry, and select behavioral measures were determined to be parametric and tested for homogeneity of variance. The effect of juvenile metabolic state on observed outcomes was examined, using GAUC and body weight taken at the 13-month GTT as metabolic parameters. Pearson correlations were run, with p < 0.05 identifying potential covariates. Direct eye contact (r = 0.239, p = 0.048) and vigilance (r = −0.242, p = 0.046) during the 11-month behavior test, as well as MnR TPH2 percent area (r = 0.447, p = 0.007) and density (r = 0.451, p = 0.007) correlated with GAUC. Percent duration of anxiety behaviors during the 11-month behavior test correlated with weight (r = −0.294, p = 0.011). For these outcomes, the associated metabolic variable was used in three-factor univariate ANCOVAs. Change in activity was tested with a three-factor repeated measures ANOVA. Test activity, postweaning plasma and hair cortisol, dorsal raphe (DR) TPH2, SERT, and 5-HT1A mRNA expression, 5-HT immunohistochemistry measures, and remaining parametric behaviors were analyzed with a three-factor univariate ANOVA. Preweaning cortisol measures were analyzed with a two-factor univariate ANOVA. All pairwise analyses following ANOVA or ANCOVAs utilized Bonferroni corrections.
Non-Parametric Analysis
Measures which did not achieve normality by any transformation attempt underwent a series of tests to investigate the same three factors explored in parametric analysis. To examine juvenile metabolic effects Kendall's correlations were run for GAUC and weight.
Only latency to contact the apple in the 11-month test correlated with weight (r = −0.232, p = 0.010), with no measures associated with GAUC. The relationship between metabolic parameters and observed outcomes, independent of variation due to maternal diet, postweaning diet, and gender, was examined using Kendall's partial correlations, with p < 0.05 indicating unique variance. No measures produced significant partial correlation results, so group differences were examined independent of metabolic parameters. Variables were first tested using Mann-Whitney U tests to examine maternal diet, postweaning diet, and gender. Kruskal-Wallis tests were then performed on two sets of four independent groups, classified by an individual's gender and maternal diet and their gender and postweaning diet, in order to examine the relationship between diet and gender. Last, the Jonckheere-Terpstra test was performed on diet groups, ordered with increasing exposure to the HFD, assessing the effect of HFD exposure across maternal and postweaning diets. The Jonckheere-Terpstra test assesses an ordered pattern of medians across independent groups with a meaningful order, such as our diet groups. Both Mann-Whitney and Kruskal-Wallis two-tailed p-values, as well as all follow-up pairwise examinations, were adjusted for the number of comparisons. Jonckheere-Terpstra one-tailed p-values remain unadjusted.
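SciPy covers the Mann-Whitney U and Kruskal-Wallis tests directly but has no Jonckheere-Terpstra routine, so the sketch below implements the latter with a normal approximation (the no-ties variance formula). It is an illustrative re-implementation under those assumptions, with placeholder data, not the SPSS procedure used in the study:

import numpy as np
from scipy import stats

def jonckheere_terpstra(groups):
    """One-tailed Jonckheere-Terpstra test for an increasing trend across
    ordered groups (normal approximation, no tie correction)."""
    groups = [np.asarray(g, dtype=float) for g in groups]
    jt = 0.0
    for i in range(len(groups)):
        for j in range(i + 1, len(groups)):
            x, y = groups[i], groups[j]
            diff = y[None, :] - x[:, None]
            # Count pairs where the later group exceeds the earlier one; ties count 0.5.
            jt += (diff > 0).sum() + 0.5 * (diff == 0).sum()
    n = np.array([len(g) for g in groups])
    N = n.sum()
    mean = (N ** 2 - (n ** 2).sum()) / 4.0
    var = (N ** 2 * (2 * N + 3) - (n ** 2 * (2 * n + 3)).sum()) / 72.0
    z = (jt - mean) / np.sqrt(var)
    return jt, 1.0 - stats.norm.cdf(z)  # one-tailed p for an increasing trend

# Example with the four diet groups ordered by increasing HFD exposure (placeholder data).
ctr_ctr, ctr_hfd, hfd_ctr, hfd_hfd = [np.random.rand(10) for _ in range(4)]
# Maternal-diet contrast (CTR/* vs HFD/*) with Mann-Whitney U:
print(stats.mannwhitneyu(np.concatenate([ctr_ctr, ctr_hfd]), np.concatenate([hfd_ctr, hfd_hfd])))
# Four-group comparison with Kruskal-Wallis:
print(stats.kruskal(ctr_ctr, ctr_hfd, hfd_ctr, hfd_hfd))
# Ordered-trend test across diet groups:
print(jonckheere_terpstra([ctr_ctr, ctr_hfd, hfd_ctr, hfd_hfd]))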
Results
Impact of Exposure to a HFD during Development on Juvenile Offspring Behavior
Maternal and Postweaning HFD Consumption Increased Anxiety
The number of anxiety behaviors (Table 3) during the 11-month behavioral assessment increased with exposure to maternal HFD (Figure 1B). In addition, the amount of time engaging in anxiety behaviors was found to be associated with offspring body weight, such that offspring with lower body weight spent more time engaged in anxiety behaviors (F(1,65) = 5.819, p = 0.019, Figure 1C). The number of vocalizations, an established measure of anxiety (32, 40), similarly increased with HFD exposure (p = 0.033, Figure 2A) according to Jonckheere's test. Offspring exposed to maternal HFD displayed an increased number of active anxiety behaviors (p = 0.0006, Figure 2B), which was maintained in males (p = 0.042) and females (p = 0.015). Jonckheere's test detected an interaction between maternal and postweaning diet, with occurrences of active anxiety increasing with HFD exposure (p = 0.0004). Time exhibiting active anxiety likewise increased with maternal HFD (p = 0.003, Figure 2C). Moreover, step-down analysis revealed that any exposure to the HFD increased active anxiety (p < 0.05) compared to no exposure. Females exposed to maternal HFD exhibited more active anxiety than controls (p = 0.042). Stereotypy increased with postweaning HFD consumption (p = 0.027, Figure 2D). Further, any exposure to the HFD across perinatal development increased time exhibiting stereotypy (p = 0.002). In contrast, inactive anxiety (Table 3) was unaffected by gender, maternal diet, or postweaning diet (F(1,67) < 2.50, p > 0.100). Other traditional anxiety behaviors, such as crouch and freeze, were independently examined and produced no significant results (Crouch: all p > 0.400; Freeze: F(1,67) < 2.00, p > 0.100).
The percent area of 5-HT1A mRNA expression was examined in the DR and MnR with no difference due to maternal diet, postweaning diet, or gender (all DR: F(1,30) < 0.10, p > 0.800; all MnR: F(1,37) < 3.00, p > 0.100). SERT expression was examined in terms of percent area and density in the DR and MnR with no significant results (all F(1,47) < 2.00, p > 0.200).
Discussion
This is the first study demonstrating long-lasting effects of HFD consumption during early development on behavior and brain development in NHP offspring. The observed effects of maternal HFD appear to be due to developmental programming as the reduction in TPH2 mRNA expression in the DR and many of the behavioral aberrations persist when animals consume a healthy diet at weaning. These findings indicate that an early nutritional intervention, consumption of the control diet at weaning, was not sufficient to ameliorate many of the changes in behavior induced by maternal HFD consumption, such as increased anxiety. In addition, postweaning HFD consumption reduced serotonin immunoreactivity in area 10 of the PFC, exacerbated behavioral abnormalities, and increased stereotypy independent of maternal diet.
Results from 11-month-old male and female offspring indicate that exposure to maternal HFD increases the risk of anxiety, a risk further exacerbated by postweaning HFD consumption. This increase in anxiety in maternal HFD offspring extends and expands our previous findings of increased anxiety in 4-month-old female HFD offspring (6). It is interesting that by 11 months of age both male and female offspring now exhibit anxiety. The earlier onset of anxiety in female offspring corresponds to human studies demonstrating that women are more prone to anxiety than men and that the association between obesity and anxiety disorders is more robust in women than in men (41). Importantly, the effects of maternal HFD remained significant when offspring weight was taken into account. Any exposure to a HFD during early development increased both active anxiety and stereotypy, a component of active anxiety. Total active anxiety was primarily influenced by exposure to maternal HFD, whereas stereotypy was primarily increased by postweaning HFD consumption. Stereotypy is an extreme reaction to stress exceeding the adaptive value of anxiety responses, suggesting behavioral dysregulation and reduced ability to use normative methods to alleviate anxiety (32). Our data suggest that while any developmental exposure to a HFD increases anxiety, maternal HFD is the primary determinant of anxiety displayed during behavior testing, manifesting as active forms of anxiety, and that postweaning HFD consumption independently increases stereotypy.
Figure caption: Representative images of the differences seen in TPH2 mRNA expression in the DR between the four diet groups. Data shown as mean ± SEM. * denotes a maternal diet effect and # denotes a postweaning diet effect, p < 0.05. Magenta lines denote significant overall covariance, p < 0.05. Scale bars are 200 µm. Sample sizes for TPH2 mRNA expression are as follows: DR: CTR/CTR n = 15 (n = 9 males; n = 6 females), CTR/HFD n = 8 (n = 4 males; n = 4 females), HFD/CTR n = 8 (n = 5 males; n = 3 females), and HFD/HFD n = 12 (n = 8 males; n = 4 females); MnR: CTR/CTR n = 13 (n = 7 males; n = 6 females), CTR/HFD n = 7 (n = 3 males; n = 4 females), HFD/CTR n = 7 (n = 4 males; n = 3 females), and HFD/HFD n = 8 (n = 5 males; n = 3 females).
Postweaning HFD-exposed animals exhibited increased behavioral inhibition as evidenced by reduced interaction with novel objects and the test cage. Behavioral inhibition is associated with increased anxiety in animal models (32) and children (42,43), as well as a lowered arousal threshold to novel stimuli (31). Postweaning HFD males displayed the highest level of behavioral inhibition, interacting the least with the novel objects. Overall, we observed that females interacted less with novel objects, a reflection of species-typical gender differences, illuminating the severity of the postweaning HFD reduction in male offspring. As no differences were observed in vigilance toward the novel objects, this decrease is not due to altered attention. Reduced interaction with the test environment provides further support that postweaning HFD consumption inhibits species-typical behavior in response to novel stimuli. Maternal HFD likewise produced a departure from normative levels of cage interaction, indicating exploratory behaviors are particularly sensitive to HFD exposure and potentially indicative of anxiety. Behavioral inhibition in children is specifically associated with social anxiety (31), suggesting that maternal and postweaning HFD exposures result in the development of varied anxiety phenotypes. Our findings that developmental HFD exposure increased anxiety are supported by several rodent studies (14,44,45) and recent evidence from epidemiological studies that report an association between maternal obesity and occurrence of anxiety and depression in children and adolescents (46,47). An elevated prepregnancy BMI was associated with increased fear, sadness, and internalizing behaviors in children (8,12). Also, maternal obesity increases the risk of abnormal birth weight (48), which in itself is associated with anxiety and depression in adolescents (13). Postweaning HFD consumption increased baseline activity in the juveniles' home environment as previously reported (20). We hypothesize that baseline activity is upregulated in order to defend body weight "set point" in an environment of nutritional excess. The "set point" model (49)(50)(51)(52) proposes that circulating metabolic hormones act on the hypothalamus resulting in compensatory metabolic changes that maintain body weight at a predefined level. In contrast, postweaning HFD suppressed activity during the behavior test and in males reduced the number of behaviors performed, further indicating behavioral inhibition (53). Overall, animals exhibited reduced activity 24 h after the behavioral assessment, indicating the prolonged influence of a stressful event. Differences in the change in activity after the behavioral assessment were best explained by postweaning HFD exposure, and in females by maternal HFD exposure. The gender-specific effects of HFD/HFD exposure suggest maternal diet reprograms stress response more effectively in females, and that the postweaning diet has a greater impact on stress response in males. Thus, postweaning HFD exposure is an important regulator of physical activity, elevating baseline levels and inhibiting recovery after the stress of behavioral assessment. A variety of models, including NHPs and humans, show positive social interactions protect against deleterious effects of stress (53). The reverse effect is seen when our postweaning HFD animals return to their social group, suggesting social impairment and the experience or perception of negative social interactions (53).
Postweaning HFD consumption increased plasma cortisol response to the 11-month behavior test, a stress response further amplified with maternal HFD exposure, as HFD/HFD animals displayed the highest level of cortisol, implying increased stress sensitivity. While both maternal and postweaning HFD increased acute stress response to the behavioral assessment, only males subsequently presented with an elevated chronic stress response, as measured by hair cortisol. Conversely, maternal HFD consumption increased hair cortisol at weaning. Our results are consistent with the findings from several animal models which indicate that the HPA axis is programed by perinatal HFD consumption (44,54,55). The impact of perinatal HFD on the HPA axis could be direct, or through increased adiposity or an increased inflammatory state induced by HFD exposure. The HPA axis is critical in behavioral regulation; in humans increased cortisol is associated with anxiety disorders (56), and NHP studies show that HPA activation and elevated plasma cortisol are associated with abnormal behaviors (57). The observed cortisol results further support our postulation that maternal and postweaning HFD exposure produce differential anxiety phenotypes. Whereas maternal HFD induces an early elevation in hair cortisol and generalized anxiety symptoms, postweaning HFD causes both an elevation in acute cortisol and social anxiety symptoms, especially in males. We further noted that postweaning HFD exposure decreased serotoninergic immunoreactivity in area 10 of the PFC, particularly in layer 1 of the medial aspect and in males. We postulate that this reduction in serotoninergic immunoreactivity relates to decreased serotonergic innervation of area 10 of the PFC. However, differences in the density of serotonin positive fibers could also be due to differences in serotonin release and reuptake.
Here too the alterations in serotonin immunoreactivity reflect the heightened sensitivity of male offspring to programming by postweaning HFD consumption. The impact of the postweaning diet on the medial PFC is not surprising, as this brain region undergoes marked growth during the juvenile period and is one of the last to fully develop (58). The reduction of serotonergic immunoreactivity in area 10 is a potential contributor to the behavioral inhibition observed in postweaning HFD animals as serotonergic innervation of the PFC is an important regulator of behavioral inhibition (59). Moreover, decreased serotonergic immunoreactivity in the PFC may underlie the observed increase in stereotypy in animals consuming the HFD postweaning, as impairments in PFC morphology are associated with increased stereotypy in a NHP model of immune activation during development (59)(60)(61). We previously demonstrated effects of maternal HFD consumption on the dopaminergic systems at 13-months of age, with maternal HFD decreasing tyrosine hydroxylase and dopamine receptor 1 and 2 protein immunoreactivity in area 10 of the PFC (35). Thus, the observed behavioral impairments could also be influenced by the dopamine system, which is modulated by serotonin activity (62).
We report that exposure to maternal HFD reduced TPH2 mRNA expression in the DR, with the programming effects of maternal HFD on TPH2 expression persisting when offspring consumed a healthy diet postweaning. In contrast, postweaning HFD consumption elevated TPH2 mRNA expression in the MnR. Similar results have been found in infant mice, with increased anxiety and depression-like behaviors resulting from decreased DR and increased MnR serotonergic activity (63). The complex projections originating from the DR and MnR are site specific and largely non-overlapping, attune to subnuclei variation (64). This divergence is apparent within cortical targets: MnR projections are concentrated in the dorsomedial components, particularly the medial PFC, and the DR projects to most cortical areas, with the medial PFC innervated more sparingly (65). The raphe nuclei also both send widespread serotonergic projections to the hypothalamus. However, the arcuate and suprachiasmatic nuclei receive input exclusively from the MnR (65). In this study, TPH2 mRNA expression was measured in the cell bodies of the DR and MnR, but not in at the axon terminals or release points along the axon where changes in 5-HT metabolism also occur. Thus, changes in TPH2 mRNA expression may indicate a reduction in the capacity to synthesize serotonin, as seen in a rodent study where inhibition of DR TPH2 mRNA expression resulted in in vivo suppression of serotonin synthesis (66). However, it is important to note we measured mRNA expression of the TPH2 and not the actual activity of the enzyme.
Importantly, our findings in the raphe nuclei are consistent with our observed outcomes, as the brain regions innervated by the DR and MnR are integral in the regulation of metabolism and behavior. For example, the amygdala receives robust projections from the DR (65) and increased amygdala activity indicates potential vulnerability to anxiety pathology, as the region designates learned fear response (67). Fear responses, conditioned or unpredictable, are inhibited with increased ventral PFC activity, an area key to emotional regulation (67). The medial PFC determines stressor controllability, and in humans area 10 exhibits important social functions, distinguishing between perceived and imagined stimuli (58,68). Altered neural serotonin is associated with psychological and neurodevelopmental disorders including depression (69,70), anxiety (71), ADHD (72), and ASD (73). Direct raphe innervation to these brain regions is implicated in the widespread serotonergic impairments seen in anxiety disorders. Serotonergic disruption is likewise involved in metabolic regulation, as serotonergic projections from the raphe nuclei synapse onto the melanocortin neurons critical in regulating energy intake and expenditure (74). In addition to the raphe nuclei, TPH2 is expressed in the hypothalamus and pituitary of humans and mice (75), and in NHPs serotonergic regulation of HPA function was shown to be TPH2-dependent (66). Unambiguously, these target areas of DR and MnR serotonergic innervation constitute a complex neural network of behavioral and metabolic regulation.
We postulate that the differential anxiety phenotypes observed in maternal and postweaning HFD groups originate from the nuclei-specific perturbations seen in the raphe serotonergic system. Exposure to a HFD during gestational and early perinatal development impairs serotonergic function of the DR neural network and results in the development of anxiety pathology. Our group has shown maternal HFD exposure suppressed serotonergic function in the fetal DR, the behavioral effects of which were seen at 4 months of age, with HFD-exposed females displaying increased anxiety (6). Hair cortisol at 6 months was elevated in male and female HFD offspring, indicating both genders experienced chronically increased stress. Intervention with a control diet postweaning had no effect on these outcomes; decreased DR TPH2 expression and increased anxiety behaviors persisted in maternal HFD offspring. Importantly, these results indicate the continued development of anxiety pathology, in spite of the remodeling capacity of the serotonergic system (76). The compounded effect of maternal and postweaning HFD on 11-month plasma cortisol and activity measures further support the long-term effects of HFD on stress response. Active anxiety was particularly increased in maternal HFD offspring, and independent of the influence of maternal HFD exposure on anxiety, low body weight was predictive of anxiety behaviors at 11 months. HFD-induced DR serotonergic deficiency could produce these effects by impairing innervation of targeted anxiety and metabolic circuits. Insufficient neuronal regulation reduces agoutirelated peptide (AgRP) innervation of hypothalamic nuclei and reprograms energy balance, promoting leanness and hypophagia (77,78). Congruent with this, our group found maternal HFD exposure reduced AgRP fibers in the paraventricular nucleus of the hypothalamus (20). Due to DR's extensive control of behavioral and metabolic pathways, maternal HFD impairs serotonergic function and results in widespread and long-term anxiety and energy balance reprogramming.
In contrast to maternal HFD DR outcomes, postweaning HFD increased TPH2 expression, specifically in the MnR, reflecting the complexity of the development of the raphe serotonergic pathways. In depressed suicides TPH2 transcription and protein levels were increased in the DR and MnR (64,79). Increased raphe serotonin synthesis, despite locally reduced serotonin levels, is hypothesized to compensate for insufficient serotonergic transmission at target areas (64). In area 10 of the medial PFC, one such MnR target area (65), our data show postweaning HFD exposure decreased serotonergic innervation. The introduction of a HFD during a time of elevated social stress, resulting from weaning and novel group formation, coupled with a reduced ability to exhibit control over social stressors could lead to the development of social impairment and anxiety (53). We show this to be true: at the 11-month test as postweaning HFD exposure resulted in increased behavioral inhibition, stereotypy, elevated acute stress response, and impaired ability to habituate upon return to social group. The concentration of MnR projections to area 10, coupled with the divergent effects of postweaning HFD, implicate this serotonergic pathway in the aberrant circuitry of depressive and anxiety disorders, particularly social anxiety.
The MnR's exclusive innervation of the arcuate nucleus likewise implicates it in serotonergic regulation of metabolic pathways (65). Accordingly, we observed in this model that postweaning HFD consumption reduced AgRP fibers, this time in the arcuate nucleus (20). The increased baseline activity and hypophagia (20) seen in postweaning HFD offspring could correspond to this reduction in AgRP innervation, consistent with the body weight "set-point" hypothesis and altered energy balance. We observed a predictive effect between MnR TPH2 expression and GAUC, further evidence that postweaning HFD-induced TPH2 changes can disrupt hypothalamic metabolic homeostasis. Long-term metabolic disturbances, such as glucose intolerance and obesity, can originate from impaired function of hypothalamic neurons (77). Our findings in the MnR reflect established links between metabolic and anxiety disorders, and further explain the unique anxiety differences seen in postweaning HFD offspring.
By way of homologous but independent pathways, the MnR and DR express similar influence over frontal and HPA functions, an influence which is impaired by ontogenetic HFD exposure. The timing specific effects of HFD exposure on the direction and location of TPH2 expression support our results indicating maternal and postweaning HFD consumption generate unique behavioral and physiological changes. The incorporation of gender as a risk factor for anxiety further explains our results (80). HFD females display anxiety earlier than males, and are more impacted by the compounded effects of maternal and postweaning HFD exposure. Males appear to be selectively affected by postweaning HFD exposure, with chronic suppression of HPA function and decreased serotonin immunoreactive fibers in the PFC. Importantly, any developmental HFD exposure causes long-term aberrations in the serotonergic system, with risk factors such as exposure period and gender contributing to differential pathology presentation.
Our findings in the central serotonin system suggest neural mechanisms for differential anxiety development in HFDexposed NHP offspring, and provide evidence that childhood diet impacts neural development. While other factors in our model, such as maternal obesity and hyperinsulinemia, may also influence offspring development, chronic HFD exposure is the primary mediator of the observed impairments in behavior and brain development. This hypothesis was generated based on previous findings from our group demonstrating that, independent of maternal adiposity, increasing maternal dietary fat contributes to elevated offspring percent body fat, inflammatory markers, and stress activation (19,21). Investigation of the later postnatal period reaffirmed these results as postnatal HFD consumption had no effect on weight gain or metabolic rate (20), and still increased inflammatory response. In combination with our results of maternal and postweaning HFD exposure altering neural development independent of juvenile metabolic phenotype, evidence strongly suggests that the diet-not resulting metabolic phenotype-is the primary source of mechanistic control.
We postulate that the observed HFD-induced behavioral and serotonergic impairments are due to increased exposure to pro-inflammatory factors. In our NHP model, we previously documented increased circulating and hypothalamic cytokines in fetal HFD offspring at the third trimester (16). Neural development is susceptible to the deleterious effects of inflammatory stress, particularly the serotonin system. Serotonergic neurons are sensitive to inflammatory events, as elevation of inflammatory cytokines in rats reduced the survival of embryonic serotonin neurons in the rostral raphe (81) and resulted in degeneration of serotonergic axons in the amygdala and PFC (82). NHP offspring prenatally exposed to pro-inflammatory Immunoglobulin G class antibodies from mothers of children with ASD displayed increased anxiety responses such as stereotypy and hyperactivity, indicative of serotonergic dysfunction (83). In humans elevated levels of inflammatory cytokines in obese pregnant women are associated with increased risk of anxiety, depression, and ASD (84). As the brainstem raphe nuclei are the main center for serotonin production, they are likely key to these perturbations.
In conclusion, we demonstrated that HFD consumption during early development has long-lasting effects on NHP offspring behavior and brain development. Maternal HFD changes appear to be due to developmental programming as the behavioral and serotonergic pathologies produced are unaffected by early dietary intervention. Female offspring are particularly prone to maternal HFD effects. Postweaning HFD consumption was found to exacerbate behavioral inhibition and increase stereotypy, especially in males. Future studies will address the relative importance of maternal diet and obesity in offspring development, and investigate the impact of specific dietary components. Changes in early postnatal environmental factors including maternal mental health and maternal-infant interactions may explain some of the observed behavioral and neuroendocrinological dysfunctions. Further studies will directly examine the impact of developmental HFD exposure on offspring social behavior and cognition. Given the high prevalence of HFD consumption and obesity in developed countries, and the potential these factors have to increase the risk of developing neuropsychiatric and neurodevelopmental disorders, it is crucial that future studies identify efficacious therapeutic interventions.
Ethics Statement
All animal procedures were in accordance with National Institutes of Health guidelines on the ethical use of animals and were approved by the Oregon National Primate Research Center (ONPRC) Institutional Animal Care and Use Committee.
|
v3-fos-license
|
2023-02-24T15:14:17.082Z
|
2021-04-21T00:00:00.000
|
257115928
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://bnrc.springeropen.com/track/pdf/10.1186/s42269-021-00539-5",
"pdf_hash": "0106ecb6d2e3c17fa5835e1597b35a06ac4f7be6",
"pdf_src": "SpringerNature",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46654",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"sha1": "0106ecb6d2e3c17fa5835e1597b35a06ac4f7be6",
"year": 2021
}
|
pes2o/s2orc
|
Biochemical differences between nano- and normal formulation of tamoxifen and other natural bioactive materials ameliorate breast cancer in experimental rats
Human breast cancer is the most prevalent malignancy in women all over the world. The aim was to look further into the effectiveness of the nanoformulation of tamoxifen and of certain bioactive compounds (yeast, isoflavone, and silymarin) and their impact on diminishing breast cancer progression. A single dose of 7,12-dimethylbenz[a]anthracene (DMBA) was administered intragastrically to fifty-four female Sprague–Dawley rats. Fourteen days after DMBA administration, the treatment protocol was started. Finally, all of the experimental findings were assessed, tabulated, and statistically analyzed. In contrast to the normal groups, a substantial elevation in apoptosis and lipid peroxide was observed in all nanogroups. The best biochemical outcome and the most beneficial factors elevating the occurrence and activation of the apoptosis process were demonstrated by nanotamoxifen.
Background
Human breast cancer is an uncontrolled growth of the cells of the breast that originates from breast tissue, mostly from the inner lining of the milk ducts or the lobules, as a result of mutations in the genes responsible for coordinating and maintaining healthy cell growth (Angeline Kirubha et al. 2012; DeSantis et al. 2011). The mammary gland is one of the only organs not completely developed at birth. In puberty, pregnancy, and lactation, it undergoes intense structural and functional modifications. In certain cases, a single breast tumor can be a combination or mixture of invasive and in situ cancer forms (Girish et al. 2014; Simpson et al. 2010). Risk factors for developing human breast cancer can be classified into two groups: modifiable risk factors (things that can be changed, for example alcohol consumption) and fixed risk factors (things that cannot be changed, for instance age and sex) (Hayes et al. 2013). Other risk factors include menopausal hormone replacement therapy, ionizing radiation, early age at first menstruation, older age, and hereditary factors (Zhang et al. 2015; Brody et al. 2007).
Tamoxifen, a selective estrogen receptor modulator, is utilized in the amelioration of early and advanced human breast cancer and to prevent human breast cancer in high-risk subjects in selected cases (Teunissen et al. 2010).
Cancer nanotechnology, with extensive applications, is an up-and-coming area. Via early detection, prognosis, prevention, customized treatment, and prescription, it offers a specific and comprehensive approach toward cancer. The priority research areas on which nanotechnology will have an indispensable influence are target-specific drug therapy and early detection techniques for pathologies (Misra et al. 2010). Cancer nanotherapeutics is advancing at a steady pace; since the mid-2000s, research and progress in the field have undergone exponential growth (Bertrand et al. 2014).
Nanotechnology applications can aid in nutrition research to obtain accurate spatial level information on the location of a nutrient or bioactive food component in a tissue, cell, or cellular component.
Amelioration of human breast cancer normally begins within a few weeks of the diagnosis. The type of amelioration recommended depends on the stage of the cancer, the size and location of the tumor in the breast, the patient's age, the results of laboratory tests conducted on the cancer cells, and the stage or extent of the disease. Depending on their needs, a patient may have either one form of amelioration or a combination (Girish et al. 2014).
This research seeks new formulations that have a different impact and minimize the progression and prevalence of breast cancer through the conversion of tamoxifen and some bioactive compounds, including yeast, isoflavone, and silymarin, to nanoparticles.
The role of this drug and these bioactive components in the form of nanoparticles in the progression of human breast cancer, their potential to stimulate apoptosis in human breast cancer cells, and the inhibitory effect of the nanoparticles on the progression of chemically induced breast cancer in experimental animals have not been documented to date.
Nanomaterials preparation
All the nanomaterials used in the experiment were prepared in the Nutrition and Food Science Department at the National Research Centre; these included silymarin and isoflavone as bioactive components, yeast as a nutrient, and tamoxifen as a drug (Ezzat et al. 2018, 2017, 2013). The nanonutrient, drug, and bioactive compounds were assessed by transmission electron microscopy, mass spectroscopy, and a ZetaSizer Nano-ZS.
Experimental animals
Fifty-four female Sprague-Dawley rats, weighing between 60 and 80 g, obtained from the Animal House of the National Research Centre, Egypt, were held for one week of acclimatization prior to the experiment on a regular laboratory diet and water to ensure normal growth and behavior. The animals were distributed and housed in individual solid-bottom cages in temperature-controlled (23 ± 1 °C), 40-60% relative humidity, artificially lit (12 h dark/light cycle) rooms free from any source of chemical contamination. All animals received humane treatment and were used in compliance with Animal Experiments Guidelines. As described by Samy et al. (2006), rats were given a single dose (25 mg/kg body weight) of 7,12-dimethylbenz[a]anthracene (DMBA) administered intragastrically via gavage to induce breast cancer. The timeframe after DMBA administration during which the animals recovered from the toxicity caused by DMBA was two weeks. The animals were then categorized into 9 groups, each with six animals, with one group of injected animals fed the basal synthetic diet as control. Injected animals in four groups were fed nanoparticles (yeast, tamoxifen, isoflavone, and silymarin) combined with the basal synthetic diet. Another four groups of injected animals were fed the same synthetic basal diet supplemented with normal particles (yeast, tamoxifen, isoflavone, and silymarin), as illustrated in Table 1. We examined the various effects of yeast, tamoxifen, isoflavone, and silymarin on body weight, body weight gain, total food intake, and food efficiency. The salt and vitamin mixtures were formulated using the conventional approaches of Briggs and William (1963) and Morcos (1976).
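As a simple worked check of the carcinogen dosing described above (a sketch only, using the reported 60-80 g weight range rather than individual animal data), the absolute DMBA dose per rat can be computed as:

# DMBA dose check: 25 mg/kg body weight, rats weighing 60-80 g at dosing.
DOSE_MG_PER_KG = 25.0

for body_weight_g in (60, 70, 80):
    dose_mg = DOSE_MG_PER_KG * (body_weight_g / 1000.0)
    print(f"{body_weight_g} g rat -> {dose_mg:.2f} mg DMBA by gavage")
# 60 g -> 1.50 mg, 70 g -> 1.75 mg, 80 g -> 2.00 mg per animal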
Plasma and serum biochemistry
At the end of the experimental phase, which lasted six months for all groups, the animals were kept fasting for 12 h and then anesthetized with diethyl ether added inside a desiccator. They were then killed, and blood samples were collected from the retro-orbital venous plexus. The blood sample of each animal was centrifuged (Sigma Labour Zentrifuge GMBH, West Germany, model 2-15, 3360 Osterode/Hertz) for 15 min at 3000 rpm to separate serum and plasma, which were stored at −20 °C for calculation of the biochemical parameters.
Histopathological evaluation of the mammary glands
This was performed according to the methods of Bancroft and Stevens (1996). The breast was removed and fixed in buffered formalin solution. The tissue was then washed in flowing tap water and dehydrated in an ascending alcohol sequence (50-90) and then in absolute alcohol. The tissue was cleared with xylol and immersed in a mixture of xylol and paraffin in an oven at 60 °C. The tissue was transferred to pure paraffin wax (58 °C melting point), then embedded in blocks and kept at 4 °C until the time of use. The paraffin blocks were sectioned at a thickness of 4-6 μm on the microtome and mounted on clean glass slides, then left to dry at 40 °C in the oven. The slides were deparaffinized in xylol and then immersed in a descending alcohol sequence (90-50). For histological examination, the ordinary hematoxylin and eosin stain (H&E stain) was used.
Statistical analysis
The statistical analysis was done using the method of Snedecor and Cochran (1989). Within the given groups, all parameters were assessed using a statistical software package (SPSS Software, version 16.0 for Windows; SPSS Inc., Chicago, IL). Student's t test was used to evaluate the statistical significance of differences between sample mean values of quantitative data.
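A minimal Python equivalent of this group-versus-control comparison is sketched below; the array contents are placeholders and scipy.stats.ttest_ind stands in for the SPSS procedure, so it is illustrative only:

import numpy as np
from scipy import stats

# Placeholder values: e.g., apoptosis-inducing factor (pg/ml) in two groups of six rats.
normal_tamoxifen = np.array([10.2, 11.5, 9.8, 10.9, 11.1, 10.4])
nano_tamoxifen = np.array([14.1, 13.6, 15.2, 14.8, 13.9, 14.5])

t_stat, p_value = stats.ttest_ind(nano_tamoxifen, normal_tamoxifen)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 indicates a significant difference between group means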
Discussion
Human breast cancer is one of the most well-known malignancies, representing almost 1 out of every 3 female tumors examined, and it is the second leading cause of cancer death among women (DeSantis et al. 2011). In rats, 7,12-dimethylbenz[a]anthracene (DMBA) can induce experimental breast carcinoma (Henry and Narendra 2006; Giri et al. 1995). Several tissues are susceptible to DMBA initiation, and the mammary gland is among them. DMBA is metabolized in the breast to epoxides, the active metabolites with the power to damage the DNA molecule, the key event at the beginning of carcinogenesis. With the higher cell proliferative index of type 1 and 2 lobules, there is more metabolic capacity and more epoxide generation. Mammary carcinomas in rats are reported to emerge in the small mammary ducts (Russo et al. 1977; Sinha and Dao 1975; Middleton 1965) or from hyperplastic alveolar nodules (Beuving and Bern 1972; Beuving 1966). Nanomedicine has many benefits compared to conventional cancer therapies, such as less drug degradation during transport via protection from in vivo chemical or biological conditions, minimized adverse effects via enhanced biocompatibility and targeting, and an increased dose of chemotherapy administered to the cancer tissue. Nanomedicine also possesses tremendous potential to selectively target and kill stem cells of breast cancer, which are a major factor in the initiation, recurrence, and chemo-/radiotherapy resistance of breast cancer. The findings obtained demonstrated the presence of anatomical damage to the breast of the animals due to the DMBA administration, causing breast cancer. In the current study, breast tumors were also demonstrated by microscopic analysis of breast cells of injected rats in contrast to normal animals (Figs. 1, 2). The results were consistent with numerous previous reports detailing that numerous mammary gland dysplasias and mammary carcinomas emerge whenever 7,12-dimethylbenz[a]anthracene (DMBA) is given to the rat (Beuving et al. 1976, 1967; Van Duuren and Rubin 1971; Huggins and Brillantes 1961). In some rat mammary glands, transformations are triggered by means of cancer-causing agents. Thus, as revealed in various studies, we address the connection between nanotechnology in drugs such as tamoxifen and certain bioactive compounds (yeast, isoflavone, and silymarin) and rat breast cancer. Table 2 demonstrates that the apoptosis level, which is known to be a natural defense of the body to restrict the progression of a cancerous tumor, was correlated with an elevation in the lipid peroxidation level in all nanoprevention groups, including yeast, tamoxifen, and isoflavones, as opposed to the same normal prevention groups. The significance of apoptosis in tissue homeostasis is shown by the undeniable reality that many tumor genes, like the p53 and ERb tumor suppressor genes, are affected and can be induced by several factors, such as radiation, medications, and contaminants (Yang and Korsmeyer 1996). The Bcl-2 family comprises important genes affecting apoptosis. The proteins encoded by these genes can either promote or suppress apoptosis (Kimmo et al. 1999). Current research, including on the apoptosis function, focuses on a broader understanding of the response and degree of resistance to amelioration.
Data on apoptosis and its role in the progression, prevention, and amelioration of rat breast cancer were summarized and integrated. The natural development of the breast is accounted for by a balance between cell multiplication and apoptosis, and there is clear evidence that tumor progression is not only caused by uncontrolled proliferation but also results from lessened apoptosis. Under different stimuli, the balance between multiplication and apoptosis is invaluable in distinguishing the overall progression or relapse of the tumors. It is therefore conceivable, by assessing apoptosis and its regulation, to delineate the biology of specific tumors at the molecular and biochemical level (Parton 2001).
The results obtained revealed that rat apoptosis-inducing factor was highly elevated in all the investigated nanoformulations, with nanoisoflavone showing the most promising results.
Table 2. The influence of nano- and normal bioactive compounds on apoptosis, 8-OHdG, ErbB-2, plasma estrogen, lipid peroxide, and total antioxidant (prevention experiment); columns include Group and rat apoptosis-inducing factor (pg/ml). **A significant elevation in apoptosis and lipid peroxide levels was observed in the nanoprevention groups (group 1 (yeast), group 2 (tamoxifen), and group 4 (isoflavone)) as opposed to the same normal prevention groups. The findings demonstrated a substantial reduction in the level of 8-OHdG in group 2 (nanotamoxifen), group 4 (nanoisoflavone), and group 3 (nanosilymarin) compared with the same normal prevention groups. On the contrary, a substantial elevation of plasma 8-OHdG was observed in group 1 (nanoyeast) in comparison with the same normal group. The findings demonstrated a substantial reduction in estrogen levels in all nanoprevention groups in comparison with the same normal groups. The ErbB-2 level displayed a large elevation in group 1 (nanoyeast) in comparison with group 5 (normal yeast). The findings demonstrated a substantial decrease in the ErbB-2 level in group 2 (nanotamoxifen) in comparison with group 6 (normal tamoxifen), and in group 4 (nanoisoflavone) compared with group 8 (normal isoflavone). Total antioxidant findings demonstrated a large reduction in group 1 (nanoyeast) relative to group 5 (normal yeast) and in group 4 (nanoisoflavone) in comparison with group 8 (normal isoflavone).
The findings demonstrated a substantial reduction in the estrogen level in all bioactive nanoprevention groups compared with the same bioactive groups in their normal form. Numerous authors have indicated that estrogen is a hormone participating in essential roles in mammary gland development, yet it is also a major risk factor for breast cancer progression (Katchy and Williams 2014). Apart from the nanoyeast group, which showed a high level of 8-OHdG in comparison with the same normal yeast group, the findings for 8-OHdG showed a substantial reduction in all nanoprevention groups. 8-OHdG is an oxidized nucleoside excreted by DNA repair in body fluids (Chapple and Matthews 2007; Chapple 1997). The serum 8-OHdG level has been shown to be a very sensitive and precise marker in the medical diagnosis of rat breast cancer, and serum 8-OHdG levels can be utilized as a noninvasive strategy for the screening process for rat breast cancer. A substantial rise in the ErbB-2 level was also shown. The reduction in the amount of total antioxidants may be due to the corresponding elevation in the level of 8-OHdG in the nanoyeast group only, as it is clear that the findings revealed a reduction in all nanoprevention groups aside from the nanoyeast group, as opposed to the same normal prevention groups. Both nanotamoxifen and nanosilymarin were matched with a reduction in the ErbB-2 level. ErbB-2 (HER-2) has been reported to be commonly over-expressed in breast cancer and is targeted with the drug trastuzumab (Herceptin). As the resistance mechanism has not yet been elucidated, only one-third of women respond to trastuzumab. The preponderance of evidence demonstrates that amplification of the HER-2/neu gene and overexpression of its protein are associated with a detrimental outcome in human breast cancer. In addition, the silymarin group demonstrated a slight reduction in the total antioxidant level, despite the fact that the 8-OHdG level was decreased; interestingly, the yeast group demonstrated a high 8-OHdG level with a reduction in the total antioxidant level. This could be due to the elevated level of lipid peroxide in that group (yeast), which did not show any noticeable shift in the nanosilymarin group. Additionally, it is conceivable that the elevation in the level of ErbB-2 in the nanoyeast group is attributable to the elevation in the level of 8-OHdG and the lessened level of antioxidants in the nanoyeast group relative to the same normal yeast group. In a previous study, yeast was found to be a promising anticancer agent that induces significant in vivo levels of apoptosis in malignant cells. Be that as it may, yeast therapy for breast cancer amelioration does not currently appear to be supervised in restorative clinical trials (Ghoneum et al. 2007). The consequences of the biochemical changes associated with rat breast cancer for prevention in experimental animals have been demonstrated by the comparison between the nano- and normal forms of the bioactive components. Indeed, the nanotamoxifen group showed the best positive results in the level of resistance to rat breast cancer, in comparison with normal tamoxifen, which achieved the highest positive results as indicated by the biochemical evidence of resistance to rat breast cancer, and also as opposed to the biochemical indications of the other nanoparticles (silymarin, yeast, isoflavones). High estrogen levels are a major risk factor for developing hormone-dependent diseases, including cancer of the breast. Tamoxifen, as an anti-estrogen, is perceived to be the first-line endocrine therapy for the prevention and amelioration of human breast cancer (Christinat et al. 2013). The side effects of tamoxifen have been recognized in recent studies. Numerous clinical studies have focused on discovering complementary substances that can synergize with and diminish the side effects of tamoxifen (Yaacob et al. 2014; Dias et al. 2013). The beneficial relationship between tamoxifen and certain nutrients has been determined to reduce the medicinal side effects of tamoxifen amelioration or prevention (Tham et al.
1998). Perhaps it is said that the utilization of tamoxifen in the form of nanoparticles demonstrates positive influences, the main of which, in comparison with the natural form of tamoxifen, raise the incident and actuation of apoptosis mechanism besides minimizing the signs or symptoms of rat breast cancer ErbB-2 and 8-OHdG and minimizing the estrogen level and demonstrated the consequences of high lipid peroxidation level. It's conceivable that transformation of tamoxifen into the form of nanoparticles that allows the molecules to be separated has instigated the generation of free radicals that have just incited and raise the level of apoptosis. In antioxidant sources like isoflavones and silymarin, this might not have been demonstrated. The results were consistent with researches that demonstrated the power of free radicals and their relationship with antioxidants to elevate the progression of apoptosis in light of the programmed death of the diseased cells. Where the cellular fate has been accounted for, it is manipulated both endogenously and exogenously via means of various factors inside the cell, including a plenitude of gene level and defences against free radicals. Free radical species are in charge of regulating numerous processes of progression, differentiation, and death including apoptosis. Besides, numerous antioxidants and antioxidant enzymes tend to be equipped for anticipating apoptosis prompted via a variety of agents (Mates 2000).
The results of the other positive biochemical indicators demonstrate signs of diminished progression of rat breast cancer with nanotamoxifen. The second best biochemical influence and biochemical indicators for the improvement of rat breast cancer were shown by the nanoisoflavone prevention group. Despite the fact that the nanosilymarin group did not indicate activation of the mechanism of programmed cell death, it demonstrated positive indications, with 8-OHdG, ErbB-2, and estrogen levels being reduced. The mechanism of silymarin as an antioxidant could not be reactivated by apoptosis. In this case, the amount of lipid peroxidation did not show a substantial change, and the level of total antioxidants, which may have been consumed in an attempt to create a balance between lipid peroxidation and antioxidants, had diminished. In view of its antioxidant power and free radical scavenging, silymarin has been stated to have cytoprotective activities. Silymarin has been demonstrated to have a growth-inhibitory consequence of cell proliferation suppression and the induction of apoptosis (Ramakrishnan et al. 2009). Likewise, research on polyphenols, which include silymarin, has been viewed as a promising field in the amelioration and prevention of human breast cancer.
To block and postpone the microscopic stages of carcinogenesis, three fundamental methodologies were discovered (Mocanu et al. 2015;De Flora and Ferguson 2005;Flora et al. 2001). A preventive approach which obstructs toxic and mutagenic influences is a key technique, which consequently anticipates tumor initiation and promotion. Amid the early stages of carcinogenesis, the secondary strategy poses anti-cancer potential by various mechanisms, including the supermacy of signal transduction, angiogenesis prevention, antioxidant mechanisms, hormones, and immune modulation, which at last causing cancer progression to be blocked. By controlling cell adhesion molecules, shielding the extracellular matrix (ECM) from degradation, and upregulation genes that block metastasis, the third cancer amelioration and prevention strategy involves obstructing the invasiveness and metastatic functions of a tumor (De Flora and Ferguson 2005;Flora et al. 2001).
Conclusion
• Based on our data, we hypothesize that the utilization of tamoxifen in the form of nanoparticles has investigated positive influences, the most essential of which raise the occurrence and activation of mechanism of apoptosis besides minimizing the signs of ErbB-2 and 8-OHdG breast cancer and minimizing the amount of estrogen relative to the natural form of tamoxifen and demonstrated the results of high lipid peroxidation levels. Transforming tamoxifen and some bioactive compounds into a type of nanoparticles that induces molecular spacing could have induced the generation of free radicals that have already helped to induce and elevate the apoptosis level. • One of the domains of modern needs of several considerable types of researchers and researches is nanotechnology in micronutrients and their relationship to health or disease. Nanotechnology is one of the essential domains that may be used to ameliorate certain disease on a massive scale and mitigate their progression.
|
v3-fos-license
|
2023-07-15T15:32:11.740Z
|
2023-04-20T00:00:00.000
|
259900917
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://jih.uobaghdad.edu.iq/index.php/j/article/download/3004/1870",
"pdf_hash": "73eea91bbe0044a1440fd613f5458445d218994e",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46655",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "1f98a6baa0ead4f07545529c59534be9d84e2f55",
"year": 2023
}
|
pes2o/s2orc
|
Knowledge and Attitude of Using Anabolic Androgenic Steroids Among Male Bodybuilders in Al-Russafa Baghdad Province
Anabolic androgenic steroids are becoming more popular among bodybuilders and people who use gyms to enhance their physiques. However, with the increase in laws prohibiting the sale of these substances without a physician's prescription, the routes of acquisition and administration practices have become more and more dangerous. Anabolic androgenic steroids include many synthetic derivatives of testosterone, which play a significant role in medical treatment. These groups of medications are extensively abused at young ages to increase lean body mass and enhance the quality of athletic performance. It has been described that anabolic steroid use can cause many long- and short-term side effects; this study intended to assess the level of knowledge about AAS and its effects among gym users. Fifty bodybuilders who used the gym routinely and had been using AASs for at least 2 months, and others who used the gym routinely but had never used AAS before, were interviewed to assess their knowledge about the short- and long-term side effects of AAS use. Participants used androgenic hormones for three main reasons: health, social, and personal. The vast majority of the participants obtained the required results by using AAS. The majority of participants were going to the gym five to six times weekly for an hour daily. Enanthate was the main steroid abused by the participants, and the gym coaches were the major source of selling steroids to the participants. The anabolic androgenic steroids had harmful effects and were abused by a large number of young people.
in the sports field, they will try to gain a benefit over their challenger in order to achieve dominance and win the competition [2].
Drugs are used in athletics both for therapeutics and for improvement of performance. Both uses require professional pharmacists to provide information, drug education, and counselling to athletes, parents, coaches, athletic trainers, and the public community at all levels of competition [3].
Sports pharmacy is defined as the training and practice of pharmacists to authorize them to actively participate in anti-doping operations. There has been an enormous need to regulate the use of drugs or acceptable medicines and supplements. Therefore, sports pharmacy, a developing field, has been practiced all around the world. Sports pharmacy controls the utilization of these drugs, either for medicinal purposes or to enhance the quality of performance. Another module of sports pharmacy is doping control [4].
Anabolic androgenic steroid use for performance improvement and shape enhancement is an increasing worry in many countries; however, information, awareness, and understanding of the harmful health effects connected to AAS appear to be diverse and possibly limited [5]. This has implications for the design of culturally suitable health-associated interventions that promote harm reduction and quitting support in the Eastern Mediterranean. We highlight implications for the normalization of this type of drug use among gym goers and athletes [6]. There is developing evidence of possible harmful effects of long-term androgenic steroid use on the brain health of a user [7], and with increased doses and long periods of use, users had a possibility of cognitive defects [8].
Studies in Saudi Arabia, Iraq, Iran, Jordan, Lebanon, Kuwait, the United Arab Emirates, and Pakistan reported that the main sources in these nations were friends, fitness trainers, and coaches. Other ways of obtaining AAS included gym users, training partners, the black market, online sources, veterinary doctors, fitness stores, pharmacists, and physicians [5].
Most Eastern Mediterranean participants were found to be aware of the anabolic effects of AAS, such as increased muscle mass, body weight, bodybuilding effects, and greater muscle power [9][10][11].
The objectives of the study: To detect the level of knowledge of, and attitudes towards, AAS usage among male gym users in Baghdad.
To detect the types of AAS used most often and the duration of use.
To assess the ways of obtaining AAS and the sources of information about its use.
To detect the causes that lead participants to use AAS.
Method
In this cross-sectional questionnaire study, the pharmacist researcher interviewed the gym members who were using AAS at the time of the study, together with those who had never used AAS before, to detect the knowledge of AAS users about its dangers and possible side effects. This study was carried out in two gyms in Baghdad Al-Russafa, and data collection lasted for 3 months, from the first of February 2022 until the end of April 2022.
Inclusion criteria include:
Male, >18 years of age; healthy bodybuilders using the gym; agreement of all the participants in the study. The agreement of the College of Pharmacy, University of Baghdad committees had also been obtained.
Exclusion criteria include:
Female gender; age under 18 years. A special sheet was designed by the research team to match the study goals; the data were collected from all participants of the study regarding their sociodemographic data, comorbidities, lab investigations, medication history, and BMI. The participants were asked about their knowledge of AAS and its current and future side effects; the interview was done by the researcher using phone calls in the Arabic language, and the questionnaire was obtained from a Saudi study [14].
Ethical concerns
This study was permitted by the scientific committee of the College of Pharmacy, University of Baghdad. The approval of all the participants was taken before starting the sample collection. Bodybuilders using anabolic androgenic steroids were advised against substance abuse and were also given full information about the adverse effects. Unfortunately, almost all of them continued to consume steroids afterward.
Statistical analysis
Data were analyzed using SPSS version 21.0. Descriptive statistics (frequencies and percentages) were used to describe the categorical study and outcome variables, with minimum, maximum, and standard deviation for non-categorical variables. Fisher's exact test was used to assess the binary outcome variable (Yes/No).
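As a hedged illustration of the kind of comparison Fisher's exact test supports here (the 2 × 2 counts below are invented placeholders, not the study's data), a SciPy equivalent would be:

from scipy.stats import fisher_exact

# Hypothetical 2x2 table: awareness of AAS adverse effects (Yes/No) by user group.
#              aware   not aware
# AAS users     30        20
# non-users     25         6
table = [[30, 20],
         [25, 6]]

odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")  # p < 0.05 suggests the groups differ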
Results
The participants were young men with an average age of 29.0 (±6.6) years, a BMI of 27.2 (±3.5), and without chronic diseases (98.8%). The vast majority of the participants had a college education (81.5%) in non-medical specialties (91.4%). Approximately 26% of the participants consumed alcohol and 43% were smokers. The majority of participants (76.2%) went to the gym five to six times weekly, for an hour per visit (96.3%). The participants performed four main types of exercise for three main reasons: health, social, and personal. More than three-quarters (82%) of the user group were taking AAS for non-continuous periods. There was no significant (P > 0.05) association between the participants' education level and their awareness of AAS adverse effects.
Discussion
The purpose of this study was to determine how well-informed and aware male gym members were of the health risks associated with AAS misuse. This information may be useful for focusing efforts and modifying laws intended to control the use of AAS from the viewpoint of the healthcare system. The study recruited 81 male gym users from two gyms in Al-Russafa, Baghdad province. The participants were categorized into two groups: 50 users who were taking androgenic anabolic steroids (AAS) and 31 men who were not. The participants were young men with an average age of 29.0 (±6.6) years, a BMI of 27.2 (±3.5), and without chronic diseases (98.8%). The majority of the participants had a college education (81.5%) in non-medical specialties (91.4%). About 26% of the participants consumed alcohol and 43% were smokers (Table 6-1). The majority of participants (76.2%) went to the gym five to six times weekly, for an hour per visit (96.3%). The participants performed four main types of exercise for three chief reasons: health (to improve their health status, with some following physician instructions about using the gym), social reasons (due to the community's influence on body-shape standards), and personal reasons. More than three-quarters (82%) of the AAS user group were taking AAS on a non-continuous schedule (Table 2). Non-continuous dosing refers to the common patterns of androgenic steroid abuse: cycling (regularly taking several doses over a certain period, pausing for a period, then restarting), stacking (mixing oral and injectable steroid dosage forms and adding two or three distinct steroids), pyramiding (gradually increasing the dose or strength of the steroid until a peak is reached, then regularly weaning off to zero), and plateauing (overlapping, alternating, or substituting with another steroid to break the dependency); however, there is no empirical proof that these ways of applying steroids reduce the drugs' harmful effects [15].
More than half (56.8%) of all participants had used supplements, with or without AAS, following the instructions of gym coaches and without any medical check of whether the body needed these supplements. Some participants (19.23.4%) had used growth hormones. The vast majority (94%) of the AAS user group obtained the intended results after using androgenic anabolic steroids. Additionally, one-third of the user group (36%) advised others to use AAS (Table 3). A study in Saudi Arabia similarly found that 77% of participants would still recommend AAS to friends despite self-declared knowledge of AAS side effects [16], indicating that AAS users were fully convinced of its usefulness.
Four types of AAS were abused by the gym users: enanthate was the most common type (50%) while cypionate was the least commonly used (10%) (Figure 6-1). More than half (58%) of the participants had used AAS for three months, and one-third (34%) had used it for longer (4-6 months), in order to reduce the long-term side effects of AAS on the body (Figure 6-2). The gym coach was the most common source of AAS (78%) for the abusers (Figure 6-3), even though, according to chapter three of the Iraqi Pharmaceutical Profession Law, anyone who practices the profession of pharmacy without a license shall be penalized by a fine of not more than 300,000 dinars, a term of imprisonment not exceeding three years, or both.
The sourcing of AAS, and the strong influence on AAS users of the availability of these products in the gym and of coaches and trainers, has also been reported and highlighted in a number of studies [17,18].
Social media (86%) and gym coaches (76%) were the most common sources of information about AAS among gym users (Figure 4). This high percentage reflects the influence of social media on young people, an influence that could be harnessed to raise awareness of AAS side effects. Eighty percent of the AAS users were planning to take them again in the future.
Lack of knowledge about AAS and its side effects is not rare and has also been reported in studies in Australia [19] and Sweden [20]. There was no significant (P > 0.05) association between the participants' education level and their awareness of AAS adverse effects (Table 6-4).
Limitations
The goal of the current study was to shed more light on the factors contributing to an epidemic of AAS use among young people who use gyms for fitness and aesthetic reasons, despite the harmful side effects of these drugs. However, it is important to take the following study limitations into account when interpreting the findings: (1) The research was limited to the Al-Russafa area of Baghdad, whose social and economic demography cannot be extrapolated to the rest of the nation, the region, or other nations.
(2) The study was performed only among male gym members. (3) No information was gathered regarding the study participants' doses or frequency schedules of AAS use.
(4) The prevalence of AAS misuse in particular gyms and the sociodemographic traits of these gyms were not compared.
We think that these limitations should be addressed in other, independent research that investigates whether there is a dose-response relationship between AAS use and its side effects.
Conclusion and future directions
The findings of this study provide convincing evidence of a high lifetime prevalence of AAS use among male gym goers. Improved health policies are urgently needed to slow the growth of AAS use among young adults who use the gym. Since gym owners and coaches or trainers have been identified as among the most significant sources of AAS, these improvements may be concentrated on raising awareness among gym users and, more crucially, among these individuals. We think that stricter regulations should be put in place to prevent gyms and gym trainers from dealing in AAS. Abuse of illegal substances is increasingly recognized as a severe public health issue in Iraq, and the current study's findings suggest that the use of AAS should be a major focus of those efforts.
|
v3-fos-license
|
2021-11-26T00:07:09.119Z
|
2021-10-20T00:00:00.000
|
244604478
|
{
"extfieldsofstudy": [
"Biology"
],
"oa_license": "CCBY",
"oa_status": "GREEN",
"oa_url": "https://www.researchsquare.com/article/rs-981177/latest.pdf",
"pdf_hash": "b64a842b5736f6d48289c2330549a42900f633e9",
"pdf_src": "SpringerNature",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46657",
"s2fieldsofstudy": [
"Environmental Science",
"Biology"
],
"sha1": "44d5353d978bc55f54b65c6146795c7ef658867c",
"year": 2022
}
|
pes2o/s2orc
|
Transcriptomic analyses provide insight into adventitious root formation of Euryodendron excelsum H. T. Chang during ex vitro rooting
Euryodendron excelsum H. T. Chang, a critically endangered species endemic to China, is a source of valuable material for the furniture and construction industries. However, this species has some challenges associated with rooting during in vitro propagation that have yet to be resolved. In this study, we optimized rooting and conducted a transcriptomic analysis to elucidate its molecular mechanism, thereby promoting the practical application of in vitro propagation of E. excelsum and providing technical support for the ecological protection of this rare and endangered species. Results showed that ex vitro rooting produced the highest rooting percentage, 98.33% at 25 days. During ex vitro rooting, endogenous levels of indole-3-acetic acid (IAA) and hydrogen peroxide (H2O2) fluctuated widely at the stage of root primordia formation. Transcriptome analysis revealed multiple differentially expressed genes (DEGs) involved in adventitious root (AR) development. DEGs involved in plant hormone signal transduction, such as genes encoding auxin-induced proteins, auxin-responsive proteins, and auxin transporter-like proteins, and in responses to H2O2, oxidative stress, and abiotic and biotic stimuli, were significantly up- or down-regulated by ex vitro treatment with 1 mM indole-3-butyric acid (IBA). Our results indicate that ex vitro rooting is an effective method to induce ARs from E. excelsum plantlets during micropropagation. DEGs involved in the plant hormone signal transduction pathway played a crucial role in AR formation. H2O2, produced by environmental stimulation, might be related to AR induction through its synergistic action with IBA, ultimately regulating the level of endogenous IAA. Under ex vitro rooting, this synergistic action between environmentally induced H2O2 and IBA played a crucial role in the regulation of AR formation from E. excelsum plantlets during micropropagation.
Introduction
The adventitious root (AR) system is a plant root system that arises from parts of the plant other than the embryonic root (Barlow 1986). ARs, derived from non-root tissues, are the main path by which new plantlets root in vegetative propagation, and are usually generated during normal development or under stress conditions (Steffens and Rasmussen 2016). In vitro propagation via tissue culture has become an important technology in plant conservation strategies given its advantages, such as a high propagation coefficient and freedom from restrictions imposed by season, especially for rare and endangered species (Bhardwaj et al. 2018; Khater and Benbouza 2019; Rameshkumar et al. 2017). However, in some in vitro propagation systems, plants may display rooting-recalcitrance problems, for example Juniperus thurifera L. (Khater and Benbouza 2019), Zeyheria montana Mart. (Cardoso and Teixeira da Silva 2013), Elegia capensis (Burm. f.) Schelpe (Verstraeten and Geelen 2015), and Cariniana legalis (Lerin et al. 2021). Rooting-related problems limit the application of in vitro propagation for plant breeding and conservation efforts. Therefore, AR formation during plant in vitro culture is a top research objective for plant asexual propagation breeders.
Plant growth regulators (PGRs), such as 1-naphthaleneacetic acid (NAA), indole-3-acetic acid (IAA) and indole-3-butyric acid (IBA), are commonly used as AR inducers in in vitro culture, but these tend to show species- and concentration-dependent AR induction efficiency. For Laburnum anagyroides Medic., a low concentration of IAA, IBA or NAA induced ARs normally, but high concentrations induced callus formation in shoot tips and, subsequently, plant death (Timofeeva et al. 2014). NAA at 0.25 or 0.5 mg/L promoted rooting during in vitro culture of Cornus alba L., while IBA had an adverse effect on root growth and even inhibited AR induction at 1.0 mg/L (Ilczuk and Jacygrad 2016). Moreover, in vitro rooting and ex vitro rooting also display some differences in AR formation. In vitro rooting to induce ARs is always performed under aseptic conditions (Barpete et al. 2014; Guo et al. 2019; Nourissier and Monteuuis 2008). In contrast, ex vitro rooting employs unrooted plantlets that are removed from aseptic in vitro culture to induce ARs in an open environment (Revathi et al. 2018; Shekhawat and Manokari 2016). Although the two culture methods are affected by various factors, ex vitro rooting can enhance rooting percentage and survival during plant acclimatization, and reduce limiting factors in micropropagation (Benmahioul et al. 2012; Yan et al. 2010). For example, Ceratonia siliqua L. plantlets treated with 4.8 μM IBA displayed a 46.3% rooting response, forming a fragile root system when rooted in vitro, whereas the induction of ARs from ex vitro shoots treated with 14.4 μM IBA showed a significantly higher rooting percentage (91.7%) and a normal morphological appearance, and plantlets were successfully acclimatized, showing more than 90% survival (Lozzi et al. 2019).
The mechanism of AR induction involves various key genes, proteins, and pathways (Chen et al. 2020a;Qi et al. 2020;Stevens et al. 2018). Most PGRs promote AR development by regulating the level of endogenous IAA, thus genes and pathways related to the biosynthesis and transport of IAA are considered to play a significant role in AR formation . Transcriptome sequencing revealed that candidate genes involved in AR formation of Mangifera indica L. cv. Zihua cotyledon segments were predicted to encode polar auxin transport carriers, auxin-regulated proteins and cell wall remodeling enzymes (Li et al. 2017). In Arabidopsis thaliana, IBA induced AR formation in thin cell layers by conversion into IAA involving nitric oxide activity, and by positive action on IAA transport and biosynthesis (Fattorini et al. 2017). Genes related to the synthesis, transport, metabolism and recognition of plant hormone were involved in the in vitro induction and elongation of ARs in Populus euramericana (Zhang et al. 2019b). However, knowledge of the molecular aspects of adventitious rooting in plants, especially in woody species that are recalcitrant to rooting, remains scanty. Understanding the mechanism of AR formation is of great importance to strategize plant breeding and conservation efforts to maximize the marketable yield and research value, especially of rare and endangered plants.
Euryodendron excelsum H. T. Chang, a monotypic genus endemic to China, is fine-textured and colorful, making it a source of valuable material for the furniture and construction industries (Chang 1963). However, mainly as a result of habitat destruction and deforestation caused by human activity, only a single population of E. excelsum can now be found at Bajia Zhen, Yangchun County, Guangdong Province, in southern China (Shen et al. 2008; Ye et al. 2002). E. excelsum is naturally propagated only by seeds, but seed germination and seedling growth toward adulthood are fragile stages that limit natural recruitment and regeneration (Shen et al. 2009). Based on the categorization of the International Union for Conservation of Nature and Natural Resources (IUCN), E. excelsum has been listed as a critically endangered plant since 1998; it continues to hold this status and faces a high risk of extinction, implying that strategies need to be put forward for the conservation of E. excelsum populations (Barstow 2020).
In our previous study, a micropropagation system for E. excelsum was established by in vitro culture. When treated with either IBA or NAA, in vitro E. excelsum showed a lower rooting percentage in agarized woody plant medium (WPM) (Lloyd and McCown 1980) than in agar-free vermiculite-based WPM after culture for 2 months, and callus formed at the base of stems in these media, hampering the successful transplantation of plantlets (Chen et al. 2020b). We inferred from this that there may be other factors that can stimulate AR formation in E. excelsum when cultured in vitro, or that enhance the induction efficiency of PGRs. Thus, the objectives of this study were to improve the micropropagation system of E. excelsum by optimizing the AR induction conditions, and to reveal the key influencing factors and related genes and processes underlying AR formation by transcriptomic analysis. By better elucidating the mechanism of AR formation of E. excelsum in vitro, research on the biological conservation and genetic engineering of E. excelsum can be promoted and advanced.
Culture of plantlets
The basic micropropagation system for E. excelsum that was previously established (Chen et al. 2020b), was employed in this study. In vitro plantlets were maintained and propagated on WPM supplemented with 4.44 μM BA (Solarbio, Beijing, China) and 0.53 μM NAA (Macklin, Shanghai, China). Single shoots with more than four leaves and two nodes cut from multiple shoots were inoculated on PGR-free WPM for 30 days. During this period, AR was not induced.
Adventitious root induction
For the in vitro rooting treatment, 2 mm of the base of single shoots cultured on PGR-free WPM for 30 days were cut and trimmed shoots were inoculated on WPM supplemented with 0, 0.05, 0.5, and 5 μM NAA, IBA or IAA (Macklin). Shoots in the control group were inoculated on PGR/auxin-free WPM. Ten shoots were placed in each jar, and four jars were prepared for each treatment. Three replicates were performed for each treatment (n = 12 jars; 120 shoots in total).
For the ex vitro rooting treatment, single shoots cultured on PGR-free WPM for 30 days were removed from culture jars, and about 2 mm was trimmed from the base. Trimmed shoots were treated with 0, 1, 2, and 3 mM NAA, IBA or IAA for 10 min, then transferred to plates (5 cm in height; 27 cm in width; 47 cm in length) for raising seedlings supplemented with vermiculite and perlite (v/v, 1:1). Trimmed shoots cultured on PGR/auxin-free WPM served as the control. Forty shoots were planted in each plate, and three replicate plates were prepared for each treatment (n = 3 plates; 120 shoots in total).
Rooting percentage as well as average root number and root length were calculated for each treatment. After one-way analysis of variance (ANOVA), treatment means were compared by Duncan's multiple range test (DMRT) in SPSS Statistics version 20.0 (IBM, New York, USA), and differences between treatments were considered significant at P < 0.05.
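As a minimal sketch of this comparison step, the authors ran ANOVA and DMRT in SPSS; an equivalent one-way ANOVA can be computed in Python with SciPy. The rooting-percentage replicate values below are invented for illustration, and Duncan's test itself is not part of SciPy.

```python
from scipy.stats import f_oneway

# Hypothetical rooting percentages for three replicate plates per treatment
control = [20.0, 17.5, 22.5]
iba_1mM = [97.5, 100.0, 97.5]
iaa_1mM = [80.0, 85.0, 82.5]

f_stat, p_value = f_oneway(control, iba_1mM, iaa_1mM)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
# If p < 0.05, a post hoc multiple-range test (DMRT in the paper, run in SPSS)
# would then be used to group the treatment means.
```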
Histological analysis
The base of shoots (0.5-1.0 cm) was collected at 0, 2, 4, 6, 8, 10 and 12 days after the optimum treatment method under ex vitro rooting and fixed for 24 h in formalin/acetic acid/alcohol at 25 ± 2 °C. At least 15 bases were collected for each time point. Fixed material was dehydrated in a 70-100% alcohol dehydration series followed by infiltration with molten paraffin (Mackin), and embedded in paraffin wax. Sections (8-10 μm thick) were made with a rotary microtome (KEDEE, Zhejiang, China) and stained in 0.02-0.05% toluidine blue (Mackin). Sections were viewed with a Nikon Eclipse E200 microscope (Nikon, Tokyo, Japan) and micrographs were captured using a HQimage C630 digital camera (Hengqiao, Hangzhou, China).
Determination of endogenous IAA and hydrogen peroxide (H 2 O 2 ) content
To analyze IAA and H 2 O 2 content, the same method and growth conditions were employed as for histological analysis. Material was stored at − 80 ℃. Three biological replicates of 10 cut stem bases were harvested as 0.1 g fresh weight (FW) to assess endogenous IAA and H 2 O 2 content, according to the instructions of an IAA Enzyme Linked Immunosorbent Assay kit (Dogesce, Beijing, China) and Hydrogen Peroxide Assay kit (Solarbio, Beijing, China) . After one-way ANOVA, treatment means were assessed by DMRT in SPSS Statistics version 20.0 and were considered to be significantly different between the designated treatments at P < 0.05.
Isolation of RNA and cDNA library construction
The same samples used to analyze IAA and H 2 O 2 content were employed for RNA-seq analysis. Samples collected from 0, 2, 4, 6, 8, 10, and 12 days were marked as ER0, ER2, ER4, ER6, ER8, ER10, ER12, respectively, and stored at − 80 ℃. The Column Plant RNA OUT Extraction kit (Tiandz, Beijing, China) was used to isolate total RNA from each sample, using the methods suggested by the manufacturer. The concentration and quality of all RNA samples was examined by agarose electrophoresis on an Agilent 2100 bioanalyzer (Agilent Technologies, Palo Alto, CA, USA). Sequencing libraries were generated using the TruSeq RNA Sample Preparation Kit (Illumina, San Diego, CA, USA). Magnetic beads with oligo (dT) were used to purify mRNA, which was fragmented into short fragments (200-300 bp). Cleaved mRNA fragments were primed with a random hexamer primer for first-strand and second-strand cDNA synthesis. After purification, end repair, and ligation to sequencing adapters, 21 cDNA libraries of three biological replicates for each treatment were prepared and sequenced using the Illumina Novaseq 6000 platform by Personal Biotechnology Co., Ltd. (Shanghai, China).
Quantitative reverse transcription-polymerase chain reaction (qRT-PCR) analysis
For qRT-PCR analysis, ten candidate DEGs related to AR formation were selected to validate the transcriptomic data: six DEGs (TRINITY_DN11736_c0_g1, TRINITY_DN14475_c0_g1, TRINITY_DN299_c1_g1, TRINITY_DN4858_c0_g3, TRINITY_DN5423_c0_g2 and TRINITY_DN5557_c0_g2) were identified with log2(FC) > 5; three DEGs (TRINITY_DN5677_c0_g1, TRINITY_DN2748_c0_g1 and TRINITY_DN5677_c0_g2) were enriched in "plant hormone signal transduction"; and one DEG (TRINITY_DN7281_c1_g1) was enriched in the "tryptophan metabolism" pathway. qRT-PCR was performed with the LightCycler 480 System (Roche Diagnostics, Mannheim, Germany) using PerfectStart Green qPCR Supermix (TransGen Biotech, Beijing, China). E. excelsum actin was used as the internal control, and the 2^(−ΔΔCt) method (Livak and Schmittgen 2001) was used to analyze the differential expression of the candidate DEGs. Gene-specific primers were designed with Primer Premier 5.0 and are listed in Table S1. Three biological replicates and three technical replicates were performed for each candidate gene.
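To make the 2^(−ΔΔCt) calculation explicit, the short Python sketch below implements the Livak and Schmittgen relative-expression formula; the Ct values are illustrative only, with actin as the reference gene and the 0-day (ER0) sample as the calibrator.

```python
def fold_change_ddct(ct_target_treated, ct_ref_treated,
                     ct_target_control, ct_ref_control):
    """Relative expression by the 2^-ΔΔCt method (Livak and Schmittgen 2001)."""
    d_ct_treated = ct_target_treated - ct_ref_treated   # normalize to reference gene (actin)
    d_ct_control = ct_target_control - ct_ref_control   # same normalization for the calibrator (ER0)
    dd_ct = d_ct_treated - d_ct_control
    return 2.0 ** (-dd_ct)

# Illustrative Ct values (not measured data from this study)
print(fold_change_ddct(22.1, 18.0, 25.3, 18.2))  # fold change relative to ER0
```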
Adventitious root formation during in vitro and ex vitro rooting
During in vitro rooting, compared with the control group, NAA did not induce ARs whereas IBA or IAA could. High concentrations of IBA and IAA inhibited rooting percentage (Fig. 1a), root number (Fig. 1c) and root length (Fig. 1e). Highest rooting percentage (72.50%) was obtained by 0.5 μM IAA at 60 d after treatment. During ex vitro rooting, NAA, IBA and IAA significantly increased rooting percentage (Fig. 1b), root number (Fig. 1d) and root length (Fig. 1f) compared with the control group. A low concentration of IBA (1 mM) most effectively induced rooting, resulting in the highest rooting percentage (98.33%) and root length (2.72 cm) at 25 d after treatment.
Ex vitro rooting induced the highest rooting percentage (98.33%) at 25 days, while the highest rooting percentage during in vitro rooting was 72.50% at 60 days. Thus, ex vitro rooting induced ARs from E. excelsum plantlets earlier (faster) than in vitro rooting. The samples collected from ex vitro rooting were used for the subsequent analyses. Eight days after the 1 mM IBA ex vitro rooting treatment, AR primordia were evident, and ARs emerged from the epidermis after 10 days (Fig. 2a, b). ARs elongated, the rooting percentage was almost 100% by 25 days, and plantlet survival reached 100%.
IAA and H 2 O 2 content analysis
In the 1 mM IBA treatment during ex vitro rooting, IAA content in stem bases increased gradually from 0 to 8 days, then dropped at 10 and 12 days. A sharp increase in IAA content was observed at 8 days (Fig. 3a). The trend of H 2 O 2 content was different from that of IAA content (Fig. 3b). H 2 O 2 accumulated rapidly after treatment, peaked at 2 days, then sharply decreased at 8 days. The highest content of IAA and lowest content of H 2 O 2 at 8 days corresponded to the timing of AR primordia formation.
De novo assembly and sequence analysis
To identify genes involved in AR induction of E. excelsum plantlets during ex vitro rooting, 21 cDNA libraries were prepared from three repeat mRNA samples collected from 0 (ER0), 2 (ER2), 4 (ER4), 6 (ER6), 8 (ER8), 10 (ER10) and 12 (ER12) days after 1 mM IBA treatment (Table 1). The total number of raw reads produced for each library ranged from 42,845,192 to 52,575,518 with Q20 > 97.48% and Q30 > 93.42%. After filtering, the clean reads per library ranged from 39,721,158 to 49,142,018 with the percentage of clean reads > 91.07% (Table 1). Trinity software v2.5.1 was used to assemble clean reads and obtain transcripts and unigenes for subsequent analysis. The quality and length distribution of transcripts and unigenes are shown in Table 2 and Fig. S1, respectively.
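Table 2 reports assembly quality metrics such as N50. As a brief illustration of how such a statistic is derived from the assembled unigene lengths, the following Python sketch (using toy lengths, not the actual Trinity output) computes N50 by accumulating sorted lengths until half of the total assembly length is reached.

```python
def n50(lengths):
    """N50: sort sequence lengths in descending order and accumulate them;
    the length of the sequence that brings the running total to >= 50% of the
    total assembly length is the N50."""
    total = sum(lengths)
    running = 0
    for length in sorted(lengths, reverse=True):
        running += length
        if running >= total / 2:
            return length
    return 0

# Toy unigene lengths (bp); real values would come from the Trinity assembly
print(n50([2000, 1500, 1200, 900, 600, 300]))  # -> 1500
```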
The unigenes were processed against six databases to find best hits by BLAST with E-values < 10^-5, and putative functions of the sequences were inferred and assigned. A total of 52,188 (40.15%) unigenes were matched to known genes in the NR database, 23,159 (17.82%) sequences to Pfam, and 37,827 (29.10%) sequences to the Swiss-Prot database (Table 3). The NR database queries revealed that the annotated unigenes were assigned, with a best score, to sequences from the top seven species (Fig. 4): Vitis vinifera (21.76%), Theobroma cacao (4.34%), Coffea canephora (4.01%), Nelumbo nucifera (3.88%), Sesamum indicum (3.13%), Ziziphus jujuba (2.67%) and Manihot esculenta (2.25%).
[Fig. 1 caption: Adventitious root induction of Euryodendron excelsum shoots during in vitro rooting at 60 days (a, rooting percentage; c, root number; e, root length) and ex vitro rooting at 25 days (b, rooting percentage; d, root number; f, root length) after treatment. Bars indicate means ± SE. Different letters indicate statistically significant differences based on Duncan's multiple range test (P < 0.05) between the designated treatments. Forty shoots were prepared for each treatment, with three replicates per treatment.]
The annotation of GO terms revealed that 24,939 unigenes (19.19%) were assigned to biological processes, molecular functions, and cellular components (Fig. S2). Most annotated unigenes in biological processes were involved in "cellular process", "metabolic process", and "single-organism process". In the cellular component category, most annotated unigenes were annotated as "cell", "cell part" and "membrane". In the molecular functions, most annotated unigenes were categorized as "binding", "catalytic activity" and "transporter activity".
A total of 22,160 unigenes (17.05%) and 33 pathways were assigned based on metabolism, genetic information processing, environmental information processing, cellular processes and organismal systems pathway (Fig. S3). On the basis of KEGG analysis, most unigenes were annotated into "carbohydrate metabolism" of metabolism, "translation" of genetic information processing, "signal transduction" of environmental information processing, "transport and catabolism" of cellular processes, and "endocrine system" of organismal system.
The possible functions of unigenes were predicted and classified by alignment to the eggNOG database. A total of 50,632 unigenes (38.95%) were distributed into 25 categories (Fig. S4). Among them, the NOG category "general function prediction only" represented the largest group, followed by "function unknown", "signal transduction mechanisms", and "posttranslational modification, protein turnover, chaperones".
DEGs in response to IBA-induced ex vitro rooting
Hierarchical clustering was used to analyze the expression patterns of DEGs in ER0, ER2, ER4, ER6, ER8, ER10 and ER12 libraries with three biological replicates. These DEGs were divided into nine main clusters (Fig. S5). DEGs in cluster 1, 2, 3 and 4 always showed high expression in the ER0 library with different trends in the other five libraries. The remaining five clusters represented DEGs with high expression levels induced by IBA treatment. The highest number of up-regulated genes was observed in the ER2 library (7364) and fewest in the ER8 library (5649) (Fig. 5a). Upset plot diagram analysis showed that 4635 unigenes maintained differential expression after IBA-induced treatment from 2 to 12 days (Fig. 5b).
GO enrichment analysis
According to GO enrichment analysis, the degree of enrichment was measured based on the rich factor (higher rich factor represents greater enrichment), the FDR value (range from 0 to 1; a score close to 0 indicates more significant enrichment) and the number of genes enriched to a GO term. The significant enrichment GO terms of DEGs showed a few differences in the six libraries (Fig. S6). In the ER2 library, the significantly enriched terms were "monooxygenase activity", "response to auxin", "oxidoreductase activity, acting on paired donors, with incorporation or reduction of molecular oxygen" and "oxidoreductase activity, acting on paired donors, with incorporation or reduction of molecular oxygen, NAD(P)H as one donor, and incorporation of one atom of oxygen". More enrichment terms were categorized into biological processes (BP) and molecular functions (MF). In the ER4 library, more enrichment terms were categorized into cellular component (CC) and MF, and the significantly enriched terms were the same as the ER2 library, but the term "response to auxin" was replaced by "photosynthesis, light reaction". In the ER6 and ER8 libraries, more enrichment terms were categorized into CC and MF, and "oxidoreductase activity, acting on paired donors, with incorporation or reduction of molecular oxygen, NAD(P)H as one donor, and incorporation of one atom of oxygen", "monooxygenase activity" and "photosystem" were listed as significantly enriched terms. In the ER10 library, more enrichment terms were categorized into CC and MF, and "photosynthesis, light harvesting", "chlorophyll binding" and "photosystem I" represented the main significantly enriched terms. In the ER12 library, more enrichment terms were categorized into BP and MF, and the significantly enriched terms were "hydrogen peroxide metabolic process", "hydrogen peroxide catabolic process" and "phenylpropanoid metabolic process".
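The enrichment statistics described here (rich factor and FDR-adjusted significance) are typically based on a hypergeometric over-representation test; the sketch below shows one such calculation in Python with SciPy, using hypothetical counts rather than the study's actual GO term totals.

```python
from scipy.stats import hypergeom

# Hypothetical numbers for one GO term (not the study's values):
M = 24939   # background: GO-annotated unigenes
n = 150     # background unigenes annotated with this GO term
N = 5000    # DEGs tested in one comparison
k = 60      # DEGs annotated with this GO term

# Over-representation: P(X >= k) under the hypergeometric distribution
p_value = hypergeom.sf(k - 1, M, n, N)
rich_factor = k / n   # enrichment degree used in the paper's plots
print(f"rich factor = {rich_factor:.2f}, p = {p_value:.2e}")
# In practice, p-values across all GO terms are then adjusted
# (e.g., Benjamini-Hochberg) to give the FDR values reported in the figures.
```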
Besides "hydrogen peroxide metabolic process" and "hydrogen peroxide catabolic process", several terms related to adversity stress were also identified in the GO enrichment analysis (Fig. 6). The terms "hydrogen peroxide metabolic process" and "hydrogen peroxide catabolic process" shared the same number and type of DEGs, most of which, including DEGs for the "response to oxidative stress" term, were up-regulated at 8 days after ex vitro treatment with IBA (Table S2). Most DEGs were associated with the term "response to abiotic stimulus", followed by "response to oxidative stress" and "response to biotic stimulus", while the fewest DEGs were associated with the term "response to hydrogen peroxide" (Fig. 6a). Most of the DEGs in the terms "response to abiotic stimulus" and "response to biotic stimulus" were up-regulated throughout the entire process of AR formation (Table S2). The terms "hydrogen peroxide metabolic process", "hydrogen peroxide catabolic process" and "response to oxidative stress" encompassed 43 DEGs simultaneously (Fig. 6b), and these were mainly identified as genes related to cationic peroxidase and peroxidase (Table S2).
[Fig. 3 caption: Different letters indicate statistically significant differences based on Duncan's multiple range test (P < 0.05) between the designated treatments. Three biological replicates of 10 cut stem bases (0.1 g fresh weight) were used to assess endogenous IAA and H2O2 content.]
[Table 1 notes: Reads (No.), total number of reads; Clean reads (No.), number of high-quality sequence reads; Bases (bp), total number of bases; Clean data, number of high-quality sequence bases; N (%), proportion of unknown nucleotides in clean reads; Q20 (%), percentage of bases with a base-calling accuracy above 99.0%; Q30 (%), percentage of bases with a base-calling accuracy above 99.9%.]
[Table 2 notes (quality of transcripts and unigenes in Euryodendron excelsum): N50 (bp), with sequences arranged from longest to shortest and lengths summed in that order, the length of the last sequence added when the cumulative length reaches 50% of the total; N90 (bp), the same for 90% of the total length; N50/N90 Sequence No., the total number of sequences longer than N50/N90; GC%, the proportion of guanine and cytosine nucleotides among total nucleotides.]
The 12 up-regulated genes related to auxin-induced proteins, identified as AX6B, AX10A, A10A5, AX15A, AUX22, AUX22D, and AUX28, were sharply enriched in the "plant hormone signal transduction" pathway (Fig. 8a). Most of those genes were also differentially expressed at an early stage (2, 4 days) after IBA treatment. Among these genes, TRINITY_DN39834_c0_g1 and TRINITY_DN31248_c0_g1 were extremely highly differentially expressed with log2(FC) > 8 in the ER0 vs ER2 and ER0 vs ER4 comparisons. Four DEGs were up-regulated with log2(FC) > 1 by IBA treatment under ex vitro rooting in the ER0 vs ER2, ER0 vs ER4, ER0 vs ER6 and ER0 vs ER8 comparisons, while TRINITY_DN5677_c0_g1 maintained up-regulated expression at all stages.
[Fig. 4 caption: Species distribution of the top BLAST hits against the NR database in Euryodendron excelsum. Pie charts were generated by R software.]
In addition, five up-regulated LAX genes (Fig. 8b) were significantly enriched in the "plant hormone signal transduction" pathway. Only TRINITY_DN9654_c0_g1 was up-regulated at all stages while the other four LAX genes were up-regulated at ER10 and ER12, except for TRIN-ITY_DN380_c4_g1, which was up-regulated at ER6.
qRT-PCR analysis of gene expression
To further validate the results from the RNA-seq data, 10 candidate DEGs related to adventitious root formation were selected for qRT-PCR analysis of E. excelsum samples collected 0, 2, 4, 6, 8, 10 and 12 days after 1 mM IBA treatment during ex vitro rooting. At the seven time points, the expression trends of the unigenes from qRT-PCR and RNA-seq analysis were largely consistent (Fig. 9). These results demonstrate that the transcriptome data accurately reflect the ex vitro response of IBA-induced AR formation in E. excelsum plantlets.
[Fig. 5 caption: Upset plot diagram generated by R software. a Number of up- and down-regulated DEGs in the 2 (ER2), 4 (ER4), 6 (ER6), 8 (ER8), 10 (ER10) and 12 (ER12) day treatments compared with 0 days (ER0); the x-axis represents the comparison group and the y-axis the number of DEGs. b Upset plot analysis of the DEGs; connections between points in vertical lines on the x-axis represent intersections between the corresponding data sets, and the y-axis represents the number of DEGs in each intersection.]
Discussion
AR development is a vital step in plant vegetative propagation, such as in vitro propagation and cuttings. Rooting recalcitrance is a critical factor limiting the application and further development of vegetative propagation (Diaz-Sala 2020; Stevens et al. 2018). IBA is the plant hormone most frequently used for clonal propagation in horticulture and forestry. Although IAA is the primary native auxin in plants, IBA is more stable and effective in promoting ARs (Ludwig-Müller et al. 2005; Quan et al. 2017; Rout 2006). It is necessary to screen PGRs to find the optimal species-concentration combination for AR induction during in vitro culture. In this study, IBA and IAA treatment significantly promoted AR formation in E. excelsum, especially during ex vitro rooting. Furthermore, ex vitro rooting was more suitable for E. excelsum plantlets, with a higher rooting percentage and earlier rooting than in vitro rooting. Ex vitro rooting of shoots has also been applied to many difficult-to-root woody plant species, such as pistachio (Benmahioul et al. 2012), Dalbergia sissoo Roxb. (Vibha et al. 2014) and Bauhinia racemosa Lam. (Sharma et al. 2017). In general, the chances of root damage during transplantation to substrates are lower during ex vitro rooting, and plantlets tend to be more vigorous, allowing them to cope with environmental stresses during hardening (Arya et al. 2003; Vengadesan and Pijut 2009). Thus, ex vitro rooting is an obvious choice for AR induction during the micropropagation of woody species, with further improvement in the choice of PGRs, substrates and other factors.
E. excelsum plantlets experienced a radical environmental change from in vitro aseptic conditions to open ex vitro rooting conditions, which may constitute an adversity stress. In the annotation and enrichment analysis of GO terms, we identified multiple DEGs involved in H2O2-related biological activities, oxidative stress, and abiotic and biotic stimuli. AR formation is also a stress response of plants under adversity stress, and plays a key role in the adaptation of plants to abiotic and biotic stresses (Bellini et al. 2014; Steffens and Rasmussen 2016). The external environmental change may stimulate oxidative damage and increase the production of reactive oxygen species in plants (You and Chan 2015). H2O2 is viewed mainly as a type of reactive oxygen species and a signaling messenger of many biological processes in plants, such as fruit growth and development (Khandaker et al. 2012), leaf senescence (Lin et al. 2019), stomatal closure (Zhang et al. 2019a), and root growth (Xiong et al. 2015). H2O2 and IBA may also act synergistically to regulate adventitious rooting, dependent on the auxin pathway, in marigold explants (Liao et al. 2011). The exogenous application of H2O2 to cucumber plants significantly increased the emergence of ARs (Li et al. 2016b). In this study, wide fluctuations in endogenous IAA and H2O2 content in E. excelsum plantlets were observed at the stage of root primordia formation, and most DEGs involved in the significantly enriched pathway of "plant hormone signal transduction" were up-regulated at the stage corresponding to the timing of H2O2 accumulation. These results indicate that adversity stress may have a positive effect on AR induction of E. excelsum plantlets through the synergistic action of H2O2 and IBA.
AR formation involves a series of responses by genes, proteins and metabolites. Multiple biological activities and pathways have specific roles during AR development (de Almeida et al. 2020; Wei et al. 2014). In mungbean seedlings, KEGG pathway enrichment during transcriptomic analysis showed that ribosome biogenesis, plant hormone signal transduction, pentose and glucuronate interconversions, photosynthesis, phenylpropanoid biosynthesis, sesquiterpenoid and flavonoid biosynthesis, and phenylalanine metabolism were the pathways most highly regulated by IBA-induced AR formation, indicating their potential contribution to adventitious rooting (Li et al. 2016a). For apple rootstocks, the most heavily enriched KEGG pathways involved in AR formation were metabolic pathways, biosynthesis of secondary metabolites, plant hormone signal transduction, phenylpropanoid biosynthesis, phenylalanine metabolism, and others (Li et al. 2018). In sugarcane shoots, DEGs associated with plant hormone signaling, flavonoid and phenylpropanoid biosynthesis, the cell cycle, cell wall modification, and transcription factors were involved in AR formation. During AR development of E. excelsum, we found that more DEGs were enriched in the "plant hormone signal transduction" and "phenylpropanoid biosynthesis" pathways, similar to a number of previous studies. Therefore, we conclude that these two pathways have a vital influence on AR formation in E. excelsum plants.
IAA is the most abundant natural auxin, and endogenous IAA is closely related to the development of ARs in plants. The conversion of exogenous hormones to endogenous auxin and the synthesis of auxin are key factors regulating AR development (Olatunji et al. 2017). Tissue that produces ARs requires high levels of auxin, and the enrichment of high concentrations of auxin depends on polar auxin transport (Ahkami et al. 2013;Garrido et al. 2002). Thus, genes related to the synthesis, signaling and polar transport of auxin, like AUX, LAX, and PIN, are closely related to plant adventitious rooting (Druege et al. 2016). For example, auxin influx carriers MiAUX3 and MiAUX4 might play important roles during AR formation in mango cotyledon segments, and the expression levels of MiAUX3 and MiAUX4 resulted in a significant promotive effect of IBA on adventitious rooting (Li et al. 2012). Papaya plantlets not exposed to IBA could not form ARs and displayed a low expression of all auxin transporter genes in stem base tissues whereas IBA-treated plants were able to produce ARs and showed significantly increased expression of most auxin transporter genes, especially CpLAX3 and CpPIN2 (Estrella-Maldonado et al. 2016). In E. excelsum, DEGs for AUX and LAX, which were significantly enriched in plant hormone signal transduction, showed a high fold change during AR development. This implies that the expression patterns of those genes were linked to AR induction from E. excelsum plantlets. AUX/IAA protein is an early auxin response protein that always participates in the auxin signaling pathway by interacting with auxin response factor (ARF) protein or other genes (Salehin et al. 2015). During AR formation in petunia cuttings, the expression of genes of the Aux/IAA family showed strong temporal variation, supporting their important role in the induction and transition to subsequent root formation phases (Druege et al. 2014). The auxin receptor (TRANSPORT INHIBITOR1) TIR1 homolog gene, PagFBL1, interacted strongly with both PagIAA28.1 and PagIAA28.2 in the presence of NAA to regulate AR induction in poplar stem segments (Shu et al. 2019). In Arabidopsis thaliana, Aux/IAA proteins, IAA6, IAA9, and IAA17, interacted with ARF6 and/or ARF8 and likely repressed their activity in AR development, and complexed with TIR1 and (AUXIN-SIGNALLING F-BOX) AFB2 to form specific sensing to modulate jasmonic acid homeostasis and control AR initiation . In this study, 13 IAA genes were significantly enriched in the plant hormone signal transduction pathway, suggesting a significant relationship between AUX/IAA and AR formation in E. excelsum. The mechanism and interaction with other IAA genes would need to be revealed in future research.
SAUR genes, the largest family of early auxin response genes in plants, mediate the regulation of several aspects of plant growth and development (Ren and Gray 2015). SAUR proteins show positive or negative effects on primary, lateral and adventitious root development. In A. thaliana, plants overexpressing SAUR41 exhibited increased primary root growth and a higher number of lateral roots (Kong et al. 2013). AtSAUR15 acts downstream of the auxin response factors ARF6,8 and ARF7,19 to regulate auxin signaling-mediated lateral root and AR formation, and plants overexpressing AtSAUR15 exhibit more lateral roots and ARs (Yin et al. 2020). In contrast to AtSAUR41 and AtSAUR15, overexpression of OsSAUR39 in rice resulted in reduced root elongation and lateral root development (Kant et al. 2009). SAUR proteins may thus display a species- or type-dependent positive function in AR formation. In E. excelsum, three SAUR genes maintained up-regulated expression after IBA-induced treatment, indicating a close association with AR formation.
We also found several highly up-regulated GH3 genes at all stages of AR formation in E. excelsum. GH3 proteins are also early auxin response proteins; they play a crucial role in conjugating IAA to amino acids and are critical in maintaining auxin homeostasis (Brunoni et al. 2020). Three GH3 genes, GH3.3, GH3.5, and GH3.6, were required for fine-tuning AR initiation in A. thaliana hypocotyls (Gutierrez et al. 2012). In cucumber hypocotyls, salicylic acid plays an inducible role in AR formation through competitive inhibition of the auxin conjugation enzyme CsGH3.5, and salicylic acid-induced IAA accumulation was also associated with the enhanced expression of CsGH3.5 (Dong et al. 2020). In apple plants, overexpression of MsGH3.5 significantly reduced the content of free IAA and increased the content of some IAA-amino acid conjugates, and MsGH3.5-overexpressing lines produced fewer ARs than the control (Zhao et al. 2020). These results demonstrate that GH3 proteins are intricately involved in AR development, but do not only perform a positive role.
Conclusion
Here, we confirmed that ex vitro rooting is an obvious choice for AR formation during the micropropagation of E. excelsum plantlets. DEGs enriched in the plant hormone signal transduction pathway played a crucial role in AR formation. H2O2 produced by environmental stimulation might be related to AR induction in E. excelsum ex vitro through its synergistic action with IBA, ultimately regulating the level of endogenous IAA. The knowledge gained from this study will help researchers understand the molecular traits of IBA-based regulation of adventitious rooting of E. excelsum plantlets. These results will provide technical support for the ecological protection of this rare and endangered species and are important for research and commercial applications aimed at overcoming rooting recalcitrance in plant species of economic value, in difficult-to-root woody plants, or in rare or endangered plants.
[Fig. 9 caption: Analysis of the fold changes of 10 candidate genes in Euryodendron excelsum determined by RNA-seq and qRT-PCR. Bars indicate means ± SD. The x-axis represents the time point after IBA treatment under ex vitro rooting and the y-axis represents the log2(fold change). Graphs were generated by Microsoft Excel and Adobe Photoshop CC 2018.]
|
v3-fos-license
|
2021-07-24T05:27:33.091Z
|
2021-05-19T00:00:00.000
|
236196525
|
{
"extfieldsofstudy": [
"Medicine",
"Materials Science"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/advs.202100552",
"pdf_hash": "0c3e5bdbfe2e041339630ee5ff5dab0bc7c5852d",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46658",
"s2fieldsofstudy": [
"Engineering"
],
"sha1": "0c3e5bdbfe2e041339630ee5ff5dab0bc7c5852d",
"year": 2021
}
|
pes2o/s2orc
|
A Review of Integrated Systems Based on Perovskite Solar Cells and Energy Storage Units: Fundamental, Progresses, Challenges, and Perspectives
Abstract With the remarkable progress of photovoltaic technology, next-generation perovskite solar cells (PSCs) have drawn significant attention from both industry and the academic community for sustainable energy production. The single-junction-cell power conversion efficiency (PCE) of PSCs has to date reached up to 25.2%, which is competitive with that of commercial silicon-based solar cells. Currently, solar cells are considered as individual devices for energy conversion, while a series connection with an energy storage device would largely undermine the energy utilization efficiency and peak power output of the entire system. To substantially address this critical issue, advanced technology based on photovoltaic energy conversion-storage integration appears to be a promising strategy to achieve this goal. However, there are still great challenges in the integration and engineering between energy harvesting and storage devices. In this review, the state-of-the-art of representative integrated energy conversion-storage systems is initially summarized. The key parameters, including configuration design and integration strategies, are subsequently analyzed. According to recent progress, efforts toward addressing the current challenges and critical issues are highlighted, with the expectation of achieving practical integrated energy conversion-storage systems in the future.
Introduction
Due to the resource shortage of fossil fuels and the environmental crisis caused by CO2 and other greenhouse gas emissions, the global demand for green, sustainable energy resources has attracted increasing attention. Current oil resources can only support exploitation for about 50 years. [1] According to the statistics, global energy consumption is estimated to reach approximately 27 TW by 2040. [2] Although improving energy efficiency and conservation is beneficial in alleviating the energy crisis, investment in sustainable clean energy resources is essential to the implementation and update of the global energy strategy.
As a typical form of solar energy utilization, sunlight is an essential renewable energy resource. In recent years, solar energy has played a critical role in water splitting, organic contaminant decomposition, energy conversion, and storage. [3] Additionally, the development of solar cells capable of converting solar energy into electricity is a direct strategy for utilizing this energy resource. In the past several decades, great efforts have been made to promote the stability and safety of solar cells. At present, silicon-based solar cells, involving monocrystalline silicon, [4,5] polycrystalline silicon, [5,6] and amorphous silicon thin-film solar cells, are the dominant products in the market. [7] Nevertheless, there are still many limiting factors, including high energy consumption, large expenses, and limited bandgap adjustability, and even the theoretical power conversion efficiency (PCE) of a single-junction cell is only 29.1-29.4%. [8] In addition, thin-film solar cells based on alloys or compounds were also extensively investigated in early studies, such as Sb2Se3, [9] CdTe, [10] GaAs, [11] CuInSe2, etc., [12] while the practical application of these materials is still restricted due to their toxicity. Moreover, dye-sensitized solar cells (DSSCs) and organic compound solar cells show lower PCE (<14.3% for the former and 16% for the latter) than Si-based solar cells. [13,14] Thus, next-generation solar cells are required to be low-cost, high-efficiency, and environmentally benign. In recent years, perovskite solar cells (PSCs) have attracted great attention as a promising candidate due to their unique advantages. i) Different from DSSCs, solid electrolytes can be employed in PSCs, which effectively overcomes challenges such as electrolyte volatilization, electrolyte leakage, and encapsulation difficulty. ii) The Shockley-Queisser (S-Q) theoretical prediction suggests that the PCE can be as high as 30%. [15] iii) The raw materials of PSCs are mostly in liquid (solution) form, which can be easily used to prepare large-area, low-cost, and environment-friendly flexible cells and devices. [16] iv) PSCs possess desirable features, including a tunable bandgap, high optical absorption coefficient, low exciton binding energy, balanced carrier mobility, and long photocarrier lifetime. [17] However, solar cells only possess the ability to convert sunlight into electricity, while the converted energy cannot be harvested or stored. Therefore, it is necessary to exploit high-performance integrated energy conversion-storage systems to meet the high demand for an uninterrupted energy supply. Such an integrated system is defined as the combination of an energy conversion unit (solar cells) and a storage unit (metal-ion batteries and supercapacitors). Notably, the overall photoelectric conversion and storage efficiency is an important indicator, which is substantially related to the PCE of the solar cells. Although integrated power packs based on tandem DSSCs and energy storage devices (Li-ion batteries, LIBs for short, and supercapacitors) have been fabricated, the overall photoelectric conversion and storage efficiency is still unsatisfactory due to the low PCE of the DSSC module. [18] Therefore, PSCs with higher efficiency exhibit greater potential as the energy conversion unit in the integrated system.
To better understand the current state and challenges of integrated energy conversion-storage systems, in this review the integration of PSCs and energy storage devices is discussed and evaluated. First, the fundamentals of PSCs are summarized, including operation principles, key parameters, critical problems, and challenges. As the critical support, design and fabrication techniques for the realization of integrated energy conversion-storage systems based on PSCs are specifically analyzed. In addition, the currently reported conversion systems are discussed with consideration of various energy storage units, such as PSC-LIBs, PSC-supercapacitors, and PSCs with other types of energy storage devices. Finally, the challenges and future perspectives of conversion systems are highlighted, with the expectation of paving the pathway from laboratory to industry.
Developing Demands
Recently, smart consumer electronics, electric vehicles, and smart grids have been widely adopted in the market, and rechargeable batteries are considered the key energy storage devices in these products. In smart electronic devices, the capacity and energy density of batteries are still limited. Currently, the electrical power for charging rechargeable batteries mainly comes from the conversion of fossil energy. By contrast, electrical power from solar energy conversion offers a green, sustainable approach for battery charging, given the high power density of 100 mW cm−2 available from outdoor sunlight. On the other hand, electric vehicles assembled with power LIBs have activated a booming market, while the generated grid electricity comes mainly from coal and fossil fuels, with the emission of unwanted CO2. In addition, the requirement for a wide distribution of charging stations is still a challenge for fulfilling the large-scale charging demand. Therefore, power generation and distributed charging stations are essential parts of the electric vehicle market.
To address this energy bottleneck, efforts have largely been devoted to strategies for generating electricity from renewable energy sources with reduced harmful climate impact. In a typical development, the electric grid has been electrically connected to photovoltaic power stations. To further extend utilization from daytime to nighttime, the development of integrated energy conversion-storage systems could be considered a potential strategy for connection to the grid. In such a system, the electricity generated in the daytime could be stored in the integrated rechargeable batteries or supercapacitors, while the stored energy could be output at nighttime to achieve sustainable energy utilization.
Technique Requirement
In applications, PCE is known to be one of the critical criteria, and substantial improvement has been made in PSCs. In principle, a higher PCE implies that more photon energy is converted into electricity for charging the energy storage device. PSC-based integrated energy conversion-storage systems are attractive for future development due to their unique advantages, such as an all-solid-state form, high open-circuit voltage, structural compliance, flexibility, an active contact area shared with the coupled unit, and high theoretical PCE.
To rationally promote the practical PCE, it is highly important to understand the fundamentals of the integrated system. To achieve this goal, critical requirements, including high compatibility, ideal compactness (small integration volume), and lightweight portability, are the challenges for applications. Another key target is a high energy storage efficiency, which can be calculated by the following equation [19]

η_storage = η_2 / η_1    (1)

where η_1 is the PCE and η_2 is the energy-conversion and storage efficiency of the entire integrated system, which could be described as

η_2 = (electrical energy output on discharge) / (incident solar energy input during photocharging)    (2)

[Figure 2 caption fragment: e,f) The two different structures of heterojunction involve the generation of excitons, diffusion, and dissociation. Reproduced with permission. [25] Copyright 2017, American Chemical Society.]
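To make these efficiency definitions concrete, the hedged Python sketch below computes η_2 and the storage efficiency of Equation (1) from illustrative measurements. The function names, the example numbers, and the assumed expansion of η_2 (discharge energy divided by light power density × area × photocharging time) are this sketch's assumptions, not values taken from a specific reported device.

```python
def overall_efficiency(discharge_energy_j, light_power_w_cm2, area_cm2, charge_time_s):
    """η_2: electrical energy delivered on discharge divided by the incident solar
    energy during photocharging (assumed expansion of Equation (2))."""
    input_energy_j = light_power_w_cm2 * area_cm2 * charge_time_s
    return discharge_energy_j / input_energy_j

def storage_efficiency(eta_overall, eta_pce):
    """η_storage = η_2 / η_1, following Equation (1)."""
    return eta_overall / eta_pce

# Illustrative numbers: 0.9 J recovered after photocharging a 1 cm^2 device
# for 120 s under 100 mW cm^-2 illumination, with a PSC PCE of 18%.
eta2 = overall_efficiency(discharge_energy_j=0.9, light_power_w_cm2=0.1,
                          area_cm2=1.0, charge_time_s=120.0)
print(f"η2 = {eta2:.3f}, η_storage = {storage_efficiency(eta2, eta_pce=0.18):.3f}")
```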
Configuration and Operation Principles
PSCs are simply divided into organic-inorganic hybrid and all-inorganic PSCs, both of which possess an all-solid-state light-absorbing perovskite. In a typical configuration, PSCs consist of a substrate material (indium tin oxide (ITO) or fluorine-doped tin oxide (FTO)), an electron transport layer (ETL) (TiO2, SnO2, or ZnO), [20] a perovskite absorption layer, a hole transport layer (HTL), and a metal electrode (Figure 2a-d). PSCs exhibit regular (n-i-p) and inverted (p-i-n) structures, which depend on the transport (electron/hole) material that is presented on the exposed surface for interacting with the incident light. According to these design principles, there are four types of sandwiched PSC structures: n-i-p mesoscopic PSCs (Figure 2a), p-i-n mesoscopic PSCs (Figure 2b), n-i-p planar PSCs (Figure 2c), and p-i-n planar PSCs (Figure 2d). To date, both mesoporous and planar PSC structures have exhibited high performance, although their stability is still under debate. [21] The planar architecture is an evolution of the mesoscopic structure, in which the perovskite light-harvesting layer is sandwiched between the ETL and HTL. With the same materials and approaches, a planar n-i-p PSC exhibits higher values of V_oc and J_sc in comparison with mesoscopic ones. However, more severe J-V hysteresis would be obtained, probably owing to the incompatibility of p-type materials, voltage scan direction, scan rate and range. [22] The inverted (p-i-n) structure features low-temperature processing, negligible hysteresis behavior, and optical attractiveness for (two/four-terminal) tandem applications. [23] However, further attempts should be made to understand the difference between the mesoscopic inverted structure and the planar inverted structure.
In operation, the perovskite layer of a PSC first absorbs photons (E hv > E g ) and generates electron-hole pairs. [2] Upon absorption of a photon, an electron is excited from the semiconductor valence band to the conduction band, leaving a hole in its original position. [24] Because the dielectric constants of the organic and inorganic components of perovskite materials differ, either free carriers or excitons can be generated. [25,26] Subsequently, the uncombined electrons and holes (or dissociated excitons) are collected by the ETL and HTL, respectively. Specifically, the electrons are transported from the perovskite layer to the ETL and then collected by the ITO, while the holes are transported from the perovskite layer to the HTL and collected by the metal electrode. Finally, the photovoltage and photocurrent are generated by electrically connecting the ITO and the metal electrode.
Generally, the essential difference between inorganic semiconductors and organic semiconductors lies in what photon excitation generates: in organic semiconductors, excitons are produced, and these are difficult to transfer to the electron and hole transport layers to generate photocurrent and photovoltage. Excitons are Coulomb-bound electron-hole pairs, usually described as singlet excitons with binding energies between 0.1 and 0.4 eV. [27] To utilize these excitons in an external circuit, they must first be dissociated into free electron-hole pairs at a heterojunction (p-n) assembled from electron donor and acceptor materials. [28] Upon absorbing a photon, one of the two electrons with opposite spin directions in the highest occupied molecular orbital (HOMO) is driven to the lowest unoccupied molecular orbital (LUMO), in accordance with spin conservation (Figure 2e). Because the HOMO and LUMO energies of the acceptor are lower than those of the donor, electrons and holes dissociate, driven by the energy offset. [29] Generally, two heterojunction structures are used to harvest as much light as possible: the planar heterojunction (Figure 2f, left) and the bulk heterojunction (Figure 2f, right). In particular, the bulk heterojunction is a uniform mixture of the donor and acceptor materials, in which excitons can spontaneously dissociate into electron-hole pairs. [30] In general, excitons generated in the donor cannot dissociate until they reach the donor-acceptor interface via diffusion, which indicates that dissociation occurs at the interface. Furthermore, the thickness-scale distribution of the heterojunction is important for capturing excitons: a large thickness can result in the loss of excitons, whereas a small thickness favors harvesting excitons for dissociation. Regarding the harvesting and dissociation of excitons in different heterojunctions, Samuel et al. [25] have presented a deeper treatment in previous studies. [2,26,31]
Key Parameters
For over a decade, considerable effort has been devoted to pushing the PCE of PSCs toward the S-Q limit. In the past 10 years, the PCE of PSCs has risen remarkably from 3.8% (recorded in 2009) to 25.2% (recorded in 2019). [32,33] In the photoelectric conversion process, solar energy is converted directly into electricity through the photovoltaic effect. The PCE (η 1 ) is determined by several key parameters, namely V oc , FF, and J sc , and can be calculated by the following equation [34] η 1 = (V oc × FF × J sc )/P × 100% (3) where FF, V oc , J sc , and P are the fill factor, open-circuit voltage (V), short-circuit current density (mA cm −2 ), and incident light power density (mW cm −2 ), respectively. Maximizing these three key parameters is therefore necessary to improve photovoltaic performance. Note that these parameters are interrelated, and their values are also affected by other physical properties. Generally, V oc originates from the splitting of the quasi-Fermi energy levels of electrons and holes, which is activated by light irradiation [35] V oc = (E Fn − E Fp )/q, where E Fn and E Fp represent the electron and hole quasi-Fermi levels, respectively, and q is the elementary charge. V oc is governed mainly by light absorption and carrier recombination. Recombination includes nonradiative and radiative contributions. With purely radiative recombination, the upper limit of V oc in PSCs is usually estimated as 1.32-1.34 V, whereas with nonradiative recombination V oc is much lower, below 1.1 V. [36] In practice, several studies have demonstrated that the recombination of electron-hole pairs is dominated by nonradiative recombination. [37] Hence, eliminating nonradiative recombination is an effective way to increase V oc . Another important parameter is FF, which can be calculated by the following equation [38] FF = (V mp × J mp )/(V oc × J sc ), where V mp , J mp , V oc , and J sc are the voltage at the maximum power point, the current density at the maximum power point, the open-circuit voltage, and the short-circuit current density, respectively. Simply put, the value of FF depends on the ratio of the charge-transport rate to the recombination rate during device operation; according to S-Q limit theory, it also depends on the maximum power point voltage. In a realistic device, series resistance (R s ) and shunt resistance (R sh ) are non-negligible limiting factors that affect the values of FF and V oc . [39] FF depends strongly on R s and R sh : high R s and low R sh both decrease FF, whereas a higher R sh improves FF and electron mobility. [40] Although a high FF is necessary for high-performance PSCs, it is difficult to study the effects of FF in a targeted manner; in a specific experiment, it remains a challenge to observe changes in FF while varying one parameter and keeping the others constant.
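As a quick numerical illustration of Equation (3) and the FF expression above, the Python sketch below evaluates FF and PCE from J-V parameters. The function names and example values are purely illustrative (they do not correspond to a specific device discussed here), and standard illumination of 100 mW cm−2 is assumed for P.

```python
def fill_factor(v_mp, j_mp, v_oc, j_sc):
    # FF = (V_mp * J_mp) / (V_oc * J_sc)
    return (v_mp * j_mp) / (v_oc * j_sc)

def pce_percent(v_oc, j_sc, ff, p_in=100.0):
    # Eq. (3): PCE = V_oc * J_sc * FF / P * 100
    # units: V, mA cm^-2, dimensionless, mW cm^-2
    return v_oc * j_sc * ff / p_in * 100.0

# Illustrative values: V_oc = 1.10 V, J_sc = 24 mA cm^-2, FF = 0.80
print(pce_percent(1.10, 24.0, 0.80))   # -> 21.12 (% PCE)
```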
A high short-circuit current density (J sc ) is critical for approaching the theoretical PCE limit; J sc is affected by light reflection losses, trap density, and interface control (between the electron transport layer and the light-absorbing layer). However, most experimental PSCs reach only about 80% of their maximum attainable J sc , compared with about 90% for Si and GaAs solar cells. [41] Generally, the quantum efficiency is defined as the ratio of the number of photons collected by the absorber layer to the total number incident on the solar cell. Photons absorbed by the absorber layer generate electrons and holes that transfer to their corresponding transport layers, and this behavior in turn contributes to the short-circuit current. Therefore, efficient photon management, low trap density, and careful interface control (high conductivity and a well-matched configuration) must be considered to maximize J sc . [42,43] For instance, Qarony et al. used nonresonant metal-oxide metasurfaces to demonstrate a potential improvement in short-circuit current density. [42]
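To make the connection between quantum efficiency and J sc concrete, the sketch below integrates an external quantum efficiency (EQE) spectrum against a reference solar spectrum to estimate J sc. This is a generic textbook-style calculation rather than a procedure taken from the cited works; the caller must supply the wavelength grid, the EQE curve, and the spectral irradiance (e.g., tabulated AM1.5G data), none of which are reproduced here.

```python
import numpy as np

H = 6.626e-34   # Planck constant (J s)
C = 2.998e8     # speed of light (m s^-1)
Q = 1.602e-19   # elementary charge (C)

def jsc_from_eqe(wavelength_nm, eqe, irradiance_w_m2_nm):
    """Estimate J_sc (mA cm^-2) by integrating EQE(lambda) against the
    photon flux of a reference spectrum sampled on wavelength_nm."""
    wl_m = np.asarray(wavelength_nm, dtype=float) * 1e-9
    # photon flux per nm: spectral irradiance divided by the photon energy hc/lambda
    photon_flux = np.asarray(irradiance_w_m2_nm, dtype=float) * wl_m / (H * C)
    j_a_per_m2 = Q * np.trapz(np.asarray(eqe, dtype=float) * photon_flux, wavelength_nm)
    return j_a_per_m2 * 0.1   # 1 A m^-2 = 0.1 mA cm^-2
```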
Critical Problems and Challenges
Although PSC-based photovoltaic technology has developed rapidly in recent years, critical problems and challenges have largely hindered commercialization (Figure 3a and Table 1). The PCE is close to that of commercialized Si-based solar cells, but it still needs to be improved toward the S-Q limit, as discussed above. Theoretically, maximizing V oc , FF, and J sc through appropriate improvement strategies would raise the PCE. The entire PSC structure, including the light-absorbing layer and the electron and hole transport layers, is therefore also required to exhibit high compatibility. Figure 3b lists the annual record PCEs of hybrid and all-inorganic PSCs. [32,44,45,33,46,47,48] Additionally, stability is a necessary prerequisite for long-term operation of PSCs. Although some PSC devices have achieved stable operation for more than 30 days under specific environments, a large gap remains relative to commercial photovoltaic devices (in both stability and cost). Currently, the major challenges of perovskite materials concern many aspects, including crystal-structure stability, environmental factors (such as temperature, O 2 , and H 2 O), and phase purity. These features raise the processing requirements for obtaining high-quality perovskite films. PSCs should therefore be able to operate efficiently in environments with high oxygen concentration, high atmospheric humidity, or high temperature (Figure 3c). Compared with stability and environmental factors, an even more severe problem is scalable processing for large-area manufacturing (Figure 3d). In detail, the main challenge of commercialization is the inevitable efficiency loss caused by the inferior quality of the film (e.g., pinholes, cracks, and defects) during scalable fabrication of perovskite films. Currently, perovskite films mainly rely on three preparation routes: single-step solution deposition, two-step solution deposition, and vapor-assisted solution deposition, among which vapor-assisted solution deposition has proved the most effective. Although lead (Pb) is abundant in the Earth's crust and lead-based PSCs possess impressive potential PCE, it is necessary to investigate lead-free PSCs out of safety concerns, so that the potential crisis of lead leakage can be substantially avoided. In integrated energy conversion-storage systems, the overall stability, energy density, safety, and long-term operation depend heavily on the PSCs; considerable improvement of the PSCs is therefore also an important factor in achieving high-performance practical integrated energy conversion-storage systems.
Large-Scale Preparation and Low Toxicity of PSCs
Since 2012, the efficiency of PSCs has shown an amazing growth trend. [32] To date, the certified efficiency has reached 25.2%, [33] exceeding several traditional photovoltaic technologies. However, the efficiency of large-area devices is still low. In detail, efficiency loss occurs during the scalable fabrication of perovskite films, mainly because the film quality degrades as the fabrication area increases (e.g., pinholes, cracks, and defects). Reliable, high-efficiency fabrication of high-quality large-area perovskite films is therefore critical for upscaling and commercializing PSCs, and the fabrication method of the perovskite film is of great significance for realizing large-scale preparation. In this section, we review several main preparation methods and evaluate their feasibility for large-scale fabrication, including solution deposition (spin-coating, blade coating, slot-die coating, and spray coating), chemical vapor deposition (CVD), and hybrid chemical vapor deposition (HCVD). [49][50][51] First, solution-based scalable techniques have been widely employed to prepare high-quality perovskite films because they are low-cost and facile. For spin-coating, numerous efforts on both lab-scale small-area and scaled-up setups have been made, showing that the method is common and readily available. [52,53,54] The corresponding PCEs reached as high as ≈13% and 17.1% for reported active areas of 50.6 and 24.94 cm 2 , respectively. [53,55] However, it is challenging to use this technique to deposit uniform perovskite films reproducibly when the size exceeds 100 cm 2 . [56] Therefore, blade coating, [57,58,59] slot-die coating, and spray coating have been explored to overcome this limitation. [60,61,62] Among them, cells obtained using blade coating [61] (active area: 151.9 cm 2 , PCE: 11.1%) and slot-die coating (active area: 57.2 cm 2 , PCE: 15%) [57] delivered the best performance. While the film is still wet, blade coating involves a kinetic process of nucleation and crystal growth similar to that of spin-coating. However, rapid removal of the solvent during film drying is the current challenge, because the film quality is significantly affected when a nonrotating coating technique is used. In the spin-coating process, the substrate is rotated at high speed to efficiently spread the excess solution off the film by centrifugal force. For example, Lee et al. [63] employed lead acetate (PbAc 2 ) in the mother solution to form methylammonium acetate (MAAc) in order to control crystal growth during drying of the cast film. Other improvement strategies, such as the introduction of a surfactant (e.g., L-α-phosphatidylcholine) [57] and the formation of amine complex precursors (CH 3 NH 3 I·mCH 3 NH 2 and PbI 2 ·nCH 3 NH 2 ), have also been proposed for fabricating MAPbI 3 perovskite films. [64]
Vapor-based deposition methods (e.g., CVD and HCVD) have also been demonstrated as a promising route to fabricate large-area, uniform, pinhole-free films. The CVD process forms a thin solid film on a substrate via a chemical reaction of vapor-phase precursors at a precise temperature. Compared with solution coating, CVD exhibits unique advantages, such as easy formation of perovskite heterojunction structures, [65] the construction of fully textured tandem-structure solar cells, [66] and the elimination of harmful organic solvents. [67] Fan and co-workers [50] reported a facile one-step CVD method to fabricate planar heterojunction PSCs (MAPbI 3 and MAPbI 3−x Cl x perovskites) with a PCE of up to 11.1%. Notably, all of the precursors in the CVD process were in the solid state. Specifically, perovskite thin films were deposited onto a c-TiO 2 -coated FTO glass substrate by a one-step method: lead chloride (or lead iodide) and methylammonium iodide were placed in the high-temperature zone, with the exact position of each source determined by its vaporization temperature, while the substrates were placed in the low-temperature zone on the left side. Because of the differences in physical properties between the precursors, more stringent requirements (higher accuracy in carrier gas, pressure, and temperature) are necessary to fabricate high-quality perovskite thin films. Qi and co-workers proposed the HCVD technique to achieve a more flexible and simpler preparation process. [51] Typically, growth of the perovskite film by HCVD involves a two-step process in which each step can be optimized separately, allowing more accurate control. Apart from this, the vapor-assisted solution process, which combines solution processing and vapor processing, is also a promising approach; such a combination effectively simplifies the preparation process and its technical requirements. In 2013, Snaith and co-workers [45] successfully fabricated MAPbI 3−x Cl x thin films by a vapor-assisted solution process, in which both the ETL and the HTL were obtained by spin-coating and the perovskite was formed from evaporated organic and inorganic sources.
In summary, the technologies discussed above each have advantages and challenges, and all have been applied to large-scale preparation. Achieving high perovskite film quality remains the main technical barrier to both large-area preparation and high PCE, while large-area preparation is a necessary prerequisite for the commercial application of PSCs. Prominent recent achievements in large-scale preparation are collected in Table 2.
Although lead is abundant in the Earth's crust and lead-based PSCs deliver impressive PCEs, it is necessary to investigate lead-free PSCs. A further development direction for PSCs is therefore to remove lead from the technology, avoiding the threat of lead leakage to both humans and the environment. The ionic radius of Sn 2+ is almost equal to that of Pb 2+ , so Sn is considered a substitute for Pb, forming ASnX 3 perovskites (A = MA, FA, etc.; X = Br, I, etc.) and CsSnI 3 (E g = 1.3 eV). Sn-based PSCs possess a lower E g than Pb-based ones, closer to the ideal bandgap (1.34 eV) of the Shockley-Queisser limit for photovoltaic devices. [68] Theoretically, tin-based perovskites are therefore very promising materials for PSCs. However, Sn 2+ is extremely unstable because it is easily oxidized to Sn 4+ upon exposure to air, or even in an inert atmosphere. The resulting self p-doping by Sn 4+ (the so-called self-doping effect) breaks charge neutrality and forms a high density of recombination centers in the perovskite, which reduces the PCE. [69] Additionally, the "yellow" phase is an unavoidable challenge. It therefore remains difficult to achieve Sn-based PSCs with long-term stability and the desired PCE. In this part, we mainly focus on progress in, and discussion of, inhibiting the oxidation of Sn 2+ in Sn-based PSCs.
Actually, Noel et al. [70] proposed lead-free organic-inorganic hybrid PSCs (MASnI 3 , E g = 1.23 eV) as early as 2014. They determined the mobility of the as-prepared films to be 1.6 cm 2 V −1 s −1 , with a diffusion length of ≈30 nm. The main challenge, however, is the instability of the Sn 2+ oxidation state, which limited the PCE to 6% and the V oc to 0.88 V. In the same year, MASnI 3−x Br x was employed as the light harvester in lead-free PSCs, but the obtained PCE of 5.73% was still lower than that of Pb-based PSCs, mainly because of the oxidation of Sn 2+ . [71] Later, a 3D MASnI 3 perovskite structure was designed by Kanatzidis and co-workers, [72] achieving a slightly improved PCE of 6.63%. The oxidation of Sn 2+ remains the key challenge: it leads to higher carrier density and conductivity, which can short-circuit the devices and therefore needs to be addressed urgently. Some groups have attempted to employ SnF 2 , the SnF 2 -pyrazine complex, Sn powder, FABr, N 2 H 5 Cl, BAI, and EDAI 2 as additives to suppress the oxidation of Sn 2+ and thereby reduce the hole density in the resulting films. [48,73] Another strategy is the so-called hollow perovskite based on propylenediammonium, trimethylenediammonium, and ethylenediammonium cations. [74] Although the Sn 4+ content was reduced, the PCE remained below 7%, mainly because of poor carrier transport properties. Similarly, Zhu et al. [75] reported a Lewis acid-base adduct strategy using trimethylamine (TMA) as an additional Lewis base in the tin halide solution to form SnY 2 -TMA complexes (Y = I − , F − ), achieving a PCE of 7.09% in an inverted structure. To further suppress the oxidation of Sn 2+ , Tai et al. [76] added phenolsulfonic acid (PSA), 2-aminophenol-4-sulfonic acid (APSA), and the potassium salt of hydroquinone sulfonic acid (KHQSA) as antioxidant additives to the perovskite precursor solution, together with excess SnCl 2 . Note that KHQSA contains two hydroxyl (-OH) groups and interacts more strongly with Sn 2+ , giving higher antioxidant activity. As expected, an improved PCE (6.76%) and stability (80% of the initial efficiency retained over 500 h of air exposure without encapsulation) were achieved thanks to the precise control of Sn 2+ oxidation. A low-dimensional perovskite (LDP) interlayer between the ETL and the perovskite has also been reported, yielding a PCE of 7.05%. [77] Additionally, Ke et al. [78] designed a novel tetrakistriphenylamine (TPE) small molecule as the HTL to replace the conventional 2,2′,7,7′-tetrakis(N,N-di-p-methoxyphenylamine)-9,9′-spirobifluorene (spiro-OMeTAD) and poly[bis(4-phenyl)(2,4,6-trimethylphenyl)amine] (PTAA). Owing to the suitable band alignment and excellent hole extraction/collection properties, the TPE HTL delivered a PCE of 7.23% (V oc = 0.459 V, J sc = 22.54 mA cm −2 , FF = 69.74%). Jokar et al. [79] proposed a strategy combining mixing of the "A"-site organic cation with additive engineering and fabricated GA x FA 1−x−2y SnI 3 -yEDAI 2 perovskite films, in which EDAI 2 effectively suppressed the oxidation of Sn 2+ at the surface. An optimized performance (maximum PCE = 9.6%) was achieved at a guanidinium iodide:formamidinium iodide (GAI:FAI) precursor ratio of 20:80 after storage in a glove-box environment for 2000 h.
Notably, the V oc and PCE values reported above are still lower than those of Pb-based PSCs (with V oc generally below 1.0 V), mainly because of the transition of Sn 2+ to Sn 4+ and the poor carrier transport capability.
Sn-based all-inorganic PSCs have also become a significant development direction for lead-free PSCs because their bandgap is close to the S-Q optimum. CsSnI 3 is a unique phase-transition material that exhibits two polymorphs at room temperature: a 1D yellow double-chain structure (Y-CsSnI 3 ) and a 3D black perovskite structure (B-γ-CsSnI 3 , with a low exciton binding energy of 10-20 meV). The black phase is a highly conductive p-type direct semiconductor with a bandgap of 1.3 eV and a clear photoelectric response, whereas the yellow phase is an indirect semiconductor with a 2.55 eV bandgap. [80] However, CsSnI 3 alone is not effective, because it exhibits metallic conductivity and is prone to forming intrinsic defects, namely Sn vacancies and Sn 4+ centers. In 2012, B-γ-CsSnI 3 -based PSCs were first reported, and a PCE of only 0.9% was achieved because of the oxidation of Sn 2+ . [47] It is therefore critical to control the intrinsic defect concentration in order to optimize all-inorganic Sn-based PSCs. Later, Sn-containing compounds such as SnF 2 , SnCl 2 , and SnI 2 were shown to enhance performance by decreasing the intrinsic defect density. For example, Mathews et al. [48] demonstrated that the carrier density of CsSnI 3 decreases with increasing SnF 2 content; the addition of SnF 2 thus reduces the concentration of Sn vacancies. Devices with the configuration FTO/compact TiO 2 /mesoporous TiO 2 /CsSnI 3 /HTL/Au exhibited a PCE of 2.02%, a V oc of 0.16 V, a J sc of 22.7 mA cm −2 , and an FF of 37%. Although SnF 2 can improve stability, it remains intact in the film owing to its chemical stability, so the formation of Sn defects by oxidation is hardly prevented by adding SnF 2 alone. Hatton and co-workers used SnF 2 , SnCl 2 , and SnBr 2 as additives in CsSnI 3 -based PSCs, among which SnCl 2 proved particularly beneficial. It was shown that Cs 2 SnI 6 can form under the combined action of water and oxygen, and the addition of 10 mol% SnCl 2 hindered oxidation of the perovskite film, thereby improving device stability. Furthermore, they simplified the device architecture by removing the ETL without reducing the device PCE (3.56%), which benefits the fabrication process. [81] SnI 2 is also considered an effective additive for stabilizing B-γ-CsSnI 3 . Kanatzidis and co-workers [82] used excess SnI 2 in Sn-based halide perovskite solar cells (CsSnI 3 ) in combination with a reducing atmosphere to stabilize the Sn 2+ state. During growth of the perovskite film, the excess SnI 2 supplies additional Sn 2+ and compensates for the Sn 2+ lost through oxidization to Sn 4+ , effectively reducing the p-type conductivity. Finally, a maximum PCE of 4.81% was achieved in the optimized CsSnI 3 devices.
In the effort to remove lead, various additives have been introduced to suppress the oxidation of Sn 2+ and reduce intrinsic defects. However, both hybrid and inorganic Sn-based PSCs still show lower PCEs than Pb-based PSCs, and because of this low efficiency and the remaining challenges they have not yet attracted sufficient attention. More effort should therefore be devoted to overcoming these challenges.
PSCs-LIBs Integration Technology
Rechargeable LIBs have been the dominant commercialized energy storage devices over the past decades. Owing to their high energy density and stable positive/negative electrode materials, LIBs are competitive candidates for integrated energy conversion-storage systems. Initially, LIBs were integrated with Si-based photovoltaic devices. [83] With the development of photovoltaic technology, they were also integrated with dye-sensitized solar cells (DSSCs). [84] However, the output voltages of the integrated Si-based photovoltaic systems and integrated DSSCs were <0.7 and <0.8 V, respectively. These suppressed voltages reflect an insufficient ability to establish an adequate potential for the power storage system, so more units must be connected in series, which conflicts with the goal of lightweight and compact integrated systems. Further studies have demonstrated that PSCs can provide output voltages above 1.0 V, making PSC-LIB integrated systems particularly attractive. Generally, the integration strategies between light-harvesting devices and energy storage devices can be divided into three prototypes: wire connection, three-electrode integration (shared positive or negative electrode), and two-electrode connection (Figure 1). In the review by Lennon and co-workers, systems integrated with sensors, wearable electronics, and autonomous medical monitoring are discussed, and the corresponding integration strategies are summarized. [14] By definition, wire connection is the most direct and simplest way to integrate two devices, using additional conductive wires in series to achieve energy conversion and storage (Figure 4). In 2015, Xu et al. [19] reported a wire-connected system in which PSCs were used to photocharge a lithium-ion battery. The system delivered an overall photoelectric conversion-storage efficiency of 7.80%, with stable self-charging cycles under constant illumination (AM1.5G for 17.8 h). These results marked significant progress in the development of PSC-based hybrid integration systems. A similar integration system was later designed by Weng et al., [86] further demonstrating the feasibility of the wire-connection strategy. Notably, aqueous electrolytes leak more easily in three-electrode or two-electrode integrated systems. However, the hybrid device described above exhibited obvious power deterioration during constant photocharging, which may reduce the lifetime and stability of the entire system. To improve the operation, Qiao and co-workers reported a feasible approach to photocharging LIBs (Li 4 Ti 5 O 12 as the negative electrode and LiCoO 2 as the positive electrode) with a single MAPbI 3 -based PSC (PCE = 14.4%; V oc = 0.96 V; J sc = 21.71 mA cm −2 ; FF = 0.68) through an ultralow-power direct current-direct current (DC-DC) boost converter. [86] This DC-DC converter provides maximum power point (MPP) tracking for the PSC as well as overcharge protection for the LIB. An overall efficiency of 9.36% and an average storage efficiency of 77.2% were achieved with this approach. Although the wire-connection stacking prototype is effective, it seems incompatible with the requirements of flexibility, light weight, and compactness in mobile devices; more importantly, a portion of the current is lost in the connecting cables.
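For readers who want to reproduce this kind of figure of merit, the sketch below implements the generic definition of the overall photoelectric conversion-storage efficiency: the electrical energy recovered from the storage unit on discharge divided by the solar energy incident on the PSC during photocharging. This is a simplified, generic formulation; the function name, argument units, and example numbers are illustrative assumptions rather than the exact evaluation procedure used in the cited reports.

```python
def overall_conversion_storage_efficiency(discharge_energy_wh,
                                          light_power_mw_cm2,
                                          active_area_cm2,
                                          charge_time_h):
    """Energy recovered on discharge / solar energy received while charging."""
    incident_energy_wh = light_power_mw_cm2 * 1e-3 * active_area_cm2 * charge_time_h
    return discharge_energy_wh / incident_energy_wh

# Illustrative: 0.10 Wh recovered after charging a 1 cm^2 cell for 10 h
# under 100 mW cm^-2 gives an overall efficiency of 10%.
print(overall_conversion_storage_efficiency(0.10, 100.0, 1.0, 10.0))  # -> 0.1
```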
Therefore, key factors including flexibility, compactness, lightness, mobility, easy installation, and broad applicability are highly desirable, and they can be achieved by a shared-electrode strategy (either three-electrode or two-electrode). In addition to these advantages, the area match between PSCs and LIBs is also important for optimizing the maximum power output. For example, Kin et al. [87] designed a three-electrode integrated system (with a shared positive electrode) via a DC-DC boost converter.
Figure 5. a) Reproduced with permission. [118] Copyright 2020, American Chemical Society. b) Device operation schematic. Reproduced with permission. [88] Copyright 2020, Wiley-VCH. c) SEM image of drop-cast 2D perovskite electrodes taken at 45° tilt; the inset shows a PL image of the corresponding perovskite film (λ ex ≈ 300 nm LED source). Schematic of perovskite photobatteries. d) Energy level diagram of perovskite photobatteries. Reproduced with permission. [89] Copyright 2018, American Chemical Society.
Combined with a single PSC and a battery, the integrated system provided an overall efficiency of 9.8% (Figure 5a). The authors demonstrated that the boost converter maintained a constant voltage over time, and the system delivered an almost constant power input to the battery cell in differently area-matched integrated systems (0.64 and 0.9 cm 2 ). Gurung et al. [88] reported a similar three-electrode integrated configuration (with a shared negative electrode) consisting of a LIB (positive electrode: LiCoO 2 ; negative electrode: Li 4 Ti 5 O 12 ) with PSCs on top (PCE = 10.96%; V oc = 1.09 V; J sc = 15.45 mA cm −2 ; FF = 0.656). In particular, a common Ti metal substrate was shared between the LIB and the PSCs (Figure 5b), and a DC-DC boost converter provided efficient battery management and maximum power point tracking. This integrated system achieved an overall photoelectric conversion-storage efficiency of 7.3% along with stable light-charging cycle performance (30 cycles). In addition, a two-electrode integrated system (2D (C 6 H 9 C 2 H 4 NH 3 ) 2 PbI 4 /reduced graphene oxide (rGO)/poly(vinylidene fluoride) (PVDF) as the positive electrode and Li metal as the negative electrode) was successfully fabricated by Ahmad et al. [89] In this system, the 2D (C 6 H 9 C 2 H 4 NH 3 ) 2 PbI 4 perovskite thin film acts as both the energy generation and the energy storage component (Figure 5c,d), and a possible mechanism for the coupled energy conversion and storage was proposed. This appears to be an ideal compact solution for energy generation and storage, but it imposes strict requirements on the compatibility of the active materials, and the light absorptivity and stability of the perovskite film must also be considered for long-term cycling of energy conversion and storage. Recently, some semiconductor materials with suitable bandgaps (VO 2 , V 2 O 5 , g-C 3 N 4 , organic molecules such as tetrakislawsone (TKL), etc.) have been employed as both electrode active materials and photosensitizers in two-electrode systems. [90] Light-excited electrons can be output to the external circuit through the electron transmission medium during the charging process, so the rate performance and energy density of the energy storage units can be significantly enhanced via this photoelectrochemical effect.
From the studies of PSC-LIB integrated systems to date, both the feasibility and the challenges of this approach are clear. First, wire connection increases the packaging and energy losses of the integrated system, and it is difficult to achieve maximum power matching between the PSCs and the LIBs. Although the introduction of a DC-DC boost converter ensures direct power output from a single PSC, it alters the original design; in practice, however, this remains the most stable configuration reported to date. In addition, rational design of the shared electrode is the key issue in three-electrode integration, because it governs the efficient transport of electrons. Finally, both the thermal and the ambient stability are key technical requirements for long-term stable PSC-LIB integrated systems, since they are largely responsible for stable power output and efficient energy storage and utilization.
Integrated Technology for PSCs-Supercapacitors
Beyond the performance factors discussed above, the energy and power density of the integrated systems are also crucial to the overall efficiency of the device. It is well known that LIBs possess high energy density, whereas supercapacitors offer high power density. Furthermore, the ultralong cycling stability of supercapacitors (more than 100 000 cycles for commercial products) exceeds that of other energy storage devices. More importantly, supercapacitors in integrated systems are commonly assembled with carbon-based electrodes (carbon nanotubes, graphene, carbon composites, etc.), which can also be used as the back and front contact layers of the PSCs. Because of the hydrophobic nature and chemical stability of these carbon derivatives, the mechanical and chemical endurance of the device can be ensured (Figure 4). Various types of off-grid electrochemical capacitors have been reported, such as piezo-supercapacitors, optically rechargeable supercapacitors, thermally rechargeable supercapacitors, and integrated triboelectric nanogenerator-supercapacitor systems. [91] In this section, we focus mainly on PSC-supercapacitor systems.
Li et al. [92] fabricated a flexible self-powered system for strain sensing, composed of a flexible PSC module (four flexible PSCs connected in series by silver wires), a flexible lithium-ion hybrid capacitor (LIHC) module, and a graphene-based strain sensor. As the energy conversion unit, the tandem PSCs easily delivered a remarkable output voltage of 3.95 V and a high PCE of 10.20%, sufficient to charge the integrated LIHCs. The LIHC device (Li 4 Ti 5 O 12 /reduced graphene oxide as the negative electrode and activated carbon as the positive electrode) delivered favorable energy/power densities (60.2 Wh kg −1 at 50 W kg −1 and 40 Wh kg −1 at 2000 W kg −1 ). The flexible integrated PSC-LIHC system achieved an overall efficiency of 8.41% at a discharge current density of 0.1 A g −1 .
Using the same integration approach, Xu et al. [93] reported an integrated system in which CH 3 NH 3 PbI 3 -based PSCs were connected to a bacterial supercapacitor assembled from a cellulose membrane, polypyrrole (PPy) nanofibers, and multiwalled carbon nanotubes (MWCNTs). The hybrid device exhibited a high energy storage efficiency (10%) and an output voltage of 1.45 V, with few interruptions during cycling. However, the active-area mismatch between the supercapacitor and the solar cells resulted in a long charging time (300 s). Differently from the above cases, Du et al. [94] reported a flexible all-solid-state wire-connected integrated device based on PSCs and supercapacitors. In this device, CH 3 NH 3 PbI 3−x Cl x was employed as the light-absorbing layer of the PSCs, and a self-stacked solvated graphene (SSG) film served simultaneously as the positive and negative electrodes of the supercapacitors. The use of solid electrolytes greatly reduced the degradation induced by leakage of aqueous electrolytes, and such a strategy may meet the technical requirements of integrated systems.
In addition to the direct stacking integration solution, the design of shared electrodes remains an important development direction. Liu et al. [95] integrated an all-solid-state photocharging capacitor based on PSCs (CH 3 NH 3 PbI 3 ) and supercapacitors (polyaniline (PANI)/carbon nanotube (CNT)), in which a CNT bridge was employed to keep water from the aqueous gel electrolytes away from the perovskite (Figure 6a). Under fluctuating sunlight, the hybrid device exhibited a specific areal capacitance of 422 mF cm −2 with a Coulombic efficiency of ≈96% and an energy storage efficiency of ≈70.9%. However, the overall efficiency of 0.77% is even lower than that of some wire-connected devices, which may be attributed to the low energy conversion efficiency and deterioration of the PSCs. Liu et al. [96] designed a similar hybrid PSC-supercapacitor device combining photoelectric conversion and energy storage with a shared carbon electrode, which served as both the cathode of the PSC (PCE = 7.79%) and the anode of an MnO 2 -based supercapacitor (Figure 6b). When the supercapacitor was charged by the PSC under AM1.5G white-light illumination (0.071 cm 2 active area, 0.84 V voltage, and 76% energy storage efficiency), an overall conversion efficiency of ≈5.26% was achieved. In the study by Sun et al., [97] an integrated PSC-supercapacitor structure was fabricated by incorporating an electrically conductive carbon nanotube/self-healing polymer (CNT/SHP) film electrode (Figure 6c). In addition to shared carbon electrodes, electrically conductive metal electrodes have also been regarded as promising shared electrodes, as reported by Li et al. [98] They proposed an integrated system in which a PSC is connected on top of a symmetric supercapacitor via an all-solid-state copper ribbon for energy harvesting and storage (Figure 6d). The Cu ribbon serves as the shared electrode of the system and also as the electrode on which copper hydroxide nanotubes (CuOHNT) are generated in the supercapacitor. The hybrid PSC-supercapacitor ribbons can be woven with supporting cotton yarns into a textile. When the solar ribbon is illuminated with simulated sunlight, the supercapacitor shows an energy density of 1.15 mWh cm −3 and a power density of 243 mW cm −3 . Xu et al. [99] also reported a three-electrode integrated hybrid device using a shared poly(3,4-ethylenedioxythiophene) (PEDOT)-carbon electrode, which was employed as the positive electrode of both the PSC and the symmetric supercapacitor (Figure 6e). In this system, the overall efficiency and the energy storage efficiency were 4.70% and 73.77%, respectively; however, the efficiency is influenced by both the PSC and the work function of the PEDOT-carbon electrode. Zhou et al. [100] reported a perovskite (CH 3 NH 3 PbI 3−x Cl x ) photovoltachromic supercapacitor with all-transparent electrodes using a co-anode (MoO 3 ) and/or co-cathode (WO 3 ). This hybrid system integrates energy harvesting and storage, acts as an automatic wide-color smart switch, and enhances the photostability of the PSCs. During energy storage, the color changes from semitransparent to dark blue; because the colored PSC-supercapacitor blocks most of the incident light, the photocharging process switches off automatically. The PCEs of the co-anode and co-cathode PSC-supercapacitors dropped to 3.73% and 2.26%, respectively.
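To see how areal capacitance, operating voltage, and illumination translate into overall efficiencies of the magnitude quoted above, the sketch below estimates the energy stored per unit area in an ideal capacitor (E = ½CV²) and divides it by the incident solar energy. It is an idealized back-of-the-envelope model; the charge time in the example is a hypothetical value, and real photocapacitors deviate from ideal-capacitor behavior.

```python
def stored_energy_j_per_cm2(areal_capacitance_f_cm2, voltage_v):
    # Ideal capacitor: E = 0.5 * C * V^2
    return 0.5 * areal_capacitance_f_cm2 * voltage_v ** 2

def photocapacitor_overall_efficiency(areal_capacitance_f_cm2, voltage_v,
                                      light_power_mw_cm2, charge_time_s):
    energy = stored_energy_j_per_cm2(areal_capacitance_f_cm2, voltage_v)  # J cm^-2
    incident = light_power_mw_cm2 * 1e-3 * charge_time_s                  # J cm^-2
    return energy / incident

# Hypothetical example: 0.4 F cm^-2 charged to 0.8 V in 150 s under 100 mW cm^-2
print(photocapacitor_overall_efficiency(0.4, 0.8, 100.0, 150.0))  # -> ~0.0085 (0.85%)
```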
Consequently, supercapacitors are energy storage devices with high power density. In most cases, carbon materials are used as the working electrodes of symmetric or asymmetric supercapacitors because of their obvious advantages, such as high power density, good chemical stability, high flexibility, low mass, and high conductivity. The shared electrode is an important factor in PSC-supercapacitor integration. More importantly, carbon exhibits a hydrophobic character owing to its strongly negative surface zeta potential, which helps guarantee the stability of the perovskite layers in the presence of oxygen and water. [101]
Figure 6. Three-electrode configuration: supercapacitors. a) Schematic of the photocapacitor and energy level schematic. Reproduced with permission. [95] Copyright 2017, The Royal Society of Chemistry. b) Schematic diagram and structural schematic of the integrated device connected in parallel, and cross-sectional SEM image of the integrated device; inset: close-up of the PSC part. Reproduced with permission. [96] Copyright 2017, American Chemical Society. c) Schematic illustration and photocharging; schematic illustration of a fusible perovskite solar cell. Reproduced with permission. [97] Copyright 2015, The Royal Society of Chemistry. d) Charge transfer mechanism of the combination device. Reproduced with permission. [98] Copyright 2016, Nature Publishing Group. e) Schematic illustration and working mechanism of the photosupercapacitor device constructed based on a printable perovskite solar cell. Reproduced with permission. [99] Copyright 2016, Wiley-VCH.
By contrast, when metals or metal oxides are employed as the co-electrode of PSC-supercapacitor systems, they suffer severe degradation through interfacial chemical conversion. Carbon materials are therefore promising candidates, although they provide a lower energy density.
Integration Technology upon PSCs-Other Energy Storage Devices
Many efforts have been devoted to developing high-performance integrated energy conversion-storage systems to meet diverse energy demands, yet both high power density and high energy density are still required. The state of the art of LIB-based integrated energy conversion-storage systems suggests fundamental limitations at high charge-discharge rates, owing to the limited rate performance of LIBs. To address these issues, considerable effort has been devoted to designing combined PSC-supercapacitor systems. Improved power density can thus be obtained, but the low operation voltage (<0.8 V) and limited energy density (often as low as 15 Wh kg −1 ) are shortcomings. [102] Alternatively, emerging aluminum-ion batteries (AIBs) with fast charge-discharge capability point to a new direction for realizing both high power density and high energy density. [103] In addition, abundant natural resources and good safety make AIBs a promising candidate beyond LIBs. In a recent attempt, Hu et al. [104] designed an integrated energy conversion-storage system by combining tandem PSCs (MAPbI 3 ; PCE = 18.5%, V MPP = 2.62 V) and graphite-based AIBs on a shared aluminum electrode without any external circuit (Figure 7). With the maximum power voltage of the tandem PSCs rationally matched to the charging voltage of the AIBs (voltage ratio V MPP /V Battery Charging = 1.09), an excellent solar-charging efficiency of ≈15.2% and a high overall efficiency of ≈12.04% were achieved. These results provide a novel platform for advancing portable integrated energy conversion-storage systems; the PSC-AIB integrated system is thus a promising energy conversion-storage strategy, and further efforts should be devoted to developing this technology into a more practical form.
Moreover, the PCE and the overall efficiency of integrated energy conversion-storage systems still need to be substantially improved. While PSC-supercapacitor systems remain at the development stage (Figure 8 and Table 3), integrated PSC-AIB systems have already exhibited substantial potential.
In addition to AIBs, the smart electrochromic window is another promising technology, providing multifunctional support for harvesting, storing, and reusing the solar energy collected by PSCs. For instance, Tu and co-workers [105] reported a wire-connected integrated system based on a perovskite solar cell (FTO/TiO 2 /ZrO 2 /MAPbI 3 /carbon) used to power solid-state electrochromic batteries for smart-window applications. In the energy storage unit, an rGO-connected bilayer NiO nanoflake array and a WO 3 nanowire array were employed as the positive and negative electrodes, respectively. The electrochromic battery presented fast optical switching, within 2.5 s for coloring (charge) and 2.6 s for bleaching (discharge). Photovoltaic smart windows are of great interest for semitransparent windows, colorful wall facades, electrochromic windows, and thermochromic windows. [106] Note that heat is generated during the photon-electron conversion process by thermodynamic relaxation in the short-wavelength range, which is harmful to PSCs and to the integrated systems. [107] To address this problem, Lin and co-workers recently proposed an integrated system connecting tandem PSCs with a thermoelectric device, in which additional thermoelectric energy induced by the temperature difference between the solar cell and the environment can be stored. [108] Other types of photovoltaic technologies could likewise be integrated with thermoelectric devices. [109]
Overall Critical Challenges
Besides the cost of commercialized products, the integration strategy, stability, and energy density are the three critical concerns in integrated energy conversion-storage systems.
Integration Strategies
Three integration strategies are reviewed in Section 2.4: independent connection (i.e., wire connection), three-electrode configurations, and two-electrode configurations. All of these structures aim at flexibility, compactness, lightness, and easy assembly. The functional applications, geometry, and size of the entire hybrid device are determined by each unit. Generally, LIBs, supercapacitors, AIBs, and other electrochemical energy storage units are assembled with liquid electrolytes (which feature high ionic conductivity). Hydration or decomposition of the liquid electrolytes would undermine the stability of the perovskite films. From this perspective, wire connection seems more suitable for liquid energy storage units, although it constrains the applications of PSCs. On the other hand, wire connection is a traditional integration strategy that runs counter to the compactness and lightness sought in integrated energy conversion-storage systems. A feasible development direction is the integration of all-solid-state units.
A three-electrode combination is commonly introduced in the construction of photocapacitors and photobatteries. In this more complex system, a functional layer acting as the shared positive or negative electrode is the critical component, as it injects photoexcited carriers from the PSCs into the electrochemical energy storage system. This requires the shared electrode to possess excellent conductivity and stability. Theoretically, carbon-based electrodes or appropriate alkali metals are feasible shared electrodes for LIBs, supercapacitors, and AIBs. Chemically stable and safe carbon electrodes therefore appear to be the more suitable candidates, although the low energy density of graphite electrodes makes it difficult to meet actual demands.
In addition, high thermal conductivity of the shared electrode is necessary to dissipate the heat generated by light irradiation and avoid degradation of the perovskite light-absorbing layer. At the same time, this heat flow changes the working temperature of the electrochemical processes, so the thermal dissipation function of the entire integrated system should also be considered.
In addition to the three-electrode system, two-electrode integration is the most attractive integration strategy, in which the perovskite light-absorbing layer simultaneously converts and stores energy. Although the space utilization can thus be maximized, it is difficult to balance the electrochemical process and the light conversion process.
Overall Efficiency
The overall efficiency is an important criterion for evaluating high-performance integrated energy conversion-storage systems. However, the highest overall efficiency to date, 12.04% for a lab-scale three-electrode PSC-AIB system, is not yet sufficient for commercialization of integrated energy conversion-storage systems. The overall efficiency of such systems combines the conversion efficiency of the PSCs and the storage efficiency of the batteries; the storage efficiency is determined by the electrode and electrolyte, so it is important to choose a reliable electrochemical system for the integrated devices.
Note that integrated energy conversion-storage systems need to operate at the maximum power point of the PSCs in order to achieve the maximum overall efficiency. In particular, the V oc of the PSCs must be higher than the upper limit of the electrochemical window of the energy storage unit to enable full charging, so power matching between the PSCs and the electrochemical storage unit is a key factor. Generally, a ratio of the maximum power voltage of the PSCs to the maximum charging voltage approaching 1.0 indicates efficient maximum power tracking; otherwise, the battery is overcharged or undercharged, accelerating battery aging and stability degradation. In addition, maximum power point tracking (MPPT) via a DC-DC converter is a feasible strategy to improve power matching: the PSCs are connected to power electronic units with charge controllers and inverters that implement MPPT, which allows flexible selection of the integrated units. For example, the current from low-V oc PSCs can be used to charge high-voltage electrochemical batteries through the converter, meaning that an insufficient voltage is compensated by MPPT. This greatly improves the adaptability, safety, and stability of the energy storage units and stabilizes the power output. However, the use of DC-DC converters limits how tightly the PSCs and the energy storage units can be integrated, effectively favoring independent connection over a more compact integration.
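As an illustration of how such maximum power point tracking can be realized in the converter's control loop, the sketch below shows a single update step of the classic perturb-and-observe algorithm. This is a generic textbook algorithm offered for illustration only; it is not the specific controller used in the cited DC-DC converters, and the step size and variable names are arbitrary assumptions.

```python
def perturb_and_observe(v_now, p_now, v_prev, p_prev, step_v=0.01):
    """One iteration of perturb-and-observe MPPT.

    v_now, p_now   : present operating voltage (V) and measured power (W)
    v_prev, p_prev : values from the previous iteration
    Returns the next voltage setpoint for the DC-DC converter."""
    moving_up = v_now >= v_prev
    if p_now >= p_prev:
        # The last perturbation increased (or kept) the power: keep going the same way.
        direction = 1.0 if moving_up else -1.0
    else:
        # Power dropped: reverse the perturbation direction.
        direction = -1.0 if moving_up else 1.0
    return v_now + direction * step_v
```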
Overall Stability
Stability is a significant factor for the long-term operation of integrated energy conversion-storage systems; it involves the photostability of the perovskite films, the electrochemical stability of the energy storage unit, and the thermal stability of the solar unit. Because the integrated system depends strongly on the stability of the PSCs, PSC stability is a prerequisite for long-term stable operation.
For the photostability of the perovskite films, environmental factors such as H 2 O, temperature, and O 2 are the main challenges. In recent years, numerous strategies have been developed to enhance the stability of perovskite films (in both hybrid and all-inorganic PSCs). Typical strategies include interface engineering, [110] additive engineering, [111] 2D/3D perovskite design, [112] metal cation doping, [113] and defect passivation. [114] Similarly, the inevitable phase transition (e.g., from the black perovskite phase to the yellow non-perovskite phase) is a critical obstacle in all-inorganic perovskite solar cells. Two potential strategies, doping engineering and quantization, are promising for overcoming the challenges of phase transition and water intrusion. [115] Fortunately, these advances can be incorporated directly into integrated systems, and PSCs are expected to be commercialized with further effort.
Furthermore, the integrated energy conversion-storage system should exhibit favorable power matching, and the energy-harvesting unit must provide a stable power output, while higher requirements are placed on the battery performance. At present, the low output voltage of a single PSC makes it difficult to charge electrochemical batteries with high working potentials. A common integration strategy is therefore to connect single PSCs in series to provide the required charging power, which inevitably increases the size and the integration complexity of the target system.
Another challenge in integrated energy conversion-storage systems is the heat generated by light irradiation. Solid electrolytes for electrochemical batteries have received much attention, with the aims of improving safety and increasing energy density. Employing all-solid-state batteries not only avoids the influence of volatile and corrosive liquid electrolytes on the stability of the PSCs but also significantly improves the thermal stability and safety of the integrated systems.
Energy and Power Density
To meet application demands, the pursuit of high energy and power density is an essential challenge for integrated energy conversion-storage systems. Traditional LIBs assembled with Li-metal (500 Wh kg −1 ) or Si-based (400 Wh kg −1 ) negative electrodes possess high theoretical energy densities, [116] which makes LIBs promising energy storage units for integrated systems. However, the deposition/stripping of Li + at the negative electrode leads to lithium dendrites, and Si crystals undergo large volume expansion; such behavior causes serious safety risks and capacity decay. On the other hand, power density is another crucial technical requirement for integrated systems. Supercapacitors and AIBs are typical high-power-density devices, [103,117] and AIBs in particular offer safety advantages for grid-scale energy storage. Integrating PSCs with supercapacitors or AIBs is therefore an effective approach for applications in which power density is a requirement.
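For orientation, the sketch below shows how gravimetric energy density and power density are commonly estimated from electrode-level quantities. The relations are generic (specific capacity in mAh g−1 multiplied by average voltage gives Wh kg−1 at the level of the active material), and the example numbers are hypothetical; practical cell-level densities are considerably lower once inactive components are included.

```python
def energy_density_wh_per_kg(specific_capacity_mah_g, avg_voltage_v):
    # mAh g^-1 * V = mWh g^-1 = Wh kg^-1 (active-material basis)
    return specific_capacity_mah_g * avg_voltage_v

def power_density_w_per_kg(energy_density_wh_kg, discharge_time_h):
    # Average power delivered over a full discharge
    return energy_density_wh_kg / discharge_time_h

# Hypothetical electrode couple: 150 mAh g^-1 at an average 3.7 V,
# discharged in 1 h -> 555 Wh kg^-1 and 555 W kg^-1 (active-material basis).
print(energy_density_wh_per_kg(150, 3.7), power_density_w_per_kg(555, 1.0))
```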
On the other hand, the wound (roll-to-roll) battery structure is difficult to match with the planar structure of PSCs. Normally, the energy storage unit needs to be integrated onto the surface of the PSCs to achieve a highly integrated structure. At present, however, the effective area of PSCs is not sufficient to support such a compact integrated design, so large-area fabrication of perovskites is a key technique for improving the overall energy density.
Conclusions and Perspectives
In summary, the growing deployment of solar energy is expected to deliver a fossil-fuel-free energy market in the foreseeable future. In this perspective, PSCs have received considerable attention because of their rapid progress in PCE, which can now compete with Si-based solar cells, and further substantial breakthroughs can be expected. However, solar cells are intermittent devices that convert sunlight into electricity without storing it; in the context of the current energy crisis, the integration of solar cells with energy storage devices is therefore an important strategy. At the same time, it remains difficult to realize robust PSCs because of severe challenges such as unstable power output and high safety risk, and all-inorganic perovskites are expected to increase the thermal stability of the hybrid solar cells. To date, the stability of such devices has been greatly improved, but no comparable progress has been made in raising their efficiency.
To provide an in-depth understanding of perovskite solar cells, this review has also discussed in detail the operation mechanism, the key parameters affecting PCE, and the critical problems and challenges of PSCs, which should help trigger further development and applications in energy conversion and storage. Integrated energy conversion-storage systems can be regarded as a derivative technology of PSCs that relies on their technical advantages. We have also reviewed and discussed the recent preliminary explorations in this field (Section 2), which demonstrate the feasibility of integrated energy conversion-storage systems. Nevertheless, essential challenges remain, including compatibility, compactness, suitable power matching, and stable power output. Regarding power output, the low output voltage of a single perovskite solar cell makes it difficult to drive high-potential energy storage devices. Compared with simple series connection (wire connection), two-terminal tandem perovskite or PSC/Si configurations greatly increase the output voltage while reducing the overall occupied volume. In addition, the two-electrode integrated design offers the most advantages among the feasible integrated systems, with the perovskite thin film playing a critical role in both generating and storing electrical energy. Under solar irradiation (100 mW cm −2 ), the coupling of photoelectron excitation and electrochemistry enhances the storage efficiency and power density of the integrated system, so that highly efficient integration of light-energy harvesting and storage can be realized.
Great efforts have been made to improve the overall efficiency of integrated energy conversion-storage systems, since overall efficiency is one of the most significant figures of merit. However, limited attention has been paid to other parameters, namely the energy density and power density of the overall charge storage. For an integrated system, it is difficult to meet the demands on energy density and power density if optimization is applied solely to the active materials or electrolytes. The photorechargeable battery is an energy storage device in which the generation of light-excited charge carriers and the electrochemical reaction proceed simultaneously, and the additional photoelectrons further enhance the energy and power density of the battery. We therefore suggest an integrated system combining photorechargeable batteries and PSCs, which is expected to maximize the overall energy and power density. In such a system, the design of highly transmissive positive electrodes (with alkali or non-alkali metal electrodes as negative electrodes) is the key criterion. Following the working principles of PSCs, the active materials of the positive electrodes should be appropriate semiconductors with good chemical stability (in the electrolytes), strong light absorption (a matched bandgap), and high energy density. In this way, the PSCs and the energy storage unit can harvest light simultaneously, the integrated energy conversion-storage system becomes self-charging, and, more importantly, the overall energy density and power density can be substantially enhanced (Figure 9).
|
v3-fos-license
|
2018-04-03T05:30:20.618Z
|
2017-11-16T00:00:00.000
|
3928533
|
{
"extfieldsofstudy": [
"Geography",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://doi.org/10.1016/j.dib.2017.11.028",
"pdf_hash": "0cb5024dac617a7412845dce8ad5c54374d0eb40",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46661",
"s2fieldsofstudy": [
"Environmental Science"
],
"sha1": "0f42b69ac7dc7202302b17d5450da2e52d84a1fa",
"year": 2017
}
|
pes2o/s2orc
|
A reconstructed database of historic bluefin tuna captures in the Gibraltar Strait and Western Mediterranean
This data paper presents a reconstruction of a small but consistent database of historical capture records of bluefin tuna (Thunnus thynnus; BFT hereafter) from the Gibraltar Strait and Western Mediterranean (Portugal, Spain and Italy). The compilation comes from diverse historical and documentary sources and spans the interval from 1525 to 1936, a period of 412 years. It contains a total of 3074 data points, covering 67.83% of the possible records and implying 32.17% missing data. Captures were reconstructed only for the interval 1700-1936, and only for 9 of the 11 series, owing to the scarcity and inhomogeneity of the two oldest capture time series. This reconstructed database provides an invaluable opportunity for fisheries and marine research as well as for multidisciplinary research on climate change.
This database provides an invaluable opportunity for fisheries and marine research (e.g., resources management) as well as for multidisciplinary research on climate change.
This dataset will be beneficial for understanding bluefin tuna population dynamics and their relationship with different environmental variables.
Data
The historical BFT captures span the interval from 1525 to 1936, a period of 412 years (Fig. 1). There is a total of 3074 data points, covering 67.83% of the possible records and implying 32.17% missing data (Fig. 2). Data were manually digitized from diverse documentary and historical sources as well as some "recent" publications [1][2][3][4][5][6][7][8][9]. The database was then double-checked by the investigators for potential typographical errors. In addition, we compared our compilations visually and quantitatively (as far as possible) with previous works [2,4,[6][7][8][9][10]. After a preliminary inspection, we decided to limit our data reconstructions to the interval from 1700 to 1936, owing to the scarcity and inhomogeneity of the two oldest capture time series (Conil and Zahara; Fig. 1 in [9]). As a consequence of these drawbacks, Conil and Zahara were excluded from our data reconstructions (Fig. 3).
Experimental design, materials and methods
We reconstructed the missing data using the Data INterpolating Empirical Orthogonal Functions technique (DINEOF) [11][12][13], as implemented in the R package sinkr [14]. This statistical reconstruction technique is based on the decomposition of the time series into Empirical Orthogonal Functions (EOFs) and was first applied to fisheries by [13]. DINEOF is a self-consistent method for reconstructing missing values in geophysical data (i.e., oceanographic, meteorological, etc.) [15]. It relies on the fact that an optimal number of EOFs, usually very small compared with the total number of EOFs, retains a large fraction of the total variance of the whole dataset. DINEOF fills the missing data by means of an iterative process [12,13]: 1) the leading EOF is computed; 2) the leading EOF is used to estimate the anomalies at the missing points; 3) the process is iterated until the anomalies at the missing values change by less than a prescribed tolerance from one iteration to the next; 4) once convergence is reached, the number of computed EOFs is increased, from 1 to 2 and so on up to k_max EOFs; and 5) after convergence, an estimate of the missing data is obtained from reconstructions computed using 1, 2, …, k_max EOFs. The optimum number of EOFs to be used in the reconstructions is normally chosen by cross-validation [16]. In this data paper, however, we used the maximum number of EOFs, which corresponds to the number of reconstructed time series (i.e., 9 series).
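The iterative scheme above maps naturally onto a truncated-SVD loop. The study itself used the sinkr implementation in R; the Python sketch below is only a minimal illustration of the idea, and the function name, tolerance, and synthetic example data are assumptions made here for demonstration rather than taken from the paper.

import numpy as np

def dineof_fill(X, k_max, tol=1e-4, max_iter=500):
    """Minimal DINEOF-style gap filling.
    X: 2-D array (years x series) with np.nan at missing points.
    k_max: maximum number of EOFs used in the reconstruction."""
    mask = np.isnan(X)
    Xf = np.where(mask, np.nanmean(X, axis=0), X)    # initial guess: column means
    for k in range(1, k_max + 1):                    # 1, 2, ..., k_max EOFs
        for _ in range(max_iter):
            mean_k = Xf.mean(axis=0)
            U, s, Vt = np.linalg.svd(Xf - mean_k, full_matrices=False)
            estimate = ((U[:, :k] * s[:k]) @ Vt[:k, :] + mean_k)[mask]
            change = np.max(np.abs(estimate - Xf[mask]))
            Xf[mask] = estimate                      # update only the missing entries
            if change < tol:                         # converged for this number of EOFs
                break
    return Xf

# Illustrative use: 237 years (1700-1936) x 9 capture series with roughly 30% gaps
rng = np.random.default_rng(0)
catches = rng.gamma(2.0, 500.0, size=(237, 9))
catches[rng.random(catches.shape) < 0.3] = np.nan
reconstructed = dineof_fill(catches, k_max=9)        # the data paper keeps all 9 EOFs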
|
v3-fos-license
|
2019-04-26T13:36:11.629Z
|
2019-04-01T00:00:00.000
|
133334446
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s41030-019-0091-0.pdf",
"pdf_hash": "dd9f9229cc40dcdadb1673a8b9bb480a9a7676fd",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46662",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "dd9f9229cc40dcdadb1673a8b9bb480a9a7676fd",
"year": 2019
}
|
pes2o/s2orc
|
Patient-Reported Burden of Illness in a Prevalent COPD Population Treated with Long-Acting Muscarinic Antagonist Monotherapy: A Claims-Linked Patient Survey Study
Introduction Symptom burden in inadequately controlled chronic obstructive pulmonary disease (COPD) considerably impacts quality of life, healthcare resource utilization (HCRU) and associated costs. This claims-linked cross-sectional survey study assessed symptom burden and HCRU among a prevalent population of COPD patients prescribed long-acting muscarinic antagonist (LAMA) monotherapy. Methods Patients were identified using claims data from the Optum Research Database. Eligible patients were aged ≥ 40 years with 12 months’ continuous enrollment in a US health plan, ≥ 2 medical claims containing COPD diagnosis codes ≥ 30 days apart, and ≥ 2 claims for LAMA monotherapy in the latter half of the 12-month sample identification period. Patients were mailed a cross-sectional survey assessing patient-reported outcomes (PROs) [COPD assessment test (CAT) and modified medical research council dyspnea scale (mMRC)], clinical characteristics, smoking history, and demographics. Patients also completed the Exacerbations of Chronic Pulmonary Disease Tool (EXACT-PRO) daily diary for 7 days. HCRU was assessed from claims data. Results The study included 433 patients with a self-reported healthcare provider COPD diagnosis, and both claims-based and self-reported LAMA monotherapy treatment (mean age 71.0 years; 59.8% female). Most patients (85.5%) reported a high symptom burden (CAT score ≥ 10), 45.5% had high levels of dyspnea (mMRC grade ≥ 2), and 64.4% reported more severe daily symptoms by the EXACT-PRO. Most patients (71.6%) reported high scores on ≥ 2 PROs. More patients with high symptom burden had COPD-related emergency department visits than those with lower disease burden (27.6% vs 12.7%, P = 0.012). Conclusions In conclusion, a large proportion of patients with COPD receiving LAMA monotherapy experienced a high symptom burden and may benefit from therapy escalation. Healthcare professionals can use validated PROs to help them assess symptom burden. Funding GlaxoSmithKline (GSK study number: 205862) Electronic supplementary material The online version of this article (10.1007/s41030-019-0091-0) contains supplementary material, which is available to authorized users.
INTRODUCTION
Chronic obstructive pulmonary disease (COPD) is one of the most common chronic diseases, and is a leading cause of mortality and morbidity worldwide [1,2]. In the USA, it is the third leading cause of death and is reported to affect over 15 million people [3,4]. This number is expected to rise due to increasing exposure to risk factors and changing population demographics. COPD, characterized by airflow obstruction that progressively worsens over time, leads to debilitating symptoms such as dyspnea and persistent cough and is one of the leading causes of hospitalizations and emergency department (ED) visits globally [1,5]. COPD is also associated with considerable economic burden, with COPD exacerbations in particular contributing significantly to both direct and indirect healthcare costs [1,2,5].
The mainstay of pharmacological therapy for COPD is bronchodilation with a long-acting muscarinic antagonist (LAMA), a long-acting β2-agonist (LABA), or a combination of the two [6][7][8]. Currently, the 2019 Global Initiative for Chronic Obstructive Lung Disease (GOLD) strategy document recommends LAMA or LABA monotherapy as the initial therapy for patients with COPD who have either a lower symptom burden and higher exacerbation history, or a higher symptom burden and lower exacerbation history [2]. However, a significant proportion of patients can fail to achieve adequate control of symptoms when treated with LAMA or LABA monotherapy [9]. For these patients, escalation to LAMA/LABA combination therapy is recommended, or escalation to triple therapy [a combination of a LAMA, LABA, and inhaled corticosteroid (ICS)] [3] for patients at higher risk of exacerbation [2,9].
The increased symptom burden in patients with inadequately controlled COPD can reduce activity levels and quality of life (QoL) [10][11][12], as well as increasing the risk and frequency of exacerbations which are associated with more rapid disease progression [13] and are a major driver of healthcare resource utilization (HCRU) and associated costs [14,15]. It is therefore important to understand the symptom burden for patients receiving COPD treatment, so that treatment strategies can be optimized.
The objective of this study was to further understand the burden of COPD by examining symptom burden and HCRU among a prevalent population of patients with COPD treated with LAMA monotherapy. The primary objective was to identify the proportion of patients reporting COPD symptoms while receiving treatment with LAMA monotherapy. Secondary objectives included the description of the patient-reported burden of illness, and all-cause and COPDrelated HCRU.
Study Design
The study was a claims-linked, cross-sectional survey of patients with COPD who were prescribed LAMA monotherapy and enrolled in commercial or Medicare Advantage (MA) insurance plans. Patients were identified using medical and pharmacy claims, and enrollment data from the Optum Research Database (ORD) between October 1, 2015 and September 30, 2016. The ORD is a large, geographically diverse, US administrative claims database. In 2016, approximately 32.8 million individuals with commercial coverage and 3.2 million individuals with MA coverage were included in the ORD.
Patients who met study inclusion/exclusion criteria (below) were recruited directly by mail and consented to study participation by returning a completed paper survey and/or a 7-day daily diary. Survey data collection occurred from October to December 2016 and was conducted using a modified Dillman method [16]. Patients were paid $25 following the return of the survey and/or diary with a maximum payment of $50 per patient.
The study was approved by the New England Institutional Review Board (NEIRB), on September 9, 2016 (IRB #120160900). Data collection activities were initiated following all approvals. All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards. Informed consent to take part in the study was implied by the return of study materials.
Patient Identification
Patients were required to be at least 40 years of age and continuously enrolled in a commercial or MA health plan with both medical and pharmacy benefits during the 12-month baseline period. Patients were also required to have ≥ 2 medical claims containing diagnosis codes commonly used to define COPD [2,17] [International Classification of Diseases, 10th Revision, Clinical Modification (ICD-10-CM) codes J40-J44] at least 30 days apart during the 12-month baseline, and ≥ 2 claims for LAMA monotherapy (umeclidinium, tiotropium, or aclidinium) in the latter 6 months of the sample identification period (codes and treatments are presented in Supplementary Table S1). Patients were excluded if they had prescription claims for any ICS- or LABA-containing therapy (ICS, ICS/LABA, or LAMA/LABA) during the 12-month baseline period. Patients with evidence of lung cancer during the baseline period were excluded. All patients were also required to self-report a healthcare professional diagnosis of COPD and LAMA monotherapy use, and to be able to complete the study surveys in English.
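As a rough, hypothetical illustration of how such claims-based rules can be encoded, the sketch below applies the age, diagnosis-spacing, LAMA-fill, and exclusion criteria to a per-patient claims extract. The column names, table layout, and helper function are assumptions for illustration only; the actual study applied its own extraction logic to the Optum Research Database.

import pandas as pd

COPD_CODES = {"J40", "J41", "J42", "J43", "J44"}          # ICD-10-CM codes defining COPD
LAMA_DRUGS = {"umeclidinium", "tiotropium", "aclidinium"}
EXCLUDED_CLASSES = {"ICS", "ICS/LABA", "LAMA/LABA"}

def meets_claims_criteria(age, medical, pharmacy, id_period_start):
    """Hypothetical check of the claims-based inclusion rules for one patient.
    medical  : DataFrame with columns ['service_date', 'dx_code'] (baseline year)
    pharmacy : DataFrame with columns ['fill_date', 'drug_name', 'drug_class'] (baseline year)"""
    if age < 40:
        return False

    # >= 2 COPD diagnosis claims (J40-J44) at least 30 days apart in the baseline year
    copd_dates = medical.loc[medical["dx_code"].str[:3].isin(COPD_CODES),
                             "service_date"].sort_values()
    if len(copd_dates) < 2 or (copd_dates.iloc[-1] - copd_dates.iloc[0]).days < 30:
        return False

    # >= 2 LAMA monotherapy fills in the latter 6 months of the identification period
    lama_fills = pharmacy[
        pharmacy["drug_name"].str.lower().isin(LAMA_DRUGS)
        & (pharmacy["fill_date"] >= id_period_start + pd.DateOffset(months=6))
    ]
    if len(lama_fills) < 2:
        return False

    # exclusion: any ICS- or LABA-containing therapy during the baseline period
    if pharmacy["drug_class"].isin(EXCLUDED_CLASSES).any():
        return False
    return True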
Study Measures
Demographic, sociodemographic, and clinical characteristics were captured using both patient-reported survey data and claims data, including patient-reported time since diagnosis and smoking status. Claims-based evidence of 11 COPD-related comorbidities was identified, including dyspnea; hypertension; atherosclerotic cardiovascular disease (ASCVD); type 2 diabetes mellitus; obstructive sleep apnea; depression; and anxiety (all comorbidities were based on diagnosis codes except for depression and anxiety rates, which were based on evidence of diagnosis and/or treatment) [18]. Quan-Charlson comorbidity scores [19] were calculated based on the presence of diagnosis codes on medical claims during the baseline period. HCRU, including ambulatory (physician office and outpatient), inpatient, and ED visits, was obtained from medical claims. HCRU was defined as COPD-related if the medical claim included an ICD-10-CM diagnosis code for COPD in any position.
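A claim-level flag for COPD-related utilization as defined here (a COPD diagnosis code in any position on the claim) could be expressed as simply as the following hypothetical helper:

def is_copd_related(claim_dx_codes):
    """Flag a medical claim as COPD-related if any ICD-10-CM code listed on it is in J40-J44."""
    return any(code[:3] in {"J40", "J41", "J42", "J43", "J44"} for code in claim_dx_codes)

is_copd_related(["I10", "J44.1"])   # True: a COPD code appears in a secondary position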
PRO Measures
In this study, burden of illness for COPD was defined by symptom burden, dyspnea, and symptom severity, and was assessed using three COPD-related validated patient-reported outcomes (PRO) measures. Symptom burden was assessed using the COPD Assessment Test (CAT) [20] and dyspnea was assessed using the modified medical research council dyspnea scale (mMRC) (both assessed using survey data); symptom severity was measured using the EXACT [21] (assessed using a daily diary for 7 days). The EXACT was used in a time-limited fashion to assess symptoms not addressed by CAT or mMRC, and was not used to evaluate exacerbation history. Classifications of burden of illness, dyspnea, and symptom severity were based on established cut-points, where available: [2] patients with an mMRC score ≥ 2 (range 0-4) were classified as having severe dyspnea, and patients with a CAT total score ≥ 10 (range 0-40) were classified as having a high symptom burden [22]. For EXACT scores, patients with more severe symptoms were defined as those having an EXACT score greater than the mean total score in the study sample on at least 1 day. The proportion of patients with CAT, mMRC, and EXACT scores meeting these thresholds were used to show the prevalence of high symptom burden among patients treated with LAMA monotherapy.
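Applying these cut-points is a simple classification step. The sketch below writes out the rules described above; the field names, the example scores, and the use of the sample-mean EXACT value (37.1, reported later) are assumptions chosen here for illustration.

from dataclasses import dataclass
from typing import List

@dataclass
class PatientPRO:
    cat_total: int            # CAT total score, range 0-40
    mmrc_grade: int           # mMRC dyspnea grade, range 0-4
    exact_daily: List[float]  # EXACT total score for each of the 7 diary days

def classify(p: PatientPRO, sample_mean_exact: float) -> dict:
    """Apply the study's cut-points to one patient's PRO scores."""
    return {
        "high_symptom_burden": p.cat_total >= 10,   # CAT >= 10
        "severe_dyspnea": p.mmrc_grade >= 2,        # mMRC grade >= 2
        # 'more severe symptoms': EXACT above the sample mean on at least 1 day
        "more_severe_exact": any(d > sample_mean_exact for d in p.exact_daily),
    }

# Illustrative scores only; 37.1 is the mean EXACT total reported for this sample
flags = classify(PatientPRO(cat_total=18, mmrc_grade=2,
                            exact_daily=[35, 38, 40, 36, 34, 39, 37]), 37.1)
high_on_two_or_more = sum(flags.values()) >= 2      # flagged on >= 2 PRO measures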
Statistical Analyses
The analytic population consisted of respondents with claims-linked survey and diary data who met all study inclusion and exclusion criteria (n = 433). Statistical analyses were performed using SAS software (SAS Institute Inc., Cary, NC, USA, version 9.4) on a Unix platform. Results are presented descriptively. Statistical comparisons were performed using the appropriate two-sided tests (e.g., t-test, chi-square test) based on the distribution of the measure. For all PRO measures, the mean total, summary, and/or domain scores and standard deviations (SDs) were calculated. In all analyses, statistical significance was defined as P < 0.05.
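The analyses were run in SAS; as a generic illustration of the kind of two-sided comparison described, the snippet below runs a chi-square test on a 2x2 table back-calculated approximately from the COPD-related ED visit percentages reported later (27.6% vs 12.7% in roughly 370 and 63 patients). The counts are therefore only indicative, not the study data.

from scipy.stats import chi2_contingency

# Approximate table: rows = CAT >= 10 / CAT < 10, columns = ED visit yes / no
observed = [[102, 268],
            [  8,  55]]

chi2, p, dof, expected = chi2_contingency(observed, correction=False)
print(f"chi-square = {chi2:.2f}, p = {p:.3f}")  # without continuity correction this lands
                                                # close to the reported P = 0.012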
Study Population
A total of 2275 patients met the eligibility criteria for the study [including having multiple medical claims containing COPD diagnosis codes (Supplementary Table S1), a self-reported healthcare provider COPD diagnosis, and both claims-based and self-reported LAMA monotherapy treatment] and were invited to participate. Of these, 528 completed the survey and daily diary (29.8% response rate [23]) and 433/528 had matched claims, survey, and diary data and were included in the final analyses (Fig. 1). Patient demographics and clinical characteristics are presented in Table 1. Over half of the patients were female (59.8%), and the average age was 71.0 years. The majority had an education level of high school or less (58.4%), annual household income < $50,000 (79.2%), and were current or former smokers (92.6%). There was a high incidence of comorbidities during the 12-month baseline period: all patients had at least one comorbidity, as measured by the Quan-Charlson comorbidity index [19], with a mean (SD) baseline score of 2.2 (1.6), and 36% of patients had a comorbidity score of ≥ 3. A comparison of baseline demographics of respondents and non-respondents to the survey is shown in Supplementary Table S2. Respondents and non-
COPD Burden of Illness
Despite all patients receiving LAMA monotherapy, the majority reported a high COPD symptom burden: 85.5% and 39.0% had a CAT total score ≥ 10 or ≥ 21, respectively, and the mean (SD) overall CAT score was 18.5 (8.4). In addition, almost half of the patients (45.5%) had high levels of dyspnea (mMRC grade 2-4); the mean (SD) mMRC score was 1.6 (1.0), and most patients experienced shortness of breath with less than strenuous exercise (90.5%). Analysis of the EXACT daily diary scores (where more severe symptoms were defined as an EXACT score greater than the study sample mean on at least 1 day; higher EXACT values indicate greater symptom severity) showed that 64.4% of patients had more severe symptoms, and the mean (SD) average EXACT total score was 37.1 (12.1) (Table 2). Over one-third (35.8%) of patients had a higher symptom burden on all three PRO measures (mMRC, CAT, and EXACT), and 71.6% reported high scores on ≥ 2 COPD-related PRO measures. By contrast, only 12.0% of patients experienced a low symptom burden on all three measures (Table 2).
All-Cause HCRU
Patients were prescribed a mean (SD) of 12.8 (6.7) unique medications during the 12-month baseline period. Patients with more severe disease burden (CAT score ≥ 10) were prescribed a statistically significantly higher number of unique medications than those with a low CAT score (CAT score < 10; Supplementary Table S3). All patients had ≥ 1 all-cause ambulatory visit during the baseline period, with most patients having at least one visit to the physician's office (Supplementary Table S3). Nearly 1 in 4 patients experienced an inpatient hospitalization (22.6%), with an average length of stay for inpatient admissions of 13 days. Nearly half (45.3%) of patients had ≥ 1 ED visit, and the average number of visits among those with any ED visit was 2.1 visits per person.
COPD-Related HCRU
Almost all patients (97.7%) had ≥ 1 COPD-related ambulatory visit during the 12-month baseline period, including 89.4% of patients requiring a physician office visit and 49.0% with at least 1 outpatient visit (Table 3). The proportion of patients with physician office visits was statistically significantly lower among patients with high disease burden compared with those with low disease burden as measured by total CAT score (88.1% vs 96.8%, P = 0.044). Of those patients with at least one physician office visit, patients with higher CAT scores required more visits than those with lower CAT scores [mean (SD) number of visits: 3.7 (2.4) vs 3.2 (1.8), P = 0.035]. No statistically significant difference was observed in the proportion of patients requiring outpatient visits.
A quarter (25.4%) of patients had a COPD-related ED visit during the baseline period; the mean number of ED visits among these patients was 1.6 (Table 3). The proportion of patients with ED visits was statistically significantly higher among patients with high symptom burden (CAT score ≥ 10) than those with low symptom burden (CAT score < 10; 27.6% vs 12.7%, P = 0.012). Overall, 21.3% of patients were hospitalized at least once for COPD; among these patients, the mean number of hospitalizations was 1.4 with an average duration of 13 days. Differences in the numbers of inpatient visits, and the mean duration of inpatient stays, between patients with low and high CAT total scores were not statistically significant.
DISCUSSION
In this claims-linked survey study assessing patient-reported symptoms and burden of illness among patients with COPD treated with LAMA monotherapy, COPD had a considerable impact on patient well-being and was associated with substantial resource burden. These results are consistent with observations from previous studies which have reported that patients receiving long-acting bronchodilator monotherapy continued to experience a high symptom burden, had recent exacerbations and exhibited poor QoL, and had a higher than average rate of physician interactions [9,24]. The majority of patients in this study experienced substantial symptom burden as measured by multiple PROs; in particular, there were high levels of dyspnea, with almost half of the patients experiencing severe dyspnea using the definitions presented in the GOLD strategic report [2]. Dyspnea poses significant problems for patients, not only in terms of day-to-day QoL but also as a marker of disease progression: for example, dyspnea is a predictor for hospitalization [25] and was found elsewhere to be more strongly correlated with 5-year survival rate than forced expiratory volume in 1 s [26].
The current analysis closely mirrored the GOLD strategy in identifying the impact of COPD in patients with low and high levels of symptoms, as defined by a CAT score of < 10 or ≥ 10, respectively. In the cohort identified in this study, the vast majority of patients with COPD had a high level of symptoms. Consequently, future real-world studies could explore additional cut-off points above a CAT score of 10 to gain further insight into the level of symptoms likely to further increase the risk of HCRU. In this study, patients utilized a wide range of healthcare resources, including hospitalizations and ED visits as well as a range of ambulatory visits. The majority of patients (97.7%) were seen by a healthcare professional for their COPD (office or outpatient visit) during the 12-month baseline period. A statistically significantly greater proportion of patients with lower CAT total scores (< 10) had a physician office visit compared with patients with higher CAT scores (≥ 10). Conversely, patients with higher CAT scores had statistically significantly more ED visits, suggesting greater disease severity or suboptimal management, consistent with the greater symptom burden experienced by this group.
In view of the considerable symptom burden and resource use evident among patients treated with LAMA monotherapy in this study, it is possible that many of these patients could benefit from escalation of therapy. In accordance with the current GOLD recommendations, clinicians should consider the use of an additional bronchodilator such as LAMA/LABA combination therapy when a monotherapy bronchodilator does not provide adequate symptom control [2]. Multiple clinical trials and network meta-analyses have reported improved lung function and QoL outcomes with the use of LAMA/LABA combination therapy compared with long-acting bronchodilator monotherapy [27][28][29][30][31][32][33][34].
It should be noted that CAT, mMRC, and EXACT assess different aspects of COPD burden of illness, and it remains unclear which of these tools or combination of tools should be prioritized in assessment of patient symptoms. The current study was limited to patients receiving LAMA monotherapy and excluded all patients prescribed ICS or dual bronchodilators. The reasons why patients with poor disease control on monotherapy did not escalate to combination therapies were not evaluated. One possible reason might be an underestimation of the symptom burden by the physician, as identified by Mapel et al. [35] or poor communication of the symptom burden between patients and physicians.
We note that the study population is slightly older, has a higher proportion of female patients, and a lower percentage of smokers than are usually seen in COPD clinical trials [36]. In addition, a high proportion of patients have comorbidities, in particular those related to ASCVD, [36] therefore it is possible that dyspnea observed in these patients may not be due only to COPD.
Limitations of this study included those typically associated with claims-linked survey studies. Because claims data are collected for payment rather than research, this data source is associated with certain limitations: the presence of a claim for a filled prescription does not indicate that the medication was consumed or taken as prescribed, and medications filled over-the-counter or provided as samples by a physician were not captured. Additionally, the presence of a diagnosis code on a medical claim does not constitute conclusive evidence of the disease. Patients with a diagnosis of asthma were not excluded from the study; therefore, as no spirometry data were available to confirm the COPD diagnosis, it is possible that the diagnosis code may have been included as a rule-out criterion or incorrectly coded. To help address these limitations, multiple pharmacy claims and diagnosis codes were required for sample inclusion; the requirement for patients to have claims-based and self-reported treatment and multiple diagnosis codes for COPD at least 30 days apart ensured that a prevalent population of patients with COPD was included in the analysis. Patients were also required to report a healthcare provider COPD diagnosis and COPD treatment. The study is subject to limitations of survey data, including sampling error, coverage error, and measurement error. Finally, the study population comprised patients with commercial health plan coverage and MA enrollees, and therefore the results may not be generalizable to uninsured populations or more broadly to populations outside the USA.
CONCLUSIONS
While LAMA monotherapy has demonstrated efficacy and has been shown to reduce dyspnea, exacerbations, and hospitalizations in patients with COPD, [37] the results of this study demonstrate that a large proportion of patients receiving LAMA monotherapy still remain symptomatic. Escalation of therapy to a dual LAMA/ LABA combination may be indicated to reduce patient burden of illness and improve patient QoL, which may in turn reduce HCRU. Better utilization of PROs to understand symptom severity and disease burden in COPD may lead to improved treatment strategies and in turn to amelioration of disease burden and reduced HCRU. Physicians should therefore consider including questions or tools to measure symptom burden, such as the CAT or mMRC, as part of routine care for patients with COPD.
ACKNOWLEDGMENTS
The authors thank the participants of this study.
Funding. This study was funded by GlaxoSmithKline (GSK study number 205862 [HO- ). The funders of the study had a role in the study design, data analysis, data interpretation, and writing of the report. The article processing charges associated with this publication were funded by GSK. The study was conducted by Optum and funded by GSK. Employees of Optum were not paid for manuscript development. All authors had full access to the data in this study and take complete responsibility for the integrity of the data and accuracy of the data analysis. The corresponding author had the final responsibility to submit for publication and is the guarantor.
Medical Writing, Editorial, and Other Assistance. Editorial support (in the form of writing assistance, assembling tables and figures, collating author comments, grammatical editing and referencing) was provided by Elizabeth Jameson, PhD, at Fishawack Indicia Ltd, UK, and was funded by GSK.
Authorship. All named authors meet the International Committee of Medical Journal Editors (ICMJE) criteria for authorship for this article, take responsibility for the integrity of the work, contributed to the writing and reviewing of the manuscript, and have given final approval for the version to be published.
Authorship Contributions. BH was involved in the conception/design of the study and analysis/interpretation of data. RHS was involved in the conception/design of the study and analysis/interpretation of data. AGH was involved in conception/design of the study, the acquisition of data and analysis/interpretation of data. BE was involved in the acquisition of data. JW (statistician) was involved in the acquisition of data and analysis/interpretation of data. RR was involved in the conception/design of the study and analysis/interpretation of data.
Disclosures. Beth Hahn is an employee of GSK and holds stocks/shares in GSK. Riju Ray is an employee of GSK and holds stocks/shares in GSK. Richard H Stanford is an employee of GSK and holds stocks/shares in GSK. Alyssa Goolsby Hunter is an employee of Optum, which was contracted by GSK to conduct the study. Alyssa Goolsby Hunter also owns stocks in Optum's parent company, United Health Group. Breanna Essoi is an employee of Optum, which was contracted by GSK to conduct the study. John White is an employee of Optum, which was contracted by GSK to conduct the study. Employees of Optum were not paid for manuscript development.
Compliance with Ethics Guidelines. The study was approved by the New England Institutional Review Board (NEIRB), on September 9, 2016 (IRB #120160900). Data collection activities were initiated following all approvals. All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards. Informed consent to take part in the study was implied by the return of study materials.
Data Availability Statement. This study was a collaboration between GSK and Optum. GSK makes available anonymized individual participant data and associated documents from interventional clinical studies which evaluate medicines, upon approval of proposals submitted to www.clinicalstudydatarequest.com. To access data for other types of GSK sponsored research, for study documents without patientlevel data and for clinical studies not listed, please submit an enquiry via the website. The datasets analyzed during the current study are not publicly available. For this manuscript, the data is contained in a database owned by Optum and contains proprietary elements and, therefore, cannot be broadly disclosed or made publicly available at this time. The disclosure of this data to third-party clients assumes certain data security and privacy protocols are in place and that the third-party client has executed Optum's standard license agreement which includes restrictive covenants governing the use of the data.
|
v3-fos-license
|
2017-09-06T09:59:09.020Z
|
2021-12-15T00:00:00.000
|
42895207
|
{
"extfieldsofstudy": [
"Medicine",
"Business"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://link.springer.com/content/pdf/10.1007/978-3-030-88203-7_5.pdf",
"pdf_hash": "0d76d0cd4e18bbcc537c1f3ffafbcfe978b73dec",
"pdf_src": "ScienceParsePlus",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46663",
"s2fieldsofstudy": [
"Economics"
],
"sha1": "cbf7609b484cf722a2681abde16c283beb7da0d6",
"year": 2021
}
|
pes2o/s2orc
|
The business case for sustainability
Companies that align their business and employee values with environmental and social responsibility may also see decreased employee hiring and retention costs. Patagonia, a brand renowned for its commitment to its sustainable mission and responsible business practices, has a 4% employee turnover rate, much lower than the average across the retail and consumer goods sectors. Sustainability's impact on corporate finance should not be overlooked either. Studies show that businesses with responsible ESG practices often have a lower cost of equity and debt capital. There are also an increasing number of tax benefits available at the federal and state level that accelerate the timeline for sustainability initiatives to reach positive ROI. Notably, the Inflation Reduction Act (IRA) made $416 billion available to businesses to invest in solutions that reduce pollution, expand clean energy production, and address historical and emerging inequities.
THESE COMPLEX CHALLENGES CAN CREATE VALUE FOR BUSINESS
Sustainability has become an important factor in business strategies. Large multinationals and mid-sized companies are increasingly taking a long-term view toward managing environmental and social risks. Many companies recognize that by addressing environmental and social issues they can achieve better growth and cost savings, improve their brand and reputation, strengthen stakeholder relations, and boost their bottom line. Strategic integration of sustainability prepares companies to better anticipate and understand long-term trends and the effects of resource use, and to address stakeholder expectations. According to a 2011 McKinsey survey, 76 percent of CEOs consider that strong sustainability performance contributes positively to their businesses in the long term.2
Companies are capitalizing on local conditions and shaping their business strategies to accommodate constraints on natural resources in a way that allows them to develop innovative new products, services, and business models. This also provides opportunities to bolster their growth and profitability, and to add societal value. The business case for sustainability is also connected to improved reputation and brand value. The global survey of senior executives from around the world conducted by the Economist in 2011 found that 76 percent of respondents think that embedding sustainability into the company's business leads to enhanced reputation and increased brand value. The more a company proves to stakeholders that its business is driven by strong sustainability policies, the lower the risks associated with that company. In contrast, weak environmental, social, and governance (ESG) performance can negatively impact a firm's reputation, which in many cases can be costly. British Petroleum (BP) is a good example of how a company's brand value can be affected by poor sustainability policies.
CREATING INNOVATIVE SOLUTIONS
Jain Irrigation is an example of a company that created innovative social solutions and feeds those innovations back into communities. The IFC client, based in Jalgaon, India, pioneered a system of contract farming in which the company buys farmers' crops at a guaranteed price, thereby enabling farmers to plan and obtain loans for irrigation products, such as an affordable drip irrigation system that reduces water consumption. Jain Irrigation has worked closely with its rural customers to promote precision farming, which increases output by optimizing the balance between fertilizers, pesticides, water, and energy. This approach has given Jain Irrigation a competitive edge: its close relationship with smallholder farmers and the fact that its products are customized to local conditions make it easier to win business from large agricultural suppliers.
PROTECTING BRAND VALUE
Due to the Gulf of Mexico oil spill, BP lost more than $32 million a day in brand value. BP's market value dropped from $184 billion to $96.5 billion, roughly 48 percent, in a period of two months. Developing a good environmental and social reputation can contribute to a willingness among customers and investors to pay a price premium, which directly affects the company's bottom line.
Investment in resource efficiency is important for small and large companies alike, helping them strengthen their competitive advantage. Studies have shown that improvements in resource efficiency in energy and water have led to significant cost savings and lower environmental impact. DuPont, for example, has cut costs by $2 billion in the last 10 years by investing in energy-efficiency equipment while reducing greenhouse gas emissions by 75 percent. Another good example of reducing operational costs and environmental impact is the IFC client Kuybyhev Azot in Russia. Companies are also working with suppliers to become more resource efficient and environmentally sustainable. For example, Wal-Mart is aiming to save $3.4 billion by reducing supplier packaging by 5 percent by 2013.
There is a correlation between good environmental and social performance and financial performance. According to a Harvard Business School study that tracked performance over the last 18 years, companies with strong ESG performance outperformed companies with weak ESG performance, as measured in accounting terms.3 The study found that outperformance was stronger in sectors that were significant users of natural resources, where brand and human capital were particularly important, and where the companies competed on a business-to-consumer basis.
CUSTOMERS AND INVESTORS VALUE STRONG ESG PERFORMANCE
The growing demand by consumers and investors for sustainable products and services, coupled with increased scrutiny and reporting on corporate responsibility, is driving companies to pay greater attention to their ESG performance. According to McKinsey's global survey of 7,751 consumers, 87 percent are concerned about the environmental and social impacts of the products they buy and 54 percent are willing to pay a premium for products that are sustainably manufactured.
Increasingly, investors are considering environmental and social issues when selecting investments. According to Bloomberg, in 2010, 5,000 investors in 29 countries accessed more than 50 million ESG indicators on the Bloomberg platform, a 29 percent increase over the previous year. Sustainability reporting frameworks such as the Global Reporting Initiative (GRI) and the Carbon Disclosure Project (CDP) have become important tools for investors in making informed investment decisions. The number of companies using GRI as a framework for sustainability reporting has increased by 73 percent in the last four years, with a dramatic increase from developing countries that are reporting on sustainability measures.
The socially responsible investing (SRI) market enables investors to earn a positive return on their investments while also bringing positive impacts to society. According to the Ethical Funds global survey of investors, 92 percent of respondents think that the financial returns of SRIs play an important role in their decision to invest in SRIs. Similarly, environmental and social evaluation plays a crucial role in investors' decisions to allocate their capital to SRI funds.
The growth of SRIs has increased exponentially in the last 10 years. The SRI market has grown at an annual rate of 22 percent since 2003. By 2015, SRI assets under management are expected to reach $26.5 trillion, or 15 percent of the global total. In 2011, SRIs attracted about one dollar out of every nine invested. Investors are attracted to SRI markets due to their robust financial performance. The majority of SRI funds outperformed the S&P 500 over a 10-year period by an average of 6.7 percent. Similarly, over a 5-year period the Dow Jones Group Sustainability Index performed on average 36.1 percent better than the traditional Dow Jones Group Index.
IMPROVEMENTS IN ESG PERFORMANCE CAN RESULT IN GREATER DEVELOPMENT IMPACT
The success of a company is inextricably linked to the success and sustainability of the communities in which it operates. The Coca-Cola Company and Newmont help illustrate how companies are integrating sustainable development objectives into their core business strategies, thereby benefiting the communities and local economies in which they operate.
The Coca-Cola Company played an important role in the sustainable development of communities in Zambia through the value of goods generated, jobs created, and its positive impact on the supply chain. Coca-Cola procures approximately 25 percent of inputs from local smallholder farmers; the remaining inputs are purchased from companies based regionally. Smallholder farmers play an important role in growing the sugar that is used in Coca-Cola products. In Zambia, sugarcane workers are among the most vulnerable to labor violations due to the lack of formal contractual arrangements to protect their rights and the low-paid, seasonal nature of their work. For this reason, Coca-Cola introduced an audit program to assess whether supplier and bottler workplaces uphold internationally recognized labor and environmental standards. Through its local partners, Coca-Cola introduced programs to support HIV/AIDS services for its employees and their dependents free of charge, including education and awareness-raising programs, voluntary testing and counseling, and free antiretroviral drugs.
Since Coca-Cola uses water as the primary ingredient in its beverages as well as in its manufacturing activities, the company's most significant impact is on water resources at the agricultural stage.

Newmont invested in the Ahafo Mine in Ghana to develop four mining areas and to build and operate related mine facilities; IFC supported the project with $125 million in loans, or about 21 percent of total cost. Prior to investing in the Ahafo Mine, Newmont engaged with local communities to responsibly resettle and compensate roughly 1,700 households located in the mining area. As part of the resettlement, Newmont built new homes and schools, and residents were granted legal title to the land, along with potable water and access to electricity. Additionally, Newmont launched a community development fund to contribute an estimated $500,000 annually to support community development programs such as the provision of water and sanitation, upgrading local clinics and training centers, HIV/AIDS programs for workers as well as a program on malaria prevention, and an information forum for women in the community. In addition to Newmont's community programs, IFC introduced linkages programs to increase local participation in the project and bring additional benefits to the surrounding communities.
[Figure: Financial performance of companies with weak vs. strong ESG performance. Source: Eccles G.R., Ioannou I., Serafeim G., "The Impact of a Corporate Culture of Sustainability on Corporate Behavior and Performance," Harvard Business School, November 2011.]
3 Eccles G.R., Ioannou I., Serafeim G., "The Impact of a Corporate Culture of Sustainability on Corporate Behavior and Performance," Harvard Business School, November 2011.
|
v3-fos-license
|
2021-02-05T05:11:44.189Z
|
2021-02-03T00:00:00.000
|
231803056
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.nature.com/articles/s41598-021-82661-y.pdf",
"pdf_hash": "27bf2d7173badb63abd9e5d848ee81740214fc88",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46664",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"sha1": "27bf2d7173badb63abd9e5d848ee81740214fc88",
"year": 2021
}
|
pes2o/s2orc
|
Five patients with disorders of calcium metabolism presented with GCM2 gene variants
The GCM2 gene encodes a transcription factor predominantly expressed in parathyroid cells that is known to be critical for the development, proliferation and maintenance of the parathyroid cells. A cohort of 127 Spanish patients with a disorder of calcium metabolism was screened for mutations by Next-Generation Sequencing (NGS). A targeted panel for disorders of calcium and phosphorus metabolism was designed to include 65 genes associated with these disorders. We observed two variants of uncertain significance (p.(Ser487Phe) and p.Asn315Asp), one likely pathogenic (p.Val382Met) and one benign variant (p.Ala393_Gln395dup) in the GCM2 gene in the heterozygous state in five families (two index cases had hypocalcemia and hypoparathyroidism, respectively, and three index cases had primary hyperparathyroidism). Our study shows the utility of NGS in unravelling the genetic origin of some disorders of calcium and phosphorus metabolism, and confirms the GCM2 gene as an important element for the maintenance of calcium homeostasis. Importantly, a novel variant in the GCM2 gene (p.(Ser487Phe)) has been found in a patient with hypocalcemia.
Gain-of-function variants in the GCM2 gene have been associated with primary hyperparathyroidism, a disorder characterized by hypercalcemia and elevated or inappropriate PTH secretion by the parathyroid glands.
In the present study, we report a novel variant of uncertain significance in the GCM2 gene in a family from Spain with severe hypocalcemia. Moreover, we report three previously described GCM2 gene variants, one likely pathogenic, one of uncertain significance and one benign, in four families from Spain presenting with different disorders of calcium metabolism.
Materials and methods
Ethics statement. The study was approved by the Ethics Committee for Clinical Research of Euskadi (CEIC-E). Patients and their participating relatives provided written informed consent for the genetic study. The research was carried out in accordance with the Declaration of Helsinki on human experimentation of the World Medical Association.
Patients.
A total of 65 genes whose mutations are a recognized cause of calcium and phosphorus metabolism disorders were tested with a Next-Generation Sequencing (NGS) panel in a cohort of 127 Spanish patients (50 had hypocalciuric hypercalcemia, 44 were diagnosed with primary hyperparathyroidism, 13 presented with hypocalcemia and/or hypoparathyroidism, 12 were diagnosed with pseudohypoparathyroidism, and 8 had rickets). Clinical diagnoses were made by adult and pediatric endocrinologists. In all cases, the molecular analysis was done in the Molecular Genetic Laboratory at Biocruces Bizkaia Health Research Institute, Barakaldo, Spain.
Index case CA0117 was a 65-year-old male who had hypocalcemia and hypoparathyroidism. He had osteoarthritis and suffered tingling of fingers and toes. Moreover, he was diagnosed with glaucoma. Laboratory results showed low serum Ca2+ (5.4 mg/dL) and serum intact PTH (5.8 pg/mL), whereas 25-hydroxyvitamin D (24 ng/mL) and serum phosphate (4.3 mg/dL) were within the normal range. He was treated with calcium and vitamin D supplements. Regarding family history, there was no history of hypoparathyroidism or hypocalcemia (Fig. 1b). Index case ME0292 was a 67-year-old male presenting with elevated serum intact PTH levels, hypophosphatemia, normocalcemia with vitamin D deficiency, hypercalciuria, and nephrolithiasis. Laboratory evaluation showed normal serum calcium (10.2 mg/dL), high intact PTH (95.6 pg/mL), low serum phosphate (2 mg/dL), and 25-hydroxyvitamin D levels of 11 ng/mL. In addition, he exhibited high urinary calcium excretion (386 mg/24 h, reference range in adult males < 300 mg/24 h). Furthermore, index case ME0292 had a personal history of ankylosing spondylitis, prostate gland enlargement, hepatic steatosis, and hyperlipidemia. He was diagnosed with primary hyperparathyroidism and parathyroid gland surgery was performed. Parathyroid hyperplasia of the two superior glands was verified histologically and both were removed. However, after the surgical intervention, he continued to have high intact PTH levels (113 pg/mL). He had a 61-year-old sister who had hyperparathyroidism (intact PTH 103 pg/mL) and nephrolithiasis as well (Fig. 1c). She had normal serum Ca2+ (9.4 mg/dL), normal serum phosphate (3.1 mg/dL) and exhibited slightly high urinary Ca2+ excretion (288 mg/24 h, reference range in adult women < 250 mg/24 h).
Finally, index case CA0103 was a 69-year-old male who showed normal serum Ca2+ (9.5 mg/dL), high intact PTH (140-200 pg/mL), and normal serum phosphate (3.2 mg/dL). Parathyroid scintigraphy with technetium-99m sestamibi suggested a left parathyroid adenoma. The patient exhibited low urinary Ca2+ excretion (70 mg/24 h). Additionally, he had stage 3 chronic kidney disease, which had been stable over the last 15 years, and had experienced an isolated episode of urinary lithiasis at 20 years of age. The last renal ultrasound performed was normal. Regarding family history, there was no history of hyperparathyroidism or chronic kidney disease (Fig. 1e).

PubMed (https://www.ncbi.nlm.nih.gov/pubmed/) was consulted to select the genes included in the panel. Library preparation was done using the Ion Ampliseq Library Kit v2.0 (Thermo Fisher Scientific) according to the manufacturer's instructions. Samples were then sequenced using the Ion GeneStudio S5 System (Thermo Fisher Scientific). Base calling, read filtering, alignment to the reference human genome GRCh37/hg19, and variant calling were done using Ion Torrent Suite and Ion Reporter Software (Thermo Fisher Scientific).
Variants described in this article were tested by polymerase chain reaction (PCR), sequenced with fluorescent dideoxynucleotides (BigDye Terminator v3.1 Cycle Sequencing Kit, Life Technologies, Grand Island, NY, USA), and loaded onto an ABI3130xl Genetic Analyzer (Thermo Fisher Scientific).
DNA variants were named according to the Human Genome Variation Society guidelines (http://www.hgvs.org) and classified according to ACMG-AMP (American College of Medical Genetics and Genomics and the Association for Molecular Pathology) guidelines 10 .
As a measure of association between genotype and phenotype, we used the odds ratio (OR). OR values above 1 indicate an association between the variant and an increased risk of disease, while values below 1 indicate a negative association between the variant and the risk of disease. If the 95% confidence interval for an OR includes 1, the result is not statistically significant 10 . We used the DJR Hutchon calculator for confidence intervals of odds ratios (http://www.hutchon.net/ConfidOR.htm).
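As a minimal sketch of how an OR and its 95% confidence interval can be computed, the snippet below applies the standard Woolf (log) method to the p.Asn315Asp allele counts reported below (2 of 88 patient alleles vs 481 of 129,204 gnomAD Non-Finnish European alleles). The authors used the online Hutchon calculator, so their interval may differ slightly from the one this method gives.

import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Woolf (log-based) 95% CI for a 2x2 table:
                 carriers  non-carriers
    cases            a          b
    controls         c          d"""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# p.Asn315Asp allele counts: 2/88 in the hyperparathyroidism patients, 481/129204 in gnomAD NFE
or_, lo, hi = odds_ratio_ci(2, 88 - 2, 481, 129204 - 481)
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")   # about 6.2 (roughly 1.5-25) with this method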
Index case GS0198 had the p.(Ser487Phe) variant (Fig. 1a II1). According to ACMG-AMP guidelines, this variant is classified as of uncertain significance (Table 2). It has not been found in the population frequency databases checked (GnomAD, ExAC, dbSNP, and 1000G), and occurs at a position near the C-terminus, within a domain important for the transcriptional function of the protein. Moreover, the change replaces the polar, neutral serine at codon 487, located within transcriptional activation domain 2 (TAD2), with the hydrophobic, aromatic phenylalanine, probably disturbing the normal function of the protein. His father, who had small stature (1.57 m) and short lower limbs, also carried the p.(Ser487Phe) variant in the heterozygous state (Fig. 1a I1). Index case CA0117 had the p.Ala393_Gln395dup variant in the heterozygous state (Fig. 1b I1). This duplication, which is not in a repeat region, is located within an important regulatory region (the conserved inhibitory domain). It has been found at low frequency in the population frequency databases checked (highest population Minor Allele Frequency (MAF): 0.01). Moreover, it has been observed in four adults in the homozygous state (GnomAD exomes). Furthermore, functional studies showed that this duplication has a transcriptional activity similar to that of the wild-type protein 12 . In addition, we found this duplication in the heterozygous state in the patient's two asymptomatic daughters (Fig. 1b II1, II2). This duplication is classified as benign according to ACMG-AMP guidelines (Table 2). Importantly, NGS analysis showed a second rare variant in index case CA0117: a heterozygous PTH1R variant (c.1168C>T; p.(Arg390Trp)) in exon 13 (Ensembl: ENST00000449590.6). This variant was not inherited by his two asymptomatic daughters. The PTH1R gene (MIM *168468) encodes the parathyroid hormone/parathyroid hormone-related peptide receptor (PTH1R), a G-protein-coupled receptor for PTH and PTHLH (parathyroid hormone-like hormone) 13 . This variant has a MAF < 0.01. The software Varsome 14 classified it as of uncertain significance according to ACMG-AMP guidelines (Table 3). PTH1R has 7 potential membrane-spanning domains, and this variant occurs at a non-conserved position within a cytoplasmic topological domain (amino acids 383 to 409) of this region (UniProtKB: Q03431) 15 . Mutations in the PTH1R gene are known to cause Jansen's metaphyseal chondrodysplasia (MIM #156400), chondrodysplasia Blomstrand type (MIM #215045), Eiken syndrome (MIM #600002), and failure of tooth eruption (MIM #125350). However, our patient does not have clinical characteristics compatible with these diseases.
Index case ME0292 had the missense p.Val382Met variant, located in the CCID (Fig. 1c II1). This variant has a MAF < 0.01 and was previously found in a parathyroid adenoma 11 . Functional studies showed that this variant has 2.1 times higher transcriptional activity than the wild type 6 . Therefore, p.Val382Met is considered an activating variant. In addition, NGS analysis showed a second rare variant in index case ME0292, in the SLC34A1 gene (c.272_292del21; p.Val91_Ala97del), in the heterozygous state. The SLC34A1 gene encodes the sodium-dependent phosphate transport protein 2a (NaPi-2a, MIM *182309), which is located in the apical membrane of renal proximal tubular cells 16 . Mutations in this gene are associated with different clinical disease phenotypes: the autosomal recessive form of infantile hypercalcemia type 2 (MIM #616963), Fanconi renotubular syndrome type 2 (MIM #613388), and autosomal dominant hypophosphatemic nephrolithiasis/osteoporosis type 1 (MIM #612286). This small deletion in exon 4 (Ensembl: ENST00000324417.6) has been previously described in patients who presented with nephrolithiasis [17][18][19] . Functional studies showed reduced expression of this deletion in HEK293 cells, and a significant reduction in phosphate transport compared with wild-type NaPi-2a in Xenopus oocytes 19 . On the other hand, it is a relatively common deletion, with a highest population MAF of 0.05. The proband's sister had only the p.Val91_Ala97del variant in SLC34A1, in the heterozygous state. She presented with nephrolithiasis (Fig. 1c II2).
Finally, we found the p.Asn315Asp variant in the GCM2 gene in index cases ME0371 and CA0103 (Fig. 1d II1 and Fig. 1e II1, respectively), near the CCID. This variant has a MAF of 0.02 and has previously been classified as benign or likely benign (ClinVar: VCV000712319.3). Functional studies showed that this variant has 20% more transcriptional activity than the wild type 6 . Furthermore, it has been found in patients with parathyroid adenomas and hyperplasia 20 . The asymptomatic mother of index case ME0371 had the p.Asn315Asp variant (Fig. 1d I2). In our cohort, we found two alleles with the variant p.Asn315Asp out of 88 alleles (44 patients with primary hyperparathyroidism analyzed by NGS). Compared with the gnomAD data for Non-Finnish Europeans (481/129204), we observed an enrichment of this variant in the patients with primary hyperparathyroidism in our cohort. According to ACMG-AMP guidelines, we classified this variant as of uncertain significance (Table 2).
Discussion
In this study, we describe five families who had variants in the GCM2 gene. The complete genetic study revealed one novel variant of uncertain significance (c.1460C>T; p.(Ser487Phe)) and three previously reported variants: one of uncertain significance (c.943A>G; p.Asn315Asp), one likely pathogenic (c.1144G>A; p.Val382Met) and, finally, one benign (c.1185_1186insGCCTACCAG; p.Ala393_Gln395dup), all in the heterozygous state. In addition, the genetic study revealed two other variants, located in the PTH1R and SLC34A1 genes, of uncertain significance and likely pathogenic respectively, both also in the heterozygous state (Table 3). So far, according to the Human Gene Mutation Database (http://www.hgmd.cf.ac.uk), 14 variants in the GCM2 gene have been reported in association with hypoparathyroidism; these 14 variants were considered loss-of-function mutations. In our genetic study, two GCM2 variants were found in two patients, one with hypocalcemia (GS0198) and one with hypoparathyroidism (CA0117). Index case GS0198 and his father had the missense p.(Ser487Phe) variant in the heterozygous state, located within the TAD2 (Fig. 2). As far as we know, only one other missense mutation within the TAD2 (p.Asn502His) has been reported 21 . Furthermore, a dominant-negative effect produced by two small deletions affecting the TAD2, p.(His465Thrfs*66) and p.(Pro467Glnfs*64), has been described in other studies 22 . The p.Asn502His variant showed a reduction in transactivation and was found in the heterozygous state in one patient diagnosed at 5 days of age, presenting with hypocalcemia, hyperphosphatemia, hypomagnesemia, low 25-OH vitamin D levels and normal serum intact PTH levels. The same clinical features were observed in our index case GS0198, who had the p.(Ser487Phe) variant. Moreover, the p.Asn502His variant showed a dominant-negative effect 21 . In the family previously described, the proband's father had the p.Asn502His variant in the heterozygous state as well 21 . He presented only with finger paresthesia and mild hypocalcemia (8.14 mg/dL), while index case GS0198's father had small stature and body segment disparity. These two variants, p.(Ser487Phe) and p.Asn502His, could produce a similar effect on the protein in the heterozygous state.
Index case CA0117 had the p.Ala393_Gln395dup variant, which is located within the CCID (Fig. 2). As far as we know, only one polymorphism showing a reduced transcriptional activity (10% reduction) has been described in the heterozygous state in this domain (p.Lys388Gln) 12 . The p.Ala393_Gln395dup duplication observed in index case CA0117 and his two asymptomatic daughters produces an extension of the inhibitory region. Functional studies showed activity similar to that of the wild-type protein 12 . Moreover, this duplication may be too common to be a pathogenic mutation (reaching 1% in some populations). On the other hand, this duplication is enriched in our cohort [OR: 59.15 (95% CI 13.92-251.2)] compared with a control population of Non-Finnish Europeans [gnomAD (168/129188)]. We performed genetic analysis by NGS in 127 patients (254 alleles). Thirteen had hypocalcemia/hypoparathyroidism (26 alleles). Only two patients with hypocalcemia (one of them not included in the manuscript) had the p.Ala393_Gln395dup duplication in the GCM2 gene (2/26).
Three gain-of-function mutations located in the CCID (p.Leu379Gln, p.Lys388Glu and p.Tyr394Ser) with 3.3, 2.1 and 2.4 times higher activity in the heterozygous state, respectively 12 and two disease-associated polymorphisms (p.Arg59Cys and p.Tyr282Asp) have been previously described associated with hyperparathyroidism. In our cohort, we found one likely gain-of-function mutation (p.Val382Met). The p.Val382Met variant, which is located in the CCID, was previously reported in a parathyroid adenoma 11 . Functional studies performed in the CCID demonstrated that this variant has 2.1 times higher activity than wild-type 6 . Therefore, the p.Val382Met variant caused the parathyroid hyperplasia observed in index case ME0292.
A few variants outside the CCID, p.Gln330Leu and p.Arg406Gln, have been reported in patients with primary hyperparathyroidism 20 . Functional analysis of p.Asn315Asp, present in our cohort and also located outside the CCID (Fig. 2), showed that it has 20% higher transcriptional activity than the wild-type 6 . Furthermore, it has been found in patients with parathyroid adenomas and hyperplasia 20 . Thus, we hypothesize that the p.Asn315Asp variant may cause the hyperparathyroidism present in index cases ME0371 and CA0103. On the other hand, the asymptomatic mother of index case ME0371 had the p.Asn315Asp variant, and this is in line with the phenotypic variability within families observed previously. Indeed, penetrance seems to be low, and it has been suggested that the majority of individuals with gain-of-function variants in the GCM2 gene will not develop a parathyroid adenoma 23 .
Importantly, the genetic study showed a second variant in index case CA0117. The p.(Arg390Trp) variant, present in the heterozygous state in the PTH1R gene, was not inherited by his two daughters. This variant is located in a cytoplasmic region between the fifth and sixth transmembrane domains. Pathogenic mutations within the transmembrane domain have been associated with Murk Jansen chondrodysplasia. Patients with recessive mutations presented with mild hypercalcemia, hypophosphatemia, low intact PTH levels, hypercalciuria, bone dysplasia, kidney stones, bowing and osteopenia. On the other hand, patients with dominant mutations presented with a milder form of the disease, with less severe skeletal and mineral ion abnormalities 24 . Moreover, it has been described that some polymorphisms in the PTH1R gene can determine the sensitivity of the kidney and bone to the catabolic or anabolic action of PTH 25 . This variant is not located in a domain important for binding to PTH, PTHLH or the signalling initiator G protein, and the patient does not present symptoms compatible with diseases associated with pathogenic variants in the PTH1R gene. Therefore, it is unlikely to influence the phenotype of the patient.
On the other hand, index case ME0292's sister presented with high intact PTH levels and nephrolithiasis without other remarkable symptoms. She does not have the p.Val382Met variant in the GCM2 gene. However, we observed another variant in the SLC34A1 gene in the index case ME0292 and his sister in the heterozygous state (p.Val91_Ala97del). Although the p.Val91_Ala97del deletion is frequent in the general population (MAF of 0.01), functional studies demonstrated that it exhibits significantly reduced phosphate transport compared with wild-type, and considering the high global prevalence of kidney stones (1-15%) 26 , we cannot exclude that this deletion may cause nephrolithiasis in family ME0292. Our study shows the utility of NGS in unravelling the genetic origin of disorders of calcium and phosphorus metabolism. Moreover, our results confirmed GCMb as an important genetic element for the maintenance of calcium homeostasis, as it interacts with genes involved in calcium metabolism, such as CASR, modifying its expression, and with GATA3 and MAFB, modifying PTH expression 4 . However, the penetrance seems to be low, probably because compensatory mechanisms occur 20 .
In conclusion, this study identified four variants in the GCM2 gene, of which one was novel (p.(Ser487Phe)) and classified as a variant of uncertain significance. Further studies aimed at the functional characterization of this variant will be of help in defining the hypothesized pathogenic role.
|
v3-fos-license
|
2023-05-27T06:17:44.570Z
|
2023-05-26T00:00:00.000
|
258909547
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": null,
"oa_url": null,
"pdf_hash": "3899dc3262351ae76f629eaf56a8ad15c7958567",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46665",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"sha1": "e71ea8080765f84459e59c7caa2eae75a7f2be83",
"year": 2023
}
|
pes2o/s2orc
|
Outcomes of thymoglobulin versus basiliximab induction therapies in living donor kidney transplant recipients with mild to moderate immunological risk – a retrospective analysis of UNOS database
Abstract Introduction The aim of this study is to assess the outcomes of different induction therapies among mild to moderate immunological risk kidney transplants in the era of tacrolimus and mycophenolate-derivative-based maintenance. Methods This was a retrospective cohort study using data from the United States Organ Procurement and Transplantation Network among mild to moderate immunological risk living-donor KTRs, defined as having a first transplant and panel reactive antibodies less than 20% but with two HLA-DR mismatches. KTRs were divided into two groups based on induction therapy with either thymoglobulin or basiliximab. Instrumental variable regression models were used to assess the effect of induction therapy on acute rejection episodes, serum creatinine levels and graft survival. Results Of the entire cohort, 788 patients received basiliximab while 1727 patients received thymoglobulin induction. There were no significant differences between basiliximab and thymoglobulin induction in acute rejection episodes at one-year post-transplant (coefficient = −0.229, p value = .106), serum creatinine levels at one-year post-transplant (coefficient = −0.024, p value = .128) or death-censored graft survival (coefficient −<0.001, p value = .201). Conclusion This study showed no significant difference in acute rejection episodes or graft survival when using thymoglobulin or basiliximab in mild to moderate immunological risk living donor KTRs maintained on a tacrolimus and mycophenolate-based immunosuppressive regimen.
Introduction
Human leukocyte antigen (HLA) matching and incorporating calculated panel reactive antibodies (C-PRA) have a substantial impact on the outcome of kidney transplantation [1]. While HLA mismatching represents an important prognostic factor, HLA-DR mismatching, in particular, has a greater impact on outcomes after kidney transplantation [2]. HLA-DR mismatching can increase the risk of developing donor-specific antibodies (DSA) and subsequently increase the risk of acute rejection episodes [1]. In a recent meta-analysis of 23 cohort studies, incremental HLA-DR mismatch was associated with worse transplant outcomes in terms of acute rejection as well as overall and death-censored graft survival [2]. C-PRA has been widely used as a measure of sensitization among kidney transplant patients [3]. Since the mid-1960s, when it was discovered that catastrophic hyperacute rejection was linked to anti-donor HLA antibodies, PRA has been used to evaluate sensitization [4]. Using a panel of healthy blood donors as a representative sample of the possible local organ donor pool, Patel and Terasaki's seminal study also presented a straightforward surrogate test that might identify sensitized patients and predict their likelihood of finding a crossmatch-compatible donor [4]. PRA was simply the portion of this donor pool against which a patient's reactive antibodies were directed. A patient with an 80% PRA would be crossmatch-incompatible with 80% of donors. A calculated PRA of 20% has been used as a cut-off point for mild immunological risk by many transplant centres [5,6]. A combination of HLA mismatch and C-PRA can be used to assess the immunological risk in transplantation.
The recommendations for using thymoglobulin ([rabbit-derived] polyclonal anti-thymocyte globulin) induction therapy are based on the results of a previous meta-analysis that compared basiliximab and thymoglobulin induction therapies in kidney transplant patients [7]. However, the maintenance immunosuppression in most of the reviewed studies belonged to the era of cyclosporine-based immunomodulating therapy. Currently, most transplant centres depend on tacrolimus as an efficacious cornerstone immunosuppressant in kidney transplantation [8]. Many randomized, multicentre studies conducted in Europe and the US with long follow-up periods showed a significantly lower incidence of acute rejection and improved survival in renal transplant recipients receiving tacrolimus-based immunosuppression compared to those receiving cyclosporine [9,10]. This raises the question of whether basiliximab can be an effective induction therapy in mild to moderate immunological risk kidney transplant patients maintained on tacrolimus. Living-related transplant recipients have minimal cold ischaemia times, eliminating one of the largest variables impacting allograft survival and affording a much cleaner clinical model to assess the impact of immunologic incompatibilities and applied therapies. Thus, we aimed to examine the outcomes of basiliximab induction therapy in comparison to thymoglobulin induction therapy in mild to moderate immunological risk living donor kidney transplant recipients (KTRs) maintained on tacrolimus and mycophenolate-based immunomodulating therapy.
Design and study cohort
Because the initiative used publicly available, de-identified data, it was exempt from institutional approval. There was no financial assistance received for this study. Terminology and nomenclature were expressed in keeping with the most recent KDIGO consensus guidelines [11]. Data for this study are publicly available in a de-identified fashion and can be accessed through: https://optn.transplant.hrsa.gov. All renal transplant patients who were registered in the United States Organ Procurement and Transplantation Network (OPTN) database between the first of September 2017 and the first of September 2019 were retrospectively reviewed. The year 2017 was chosen to open the review window because it was the year in which the U.S. Food and Drug Administration (FDA) confirmed the use of thymoglobulin as induction therapy for renal transplantation, with specific recommended doses [12].
The patients included were all living donor KTRs with mild to moderate immunological risk who received thymoglobulin or basiliximab induction therapy and were discharged on tacrolimus and mycophenolate mofetil as a maintenance immunosuppressive therapy. A mild to moderate immunological risk kidney transplant was defined as a first transplant from a living donor in a recipient with PRA less than 20% and two HLA-DR mismatches. Exclusion criteria applied to patients with previous kidney transplants, those under 18 years of age, deceased-donor organ recipients, patients whose DR mismatch was less than two, patients who received an induction therapy other than thymoglobulin or basiliximab, patients who received maintenance immunosuppressive medications other than tacrolimus and mycophenolate mofetil, patients who received both thymoglobulin and basiliximab at the same time, and those who had missing data regarding their induction therapy. Patients were followed up until December 2020. Data were collected about recipient factors (recipient age, gender, ethnicity, body mass index), transplant factors (cold ischaemia time, number of previous transplants, calculated panel reactive antibodies, HLA mismatches, type of induction therapies, maintenance immunosuppressive medications) and donor factors (donor type, donor age). Based on the induction therapies administered, kidney transplant recipients were divided into two groups: thymoglobulin or basiliximab therapy recipients.
Main outcomes
The primary outcomes measured were the occurrence of acute rejection episodes in the early post-operative period as well as at one-year post-transplant, and serum creatinine levels at one-year post-transplant. Acute rejection was defined as biopsy-proven or clinically suspected rejection episodes. Secondary outcomes were the occurrence of delayed graft function (defined as the need for regular dialysis during the first week following transplantation), and overall and death-censored graft survival.
Statistical analysis
The study groups were compared based on baseline characteristics. Continuous variables were compared using the two-sample independent t-test and categorical variables were compared using Pearson's chi-squared test. Ten events per variable was the cut-off point to proceed with the regression analysis. Instrumental variable-ordered 'probit' regression analysis was used to assess the relationship between the type of induction therapy and the occurrence of acute rejection episodes at one-year post-transplant. The model was adjusted for recipient, transplant and donor factors collected. The type of induction therapy was instrumented for the transplant centre to reduce the centre effect on the choice of induction therapy. The choice of instrumenting the type of induction therapy to the transplant centre was based on the hypothesis that in the current era, immunosuppressive regimens are protocol-driven and differ from one centre to another [13]. We used the Wald test to assess for exogeneity. The Wald test measures the correlation between the error terms in the probit regression and the instrumented regression. A p value of ≤.05 was the cut-off point to reject the null hypothesis of no endogeneity.
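A minimal sketch of this kind of instrumental-variable analysis is shown below as a two-stage residual-inclusion (control-function) probit, which is an analogue of, not identical to, the IV-ordered probit the authors describe. The column names ("rejection_1yr", "thymoglobulin", "center", covariates) and the file name are hypothetical placeholders; Python/statsmodels is an illustrative assumption, not the study's stated software.

```python
# Simplified control-function analogue of the instrumental-variable probit
# described above. Everything named here is a placeholder, not study data.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("unos_cohort.csv")           # hypothetical extract of the study cohort

covariates = ["recipient_age", "donor_age", "cold_ischemia_time", "pra"]
centre_dummies = pd.get_dummies(df["center"], prefix="ctr", drop_first=True, dtype=float)

# First stage: induction choice explained by the instrument (transplant centre)
# plus recipient/donor/transplant covariates.
X1 = sm.add_constant(pd.concat([df[covariates], centre_dummies], axis=1))
first_stage = sm.OLS(df["thymoglobulin"], X1).fit()
df["induction_resid"] = first_stage.resid

# Second stage: probit for acute rejection at one year, including the
# first-stage residual to absorb the endogeneity of the induction choice.
X2 = sm.add_constant(df[["thymoglobulin", "induction_resid"] + covariates])
second_stage = sm.Probit(df["rejection_1yr"], X2).fit()
print(second_stage.summary())
# A significant coefficient on "induction_resid" plays a role similar to the
# Wald endogeneity test reported in the text.
```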
To assess the relationship between induction therapy and serum creatinine at one-year post-transplant, we fitted an instrumental variable linear regression model. The estimator used was the generalized method of moments (GMM). The type of induction therapy was instrumented for the transplant centre to reduce the centre effect on the choice of induction therapy. We used the GMM C-statistic to assess for exogeneity. A p value of ≤.05 was the cut-off point to reject the null hypothesis of no endogeneity.
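For the continuous creatinine outcome, a GMM instrumental-variable regression can be expressed as in the sketch below. The "linearmodels" package, the column names, and the file name are assumptions made for illustration; this is not the authors' stated implementation.

```python
# Illustrative GMM instrumental-variable model for one-year serum creatinine,
# mirroring the description above. Column names are hypothetical placeholders.
import pandas as pd
from linearmodels.iv import IVGMM

df = pd.read_csv("unos_cohort.csv")           # hypothetical extract of the study cohort
df["const"] = 1.0
centre_dummies = pd.get_dummies(df["center"], prefix="ctr", drop_first=True, dtype=float)

model = IVGMM(
    dependent=df["creatinine_1yr"],
    exog=df[["const", "recipient_age", "donor_age", "cold_ischemia_time", "pra"]],
    endog=df["thymoglobulin"],                # induction choice, treated as endogenous
    instruments=centre_dummies,               # transplant centre as the instrument
)
results = model.fit()
print(results.summary)
# The fitted results also expose a C (difference-in-Sargan) statistic,
# analogous to the GMM C-statistic exogeneity test described above.
```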
To perform the analysis for overall and death-censored graft survival, considering the centre effect on the choice of induction therapy, we generated pseudo-observations for the survival function. These pseudo-observations were used in a generalized linear model. In these models, the type of induction therapy was instrumented for the transplant centre to reduce the centre effect on the choice of induction therapy. We generated pseudo-observations for the survival function using the 'STPSURV' command [14]. The generation of pseudo-observations is a method that has been developed to use the survival function in direct regression and generalized linear modelling [9,10]. The pseudo-observations are calculated based on the difference between the complete-sample and leave-one-out estimators for the pertinent survival quantity [14,15]. Pseudo-observations have been shown to give a tight approximation to Cox regression models.
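The leave-one-out construction described above (pseudo_i = n·S_full(t) − (n − 1)·S_{−i}(t)) can be reproduced with a small amount of code. The sketch below is a minimal NumPy re-implementation using a hand-rolled Kaplan-Meier estimator and made-up follow-up times; it is not the Stata 'STPSURV' command the authors used.

```python
# Jackknife pseudo-observations for the survival function at a fixed horizon.
import numpy as np

def km_survival(time, event, horizon):
    """Kaplan-Meier survival probability at `horizon` (event = 1 means graft loss)."""
    surv = 1.0
    for t in np.sort(np.unique(time[event == 1])):
        if t > horizon:
            break
        at_risk = np.sum(time >= t)
        failures = np.sum((time == t) & (event == 1))
        surv *= 1.0 - failures / at_risk
    return surv

def pseudo_observations(time, event, horizon):
    n = len(time)
    s_full = km_survival(time, event, horizon)
    pseudo = np.empty(n)
    for i in range(n):
        mask = np.arange(n) != i
        pseudo[i] = n * s_full - (n - 1) * km_survival(time[mask], event[mask], horizon)
    return pseudo

# Example with made-up follow-up times (years) and graft-loss indicators.
t = np.array([0.5, 1.0, 1.2, 2.0, 2.5, 3.0])
e = np.array([1, 0, 1, 0, 0, 1])
print(pseudo_observations(t, e, horizon=1.0))
# These pseudo-observations can then serve as the response in a generalized
# linear model with the instrumented induction variable, as described above.
```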
To assess the relationship between the occurrence of delayed graft function and the type of induction therapy, we applied the approach used for the assessment of acute rejection episodes.
Sensitivity analysis
We performed a sensitivity analysis of the relationship between induction therapy dose and outcomes, estimating the doses of thymoglobulin and basiliximab from the available data using the number of days each induction therapy was administered. In April 2017, an FDA statement confirmed the use of thymoglobulin as induction therapy for renal transplantation. It set the dose at 1.5 mg/kg/day, with an administration period of up to seven days post-transplant [12]. It also determined the dose of basiliximab to be 20 mg per dose, with two doses administered on day 0 and day 4 post-transplant. Based on this, we estimated the overall dose of the induction therapy given by multiplying the number of days it was administered by the approved dose stated by the FDA. We performed an instrumental variable-ordered probit regression analysis to compare different doses of thymoglobulin and basiliximab. We also performed a multivariable logistic regression analysis to assess the relationship between acute rejection and induction therapies without considering the centre effect. Details of the logistic regression analysis are discussed in the Supplementary data section. Furthermore, we repeated the logistic regression model among several subgroups (black population, non-black population, patients discharged on a glucocorticoid withdrawal regimen, male donor to female recipient, and female donor to male recipient).
Results
A total of 2515 patients were included in our study (patients on basiliximab = 788, patients on thymoglobulin = 1727). The details of patient selection from the OPTN database are shown in Figure 1. The baseline characteristics for the patients included in our study are shown in Table 1. Data concerning the type of induction therapy given were missing in the case of 246 patients. The comparison between the baseline characteristics of those with missing data versus those with no missing data is shown in Table S1 of the Supplementary data section. There were no significant differences between the two groups, except in HLA-A mismatch (p = .04) and frequency of glucocorticoid maintenance therapy (p value <.01). None of the patients included had positive crossmatch.
Acute rejection rates at one-year post-transplant
Among the 2515 patients included in our study, 2058 patients had available data about acute rejection rates at one-year post-transplant, while 457 (18.17%) patients had missing data about acute-rejection episodes. Comparison between patients with missing data versus those with no-missing data are shown in Table S2 of Supplementary data section.
On performing an instrumental variable -ordered probit regression analysis, there was no statistical difference between thymoglobulin induction therapy and basiliximab induction therapy (coefficient= −0.229, p value = .106, 95% Confidence interval [95% CI]: −0.508 to 0.049), as shown in Table 2. The Wald test for exogeneity showed a p value of .044, rejecting the null hypothesis for no endogeneity. The number of events in the basiliximab and thymoglobulin groups was 44 and 116, respectively.
Serum creatinine at one-year post-transplant
On performing an instrumental variable-linear regression analysis, there was no statistical difference between thymoglobulin induction therapy and basiliximab induction therapy (coefficient= −0.024, p value = .128, 95% CI: −0.055 to 0.006), as shown in Table 3. GMM C-statistics showed a p value of .049, rejecting the null hypothesis for no endogeneity. F-Statistics showed a p value of <.01, rejecting the null hypothesis for weak instruments. Mean serum creatinine in the basiliximab group was 1.35 mg/dl (Standard deviation = 0.41) and 1.37 mg/dl (Standard deviation = 0.47) in the thymoglobulin group.
Secondary outcomes
There was no statistical difference between the two groups in terms of overall graft survival (coefficient = 0.008, p value = .801, 95% CI: −0.001 to 0.001), as shown in Figure S1 in the Supplementary data section. There was no statistical difference between the two groups in terms of death-censored graft survival (coefficient −<0.001, 95% CI: −0.001 to 0.001, p value = .201), as shown in Figure S2 in the Supplementary data section. The median follow-up was one year post-transplant.
Sensitivity analysis
We performed an instrumental variable-ordered probit regression analysis to compare the different doses of thymoglobulin and basiliximab and their relation to acute rejection episodes at one year post-transplant. There was no statistical difference between basiliximab and two days of thymoglobulin (estimated dose = 3 mg/kg, coefficient = 0.528, p value = .763, 95% CI: −2.910 to 3.967), three days of thymoglobulin (estimated dose = 4.5 mg/kg, coefficient = −0.427, p value = .507, 95% CI: −1.690 to 0.835), four days of thymoglobulin (estimated dose = 6 mg/kg, coefficient = −0.226, p value = 1) or five days of thymoglobulin (estimated dose = 7.5 mg/kg, coefficient = 0.009, p value = .997, 95% CI: −5.111 to 5.129). Additionally, we also performed a multivariable logistic regression analysis to assess the relationship between acute rejection and induction therapies without taking into account the centre effect. This showed no statistical difference between basiliximab induction therapy and thymoglobulin induction therapy (OR = 0.934, p value = .737, 95% CI: 0.628-1.388), as shown in Table S3 in the Supplementary data section. Moreover, there was no statistical difference between basiliximab and thymoglobulin induction therapy among the black population, non-black population, steroid-withdrawal, male donor to female recipient, or female donor to male recipient subgroups, as further shown in Table S4 of the Supplementary data.
Discussion
In tacrolimus-maintained living donor kidney transplant recipients with mild to moderate immunological risk, we found no significant differences between basiliximab and thymoglobulin induction therapies in terms of acute rejection episodes, serum creatinine or graft survival at one-year post-transplant. Our study can help to accurately interpret prior experience and to specify the application of various induction agents by incorporating them into contemporary pretransplant immunological risk assessment methodologies. The patient's immunological risk status should be taken into consideration when choosing a regimen, and immunosuppression should be tailored to the risk for graft rejection unless there are obvious risk factors for drug-specific adverse effects. However, even though a patient's risk status may be influenced by a variety of variables, only the number of HLA mismatches has been consistently associated with an increase in risk, and the relative significance of other variables frequently remains ambiguous. A recent study by Sureshkumar et al. compared the effect of induction therapy using depleting agents versus IL-2RA in more than 63,000 patients in the USA between 2001 and 2015 [16]. They stratified the patients according to HLA-DR mismatches into three groups (zero, one and two mismatches) with a median follow-up period of 49 ± 62 months. They found that depleting antibodies are associated with better patient or graft survival in comparison to non-depleting antibodies. However, the calcineurin inhibitor agent used as maintenance therapy in this study was not specified. In the previous decade (2000-2010), cyclosporine use was still prevalent as a maintenance therapy, which is not the case in the current era, when most centres use tacrolimus. Moreover, the depleting antibody group could have received either alemtuzumab or thymoglobulin, further limiting the study's current relevance. HLA matching has a vital effect on the outcome of kidney transplantation [17]. While total HLA mismatching represents an important prognostic factor [16,17], HLA class II, especially HLA-DR mismatching, has a greater impact on outcomes after kidney transplantation [2]. This is due to the high polymorphism in HLA Class II antigens. HLA class II antigens are present both in B-cells and antigen-presenting cells (APCs), which play a crucial role in the development of acute cell-mediated and antibody-mediated rejection. These antigen-presenting cells engulf and process antigens and stimulate CD4 T-cells. The high polymorphism characteristic of Class II HLA antigens present on the APCs plays a pivotal role in identifying a large repertoire of foreign antigens. On the other hand, this high polymorphism acts as a barrier against successful transplantation, identifying allograft antigens as foreign antigens and stimulating rejection. Data from large registry studies have shown an approximately 7-13% higher risk of graft failure associated with one HLA mismatch and about 64-74% higher risk associated with six HLA mismatches [2,18], with HLA-DR matching having a much greater effect on the number of rejection episodes and poor long-term survival [19,20].
The initial Collaborative Transplant Study (CTS) analysis revealed that the major effect on transplant outcomes arose from mismatches in the HLA-DR [21,22]. This was also noted in registry data studies performed using The United Kingdom Transplant Service and Eurotransplant registry data [23,24]. Another study by Coupel et al. found that HLA-DR mismatches (and the number of rejection episodes) correlated with poor long-term survival [21]. Several immunosuppressive protocols have been implemented to overcome the detrimental effects of HLA mismatches. In the current era, tacrolimus and mycophenolate derivates (mycophenolate mofetil or enteric-coated mycophenolate sodium) are the core maintenance immunosuppressive therapies used worldwide. Thymoglobulin and basiliximab are the most widely used induction therapies [7,8]. Thymoglobulin is usually reserved for high-risk transplants; however, it is not free of risk. It carries a higher risk of malignancy and serious opportunistic infections in comparison to basiliximab induction therapy [20].
Studies performed in the cyclosporine era showed a significant effect of HLA mismatching, particularly HLA-DR mismatch, on graft outcomes [2]. In our study, the effect of HLA-DR mismatching is minimized, reflecting the ability of more potent immunosuppressive therapy to improve graft tolerance. Our study adds robustness to, and reflects, the potency of tacrolimus-based immunomodulating therapy over cyclosporine-based regimens.
We used the available data from the UNOS database to conduct our study, which has inherent internal limitations. The UNOS database records only low-resolution HLA-mismatch data; therefore, mismatching at the epitope or eplet level cannot be excluded. With the most current technology, it has been shown that mismatching at the epitope or eplet level can confer a significant risk on transplant outcomes [25,26]. Low-resolution data on donor and recipient HLA types are insufficient to fully realize the promise of better matching, although they do have some prognostic utility. In addition, the UNOS database does not reveal mismatches at other types of Class II HLA antigens, especially the HLA DQ antigen. Several studies have shown that an HLA DQ mismatch can lead to worse outcomes in comparison to matched HLA DQ transplants, irrespective of the HLA DR mismatch [27,28]. Moreover, there was a significant difference in PRA levels between the basiliximab and thymoglobulin groups. However, the PRA levels in both groups were less than 10%, which is considered the cut-off for standard immunological risk transplants in most of the literature [5,9]. In addition, we included patients with no previous transplants in our study. Therefore, the risk of sensitization due to a previous transplant is effectively eliminated.
Furthermore, there was a statistical difference between both induction groups in terms of donor age. However, from the clinical point of view, both ages (44.73 in the thymoglobulin group and 47.82 in the basiliximab group) are very similar and do not meet the criteria for expanded criteria donor [5,13]. Our results reflect that selection of living donors with age less than 50 leads to acceptable transplant outcomes irrespective of HLA mismatching.
Notwithstanding the disparities in glucocorticoid withdrawal, PRA level and percentage of African American ethnicity between the thymoglobulin group and the Basiliximab group, the findings of our study can help clinicians decide which induction therapy is best for a transplant patient. In the event that a person is of African ancestry and has a greater PRA (even more than 0%), most clinicians advise employing thymoglobulin induction. Glucocorticoid withdrawal is not advised if basiliximab induction therapy is being used.
Another limitation of our study was the missing data about tacrolimus trough levels. Therefore, we cannot assess the difference in tacrolimus trough levels between the two groups. In addition, calculation of the thymoglobulin dose may not be accurate, since some patients may have received less than the 1.5 mg/kg daily dose due to low white blood cell count and thrombocytopenia. Taking into account the limitations of our study and the retrospective nature of the UNOS database, we recommend that a randomized controlled study be performed to confirm our results.
Our data analysis is also limited by the ongoing developments in transplant immunology. First of all, there is growing interest in using DSA for risk assessment [13]. Unfortunately, DSA results are unmeasured confounders in the registry database. DSA tests are one of the new technologies that are used to determine immunological risk but are not used or available worldwide, e.g. many countries in Africa and Asia are not using molecular methods such as Luminex. More studies are needed to identify immunological risk based on DSA.
Second, the development of solid-phase single-bead antigen testing of solubilized human leukocyte antigens (HLA) to detect donor-specific antibodies (DSA) has made it possible to stratify immunological risk status in a much more nuanced manner, taking into account the various classes and intensities of HLA antibodies Class I and/or II, including HLA-DSA [13]. Combinations of these tests are now frequently used to evaluate immunologic risk, with further technological developments emerging, such as the detection of non-HLA antibodies against angiotensin type 1 (AT1) receptors or the T-cell ELISPOT assay of alloantigen-specific donors [29]. Retrospective data analysis of UNOS (or any other administrative or clinical database) is typically intended to help understanding of past clinical practice and provide support for future improvement of results. However, even with regard to potential future developments in data recording and registry content, properly analysing and interpreting existing data can undoubtedly advance research. Finally, the black population is under-represented in our study. Our subgroup analysis showed no significant differences between both types of induction therapy among black population.
In conclusion, within the limitations of the UNOS database, our study showed no significant difference in acute rejection episodes or graft survival when using thymoglobulin or basiliximab in mild to moderate immunological risk living donor KTRs. Therefore, in the current era of tacrolimus and mycophenolate agent-based maintenance immunosuppression, basiliximab may be a safe induction therapy for this class of recipients. Induction therapy in kidney transplant patients should be tailored to the patients' needs and reflected in institutional protocols.
|
v3-fos-license
|
2019-03-30T13:04:47.048Z
|
2019-03-29T00:00:00.000
|
85563946
|
{
"extfieldsofstudy": [
"Medicine",
"Chemistry"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://jnanobiotechnology.biomedcentral.com/track/pdf/10.1186/s12951-019-0478-y",
"pdf_hash": "f7b0cdb2bc2474678a749233fdbfa8caea485a2d",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46667",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"sha1": "f7b0cdb2bc2474678a749233fdbfa8caea485a2d",
"year": 2019
}
|
pes2o/s2orc
|
Role of A2B adenosine receptor-dependent adenosine signaling in multi-walled carbon nanotube-triggered lung fibrosis in mice
Background Multi-walled carbon nanotube (MWCNT)-induced lung fibrosis leads to health concerns in humans. However, the mechanisms underlying fibrosis pathogenesis remain unclear. Adenosine (ADO) is produced in response to injury and plays a detrimental role in lung fibrosis. In this study, we aimed to explore ADO signaling in the progression of lung fibrosis induced by MWCNT. Results MWCNT exposure markedly increased A2B adenosine receptor (A2BAR) expression in the lungs and the ADO level in bronchoalveolar lavage fluid, combined with elevated blood neutrophils, collagen fiber deposition, and increased myeloperoxidase (MPO) activity in the lungs. Furthermore, MWCNT exposure elicited an activation of transforming growth factor (TGF)-β1 and follistatin-like 1 (Fstl1), leading to fibroblast recruitment and differentiation into myofibroblasts in the lungs in an A2BAR-dependent manner. Conversely, treatment with the selective A2BAR antagonist CVT-6883 significantly reduced the levels of fibrosis mediators and efficiently decreased cytotoxicity and inflammation in MWCNT-treated mice. Conclusion Our results reveal that accumulation of extracellular ADO promotes the fibroblast-to-myofibroblast transition via A2BAR/TGF-β1/Fstl1 signaling in MWCNT-induced lung fibrosis.
Background
Carbon nanotubes (CNTs) are new nanomaterials in a single layer (single-walled CNT, SWCNT) or concentric multi-layers (multi-walled CNT, MWCNT) with increasingly wide utilization in the fields of medicine, electronics, and structural engineering [1]. However, because of the high demand for CNTs, there is growing concern about the health hazards of occupational and environmental CNT exposure in humans [2,3]. One study reported MWCNT-containing airborne dust levels up to 400 μg/m3 in a research laboratory, although the total mass concentration reported in this study did not consist exclusively of MWCNT [4]. Furthermore, some animal studies have confirmed that pulmonary exposure to MWCNT results in fibrosis in the lungs [5][6][7].
Upon exposure, MWCNTs deposit in the respiratory tract and increase lung burden, eventually leading to chronic inflammation and a high risk of related adverse effects such as fibrosis [8]. The pathologic development and features of CNT-induced pulmonary interstitial fibrosis overlap considerably with those of idiopathic pulmonary fibrosis (IPF) and pneumoconiosis. IPF has an unknown pathogenesis and few treatment options, and is a major cause of death. The mortality rate of IPF at 3-5 years after diagnosis is 50% [9]. The fibrosis is progressive post-exposure and characterized by fibroblast proliferation and an excessive deposition of extracellular matrix (ECM) in the interstitium. Fibroblasts in granulation tissue differentiate into myofibroblasts, which have a contractile phenotype, proliferate, and synthesize ECM components [10].
Secretion of inflammatory cytokines and infiltration of additional inflammatory leukocytes were observed after pulmonary exposure to MWCNT, which confirms the pulmonary inflammation produced by MWCNT [6,11]. An elevated neutrophil population is correlated with the acute inflammatory response [12]. Myeloperoxidase (MPO) is a peroxidase enzyme that is synthesized and secreted by neutrophils and monocytes [13]. MWCNTs deposit efficiently and persist in the respiratory tract and, owing to their pro-inflammatory potency, represent one environmental factor leading to progressive fibrotic lesions [14]. There have been significant advances in understanding of MWCNT toxicity, yet the underlying mechanism of MWCNT-induced lung fibrosis remains elusive. High levels of extracellular adenosine (ADO) are produced in response to tissue injury and inflammation [15]. Moreover, extracellular ADO levels are closely associated with the progression and severity of pulmonary fibrosis [16,17]. Therefore, the aim of the present study is to determine the ability of ADO to elicit lung inflammation and exacerbate lung fibrosis in MWCNT-treated mice.
In response to cellular stress, adenosine triphosphate (ATP) is released into the extracellular space and subsequently dephosphorylated to ADO by ecto-nucleotidases, including ectonucleoside triphosphate diphosphohydrolase 1 (CD39) and ecto-5′-nucleotidase (CD73) [18]. ADO plays a principal role in the wound healing process. Under physiologic conditions, extracellular ADO levels in cells and tissue fluids are in the nanomolar range, while ADO rises substantially during different forms of cellular distress [19]. ADO orchestrates the cellular response by acting on ADO receptors, including the A1 adenosine receptor (A1AR), A2AAR, A2BAR, and A3AR, of which A2BAR has emerged as a major mediator of chronic lung disease, such as fibrosis and tissue remodeling [20]. A2BAR has the lowest affinity for ADO and is normally activated under excess accumulation of extracellular ADO. A2BAR levels are elevated in patients with IPF [21]. The ADO signaling system has been closely linked with the production of several mediators, including interleukin-6 (IL-6) [6] and transforming growth factor (TGF)-β1 [22]. In addition, ADO contributes to the differentiation of pulmonary fibroblasts into myofibroblasts, disease progression, and tissue remodeling via the engagement of A2BAR [23]. However, the ADO signaling involved in MWCNT-induced lung fibrosis has remained unknown until now.
TGF-β1 is a profibrotic cytokine that promotes myofibroblast activation and proliferation and plays a central role in the induction of fibrogenesis [24]. Exaggerated TGF-β1 signaling contributes to the accumulation of collagen and other ECM components [25]. Interestingly, inhibition of A2BAR exerts an antifibrotic effect in chronic lung disease by preventing the expression of TGF-β1 [26]. Therefore, there is a need to further characterize the interaction of A2BAR with TGF-β1 signaling in MWCNT-induced lung fibrosis.
The hypothesis in this study is that the progressive inflammation, alveolar remodeling, and lung fibrosis induced by MWCNT are associated with progressive and proportionate increases in the level of extracellular ADO and the enhanced expression of A2BAR. Moreover, we speculate that CVT-6883 (a selective A2BAR antagonist) would reduce TGF-β1-mediated fibroblast proliferation and differentiation into myofibroblasts, and ultimately attenuate MWCNT-induced lung fibrosis.
MWCNT characterization
Physicochemical characteristics and morphology of the MWCNT sample used in the present study are reported in Table 1. The surface morphology of MWCNTs is shown in micrograph images.
MWCNT-induced fibrotic phenotypes in mice
The histopathological changes in the MWCNT and DM groups are shown in Fig. 1. Lung tissues were obtained at 1, 3, 7, and 14 days post-exposure. H&E staining showed interstitial thickening and bronchiolocentric inflammation in the mice exposed to MWCNT. The pathological changes reached a peak on day 7 and persisted throughout the 14-day post-exposure period. As expected, aspiration of DM did not cause notable changes in the lungs (Fig. 1a).
We performed Masson's trichrome staining to examine the fibrotic response directly. Abnormal collagen deposition was observed in the lungs of MWCNT-treated mice on day 1 post-exposure, progressed to a peak level on day 7, and was maintained at a similar level through day 14 (Fig. 1b). However, fibrotic mass formation was not observed in mice treated with DM.
MWCNT exposure led to a significant increase in disease pathology compared to the DM control (Fig. 1c). The fibrotic changes were quantified using the Ashcroft score, which confirmed a significant increase in fibrotic lesions in MWCNT-treated lungs compared with control at all time points examined (Fig. 1d).
The effect of MWCNT on CD73, ADO, and A2BAR
Treatment of mice with MWCNT significantly increased CD73 gene expression in whole lung homogenate at 7 days post-exposure. However, CD73 gene expression appeared to be more variable at day 14 post-exposure (Fig. 2a). We measured the level of extracellular ADO, defined as the relative ADO level in bronchoalveolar lavage fluid (BALF). As expected, MWCNT treatment significantly elevated the ADO level in BALF from day 3 to 14 (Fig. 2b). MWCNT-exposed mice exhibited a significant increase in the mRNA expression of A2BAR at 7 days post-exposure, and this elevation persisted through 14 days after MWCNT exposure (Fig. 2c).
Inhibition of pulmonary inflammation following treatment with CVT-6883
To evaluate MWCNT-induced inflammation, we examined the percentage of neutrophils in peripheral blood (Fig. 3a) and MPO activity (Fig. 3b) in the lungs of mice. MWCNT induced a rapid increase in the percentage of neutrophils, which continued to increase through day 7, followed by a slight reduction on day 14. MWCNT treatment enhanced MPO activity from day 7 to 14.
Based on this time course of fibrosis development induced by MWCNT, day 7 was chosen as the time point to reflect the MWCNT-induced lung injury. MWCNT exposure significantly increased lactate dehydrogenase (LDH) activity in serum (Fig. 3c) and the percentage of neutrophils in peripheral blood (Fig. 3d). Treatment with CVT-6883 caused a noticeable reduction in the LDH activity and neutrophil percentage induced by MWCNT.
We further examined the protein levels of IL-6 in BALF (Fig. 3e) and lung tissues (Fig. 3f ). The levels of IL-6 significantly increased in BALF and lung tissues of the MWCNT-induced group as compared with control mice. In contrast, this augmentation was significantly inhibited by the treatment of CVT-6883.
Treatment with CVT-6883 reduced profibrotic mediators in the lungs
MWCNT treatment resulted in a significant rise in TGF-β1 mRNA and protein expression. Moreover, our data showed that MWCNT treatment significantly promoted TGF-β1-stimulated Smad3 phosphorylation (p-Smad3) in lung tissues. Treatment with CVT-6883 produced a significant decline in TGF-β1 mRNA expression (Fig. 4a) and in TGF-β1 and p-Smad3 protein levels (Fig. 4b).
Normalization of fibrosis in the lungs of CVT-6883-treated mice
To further determine the effect of A2BAR on the lung fibrosis induced by MWCNT exposure, we examined the protein levels of two major ECM proteins (collagen I and fibronectin 1 (FN1)). As expected, CVT-6883 treatment inhibited the increase in collagen I and FN1 levels induced by MWCNT (Fig. 5a, b).
CVT-6883 inhibited fibroblast-to-myofibroblast transformation in mouse lungs
To determine whether A2BAR has the potential to directly induce fibrotic reactions characterized by increased differentiation of fibroblasts into myofibroblasts, we analyzed the effect of CVT-6883 on the expression of follistatin-like 1 (Fstl1) and fibroblast-to-myofibroblast transition markers: α-smooth muscle actin (α-SMA), platelet-derived growth factor receptor-β (PDGFR-β), heat shock protein 47 (HSP47), and fibroblast-specific protein 1 (FSP1). Fstl1 has regulatory functions in cell proliferation and differentiation. As shown in Fig. 5c and d, MWCNT enhanced the mRNA expression and protein level of Fstl1, while CVT-6883 inhibited Fstl1 expression. In addition, the protein levels of α-SMA, PDGFR-β, HSP47, and FSP1 were dramatically increased by MWCNT in the lungs (Fig. 6). However, CVT-6883 cotreatment suppressed the effects of MWCNT.
Discussion
CNT exposure induced pulmonary collagen deposition accompanied by pronounced acute inflammation preceding chronic fibrosis progression. Serum LDH activities reflect the cellular injury after MWCNT exposure. To further evaluate the potential pulmonary inflammation induced by MWCNT, the levels of the proinflammatory cytokine IL-6 and MPO activity in the BALF and lung tissues were examined. In our study, MWCNT induced a pulmonary inflammatory response by recruiting and activating neutrophils in the lungs through movement of circulating leukocytes to the lungs. Previous studies have shown that MWCNT-induced inflammation is probably due to phagolysosome membrane permeability, which has been implicated in the activation of IL-6 [27]. Notably, CVT-6883, as an inhibitor of A2BAR, greatly ameliorated pulmonary inflammation, resulting in a significant reduction in fibrosis. Based on these results, collagen deposition caused by MWCNTs is related to the proinflammatory effects of MWCNTs. Notably, blockade of A2BAR attenuates cellular injury and inflammatory effects, thereby alleviating the progression of lung fibrosis.
We first examined the expression of CD73 following MWCNT exposure. Interestingly, MWCNT-induced lung fibrosis was associated with significantly increased CD73 gene expression and ADO levels. This finding is consistent with CD73-mediated enzymatic conversion of adenosine monophosphate (AMP) to ADO in the lungs of mice with radiation-induced pulmonary fibrosis [28]. In addition, ADO also increased CD73 through transcriptional regulation via a cyclic AMP response element in the CD73 promoter [28]. Chronic inflammation was associated with a constant increase in CD73+ leukocytes in the lung and an accumulation of CD73+ T cells during the fibrotic phase [28]. CD73 was up-regulated in lung biopsy samples from patients with stage 4 chronic obstructive pulmonary disease or severe idiopathic pulmonary fibrosis [29]. Moreover, reduced extracellular ADO accumulation in radiation-treated CD73−/− mice prevented fibrosis development [30]. Thus, the necessity of ADO production for nucleoside signaling is confirmed, and CD73 activation together with ADO accumulation potentiates lung fibrosis after MWCNT treatment.
In the present study, elevations in extracellular ADO activated A2BAR, which promoted MWCNT-induced lung fibrosis by activating TGF-β1. Elevated expression of A2BAR has been detected in several chronic lung diseases, such as chronic obstructive pulmonary disease and lung fibrosis [21,23]. The role of A2BAR stimulation or blockade in cell proliferation depends on the cell type and culture conditions [31,32]. Of note, A2BAR stimulation induced TGF-β synthesis in lung fibroblasts [33]. Furthermore, A2B-null mice exhibit only slight effects on acute lung injury but reduced lung fibrosis, suggesting that A2BAR promotes fibrosis [34]. We therefore determined whether A2BAR modulates TGF-β1 signaling in CNT-induced lung fibrosis. The TGF-β1/Smad signaling pathway is initiated by receptor-ligand interactions, resulting in the expression of a number of TGF-β1 target genes through the rapid phosphorylation and nuclear translocation of Smad3 [35]. Inhibition of the TGF-β/Smad signaling pathway is an effective approach to treat fibrotic disorders [36]. Our results showed that MWCNT remarkably stimulated the expression of TGF-β1 and Smad3 phosphorylation in the lungs in an A2BAR-dependent manner. These data identify A2BAR as an upstream modulator of TGF-β1 and a potential therapeutic target in MWCNT-induced lung fibrosis.
Large amounts of ECM remodel connective tissue into dense scar tissue and ultimately lead to disruption of organ architecture and loss of function [37]. TGF-β1 plays a central role in the production of ECM proteins (such as collagen I and FN1) in the lungs after MWCNT exposure [10,38]. Inhibition of TGF-β1 signaling by Smad3 inactivation conferred partial resistance to pulmonary fibrosis [39]. The results of the present study are also consistent with this, because MWCNT increased the production of collagen I and FN1 where TGF-β1 signaling was activated. Moreover, our study showed that induction of collagen I and FN1 was blocked by co-treatment with the A2BAR inhibitor CVT-6883, confirming the critical role of A2BAR in MWCNT-induced deposition of fibrous ECM. Altogether, antagonism of A2BAR potentially suppresses TGF-β1 signaling activation and thereby inhibits ECM production and deposition during MWCNT-induced lung fibrosis.
Lung fibrosis is characterized by excessive accumulation of α-SMA-expressing myofibroblasts arising from interactions with TGF-β1 and mechanical influences [40]. Fibroblasts/myofibroblasts are major effector cells in production of ECM proteins and airway remodeling [41]. Fstl1 is associated with myofibroblast accumulation and subsequently ECM production that is mediated by canonical TGF-β signaling [42,43]. Previous study showed that MWCNTs increased the myofibroblast population by promoting fibroblasts proliferation and differentiation [44]. Here, blocking A 2B AR signaling using CVT-6883 markedly attenuated Fstl1 induction in MWCNT-treated lung tissue. We further examined the effects of A 2B AR in fibroblasts and myofibroblasts during MWCNT-exposed lungs using HSP47 and FSP1 as markers for fibroblasts and, α-SMA and PDGFR-β for myofibroblasts [44][45][46]. Our study found that MWCNT remarkably increased numbers of fibroblasts and myofibroblasts in the lungs in an A 2B AR-dependent manner. Collectively, MWCNT elevates A 2B AR expression, which promotes Fstl1-induced fibroblasts proliferation and differentiation.
Conclusion
In conclusion, our study identified that an excess extracellular ADO level promoted lung fibrosis following MWCNT exposure, which involved engagement of A2BAR. Antagonism of A2BAR attenuated TGF-β1-induced fibroblast proliferation and differentiation, thereby inhibiting collagen deposition and progressive pulmonary fibrogenesis induced by MWCNT (Fig. 7).
To the best of our knowledge, modulation of ADO levels and antagonism of A2BAR-mediated responses may be a novel therapeutic approach for MWCNT-induced lung fibrosis.
Animals
Six- to eight-week-old male C57BL/6 mice were purchased from Liaoning Changsheng Technology Industrial Co., LTD (Liaoning, China). All experiments involving animals were performed in accordance with the Ethical Committee for Animal Experiments of Northeast Agricultural University. The mice were housed under controlled environmental conditions (22 ± 2 °C, 55 ± 5% relative humidity) with a 12 h light/dark cycle, and were provided with a standard pelleted rodent diet.
Experimental protocol
A single dose of 50 μl of DM only, or 50 μl of DM containing 40 μg MWCNT, was administered by pharyngeal aspiration, which is an alternative to inhalation administration that delivers a specific dose of an agent into mouse lungs and represents a noninvasive route [49]. Some mice were treated with CVT-6883 (1 mg/kg, Tocris) in the morning and in the evening (12 h apart) for 5 days by intraperitoneal injection. The same formulation and dose of CVT-6883 described above was used in MWCNT studies, where twice daily intraperitoneal injections were given on days 3-7 of the protocol [26,50].
Tissue collection and histopathology
The left lobe of the lung was inflated and fixed in 10% neutral buffered formalin for hematoxylin and eosin (H&E) and Masson's trichrome staining. Histological scoring of lung pathology was obtained from stained lung tissue [51]. Fibrotic changes were quantified using the modified Ashcroft scale [49]. The right lung lobes were collected for mRNA and protein analysis.
Biochemical assays
Blood samples were collected from all animals in EDTA-containing vacutainer tubes. The percentage of neutrophils in the peripheral blood of mice was obtained using an automated BC-2600Vet Auto Hematology Analyzer (Mindray, Shenzhen, China).
We also obtained serum after centrifugation at 3000 rpm for 10 min at 4 °C. All serum samples were hemolysis-free. Serum LDH activities were measured with a UniCel DxC Synchron chemistry system (Beckman Coulter Inc., Fulton, CA, USA).
Measurement of MPO activity
The lung tissues were homogenized and dissolved in extraction buffer for the analysis of MPO activity [52]. To assess the accumulation of neutrophils in the lung tissues, MPO activities were detected following the respective manufacturer's instructions (Nanjing Jiancheng Bioengineering Institute, Nanjing, China).
Enzyme-linked immunosorbent assay
The concentration of IL-6 in the BALF was measured by Enzyme-linked immunosorbent assay (ELISA) following the manufacturer's protocols (R&D Systems, Minneapolis, USA).
Statistical analysis
Results are expressed as mean ± SEM. Differences among groups were evaluated by one-way analysis of variance (ANOVA) followed by Tukey's post hoc test. A p-value < 0.05 was considered as significant. Statistical analyses were carried out using SPSS 19.0 software (SPSS, Chicago, IL, USA).
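A minimal sketch of the one-way ANOVA with Tukey's post hoc comparison described above is given below, using SciPy/statsmodels rather than SPSS. The group labels and values are made-up placeholders, not data from the study.

```python
# One-way ANOVA across three treatment groups followed by Tukey's HSD.
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

control = np.array([1.1, 0.9, 1.0, 1.2])       # placeholder measurements per group
mwcnt = np.array([2.3, 2.1, 2.6, 2.4])
mwcnt_cvt = np.array([1.5, 1.4, 1.7, 1.6])

# Overall one-way ANOVA across the three treatment groups.
f_stat, p_value = f_oneway(control, mwcnt, mwcnt_cvt)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Tukey's HSD identifies which pairs of groups differ (alpha = 0.05).
values = np.concatenate([control, mwcnt, mwcnt_cvt])
groups = ["control"] * 4 + ["MWCNT"] * 4 + ["MWCNT+CVT-6883"] * 4
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```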
|
v3-fos-license
|
2018-04-03T03:04:17.111Z
|
2016-09-29T00:00:00.000
|
16141532
|
{
"extfieldsofstudy": [
"Biology",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://doi.org/10.1155/2016/6456031",
"pdf_hash": "439c6cb9d88c9d63e4d4d87f81fd38120dfb8164",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46668",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "439c6cb9d88c9d63e4d4d87f81fd38120dfb8164",
"year": 2016
}
|
pes2o/s2orc
|
Epidemiology of American Tegumentary Leishmaniasis and Trypanosoma cruzi Infection in the Northwestern Argentina
Background. Endemic areas of tegumentary leishmaniasis (TL) in Salta, Argentina, present some overlap zones with the geographical distribution of Chagas disease, with mixed infection cases often being detected. Objectives. The purpose of this study was to determine the magnitude of Leishmania sp. infection and potential associated risk factors, the serologic prevalence of T. cruzi, and the presence of T. cruzi-Leishmania sp. mixed infection in a region of northwestern Argentina. Methods. Cross-sectional studies were conducted to detect TL prevalence and T. cruzi seroprevalence. A case-control study was conducted to examine leishmaniasis risk factors. Results. The prevalence of TL was 0.17%, the seroprevalence of T. cruzi infection was 9.73%, and the mixed infection proportion within the group of leishmaniasis patients was 16.67%. The risk factors associated with TL transmission were sex, age, exposure to bites at work, staying outdoors more than 10 hours/day, bathing in the river, and living with people who had lesions or were infected during the study. Discussion. The endemic pattern of TL seems to involve exposure of patients to vectors in wild as well as peridomestic environments. Cases of T. cruzi infection are apparently due to migration. Therefore, careful epidemiological surveillance is necessary owing to the contraindication of antimonial administration in chagasic patients.
The described scenarios of TL transmission in Argentina include four cycle patterns: (a) a wild cycle with transmission in primary or residual vegetation, (b) a wild cycle with eventual peridomestic transmission due to changes in wild or secondary vegetation, (c) a cycle with peridomestic transmission in domiciles contiguous with residual vegetation, and (d) a peridomestic cycle in rural, ruralized periurban, or urban-rural interface environments [9]. However, the potential existence of urban transmission has been reported, which represents an important change in the transmission pattern paradigm of this disease at the regional scale [10]. Oran and San Martin departments (Salta province) are the areas with the greatest risk of transmission in the country, and they contribute the highest number of cases to the overall TL incidence in Argentina [10,11].
In several areas of Latin America (including northern Argentina), the geographical distribution of TL overlaps with transmission areas of American trypanosomiasis (Chagas disease). The World Health Organization estimates that 8 to 10 million people are infected worldwide, mostly in Latin America where the disease is endemic [12]; Chagas disease is caused by Trypanosoma cruzi and is transmitted by several species of triatomine insects, Triatoma infestans being the most important in Argentina. In the last century, progressive urbanization and the intensive migration of infected individuals increased the risk of transmission by blood transfusion and the congenital route in nonendemic regions [13]. In restricted areas located in the east and northeast of Salta province, corresponding to the Gran Chaco ecoregion, vectorial transmission of T. cruzi still occurs, but not in the rain forest ecoregion (Yunga ecoregion).
Mixed infections due to Leishmania sp. and T. cruzi have been reported in patients showing clinical symptoms of TL, ranging between 12% and 70% [2,14-16]. The prevalence of Leishmania sp. and T. cruzi mixed infection is unknown for northern Salta. Cross-reactivity between T. cruzi and Leishmania sp. infections has been reported when some serological tests were evaluated [2,16,17], possibly due to the close phylogenetic relationship between these parasites. The occurrence of Leishmania sp. and T. cruzi mixed infections also has therapeutic implications. Antimonial drugs used to treat leishmaniasis have potential cardiac toxicity [18-20], which is an important concern in patients infected by T. cruzi because about 30% of people infected by this parasite develop chronic cardiomyopathy [21].
In the present study, we examined the prevalence of single infections by Leishmania sp. and T. cruzi, as well as the proportion of mixed infections due to both parasites, in people living in northern Argentina. Demographic, behavioral, and environmental variables were also studied as potential risk factors associated with transmission of cutaneous leishmaniasis. The epidemiological pattern observed here can occur in several countries in Latin America, and this research may provide information to optimize global and local public health prevention measures.
Study Design
This research included two cross-sectional studies, a tegumentary leishmaniasis case-control study, and the report of mixed infection for Leishmania sp. and T. cruzi.
A cross-sectional study was conducted in 2009 to determine the prevalence of Leishmania sp. (LP), taking into account the active cases of TL (ACTL) and the total population of Hipólito Yrigoyen (HYTP: population censused by the PHC): LP = ACTL/HYTP × 100. The seroprevalence of T. cruzi infection (TCSP) corresponding to 2009 was also calculated through a cross-sectional study. The sample size was 113 people. It was calculated considering an expected prevalence of 5% with 4% accuracy and a confidence level of 95%, using the Epidat software v3.1 (Epidat, Xunta de Galicia, Santiago de Compostela, Spain and Pan-American Health Organization, Washington, DC). Seventy-nine quasi-randomly selected households were studied (Figure 1(b)). Of all household members, only those who wanted to participate voluntarily were selected (one person or more per household). The seroprevalence (TCSP) was calculated as the number of seropositive people for T. cruzi infection (TCP) over the sample size (SS): TCSP = TCP/SS × 100. The mixed infection proportion within the group of patients with TL in 2009 was also calculated (Figure 2).
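A minimal sketch of the two calculations above is given here, assuming the standard sample-size formula for estimating a proportion; the population count (HYTP) and seropositive count (TCP) are placeholders inferred from figures reported elsewhere in the paper, and Epidat's exact routine may apply corrections that change the result slightly.

```python
# Sketch of the prevalence and sample-size calculations described above.
# HYTP and TCP are placeholder values; only the formulas are the point.
import math

def prevalence(cases: int, population: int) -> float:
    """Point prevalence as a percentage: cases / population * 100."""
    return 100.0 * cases / population

def sample_size_proportion(p_expected: float, precision: float, z: float = 1.96) -> int:
    """Sample size to estimate a proportion with a given absolute precision
    at ~95% confidence (z = 1.96), ignoring finite-population correction."""
    n = (z ** 2) * p_expected * (1.0 - p_expected) / (precision ** 2)
    return math.ceil(n)

# Expected prevalence 5%, absolute precision 4%, 95% confidence
# ~115 with this simple formula; the paper reports 113 (Epidat may apply corrections)
print(sample_size_proportion(0.05, 0.04))

# Hypothetical counts to illustrate LP = ACTL / HYTP and TCSP = TCP / SS
ACTL, HYTP = 18, 10600   # HYTP is a placeholder population size
TCP, SS = 11, 113        # TCP is inferred from the reported 9.73%
print(f"LP   = {prevalence(ACTL, HYTP):.2f}%")
print(f"TCSP = {prevalence(TCP, SS):.2f}%")
```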
An unmatched case-control study was conducted to identify risk factors associated with TL cases. The TL cases included in the case-control study were patients living in Hipólito Yrigoyen and diagnosed between 2001 and 2009. The control population was selected within the cross-sectional sample used to calculate the T. cruzi infection seroprevalence (Figure 2). A survey was conducted in both cases and controls through a structured epidemiological questionnaire. After each person was interviewed, 6 mL of venous blood was drawn by clinical laboratory technicians and allowed to clot at room temperature. The sera were obtained by centrifugation at 3500 rotations per min for 5 min, then aliquoted into 1.5 mL tubes, and stored at −20 °C until tested.
The variables registered were age, sex, occupation (exposure to vector bites at work), recreational habits (staying outdoors for more than 10 h and bathing in the river), personal preventive measures, household data (i.e., location, construction material, proximity to sites of possible development of sandflies, distance from crop fields and primary vegetation, knowledge about and application of preventive measures), living with people who were infected or had lesions, and aspects indicating knowledge about TL [22][23][24]. The data from the questionnaire were managed using the application EpiData Entry version 3.1 [25]; the resulting database was exported to the R statistical software for respective analysis. The people included in each study group mentioned above were defined according to the diagnostic criteria described below.
Diagnostic Procedures.
The patients were evaluated in the field by the personnel at Instituto de Investigaciones en Enfermedades Tropicales (IIET) at Universidad Nacional de Salta, the San Vicente de Paul Hospital in Orán, and Eva Peron Hospital in Hipólito Yrigoyen.
The sera and blood samples collected in the field were transported to the IIET for processing. The parasitological diagnosis of Leishmania was made at the IIET, and the patients were referred from the Hipólito Yrigoyen Hospital. Diagnostic procedures, including serological, parasitological, and molecular techniques, were performed following the protocols established in previous studies. The commercial kits were applied according to the manufacturer's instructions.
Diagnostic Criteria.
Diagnostic criteria included the following.
Leishmaniasis Cases.
Included were individuals who had lesions clinically compatible with TL and visualization of amastigotes of Leishmania sp. in Giemsa-stained smears, and/or a positive reaction of serum samples by enzyme-linked immunosorbent assay (ELISA) using homogenate protein of L. (V.) guyanensis [15], and/or a positive reaction to the leishmanin skin test [4,5].
T. cruzi Infection.
The subjects were considered infected with T. cruzi when serum samples were reactive by both ELISA and indirect hemagglutination (IHA) tests (Wiener Lab, Argentina). Samples with discordant results between ELISA and IHA were examined by recombinant ELISA 3.0 (Wiener Lab, Argentina) and an immunofluorescence test [26] or Polymerase Chain Reaction (PCR) [27]. The recombinant ELISA 3.0 has been reported as a specific test for the detection of T. cruzi infection without cross-reaction with Leishmania [17,28].
Mixed Infections.
Included were patients with TL and positive results for at least two tests for T. cruzi infection, mentioned above.
Controls in Case-Control Study.
Included were individuals living in Hipólito Yrigoyen who were not grouped as leishmaniasis cases and/or T. cruzi infected.
Data Analysis
The prevalence of Leishmania sp., seroprevalence of T. cruzi infections, and mixed infection proportion with 95% Confidence Intervals (CI) were calculated using the EPIDAT software version 3.1.
In the case-control study, the independent continuous and discrete variables were, respectively, categorized or dichotomized. Univariate and multivariate logistic regression (LR) analyses were carried out. The Odds Ratios (OR) and 95% CI were calculated to assess the link between the TL cases and potential risk factors. The variables with OR > 1 and p < 0.05 in the univariate logistic regression analysis were tested in a multivariate analysis to establish a model involving the least number of variables that best explains the dependent variable (TL cases). People positive for ELISA-leishmaniasis or the leishmanin skin test (LST) may have been exposed to the Leishmania parasite; because the cross-reaction of both ELISA and the leishmanin skin test with chagasic infection does not distinguish whether a person was exposed to the Leishmania parasite or is infected with T. cruzi, people positive for Chagas laboratory tests were not included as controls in the case-control study.
The final model was obtained using the stepwise technique, a procedure that combines the forward method (starting from a model with only the constant, followed by progressive introduction of variables into the equation, provided they are significant) and the backward method (all the variables are initially considered in the model, and those lacking significance are then progressively eliminated) [29]. The Akaike Information Criterion (AIC) was used as the selection criterion. AIC calculation is based on minimization of the information loss function, penalizing for the number of variables introduced; it seeks the model that best fits the data with the minimum number of possible variables, thus producing simpler models [30,31]. The model chosen was the one that minimized the AIC. Data were considered statistically significant if p < 0.05. All statistical analysis for the case-control study was performed using R software version 2.15.2 [32].
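The sketch below illustrates, in Python rather than the R used by the authors, the general shape of this analysis: univariate screening by OR and p-value, followed by a simplified AIC-guided backward elimination (R's step() performs the full bidirectional search described in the text). The variable names and data are hypothetical.

```python
# Illustrative sketch of univariate screening (OR, p-value) followed by a
# simple backward elimination guided by AIC. Variable names and data are
# hypothetical; the paper itself used R's stepwise procedure.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 300
df = pd.DataFrame({
    "sex_male": rng.integers(0, 2, n),
    "child": rng.integers(0, 2, n),
    "outdoors_10h": rng.integers(0, 2, n),
    "river_bathing": rng.integers(0, 2, n),
})
logit_true = -2.5 + 1.2 * df.sex_male + 1.0 * df.child + 0.8 * df.outdoors_10h
df["tl_case"] = rng.binomial(1, 1 / (1 + np.exp(-logit_true)))

predictors = ["sex_male", "child", "outdoors_10h", "river_bathing"]

# Univariate screening: keep variables with OR > 1 and p < 0.05
kept = []
for var in predictors:
    m = sm.Logit(df.tl_case, sm.add_constant(df[[var]])).fit(disp=False)
    odds_ratio, p = np.exp(m.params[var]), m.pvalues[var]
    print(f"{var}: OR={odds_ratio:.2f}, p={p:.3f}")
    if odds_ratio > 1 and p < 0.05:
        kept.append(var)

# Backward elimination: drop variables while doing so lowers the AIC
current = kept[:]
best_aic = sm.Logit(df.tl_case, sm.add_constant(df[current])).fit(disp=False).aic
improved = True
while improved and len(current) > 1:
    improved = False
    for var in list(current):
        trial = [v for v in current if v != var]
        aic = sm.Logit(df.tl_case, sm.add_constant(df[trial])).fit(disp=False).aic
        if aic < best_aic:
            best_aic, current, improved = aic, trial, True
print("Final model variables:", current, "AIC:", round(best_aic, 1))
```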
Ethical Approval.
All the people included in the study agreed to participate by signing an informed consent form (ICF). The project and ICF were approved by the Ethics Committee of the School of Health Sciences at the National University of Salta and the "Fundación Huesped."
Results
In 2009, only 18 cases of TL were diagnosed in Hipólito Yrigoyen, which represented a TL prevalence of 0.17% (CI 0.09-0.26). The age range of study patients was 7-69 years with an average of 35.45 ± 16.69 (SD).
Of the 113 samples analyzed to detect T. cruzi infection, 67 (59%) corresponded to females and 46 (41%) to males. Their ages ranged between 7 and 74 years, with an average of 37.5 ± 17.3 (SD) years. There was no statistically significant difference in prevalence between males and females (p = 0.73). The seroprevalence for this infection was 9.73% (CI 3.83-15.64) in 2009. The frequencies of cases according to age and sex are summarized in Table 1. The control group (Figure 2) consisted of individuals without a positive diagnosis for leishmaniasis or Chagas disease according to the criteria described above.
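For orientation, a simple Wald approximation of a 95% confidence interval for such a seroprevalence is sketched below; the count of 11 positives is inferred from the reported 9.73% of 113, and Epidat may use a different interval method, so the result only approximates the published CI.

```python
# Approximate (Wald) 95% CI for a prevalence estimate; Epidat may use a
# different interval method, so this only approximates the published CI.
import math

def wald_ci(positives: int, n: int, z: float = 1.96):
    p = positives / n
    half_width = z * math.sqrt(p * (1 - p) / n)
    return p - half_width, p + half_width

# 11 of 113 seropositive (count inferred from the reported 9.73%)
low, high = wald_ci(11, 113)
print(f"9.73% seroprevalence, approx. 95% CI: {100*low:.2f}%-{100*high:.2f}%")
```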
The variables that showed a significant association (p < 0.05) with the presence of TL (case) in the univariate LR analysis were sex, age, exposure to vector bites at work, staying outdoors for more than 10 h (so), bathing in the river, and living with people who were infected or had lesions during the study period (Table 3). These variables were included in a multivariate logistic regression analysis and the final model was obtained using the stepwise method. This model includes only 3 predictive variables (sex, age, and so) that explained the occurrence of TL cases.
Discussion
In the north of Salta province, TL levels are hyperendemic in some sites and periods [33,34]. This situation is worsened by the presence of cases of Chagas disease, which may further generate cases of mixed infections, causing a synergistic problem for the health care system. In the locality Hipólito Yrigoyen, TL prevalence values of 0.17, 0.79, and 0.18% were previously reported [33,34]. The prevalence calculated in the present study (0.17%; CI 0.08-0.26) is similar, which indicates a level of active transmission that persists over time. However, high incidence foci are likely to occur in short exposure periods in this area [5].
In the case-control study, the associated variables likely reflect the existence of a complex pattern of transmission. Male sex and staying outdoors for more than 10 hours would be indicators of a sylvatic mode of transmission, facilitated by labour, subsistence, or recreational activities (in rural environments and/or deforestation areas, or hunting and fishing activities), as has been indicated in regions where TL is endemic [9,24,33]. The significantly higher proportion of infected children compared to that of adults found in this analysis (OR = 7.73; CI: 2.05-29.16) suggests the existence of other patterns of transmission in Hipólito Yrigoyen. The incidence of TL in children has been cited as an indicator of peridomiciliary transmission, especially in localities adjacent to primary and/or secondary vegetation [4,22-24,33].
In addition, a high density of sandflies has been detected in the vegetation near the irrigation channels located on the outskirts of the city (Figure 1) [35], showing a species diversity similar to that found in a nearby place where there was a high rate of infection [5,36]. Many families visit these sites for recreational purposes in times of high temperature, which coincide with the hours of leishmaniasis transmission risk (approximately 7 pm to 10 pm), with the consequent risk of being bitten by infected sandflies and contracting leishmaniasis, as reported in a study of the spatial distribution of TL cases [37].
The presence of active TL cases among elderly people that remain mostly in their houses and of sandflies in the center of the town [35] offers another plausible epidemiologic situation of disease transmission (but with low probability) in urban environments because lower abundance of sandflies was recorded here [35]. Indeed, in Hipólito Yrigoyen, house courtyards have vegetation patches that can be colonized by sandflies from the periphery, according to the characteristics of metapopulation dynamics [10].
On the other hand, in the study area, the possibility of vector-borne transmission of T. cruzi has been discarded, because no insects or indicators of their presence have been found in the annual activities of entomological surveillance of triatomines carried out by Primary Health Care System in recent and historical monitoring. Thus, cases of T. cruzi infection in Hipólito Yrigoyen would be associated with migratory processes (movement principally of rural populations from Argentina and Bolivia of the Gran Chaco ecoregion where Chagas disease is endemic) [13]. The T. cruzi seroprevalence found in this work is low compared with the prevalence value observed in rural populations of endemic areas (25%) [38], and it is high compared with other regions without endemic transmission [39]. In turn, infected children may have acquired infection by congenital transmission, as reported in previous studies of this type of transmission in the province of Salta [40].
The proportion of T. cruzi-Leishmania sp. mixed infection within the group of patients with TL reported in the north of Salta reaches 30 and 40% [2,14,15] and does not show differences from the percentage obtained in this study. Knowing the level of this condition in the population allows us to explore the factors involved in the origin and persistence of mixed infections. In addition, because antimonials are cardiotoxic, a careful diagnosis and implementation of alternative treatments are needed to avoid further complications.
The complex situation in Hipólito Yrigoyen in reference to TL is aggravated by the coexistence of T. cruzi infection. The transmission pattern involves mainly natural areas, but the possibility of a peridomiciliary transmission in the outskirts of the city cannot be ruled out. This situation demands the involvement of different stakeholders to control the magnitude of disease incidence, implementing prevention strategies and taking into account biogeographical and sociocultural characteristics, as well as human-induced environmental changes and situations that pose a risk.
The present work provides epidemiological information on potential determinants of TL occurrence in Hipólito Yrigoyen, its magnitude, and the situation of T. cruzi infection in the same area. This information is useful for the local health system because it may contribute to better planning of the surveillance systems and to the design of prevention strategies in the area. These epidemiological patterns of mixed infection can occur in other countries where T. cruzi transmission does not exist or was interrupted and tegumentary leishmaniasis is endemic.
|
v3-fos-license
|
2021-08-21T02:12:39.500Z
|
2021-01-01T00:00:00.000
|
237252016
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=110965",
"pdf_hash": "d02527d111f8ca5c295bb7347eeb72ff7571c0c3",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46671",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "d02527d111f8ca5c295bb7347eeb72ff7571c0c3",
"year": 2021
}
|
pes2o/s2orc
|
Erythema Nodosum Manifestation Post COVID-19 Vaccine: A Case Report
Erythema nodosum (EN) is a delayed hypersensitivity response that may be triggered by a range of conditions, including infections and vaccines. Rare cases of EN caused by COVID-19 were recently reported, but none due to COVID-19 vaccines had been documented. We report here a case of EN occurring after COVID-19 vaccination. The patient presented with painful nodular lesions of all 4 limbs, evolving for one month. These lesions appeared 48 h after the second dose of COVID-19 vaccination. The patient reported no recent infectious episodes. The physical examination found numerous erythematous dermohypodermatitis nodules with no palpable adenopathy. Some were regressing through the color changes of biligenesis. Biology and radiology findings eliminated other common causes of this dermatosis. A skin biopsy was done and suggested EN. The final diagnosis was post COVID-19 vaccine EN. The patient received symptomatic treatment and had a slight improvement of the lesions 10 days after diagnosis. Physicians should be aware of the side effects of the vaccine, including skin manifestations, especially since more people are bound to be vaccinated.
Introduction
Erythema nodosum (EN) is a delayed hypersensitivity response that may be triggered by a range of conditions, including infection (mostly Streptococcus species) [1], medications, pregnancy, malignancies and inflammatory processes.
First reported in late 2019, COVID-19 has become a pandemic and has been mentioned in very few reports as an EN inducer [2] [3]. Some vaccines can also cause EN. The pathogenesis is unclear, but it is considered a delayed hypersensitivity reaction triggered by exposure to an antigen [1]. This case is, to our knowledge, the first report of an EN manifestation in the context of a COVID-19 vaccine.
It seems likely that in 2021, COVID-19 vaccines will be globally available. Hence, we should be aware of their possible side effects.
Case Report
A 66-year-old female patient, with a history of breast cancer since 2008 currently in remission, consulted for painful nodular lesions of the lower and upper limbs, evolving for one month. The investigation found that the EN occurred 48 hours following the second injection of the AstraZeneca vaccine. The patient did not report any recent infectious episode. The physical examination found numerous erythematous dermohypodermatitis nodules of the 4 limbs. Some of them were regressive in appearance, following the color changes of biligenesis. Their diameter ranged from 2 to 3 cm (Figure 1). There was no palpable adenopathy. Biology (Table 1) and radiology findings eliminated other common causes of this dermatosis [2]. In our observation, the symptomatology involved all 4 limbs, which makes our case unusual. EN usually resolves spontaneously within 8 weeks [1]. The diagnosis is usually clinical, but in some doubtful cases a biopsy may be required [7]. Independently of etiology, EN is characterized by typical histological features: inflammation of the dermohypodermic junction and the periphery of the septa, containing neutrophils and eosinophils, turning into an infiltrate of lymphocytes and histiocytes as the process evolves, which is in concordance with our case. Histiocytic granulomas, known as Miescher's radial granulomas, can also be found [6].
The assessment made in our case aims to eliminate the most frequent causes of this dermatosis, which are haemolytic streptococcal infections, sarcoidosis and bacterial or inflammatory enteropathies.
In 15%-50% of cases of EN, the etiology remains undetermined. The causal role of a drug, when it is suspected, is difficult to prove. Some vaccines can induce EN, such as the BCG, hepatitis B or typhoid fever vaccines [8] [9]. Rare cases have mentioned a vaccine against HPV as an EN inducer [10] [11].
In our patient, recent vaccine administration is the trigger for EN. To the best of our knowledge no previous association between EN and COVID-19 vaccine has been reported.
The pathogenesis of EN is unclear, but is considered a delayed hypersensitivity reaction triggered by exposure to an antigen. In this observation, the lesions appeared 48 hours after vaccination, which is in line with the hypothesis cited above.
In fact, some vaccines introduce a mild infection that resembles the real infection, leading to a strong immune response [12].
COVID-19 infection can lead to a deregulated immune response. Rare cases of EN have been recently described in association with COVID-19 [2] [3].
Certain inflammatory markers are increased in COVID-19 such as IL-1, -2, -6, -7, and -10 [13] [14]. In patients with EN, polymorphisms of IL-1 and -6 promoter genes have been described [15] [16], as well as high levels of IL-6 [17]. This may result in a higher susceptibility to EN in situations of immune dysregulation, like COVID-19, thus resulting in an excessive inflammatory reaction. This might partly clarify the connection between COVID-19 and EN.
Conclusion
We reported here a unique case of EN occurring after COVID-19 vaccination. As the state of the pandemic is quickly evolving, more people are bound to get vaccinated. Thus, clinicians should be mindful of the side effects of the COVID-19 vaccines including skin manifestations. Literature is likely to reveal more dermatological manifestations in the future.
|
v3-fos-license
|
2018-12-20T14:03:07.993Z
|
2018-12-03T00:00:00.000
|
56177771
|
{
"extfieldsofstudy": [
"Biology",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://zookeys.pensoft.net/article/26052/download/pdf/",
"pdf_hash": "14918caaef5810279f0ce27d49c2270bc9313145",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46673",
"s2fieldsofstudy": [
"Biology",
"Environmental Science"
],
"sha1": "14918caaef5810279f0ce27d49c2270bc9313145",
"year": 2018
}
|
pes2o/s2orc
|
Woodlice and their parasitoid flies: revision of Isopoda (Crustacea, Oniscidea) – Rhinophoridae (Insecta, Diptera) interaction and first record of a parasitized Neotropical woodlouse species
Abstract Terrestrial isopods are soil macroarthropods that have few known parasites and parasitoids. All known parasitoids are from the family Rhinophoridae (Insecta: Diptera). The present article reviews the known biology of Rhinophoridae flies and presents the first record of Rhinophoridae larvae on a Neotropical woodlouse species. We also compile and update all published interaction records. The Neotropical woodlouse Balloniscusglaber was parasitized by two different larval morphotypes of Rhinophoridae. Including this new record, there are 18 Isopoda species known to be parasitized and 13 Rhinophoridae species with known hosts, resulting in 35 interactions. There are a total of 53 interaction records from Holarctic and Neotropical countries. Of the 18 known isopod hosts, only five species have more than one parasitoid, including the new Neotropical host record presented in this work.
Introduction
Terrestrial isopods are soil macroarthropods involved in decomposition processes and nutrient cycling (Zimmer 2002). This group has many predators within the soil but few known parasites and parasitoids. Among parasitoids, all known species belong to the family Rhinophoridae (Insecta: Diptera) (Sutton 1980). This family of flies comprises about 150 species worldwide that mainly parasitize woodlice (Pape and Arnaud 2001, Nihei 2016). Despite their numbers, not many papers discuss the woodlouse-parasitoid interaction. Studies regarding the interaction and the fly's larval stages are scarce and difficult to find, and the taxonomy and phylogeny of both groups have been considerably modified since those studies were published. Hence, there is no current list of recorded interactions and a need to update them taxonomically. Information from immature stages and their biology is crucial for evaluating the systematic position of many aberrant oestroid flies such as the rhinophorids (Pape and Arnaud 2001), so knowledge of the morphology of larval stages may help phylogenetic analysis and classification (Cerretti et al. 2014), as well as the understanding of their evolutionary history in association with the woodlice hosts. Therefore, this work aims to (1) review the known biology of Rhinophoridae larvae focusing on the woodlouse-larva interaction, (2) present the first record of Rhinophoridae larvae on a Neotropical woodlouse species and (3) update the recorded interactions according to the current taxonomy of both groups.
Material and methods
Bibliographic searches in the platforms Web of Science, Science Direct, Biodiversity Heritage Library and Google Scholar were performed using the following keywords: Rhinophoridae, woodlouse flies, Tachinidae, Rhinophorinae. All the subsequent references from obtained papers were searched in available databases and scientific libraries.
Regarding the new woodlouse host record, infected individuals of Balloniscus glaber Araujo & Zardo, 1995 that had been collected in Morro Santana, Porto Alegre, southern Brazil (30°4'4"S, 51°7'22"W) were discovered in a laboratory culture. The location is at 100 m elevation and the vegetation consists of a mosaic of Atlantic forest and grassland (Overbeck et al. 2006). Hosts were carefully dissected, photographed, and preserved in 70% ethanol. Larvae were heated in water at 60 °C before being transferred to ethanol whenever possible. The material used in this study is deposited in the Museu de Zoologia, Universidade de São Paulo, São Paulo, Brazil (MZUSP).
Taxonomy of isopod species was updated according to Schmalfuss (2003) and recent revisions. Taxonomy and name validity of Rhinophoridae species were based on regional catalogues and recent generic revisions, when available (Herting 1993, Cerretti and Pape 2007, 2009).
Biology of larval stages: Isopoda-Rhinophoridae interaction
Very few studies address the biology of the larva and its effect on the woodlouse host. These studies usually demand a long period of time due to the difficulty of obtaining the parasitoids (Thompson 1934, Bedding 1965, 1973). This difficulty is partially explained by the low prevalence of this parasitoid in natural populations and by the apparent specificity of host species (Bedding 1965). Prevalence in natural populations is usually lower than 2% and seems to be associated with the infection method.
Adult Rhinophoridae flies copulate and the female deposits the eggs on substrates (Bedding 1965, Wijnoven 2001) contaminated by the uropod gland secretion of isopods, rather than on the host itself (Bedding 1965), which may be a derived character in this group of parasitoids (Wood 1987). This secretion is not commonly observed in all woodlice species, but it is rather easily obtained from Porcellio scaber Latreille, 1804 (Gorvett 1946, Deslippe et al. 1996), which might explain why this species has the highest number of known parasitoids and the highest prevalence in natural populations (Bedding 1965, Sassaman and Pratt 1992).
The eggs deposited on the soil hatch and the 1st instar larva attaches itself to the body of a passing woodlouse. The larva may wave its anterior end slowly forward and sideward in an attempt to attach itself to the body of a passing woodlouse (Pape and Arnaud 2001). This method of infection is affected by host size, since the larva cannot reach the sternites of bigger (taller) animals. It has also been observed that the suitability of the host relates to a specific period of the molting cycle of the isopod. Differently from insects, crustaceans present a highly calcified cuticle (Roer et al. 2015). Within crustaceans, isopods have developed specific strategies to recycle calcium from the old cuticle, such as biphasic molting (they first molt the posterior half and then the anterior half of the body) and accumulation of amorphous calcium carbonate in the anterior sternites prior to ecdysis (Greenaway 1985, Steel 1993, Ziegler 1994). The fly larva attaches itself to isopods with calcium plates (i.e., during premolt or intramolt) and penetrates through the intersegmental membrane of the sternites of the freshly molted host (Bedding 1965), since they present a softer cuticle at this stage. Nonetheless, there is a high rate of cannibalism of freshly molted isopods (Bedding 1965), thus reducing the chances of survival of the fly larva inside the host and possibly explaining the low prevalence among natural populations.
After the larva has entered the host, it then molts to its 2nd instar and starts feeding, first on the hemolymph and then on the organs of the host. The 3rd instar larva fills most of the body cavity, leading to the death of the isopod. Pupation occurs inside the empty exoskeleton of the host (Thompson 1934, Bedding 1965) (Figure 1; figure after Thompson 1934).
First Neotropical woodlouse host record
Almost all records of Rhinophoridae hosts are from the Palearctic region. Outside the Palearctic, there is only mention of Porcellio scaber, Oniscus asellus Linnaeus, 1758 and Porcellionides pruinosus (Brandt, 1833) in the Nearctic (Brues 1903, Jones 1948, Sassaman and Garthwaite 1984, Sassaman and Pratt 1992) and Armadillidium sp. (probably Armadillidium vulgare (Latreille, 1804)) in the Neotropic (Parker 1953). All of these woodlice species were parasitized by Melanophora roralis (Linnaeus, 1758). Nonetheless, all the aforementioned oniscidean and rhinophorid species are introduced from the Palearctic in these locations. Some authors hypothesize that transportation of infected woodlice can explain the occurrence of Palearctic Rhinophoridae in the Nearctic and Neotropic (Mulieri et al. 2010, O'Hara et al. 2015), provided that introduced woodlice are common in these regions (Jass and Klausmeier 2000). The lack of native woodlouse hosts in the Nearctic region is thought to be associated with the low diversity of native woodlice species there (c.f. Schmalfuss 2003), but the same is not true for the Neotropic. In fact, in Brazil alone there are circa 200 described species, most of them native (Cardoso et al. 2018).
In the Neotropic, 19 native species of Rhinophoridae have been described (Cerretti et al. 2014), but there is no information regarding the parasitoid-host interaction so far. Of these, only the 1st instar larva of Bezzimyia yepesi Pape & Arnaud, 2001 (Venezuela) is known (Pape and Arnaud 2001), and no host record has been made before, even for the two introduced species, Melanophora roralis (L.) and Stevenia deceptoria (Loew, 1847) (Mulieri et al. 2010) (Figure 2).
Here we observed that the Neotropical isopod Balloniscus glaber is a host for dipterous larvae in southern Brazil (Figure 2), and the two observed 3rd instar larval morphotypes are different from the nine Palearctic species with previously described 3rd instar larval forms (Thompson 1934, Bedding 1965), including the introduced Melanophora roralis.
[Figure 2 caption, in part: Melanophora roralis records from Parker (1953), Guimarães (1977), González (1998), Cerretti and Pape (2009) and Mulieri et al. (2010); Stevenia deceptoria records from Mulieri et al. (2010); base map modified from commons.wikimedia.org/]
Balloniscus glaber shares many characteristics with clingers (Wood et al. 2017), although it does not present a typical clinger eco-morphological body type like Porcellio scaber (sensu Schmalfuss 1984). However, it presents clinging behavior (Figure 3A) for predator avoidance (Quadros et al. 2012, Wood et al. 2017), and its legs are shorter than in runner-type animals of similar size. These morphological and behavioral characteristics might facilitate larval infection due to the reduced distance of the sternites to the substrate. Furthermore, like Porcellio scaber, this species also frequently discharges a sticky secretion from its uropod glands upon stimulation (Figure 3B), a secretion that is recognized by adult fly females and might stimulate oviposition (Bedding 1965). Five infected individuals have been recorded in the same location (Figure 3C-F). The larvae (one per host) occupied the full body cavity, reaching up to 7 mm in length, and resulted in the death of all woodlice hosts (Suppl. material 2). Hosts lacked a discernible internal reproductive system, and the empty gut was the only remaining organ (Figure 3E). No host presented any signs of alteration in overall appearance. The parasitoids could only be identified at the family level due to the lack of larval descriptions for the native species and the lack of adults to allow a more precise identification. The larvae were identified as Rhinophoridae based on comparative examination of descriptions and illustrations available in the literature; both collected morphotypes presented the elongate body shape, anterior and posterior spiracles, and cephaloskeleton characteristic of rhinophorid species. The two 3rd instar larval morphotypes are conspicuously different in body shape, posterior ends, cephaloskeleton, and anterior and posterior spiracles (Figs 4, 5). These forms differ from the known larval stages described by Thompson (1934) and Bedding (1965, 1973). Given the apparent specificity of host records (see next topic), we believe they are Neotropical species (and not one of the introduced species). They may be larvae of the described Neotropical species of Shannoniella Townsend, 1939 or Trypetidomima Townsend, 1935, or they may even belong to undescribed species, since the distribution of Balloniscus glaber (Lopes et al. 2005) does not extend to the locations where these native Rhinophoridae have been found, namely the southeastern portion of the Brazilian Atlantic Forest (Nihei and Andrade 2014, Nihei et al. 2016). Furthermore, the location of the new Rhinophoridae record is at a low altitude, and Neotropical woodlouse flies seem to be rare in the lowlands, being usually found at elevations of 600-1200 meters in Brazil (Nihei and Andrade 2014, Nihei et al. 2016). Nonetheless, Balloniscus glaber can be found at altitudes up to 1000 meters at southern latitudes (Lopes et al. 2005), while another species from the genus, Balloniscus sellowii (Brandt, 1833), presents a broader latitudinal distribution (Schmalfuss 2003).
A further publication will describe in detail the morphology of the two 3rd instar morphotypes, and DNA sequencing will be performed in an attempt to obtain a more precise identification.
Reviewed interactions records following current taxonomy
The earliest reference to a Rhinophoridae parasitoid of woodlice appears to be from von Roser (1840 apud Thompson 1934), which created some confusion in the literature in later years. In his paper, the dipteran "Tachinia atramentaria" (currently Stevenia atramentaria (Meigen, 1824)) is mentioned as a parasite of a woodlouse, possibly Oniscus asellus. Thompson (1934), Herting (1961), Bedding (1965) and Verves and Khrokalo (2010) mentioned that Oniscus asellus was probably a wrong identification, while Cerretti and Pape (2007) mention Oniscus asellus as a possible host for Stevenia atramentaria. The doubtful record was finally resolved by Kugler (1978), where the author states that the record was based on a misidentification of Trachelipus rathkii (Brandt, 1833), according to a personal communication from Herting, which apparently had already been corrected in Sutton's book (1980). Rognes (1986) and Dubiel and Bystrowski (2016) still list Oniscus asellus as a host or possible host of Stevenia atramentaria, but they reference articles that mention the species only as a possible host, probably following von Roser's reference from 1840. Therefore, we could not find any reliable record of Oniscus asellus as a host of Stevenia atramentaria. Dubiel and Bystrowski (2016) list Trachelipus rathkii as a host of Stevenia atramentaria for the first time, but it should be the third record of this interaction if the identification correction from von Roser's article is taken into account, as well as the thesis of Bedding (1965).
|
v3-fos-license
|
2020-03-05T10:39:17.216Z
|
2020-02-24T00:00:00.000
|
214202276
|
{
"extfieldsofstudy": [
"Political Science"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://www.intechopen.com/citation-pdf-url/71206",
"pdf_hash": "6c025255c40d7405fd7e5bae6845e9c95f02c560",
"pdf_src": "Adhoc",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46675",
"s2fieldsofstudy": [
"Political Science"
],
"sha1": "95101e1c1f62344e6f6860323ad9bb2522d077d4",
"year": 2020
}
|
pes2o/s2orc
|
Uprising and Human Rights Abuses in Southern Cameroon-Ambazonia
In 2016, lawyers, teachers and students in the two Anglophone regions initially led demonstrations and strikes, which eventually involved a wider section of the population. This mobilization was against their marginalization by the Francophone-dominated government, in which they were chronically under-represented in all aspects of national life, including political appointments and professional training, and against having been treated as second-class citizens since reunification. They argued that their vibrant economic and political institutions had been completely erased, and their education and judicial systems had been undermined and degraded. Activists spread videos that show security forces abusing human rights (by suppressing peaceful gatherings, beating, harassing, arresting and killing protesters, and burning their houses, schools and hospitals) in order to produce a counter-narrative to the 'official story' that mainstream media had been producing. We collected and analyzed 30 videos to better appreciate the human rights abuses. The videos provide information that cannot be provided by other types of data. They are used as 'proofs of facts' and they contain much more visual information on bodily movement and acoustic data. The videos show appalling images not just of how French-speaking soldiers tortured Anglophones but also of their inability to communicate with them adequately, although they share the same country.
Introduction
It seems people everywhere are questioning the ability of traditional political actors to represent their interests and are increasingly seeking a more direct and unmediated relations to the decisions that affect their lives [1]. The Southern Cameroon-Ambazonia crisis commonly known as the Anglophone crisis revolves around the marginalization of the Anglophones and the dilution of their cultural identities especially concerning education and the judiciary by the Francophones in their attempt to assimilate them. Anglophones have therefore collectively given voice to their grievances and concerns and are demanding that something be done about them and they have taken extra-institutional actions by arming themselves to defend themselves against the government security forces who abuse their human rights by arbitrarily arresting, torturing, detaining, killing them, burning their houses, raping their women and also refusing them the right to self-determination.
... the dignity and statehood of Anglophones, not by the French-speaking community at large, but by the government, which was led and dominated by Francophones.
Marx and Engels [4] famously argued that, in any epoch, the dominant ideas are the ruling ideas in society that serve to maintain the dominance of the ruling classes. Those who have the means of economic production also have control over the production of ideas, and the class which is the material force of society is at the same time the ruling intellectual force. The ruling class rules also as thinkers and producers of ideas and regulates the production and distribution of the ideas of its age. Similarly, La République du Cameroon has been producing ideas to suppress Southern Cameroonians because of its dominance over the economy, the judiciary and political institutions. When the crisis started, in order to dilute it, the government produced many unsuccessful initiatives, such as the promotion of bilingualism and multiculturalism and the national disarmament, demobilization and reintegration committee, all headed by Anglophones and whose reports were dropped in the dustbin. Finally, it gave Anglophones what it termed 'Special Status,' which Anglophones rubbished as being empty. How did these two separate entities come together and form a union?
Cameroon was initially a German territory from 1887 to 1914, before the British invaded it from Nigeria in 1914 and the Germans surrendered in February 1916. After the war, the League of Nations partitioned the colony between the United Kingdom and France on June 28, 1919, and France gained the larger geographical share. French Cameroon became independent as La République du Cameroun in January 1960, and Nigeria was scheduled for independence later that same year, which raised the question of what to do with the British territory. A plebiscite was agreed on and held on February 11, 1961, and the British Southern Cameroons voted to join Cameroon as West Cameroon (I. C. B. Dear [5]). To negotiate the terms of the union, the Foumban Conference was held on July 16-21, 1961, at which the Federal Constitution was drafted. It stated in Article 47.1 that "No bill to amend the constitution may be introduced if it tends to impair the unity and integrity of the federation." This poorly conducted re-unification was based on centralization and assimilation, and has left the Anglophone minority feeling politically and economically marginalized as their cultural differences are ignored. "On the 1st of September 1966 the Cameroon National Union (CNU) was created by the union of political parties of East and West Cameroon. Most decisions were taken without consultation, which led to widespread feelings amongst the West Cameroonian public that although they voted for reunification, La Republique du Cameroon was absorbing or dominating them," Wikipedia [6].
Achankeng [7] states that although the plebiscite was an expression of willingness to associate with French Cameroon, the discussions needed to arrive at an agreed document setting the legal basis of the federation never took place, and no agreements were subsequently signed between the two countries.
In 1972, President Ahidjo (the President of the Republic of Cameroon) conducted a referendum on the form of the state. Although the West Cameroon lawmakers heavily opposed and rejected it on the grounds that it was a violation of the 1961 Federal Constitution, he went ahead with the referendum and the Federal Republic of Cameroon became the United Republic of Cameroon [8]. All these events were calculated attempts meant to incorporate a former colony into another state. Bongfen [9] and Ajong [10] state that it abolished "all federal legislative, judicial and administrative institutions, and removed all guarantees that protected the rights of the minority Southern Cameroonians in the federation. Unlike during the plebiscite of 1961 wherein only Southern Cameroonians voted to decide on their destiny, the May 1972 referendum was extended to all the people of la République du Cameroun. It was 'a creeping annexation than unification'. However, the dissenting voices of Southern Cameroonians rejecting the centralized United Republic of Cameroon were dwarfed by the wide majority of La République. Many Southern Cameroonians regard 20th May, the national day of today's Cameroon, as a day when they lost their freedom".
In 1984, Paul Biya removed one of the stars from the flag and changed the official name of the country to the Republic of Cameroon (La République du Cameroon), the name the country had held before its unification with Southern Cameroon. Some Anglophones from the Southern Cameroons, such as Gorji-Dinka, Bernard Fonlon and Carlson Anyangwe, considered this the dissolution of the 1961 union.
Citizens from these regions, that is, the Anglophone regions, have been mobilizing against their marginalization by the Francophone-dominated government. They complain about chronic under-representation in all issues of national life, including political appointments and professional training. They argue that since reunification they have been treated as second-class citizens. Their vibrant economic and political institutions have been completely erased, and their education and judicial systems have been undermined and degraded.
Gorji Dinka and Albert Mukong, Southern Cameroonian nationalists who protested the ill-treatment of their people by the central regime, were arrested and detained. Representatives of Southern Cameroonians in the tripartite talks of 1991 proposed a return to the federation, but the leaders of La République du Cameroon ignored them. In 1994, John Ngu Foncha and Salomon T. Muna, both former Prime Ministers of the Southern Cameroons, returned to the United Nations in New York and demanded separate independence for the Southern Cameroons. The mission to the UN followed the All Anglophone Conference (AAC 1), which took place in Buea in April 1993, bringing together all Southern Cameroon citizens, who unanimously called for the restoration of the statehood of the Southern Cameroons. A second All Anglophone Conference (AAC 2) was held in Bamenda in May 1994, at which the decisions of AAC 1 were reiterated and a reasonable time was given to French Cameroon to accept a return to the two-state federation, or Southern Cameroon would revive its statehood and independence. The implementation of AAC 1 and AAC 2 was however stalled by the brutal arrests and incarceration of the leaders of the AAC, with several others escaping into exile.
The AAC was renamed the Southern Cameroons Peoples Conference (SCPC), and later the Southern Cameroons People's Organization (SCAPO), with the Southern Cameroons National Council (SCNC) as the executive governing body. Younger Southern Cameroons National Council activists formed the Southern Cameroons Youth League (SCYL) in Buea on May 28, 1995.
When they felt their demands were met with contempt and total disregard, the SCNC, led by John Foncha, took their case back to the United Nations and protested against La République du Cameroun's annexation of their territory. Their focus has been maintained on the restoration of the statehood of Southern Cameroon, and the government's brutal repression has helped to unify them.
Police routinely disrupted SCNC activities: on March 23, 1997, gendarmes killed about 10 people in a raid in Bamenda. The police arrested between 200 and 300 people, mostly SCNC supporters as well as members of the Social Democratic Front. In the subsequent trials, Amnesty International and the SCNC found substantive evidence of the government torturing and using force on them. The raid and trial resulted in a shutdown of SCNC activities. On October 1, 1999, SCNC militants took over Radio Buea to proclaim the independence of Southern Cameroon but failed to do so before security forces intervened. After clashes with the police, the SCNC was officially declared illegal by the Cameroonian authorities in 2001. In 2006, a faction of the SCNC once again declared the independence of Ambazonia (Lansdorf, ed. [11]). Although Cameroon is bound by international law and its own constitution to respect human rights and freedoms, many human rights have been violated in Southern Cameroon. This work pays particular attention to the cruel treatment of people who exercise the right to association and peaceful assembly. We use videos to show how these rights were violated in Ambazonia. We argue that the videos helped to globalize the crisis and attract the attention of the international community to the severity of the killings and abuse.
Data collection and interpretation
In the age of smartphones, image- or video-making has become less problematic, as most people, even in the third world, possess a smartphone with a built-in camera. They take pictures of what is relevant to them in their daily lives. They usually film the remarkable, the extraordinary, the exceptional, and not the ordinary or everyday activities [12]. From the onset of the Anglophone crisis, participants made many videos to expose the human rights abuses of the military and flooded the Internet with them. That was why the government cut off the Internet in the English-speaking areas, to stop them from circulating incriminating images.
We decided then to collect 30 videos to analyze them because they provide information that cannot be provided by other types of data. They are used as 'proofs of facts' and as it is often said, a picture or video is more, and different, than a thousand words because they contain much more visual information on bodily movement and include acoustic data. Although images are specific reality constructions, ambivalent, subjective and diffuse, their interpretation must be substantiated in words [13].
The video collection contributes toward answering a research question; the videos are interpreted by providing verbal accounts and are linked to the theoretical concepts of cultural dominance and media and information communication. The questions we asked concerning each of the videos were similar to those asked by Becker [14]: What are the acts of violence and human rights abuses in each video? How can they be interpreted and linked to our theoretical concept? What insight do they generate and substantiate? What different kinds of people are there? We link observations to theoretical concepts such as status, groups, norms, rules, common understandings, deviance and rule violation, sanctions and conflict resolution.
The Ambazonia uprising
The relationship that exists between Southern Cameroon and La République du Cameroon is one of two peoples, two inheritances, and two divergent mentalities: one struggles for its liberation, while the other suppresses it and abuses its human rights, struggling to maintain control over it by using the might of the state military. They speak different languages with little or no rapprochement, although they live in the same country. The various videos below clearly show the differences. The oppressor's troops speak in French, while the oppressed speak in Pidgin English. It is a country divided predominantly by language, although language is not the cause of the Anglophone crisis: it is the history of a people. This shows the struggle between two peoples and two languages: while one resists the onslaught and domination, the other tries very hard to overcome and crush it. Having been oppressed for long, the oppressed are not willing to give up, the oppressor is not willing to let them leave the unitary state, and so the struggle between the two peoples stiffens. The government that has been in power for over 38 years does everything to suppress the uprising by sending its brutal security forces to harass the Anglophones, who are striking for a just cause.
According to Cameroon Concord News 2019 [15], "being Anglophone or francophone in Cameroon is not just the ability to speak, read and use English or French as a working language. It is about belonging to the Anglophone or Francophone ways including things like outlook, culture and how local governments are run. Anglophones have long complained that their language and culture are marginalized". They thought it necessary to protect their judicial, educational and local government systems. They wanted an end to annexation and assimilation and more respect from government for their language and political philosophies. They preferred a total separation by creating their own independent state if the government failed to listen to them.
According to www.Amnesty.org [16], "towards the end of 2016, the two Anglophones regions were rocked by demonstrations and strikes, initially led by lawyers, teachers, students, and eventually involving a wider section of the population. They protested against what they viewed as the growing marginalization of the Anglophone linguistic, cultural, educational traditions and systems in various sectors such as the failure to use the Common Law in courts and Standard English in classrooms, as well as the improvement of their representation in politics".
They decided to express their grievances by protesting. The protests began in the streets of Anglophone cities as thousands of Anglophone Cameroonians, from lawyers and teachers to irate youth, protested against the Francophone hegemony. A handful of videos show young men manifesting determination and strength for change in Southern Cameroon-Ambazonia. They collaborated, especially when one of them was shot, because they were conscious of their marginalization. They knew the police would shoot them, but they moved on. This shows that a disillusioned, unemployed youth is very dangerous for the health of a country. They all hungered for independence, and not even the federalism that some elites would talk of. Although largely, but not always, peaceful in nature, these protests were met with sustained repression from the Cameroonian authorities and security forces. Some peaceful protesters were killed during the demonstrations; hundreds of people were arrested and detained without trial. Our objective in this work is to analyze the confrontation between the protesters and the security forces using amateur videos secretly taken by the protesters.
Cultural domination in the judiciary section
The protest began on October 6, 2016, as a sit-down strike initiated by the Cameroon Anglophone Civil Society Consortium (CACSC), an organization consisting of lawyer and teacher trade unions from the Anglophone regions of Cameroon. Barrister Agbor Balla, Dr. Fontem Neba and Tassang Wilfred led the strike.
According to Wikipedia [17], "the common lawyers of Anglophone Cameroon were said to have written an appeal letter to the government over the use of French in schools and courtrooms in the English-speaking regions of Cameroon. In an effort to protect the English culture, they began a sit-down strike in all courtrooms on October 6, 2016. Peaceful marches began with marches in the cities of Bamenda, Buea, and Limbe calling for the protection of the common law system in Anglophone Cameroon and the practice of the Common Law sub-system in Anglophone courts and not the Civil Law as it was used by French-speaking magistrates". They equally demanded the creation of a common law school at the University of Bamenda and Buea [18].
More so, Francophones occupied all the juicy positions in the Supreme Court. Although Francophones had little or no knowledge of English and the Common Law, most of the magistrates and bailiffs in the Anglophone zone were Francophones. Anglophone lawyers were disgruntled at the domination of the Civil Law, as if Cameroon were uniquely a Civil Law country. There was equally a problem of translating the Business Law for Africa (OHADA) uniform acts, the CEMAC code, and others, because the Francophones wanted to assimilate the Common Law sub-system.
In Africanews Morning Call [19], Barrister Bobga Harmony declared that the government of Cameroon had completely ignored them, which was a violation of the right to self-determination. According to him, "since 1972, there has been a progressive, inexplicable, illegal and illegitimate erosion of the common law." He regretted that Francophones had been replacing the Common Law with the French Civil Law as if Anglophones "were a conquered people." The lawyers had complained for years through writing to competent authorities before realizing that, if they did not take concrete actions, they would be swallowed up by the dominant Francophone system. So they held a Common Law conference on May 9, 2015, which was followed by a second conference in Buea where they made a declaration reinforcing their position.
Although they had sent a communiqué to the presidency of the Republic of Cameroon, nobody listened to them. Instead of defending the Common Law lawyers, the Minister of Justice insulted them in the government newspaper, the Cameroon Tribune. As a result, they protested and insisted on talking only with the President of the Republic of Cameroon or his properly mandated agent, because they had exhausted all negotiation with the executive and the legislature. They had filed a petition to the National Assembly and the Senate, and they were planning to file a petition to the Constitutional Council for the determination of the question of whether there had been any act of union between West Cameroon and East Cameroon. They planned to proceed to international jurisdictions such as the African Commission on Human and Peoples' Rights and the Human Rights Commission if the government did not listen to them. "We are going to seize the international community because these are grave abuses of human rights. The international community cannot fold its arms and allow us to be brutalized in our land," Barrister Bobga Harmony said in Africanews Morning Call [19].
Cultural domination in the educational sector
Teachers and the general public joined the lawyers in the strike. They reportedly opposed what was described as the "imposition of French in schools in Anglophone parts of the country." According to Catherine Soi, reporting for Aljazeera [20], students battled on their own at school because even private school teachers had deserted classrooms in support of the public sector teachers, and so many classrooms and schools across Ambazonia were empty. They wanted the government to stop sending teachers who spoke only French or Pidgin English. Even students supported the strike action because, after completing school, they were unable to find jobs.
"For over fifty years Anglophone students have not been able to have a headway in Cameroon in most disciplined that bring about development: science and technology because the government has refused to train teachers for our schools," declared Tassang Wilfred over Aljazeera (2016).
According to the University of Buea strike report [21], a mammoth crowd of students came out protesting in order to attract the university authorities' attention to their plight. A student carried a placard on which was written: "enough is enough." They had a variety of complaints: the non-payment of the 50,000frs CFA that the government had promised them, the cancelation of the 10,000frs CFA penalty fee for the late payment of school fees, and the requirement to pay fees before being given a semester result; and, as was the general cry in the secondary and high schools in the Anglophone zone, they also demanded the removal of French-speaking lecturers from the faculty of the university.
They stood in front of the Administrative Block wishing to meet the Vice Chancellor to tell her their problems, but instead security forces took her away, and a huge number of security forces were sent to disperse them. As they arrived, the students ran in different directions and the atmosphere became very misty because the security officers had thrown teargas and fired gunshots into the air. The students shouted "no violence" as they ran away for safety. Although students were beaten and arrested, this did not dampen the spirit of the strike action, so the students left and marched into the street.
The white coffin revolution
According to Bamenda Protest: Close to One Hundred Wounded [22], protesting residents voiced other grievances, including poor roads, joblessness, and lack of water. "On November 21, 2016, Mancho Bibixy, the newscaster of a local radio station, stood in an open casket in a crowded roundabout in the Anglophone city of Bamenda. Using a blow horn, Bibixy denounced the slow rate of economic and structural development in the city." "When that Chinese them di come, m-e-y they come tell we when they dig road, na we di fix'am back," he declared, voicing his discontent with the bad state of roads that the Chinese would only construct but would not repair. He showed his defiant attitude by declaring that he was ready to die while protesting against the social and economic marginalization of Anglophones in the hegemonic Francophone state.
"I don tell them, if na teargas I go drink'am." "Let them chase me….it won't mean anything to me," he declared. He emerged as a key leader in the Anglophone political movement who were among the first to be arrested and he was later slammed a 15-year prison term (Figure 1).
A video entitled "Bamenda Protest Close to 100 Wounded" [22] shows how the white coffin was carried about and how a mammoth crowd of young men followed it, with Bibixy himself leading.
"We can never be defeated by the police," they declared when the police came to stop them. They rounded-up one of them and chased the others who came to stop their peaceful march. One can clearly hear a voice saying in the video: "You no take hi gun?" asking whether he has not taken his gun. "Cameroon must change," "That independent na today where i go start o-o," which means: the independence will start today.
Young men came on their bikes honking while those who were on foot shouted. Protesters filled the streets.
"I say… bamenda di hot yah," they said in the background. "We need change in Bamenda," they said. "whosai the police them dey where they di try their nonsense, make them come now," they declared with determination.
Then suddenly trucks of military men arrived, shooting into the air; they killed a good number and wounded about a hundred. "Jesus, they are killing us in Bamenda," they said. Another truck arrived on which was written "Gendarmerie Nationale," and it sprayed a huge amount of water on a hostel, Grand Plaza, certainly where some of the protesters were hiding. The video shows two persons hurriedly taking away a shot person on a bike, and some were taken and given private treatment at homes.
A video entitled Bamenda Boiling, They Escaped Teargas, from December 8, 2016 [23], shows some young men shouting loudly and running away as fast as they could from the police, who were throwing teargas at them to stop them from demonstrating. Some covered their nostrils with handkerchiefs to avoid inhaling the toxic gas.
The struggle was not only between Francophones and Anglophones but also between Anglophones and their own elite, who enjoyed juicy positions in the government and were not ready to resign from them. They were enablers: the government used them to crush their own people. They would always preach anti-struggle campaigns and would bring other Francophone authorities to fight against their people. Each time they visited the Anglophone zone, there was a battle between them and their people. The elite wanted to maintain the status quo, while the general population wanted change.
The video Bamenda Boys against CPDM [24] shows a comic scene in which a young man brought a large catapult, took a stone to support the big stick, and another pulled the rope from behind him; they then took the catapult to confront the CPDM barons. According to Zigolo Tchaya 2016 [25], reporting for France 24, when the Prime Minister of Cameroon (an Anglophone) and the Secretary General of the Cameroon People's Democratic Movement, the party in power, went to Bamenda to hold a pro-government rally with its militants to calm down the striking lawyers and the teachers' association, who had been striking for 2 months, a group of young men burnt the CPDM party uniform of an elderly person who was going to attend the rally. The angry youth blocked the hotel where the Prime Minister and the Secretary General were lodging, and there was a confrontation between them and the security forces. According to Gigova [26], it led to four deaths, several wounded, and about 50 arrests. The Prime Minister, the CPDM Secretary General, the Governor of the North West region, and the national security adviser were forced to go into hiding.
Government response and human rights violations
Cameroon 2018 Human Rights Report [27] states that "although the law provides for freedom of peaceful assembly, the government often restricted this right. The law requires organizers of public meetings, demonstrations, and processions to notify officials in advance and does not require prior government approval of public assemblies, nor does it authorize the government to suppress public assemblies that it has not approved in advance. However, officials routinely asserted the law implicitly authorizes the government to grant or deny permission for public assemblies".
It equally states that, "the government often refused to grant permits for gatherings and used force to suppress assemblies for which it had not issued permits. Authorities typically cited "security concerns" as the basis for deciding to block assemblies. The government also prevented civil society organizations and political parties from holding press conferences. Police and gendarmes forcibly disrupted meetings and demonstrations of citizens, trade unions, and political activists, arrested participants in unapproved protests, and blocked political leaders from attending protests." In The Stream on Aljazeera (2017) [28], Anne Marie Befoune put it as follows: "The strike action is a reflection of a bigger problem; people have had a lot of pain, frustration and anger in their hearts and they were just looking for the slightest opportunity to express what they feel." The irony is that each time the security forces brutalized the protesters, the protesters instead united against the common enemy, which was the government security forces.
The government responded with cruel torture and inhuman or degrading treatment or punishment of demonstrators. Although the constitution and the law prohibit such practices, there were reports that security force members beat, harassed, or otherwise abused citizens, including separatist fighters. Cases have been documented of security forces severely mistreating suspected separatists and detainees [27].
Below we show videos that demonstrate gross human rights violations against the lawyers, the students, and the general public.
Molestation of lawyers
The government sent over 5000 troops to thwart the Anglophone crisis. According to Zigolo, reporting for France 24 [25], the crisis was considered to be "a strong organized and well-coordinated violence from angry protesters and government did not want to allow that part of the country to be destroyed and the protesters too said they would not stop protesting until the government solved their problem".
According to StopBlaBlaCam [29], policemen beat the 'men in uniform,' the lawyers, with their batons in Buea. The whole city was also under lockdown, monitored by the Special Rapid Response unit (ESIR), the police, and the gendarmerie. There was a strong police presence to face the demonstrators. Incidentally, the policemen were demanding that the lawyers hand over their black robes.
On November 10, 2016, the demonstration of lawyers in Buea in the Southwest region met with a heavy-handed police response. Lawyers were reportedly brutalized, their offices ransacked, and their wigs and gowns seized by police. Many were injured and harassed in their cars. Their phones were seized and destroyed, and some were barred from joining the demonstrators. Police reportedly raided hotels in search of lawyers and harassed them (Figure 2).
Uprising and human rights abuses in Southern Cameroon
The video entitled Uprising 4: Police Brutality on Lawyers [31] clearly shows the commotion that took place in the Muea police station. One sees a police officer running after a young lawyer, and then another lawyer is pushed into the police station by yet another policeman. A third lawyer is beaten and pushed out of the police station. A policeman kicks another lawyer, who falls down and loses his watch, but the policeman pulls him up by dragging his coat. A stout female police officer encourages her colleague to hit the lawyer, clearly articulating the French words "frappe," "frappez-lui" over and over.
The episodes of police brutality in Cameroon were not limited to lawyers; they extended to University of Buea students as well as the general public. Many were molested by the police, and disturbing videos show police officers armed with sticks hitting them or rolling them in water, invading students' quarters and beating them.
Molestation of university students
The videos show appalling images of how French-speaking soldiers, who were alienated from the sufferings of English-speaking citizens, inflicted pain on them. Although they were in the same country, they could not communicate because they spoke different languages.
The video Police and Gendarmes Severely Torturing University Students in Buea Strike [32] was evidently filmed from inside a house, judging by the iron bars of the window. In it, two policemen force a student to lie down very fast: "Couches-toi," the policeman orders him.
"Comment ca," the young man retaliated by asking why. "Couche-toi vite," he ordered again. "Ne parle pas," "viens ici," "Enleve la cle ci," "viens d'abord ici, regarde la bas," they continuously ordered him. Then one of them raises his baton and hits him while the other forces him to lie down while they hit him counting the number of strokes in French. The police standing by takes the baton from his colleague and asks the students to roll on the soil while he hits him with all his force. "Tourne, c'est votre pays-ci?" he asked whether it was his country. "Vous savez que vous allez gravez?" he asked while hitting him whether he knew they would go on strike. The video entitled "2 police and Gendarmes severely torturing University Students in Buea, Buea strike [33] shows with a lot of noise in the background, two policemen harassing university students in their neighborhood. Three university students are laying down, one in a puddle and a female student is brought in and the police man brutally pushed her in the puddle.
"Attend d'abord, je vais te giffler hei," the policeman said in French threatening to slap the girl and then the girl's leg is pulled and is forcefully pulled in the puddle, rubbing her head in it.
"They go kill man," they camera man exclaimed that they would kill them. The Southern Cameroon updates: Police Brutality at UB 28/11/2016 [34] certainly taken from a story building shows how a group of police and gendarmes in the street of Molyko molested a young man. While one of the policemen was pulling him ahead, another one came from behind and kicked him and he fell down. It is clearly seen how one of the security officers had wounded a female student's head, one also sees a student whose t-shirt had been torn and blood dripping from his head.
The video Université de Buéa - les forces de l'ordre entrent dans les résidences et torturent des étudiants [35] starts with the camerawoman inviting fellow students to run for safety. "Yuna enter o-o-o-h," she urged the other students. Then students are seen running very fast into their residences for safety as scores of security men followed behind them with batons. They caught some married women and hit them severely. "They go kill we that married woman them, I swear," the camerawoman lamented. A woman is dragged from her house and mercilessly hit by the security officers. "Pour les hommes faire les descendre," an order is given in French to bring out all the men. "Faire descendre tous les hommes," the order is repeated for emphasis. A boy is removed from his house and the French-speaking security officers hit his head with their batons.
"Amenez-le, ca va," an order is given and the boy is held from his belt. The Centre for Human Rights and Democracy in Africa [36] reported that at least 14 student hostels were attacked that day. More than 140 rooms were vandalized, their occupants tortured on the Buea (Molyko) main boulevard, and some students were asked to sing that "an Anglophone will never rule the country." Even though most students were finally released, several of them spent 3 days in detention facilities in overcrowded cell conditions, with little or no communication with their families.
The molestation of the general public
The video Bamenda in Turmoil Today, December 2016, Part 1 [37] shows a group of predominantly young men lamenting because a policeman had shot one of the protesters, who wore a t-shirt with white and red lines on it, stained by blood and mud. He lay helplessly in the hands of his comrades.
"Oh my God, wait, wait. Bring he s-o, hold i hand," they held him and he dangled in their hands while those around him lamented.
The video This is Bamenda [38] shows a group of young men carrying peace plants and marching very fast along a street in Bamenda. They were carrying a dead young man to the main street in Bamenda, the Commercial Avenue. The commentator said, "Bamenda is turning into something else," meaning that many people were dying in Bamenda, and then he called on the international media: "BBC, CNN and Aljazeera, you guys need to support us, people are dying." The spectators and the participants shouted and lamented.
"Y-e-e-u-h Bamenda, Bamenda, Bamenda, Bamenda," he shouted several times. "w-e-e-e-h massa," he shouted several times again. Then the dead man is shown with a blue band that fastened him to the stick he was tied. He is being carried away by other young men marching very fast and singing: "Amba, Amba, Ambazonia." It means they identify themselves more with Ambazonia than Cameroon. DOI: http://dx.doi.org/10.5772/intechopen.91053
Internet shutdown
The various videos incriminated Cameroon's security forces, and, as [27] shows, Cameroon consequently experienced its first Internet shutdown in January 2017, lasting 93 days. It came after Anglophone teachers, lawyers, and students went on strike over alleged social bias in favor of Francophones. Education, financial, and health-care institutions, as well as businesses that relied on Internet access, were crippled. International bodies applied pressure on the government to restore Internet access. Despite Internet access being restored in April 2017, there were continuing reports of network instability. In October 2017, the government effected a second Internet blockade, targeting social media and apps such as WhatsApp and Facebook, through which videos like those described above were shared. The shutdowns continued to affect the country economically, and many citizens were forced to travel back and forth to regions with Internet access for business or information.
The Ambazonian war
Two weeks into the protests, more than 100 protesters had been arrested, and six were reported dead [39]. Throughout September, separatists carried out two bombings, one of them targeting security forces in Bamenda (Quartz Africa [40]); while the first bombing failed, the second injured three policemen (Reuters [41]). On September 22, Cameroonian soldiers opened fire on protesters, killing at least five and injuring many more [40]. On November 30, 2017, the president of Cameroon declared war on the Anglophone separatists (Sun Newspaper [42]).
"I have learned with emotion the assassination of four Cameroonians military and two policemen in the South of our country ---things must henceforth be clear. Cameroon is victim of repetitive attacks claiming a secessionist movement. Facing these aggression acts, I would reassure Cameroonians that everything has been put in place to take out of the dark these criminals so that peace and security reigns all over the territory." This marked the start of a very violent confrontation between government forces and armed separatists.
Non-state actors, including local armed groups, also bear much responsibility for the violence. Separatist militias are battling government forces as well as pro-government "self-defense" forces, which consist of what the separatists call criminal gangs that terrorize local inhabitants and wreak havoc. The military also conducts a deliberate, violent campaign against the civilian population. According to Lawyers' Rights Watch Canada [43], "There is evidence that much of the violence is intentional and planned, including retaliation attacks on villages by government security forces, often followed by indiscriminate shooting into crowds of civilians, invasion of private homes and the murder of their inhabitants, and the rounding up and shooting of villagers." According to the International Crisis Group (ICG), at least 1850 people have been killed since 2017; the ICG reports that at least 235 soldiers and police officers, 650 civilians, and close to 1000 separatists have lost their lives; Anglophone federalists estimate 3000-5000 dead, and separatists estimate 5000-10,000 dead.
Arbitrary arrests and detentions
The Centre for Human Rights and Democracy in Africa [44] reports that in early January 2017, the Cameroon Anglophone Civil Society Consortium (Consortium or CACSC) agreed to meet with the government about the release of protesters arrested during a 2016 demonstration in Bamenda. The Consortium accused the government of shooting four unarmed youths and proceeded to declare "Ghost Towns" on January 16 and 17. The report equally states that, "in response, the government cut the Internet and banned the activities of two groups: the Southern Cameroon National Council (SCNC) and the Consortium on January 17, 2017. The same day, two prominent Anglophone civil society activists who headed the Consortium: Dr Felix Agbor NKongho and Dr Fontem Neba were arrested".
On January 9, 2017, armed soldiers forcibly entered the home of Mr Mancho Bibixy, a journalist and newscaster of "Abakwa" (a local radio program reporting on the rights of the Anglophone minority), and arrested him, along with six other activists. He was taken to a vehicle with neither shoes nor identification papers and was arbitrarily detained for 18 months, and his hearings were postponed more than 14 times.
On May 25, 2018, Bibixy and his co-accused were sentenced to between 10 and 15 years of prison each by a military court, for acts of terrorism, secession, hostilities against the state, propagation of false information, revolution, insurrection, contempt of public bodies and public servants, resistance, depredation by band, and non-possession of national identity card. He was being held in an overcrowded cell at the Kondengui Central Prison, a maximum-security prison in Yaoundé.
Between September 22 and October 17, 2017, 500 people were arrested, with witnesses describing the detainees as being packed into jails in the South West region. In December 2017, a group of about 70 heavily armed Cameroonian soldiers and BIR sealed off the village of Dadi and arrested 23 people who were returning from their farms or standing in front of their homes.
On January 5, 2018, 47 separatist activists, including Sisiku Ayuk Tabe of the proclaimed Interim Government of Ambazonia, were arrested and detained by Nigerian authorities in Abuja. The detainees were afterwards repatriated and imprisoned incommunicado in Yaoundé for 6 months awaiting trial. They were not given access to their lawyers nor charged with any offense.
Mass arrests and detentions have caused harsh and often life-threatening prison conditions in Cameroon, including gross overcrowding, lack of access to water and medical care, and deplorable hygiene and sanitation. Prisoners are transferred out of the region to other more secure areas.
Internally displaced persons (IDPs)
Several hundred thousand persons abandoned their homes in some localities of the Northwest and Southwest Regions because of the socio-political unrest. Estimates of IDPs varied depending on the source, with the government estimating 74,994 IDPs as of June, while the United Nations estimated 350,000 IDPs from the Northwest and Southwest as of September.
In December 2017, the Senior Divisional Officer for Manyu, Oum II Joseph, asked the residents of Manyu in Akwaya, Eyumojock, and Mamfe sub-divisions to relocate or be considered accomplices or perpetrators of the ongoing criminal acts committed against security and defense forces [45].
By the end of December 2018, the crisis had forced mass displacement of the population in the North West and South West regions, with estimates of between 450,000 and 550,000 displaced persons. This represents more than 10% of the regions' population. Cameroon now has the sixth largest displaced population in the world. Many are fleeing violence resulting from raids on villages and surrounding areas. They take refuge in the forests, where they lack hygiene, health services, sanitation, shelter, and food. The United Nations Office for the Coordination of Humanitarian Affairs estimates that approximately 32,000 Cameroonians are registered refugees in Nigeria. More than 200 villages have been partly or completely destroyed, forcing hundreds of thousands of people to flee. The rate of attacks has increased steadily, usually causing significant damage. An additional 30,000 to 35,000 people have sought asylum in neighboring countries.
Destruction of schools and villages
Separatist activists seeking an independent state for the country's English-speaking regions began to set fire to schools and attack teachers and students to enforce a boycott they had declared on local schools. In June 2018, UNICEF reported that at least 58 schools had been damaged since the beginning of the crisis in 2016. Human Rights Watch documented 19 threats or attacks on schools and 10 threats or attacks on education personnel (Figure 3).
Most children in the two regions have been deprived of the right to an education, with 30,000-40,000 children affected. As of June 2018, armed separatists had reportedly attacked 42 schools, at least 36 of which were burnt down; figures from Cameroonian authorities indicated that at least 120 schools had been burnt. Rural areas are especially affected.
Anglophone villages suspected of harboring separatists or arms have been burned and pillaged in both the South West and North West regions. Homes have been burned to ashes, sometimes with their inhabitants inside. About 206 settlements have been raided and partially destroyed by state defense forces during attempts to crack down on armed separatists. Several villages in Mbonge and Konye subdivisions have been completely emptied of their populations. Civilian witnesses say that army attacks are routinely followed by the ransacking of houses and shops, the destruction of food stocks, and the rounding up and mistreatment or killing of civilians, often as reprisals for the killing of a member of the defense and security forces (Figure 4).
Discussions
One of the key ways social movements engage in cultural resistance is through the production and dissemination of multiple forms of media in order to mobilize support, to reach out for support beyond those already in agreement with movement claims, and to increase the legitimacy of their claims and demands. Social movements operate at a considerable disadvantage when trying to influence news portrayals of issues, compared with their better-funded opposing groups and organizations.
Anglophones or Ambazonians who defend themselves against the Cameroon security forces that kill them are presented in the state television and other media as "terrorists" and never as people fighting for a just cause, whereas, as seen above, they did not start the war; it was declared on them. The mainstream media equally promoted hate speech and incitement to violence, which further radicalized separatist groups. Government officials refer to protesters in dehumanizing or incendiary terms, such as "dogs" and "terrorists," in the mainstream media. When the security agents who terrorize the population are presented in the mainstream media, they are portrayed as valiant and patriotic agents of the republic who protect the population. Did they really protect the population when they tortured them, arbitrarily arrested them, and burned their houses, as seen above? Media thus serve to propagandize and to serve the interests of the powerful who control and finance them. The propaganda model shows that media function to represent the agendas of the dominant social, economic, and political groups that exercise power nationally and globally. Therefore, social movements face difficulties in their attempts to transmit their claims and to traverse the gap between their intended messages and their target audiences.
Activists in the Ambazonian crisis created a strategy that Mattoni [47] calls "alternatives": the creation of their own independent media or public forums of communication in response to a lack of interest or bias on the part of established media. Accordingly, in the Ambazonian crisis, many videos were produced that facilitated mobilization and the production of a counter-narrative to the 'official story,' which claims that there is no Anglophone problem in Cameroon and asserts the professionalism of the security forces. The Internet makes the process of sharing easier and faster, and with a potentially larger audience than ever before. These messages in the videos from the alternative media environment have made their way into mainstream mass media, such as the various reports carried by BBC, France 24, TV5 Monde, etc.
Conclusion
The Ambazonia crisis was triggered by the Southern Cameroonians' attempt to break the dominant Francophone cultural hegemony. They entered the union from a weaker position, with a numerically smaller population. As a result, La République du Cameroun has been making efforts not just to dominate them but to absorb them into the broader Francophone cultural system. The dignity and statehood of Anglophones were silently destroyed, not by the French-speaking community at large, but by a government that was led and dominated by Francophones.
Toward the end of 2016, the two Anglophone regions were rocked by demonstrations and strikes, initially led by lawyers, teachers, and students and eventually involving a wider section of the population. They protested against what they viewed as the growing marginalization of Anglophone linguistic, cultural, and educational traditions and systems in various sectors, such as the failure to use the Common Law in courts and Standard English in classrooms, and they demanded improved representation in politics.
Many videos were produced showing the repressive response of the government, in opposition to the official narratives produced by the mainstream media. We collected 30 of them because they provide information that cannot be provided by other types of data. They are used as 'proofs of facts.' The videos show appalling images not just of how French-speaking soldiers tortured Anglophones but also of their inability to communicate with them adequately, although they share the same country.
The government response to the demonstration led to the violation of the following rights: the right to life, liberty, and security of persons; the right to be free from torture or cruel, degrading and unusual treatment; the right to be free from arbitrary arrest and detention; the right to association and peaceful assembly; the right to equality before and equal protection of the law; the right to take part in the conduct of public affairs; the right to have criminal charges and rights determined by a competent, impartial and independent tribunal (and in the case of civilians, a civilian court); the right to a fair trial, representation by a lawyer of choice, and (where the defendant does not have means to pay for legal representation) legal aid; the right to prompt, detailed notice of charges in a language understood by the defendant and adequate time and facilities to prepare a defense against them and communicate with counsel; the right to an interpreter where required; the right to appeal; the right not to be persecuted for any act or omission that was not a crime when committed; and the right to self-determination.
|
v3-fos-license
|
2023-12-31T16:11:08.522Z
|
2023-12-01T00:00:00.000
|
266680572
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://doi.org/10.1016/j.rmcr.2023.101972",
"pdf_hash": "bab731cf2f97ee9c022365b3933902558d5c1470",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46676",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "347a430e63fbb2bcc5e85a76f6e6b7fbe43137a6",
"year": 2023
}
|
pes2o/s2orc
|
A case of angioimmunoblastic T-cell lymphoma presenting with migration of lung shadows
A 62-year-old woman presented with chronic cough. Chest CT showed multiple nodules and consolidation. Bronchoscopy could not confirm a specific diagnosis. Because her symptoms and lung opacities improved spontaneously, she was followed without treatment. Seven months later, chest radiography showed worsening of consolidation and a tumorous shadow. After performing cervical lymph node and lung tissue biopsies, we diagnosed her as having angioimmunoblastic T-cell lymphoma (AITL). Cases of AITL showing migration of lung shadows have not been reported. AITL development is influenced by immunodeficiency and reactivation of EBV, and migration of lung opacities may be related to the patient's immune status.
Introduction
Pulmonary opacities, which repeatedly appear and disappear over the natural course of disease, are often explained as "wandering" or undergoing "migration" because they seem to move. Infectious or non-infectious pulmonary diseases show wandering lung shadows. Here, we report a case of angioimmunoblastic T-cell lymphoma (AITL) that showed wandering shadows. This would appear to be a rare case because, to our knowledge, no cases of AITL presenting with migrating shadows have been reported.
Case presentation
A 62-year-old Japanese woman presented to our hospital complaining of coughing for 2 months, and bilateral abnormalities were revealed on chest radiography. Her past medical history included subarachnoid hemorrhage at 50 years of age. She had not started taking any new medications for several years. She had smoked 10 cigarettes a day for 27 years, stopping at 50 years old. She had no history of drinking or exposure to chemicals or dust and no family history of respiratory diseases or malignancy. On admission, her body temperature was 36.3 °C and SpO2 was 97% on ambient air. Chest auscultation revealed normal sounds. There were no superficial swollen lymph nodes, skin rash, or pitting edema. Arterial blood gas analysis on ambient air showed pH 7.52, PaCO2 34.8 Torr, PaO2 66.6 Torr, and HCO3− 28.8 mmol/L. Laboratory tests showed lymphocytopenia: her white blood cell count was 4800/μL (neutrophils 81.1%, eosinophils 3.8%, lymphocytes 6.9%), hemoglobin 13.7 g/dL, and platelets 31.1 × 10⁴/μL, and immunoglobulin levels were low: IgG 387 mg/dL, IgA 114 mg/dL, IgM 50 mg/dL, and IgE 17 IU/mL. Lactate dehydrogenase was 276 IU/L, C-reactive protein was 1.68 mg/dL, and soluble interleukin-2 receptor was 1430 U/mL (reference range: 121-613 U/mL). Aminotransferases and creatinine were within normal range. Autoantibodies related to connective tissue diseases, such as anti-nuclear antibody, rheumatoid factor, and anti-neutrophil cytoplasmic antibody, were negative; tumor markers were not elevated; and serum cryptococcal antigen was negative. Urinalysis results were within normal range.
A chest radiograph showed consolidation and a tumorous shadow mainly in the lower bilateral lung fields (Fig. 1A). There were shadows overlapping the cardiac outline and left pulmonary hilum, and the border of the lung field was indistinct. Chest computed tomography (CT) showed multiple nodules mainly in the bilateral lower lung fields, some of which were fused and formed a tumorous shadow or consolidation (Fig. 2A). Ground-glass opacities and interlobular septal thickening were also present around the nodules and tumor. CT of the paranasal sinuses showed slightly thickened mucosa of the maxillary sinus but no fluid accumulation. The above findings indicated organizing pneumonia, granulomatosis with polyangiitis, cryptococcosis, and lymphoproliferative disorders as differential diagnoses. We performed bronchoscopy with bronchoalveolar lavage (BAL) in the left upper lobe (S3) and transbronchial lung biopsy in the left lower lobe (S9). We recovered 71 of 150 mL (47.3%) of BAL fluid, which showed a white blood cell count of 4.0 × 10⁴/mL (neutrophils 0.2%, lymphocytes 8.6%, macrophages 91.2%) and a CD4/CD8 ratio of 0.60. No atypical cells or hemosiderin-laden macrophages were present. No significant pathogens were isolated, and multiplex PCR of her BAL fluid was negative for viruses and Mycoplasma pneumoniae (FTD Resp 21 Kit; Fast Track Diagnostics, Silema, Malta). Transbronchial lung biopsy showed non-specific changes, so we could not confirm a diagnosis. Because her symptoms and lung opacities improved spontaneously after her discharge (Figs. 1B and 2B), and decreased lymphocytes and immunoglobulins were noted, we decided to observe her condition without administration of medication. The reduction in lymphocytes and immunoglobulins was examined in another hospital, and she was diagnosed as having common variable immunodeficiency (CVID).
Seven months after her first admission, she revisited us due to worsening of her cough. A chest radiograph showed extensive consolidation and enlargement of a tumorous shadow in the left lung field (Fig. 1C). Nodular shadows had also emerged in the right hilum and middle right lung field. CT showed enlargement of nodules mainly in the left lung field (Fig. 2C) and multiple lymphadenopathies from the neck to the abdomen. A biopsy of her right cervical lymph node (Fig. 3) revealed high endothelial venules (HEV) with thickened walls showing irregular dendritic branching and increased pale/clear cells around the HEV. Immunohistochemical staining revealed atypical cells positive for CD3, CD5, CD10, bcl-2, PD-1, and ICOS, and negative for CD20 and CD79a. Additionally, follicular dendritic cells positive for CD21 were increased around the HEV, and some of the B cells were positive for EBER-ISH. Her serum level of anti-EBV-VCA IgG was high and those of anti-EBV-VCA IgM and anti-EBV EBNA antigen were low, suggesting that she had already been infected with EBV. We also performed a surgical lung biopsy in the left upper lung, which showed findings similar to those of the right cervical lymph node biopsy (Fig. 4). From these histopathological manifestations, we diagnosed her as having angioimmunoblastic T-cell lymphoma (AITL). She received chemotherapy and autologous peripheral blood stem cell transplantation and has been followed up for several years.
Discussion
This is a case of AITL in which chest opacities repeatedly appeared and disappeared. We could not initially confirm the diagnosis, and the chest radiographic findings showed spontaneous improvement. However, chest opacities re-emerged with worsening of symptoms, and we could finally diagnose AITL from the histopathological manifestations of the cervical lymph node and lung biopsies.
AITL is defined as a peripheral T-cell lymphoma characterized as a systemic lymph node-involving disease with a diverse lymphocytic infiltrate and proliferation of high endothelial venules and follicular dendritic cells [1]. AITL represents 1-2% of non-Hodgkin lymphoma and is more common in older adults, with the peak incidence in the sixties. Various symptoms are characteristic, such as lymphadenopathy, hepatosplenomegaly, skin rash, fever, and weight loss. Although the prognosis of AITL depends on risk factors, it is reported to be poor, with a 5-year median survival of 32% [2].
Table 1. Diseases that show migration of shadows.
Low levels of lymphocytes and immunoglobulins were found in our patient, and she was initially diagnosed as having CVID. Malignant lymphoma sometimes develops in immunodeficient patients. It is suggested that immunodeficiency causes infection with, or reactivation of, pathogens associated with tumor development, such as EBV, and that genetic variation in the embryonic cell lineage is related not only to primary immunodeficiency but also to the development of lymphoma [3]. A case of AITL developing in a patient using immunosuppressive drugs was reported, which suggests an association of immunodeficiency with AITL [4]. Moreover, the development of AITL is related to EBV infection [2]. In the present patient, reactivation of EBV due to immunodeficiency might have contributed to the development of the AITL.
Lung involvement is reported in 7% of AITL cases [5]. Chest imaging findings often include nodular shadows or consolidation, and enlarged mediastinal lymph nodes, pleural effusion, ground-glass opacities, and multiple nodules have also been reported [6][7][8]. In the present case, besides the formation of consolidation and a tumorous shadow, chest opacities spontaneously disappeared and re-emerged. This pattern of chest imaging is labelled "migration", and there are several diseases that show migration of shadows (Table 1). However, within our literature search of PubMed, we could find no cases of AITL that showed migration of shadows.
Among lymphoproliferative diseases, migration of shadows was reported in a case of lymphomatoid granulomatosis (LYG) [9]. As mentioned above, the development of AITL is associated with EBV infection, and LYG also develops through reactivation of EBV due to immunodeficiency. It has been suggested that the migration of shadows in LYG is associated with the balance between immune status and the proliferation of EBV and tumor cells [10]. Cases of AITL with pulmonary involvement may show migration of shadows through the same pathogenesis as in LYG.
Conclusion
We reported a case of AITL that showed migration of lung opacities.Migration of shadows in patients with AITL has never been reported; however, lung involvement of AITL should be considered as a differential diagnosis when migration of shadows is present on lung imaging.
Declaration of competing interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Fig. 1. Chest X-rays of the patient. (A) Image at first visit to our hospital showed bilateral multiple tumorous and nodular shadows. (B) Image obtained 3 months after first visit showed improvement of opacities in the right lower lung field, and nodular shadows were partly enlarged in the left lung. (C) Image at 7 months after first visit showed extensive consolidation and enlargement of tumorous shadow in the left lung field. Nodular shadows were also present in the right hilum and the middle right lung field.
Fig. 2. Chest CT images of the patient. (A) Image at first visit to our hospital showed multiple tumors and nodules in the bilateral lung fields. (B) Image at 4 months after first visit showed shrinking of tumors in the bilateral lower lobes and an enlarged nodule in the left lung. (C) Image at 7 months after first visit showed nodules were enlarged mainly in the left lung and a tumorous shadow emerged in the right lower lobe.
Fig. 3. Pathological features of the right cervical lymph node. (A) HE staining revealed high endothelial venules (HEV) with thickened walls showing irregular dendritic branching and pale/clear cells increased around the HEV. Immunohistochemical staining revealed the atypical cells to be positive for CD3 (B), a marker of matured T cells, and CD10 (C), a marker of follicular helper T cells, and negative for CD20 (D), a marker of B cells.
Fig. 4. Pathological features of lung tissues from the surgical lung biopsy. (A) Panoramic image of HE staining showed a well-circumscribed nodular lesion. HE staining at × 20 (B) and × 60 magnification (C) showed HEV and pale/clear cells similar to the findings of the cervical lymph node.
|
v3-fos-license
|
2020-05-28T13:05:35.732Z
|
2020-05-28T00:00:00.000
|
218905045
|
{
"extfieldsofstudy": [
"Medicine",
"Computer Science"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.frontiersin.org/articles/10.3389/frobt.2020.00060/pdf",
"pdf_hash": "e2c96ea1d2690735cbe6b724658d0ecb3a22a05e",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46677",
"s2fieldsofstudy": [
"Computer Science"
],
"sha1": "e2c96ea1d2690735cbe6b724658d0ecb3a22a05e",
"year": 2020
}
|
pes2o/s2orc
|
Learning to Avoid Obstacles With Minimal Intervention Control
Programming by demonstration has received much attention as it offers a general framework which allows robots to efficiently acquire novel motor skills from a human teacher. While traditional imitation learning that only focuses on either Cartesian or joint space might become inappropriate in situations where both spaces are equally important (e.g., writing or striking task), hybrid imitation learning of skills in both Cartesian and joint spaces simultaneously has been studied recently. However, an important issue which often arises in dynamical or unstructured environments is overlooked, namely how can a robot avoid obstacles? In this paper, we aim to address the problem of avoiding obstacles in the context of hybrid imitation learning. Specifically, we propose to tackle three subproblems: (i) designing a proper potential field so as to bypass obstacles, (ii) guaranteeing joint limits are respected when adjusting trajectories in the process of avoiding obstacles, and (iii) determining proper control commands for robots such that potential human-robot interaction is safe. By solving the aforementioned subproblems, the robot is capable of generalizing observed skills to new situations featuring obstacles in a feasible and safe manner. The effectiveness of the proposed method is validated through a toy example as well as a real transportation experiment on the iCub humanoid robot.
INTRODUCTION
Over the past few years, there has been growing demand for bringing robots from industrial manufacturing lines to human-centered scenarios, thanks to ever-evolving sensors, actuators, and processors. Increasing computational power also gives rise to novel control and learning algorithms. Nevertheless, contrary to the high maturity of industrial robots and relatively simple service robots, how to deploy general-purpose robots, such as humanoids, into cluttered environments still remains a formidable challenge. Conceivably, before humanoids can successfully operate in daily-life settings, a series of challenges need to be confronted. One of the major concerns for robots operating outside laboratory environments is obstacle avoidance. Indeed, obstacle avoidance represents a necessary capability for robots to become more autonomous, flexible, and safe in order to cope with complex working environments. In addition, as humanoid robots could sometimes be expected to work alongside human beings, classical high-gain feedback control should be avoided, as it could make human-robot interaction dangerous. Hence, compliance is a highly desired skill in this case, as the incorporation of variable impedance skills into the robot controller allows for safe physical human-robot interaction (Abu-Dakka et al., 2018).
In view of high-dimensional state and action spaces of humanoid robots, a user-friendly method to endow them with various skills is Programming by Demonstration (PbD), also known as imitation learning . The complete procedure of PbD typically consists of a demonstration phase, where robots are shown the desired behavior, and a reproduction phase, where robots are required to reproduce the learned skills, typically with the help of movement primitives (Ijspeert et al., 2013;Huang et al., 2019). Under the framework of PbD, humanoid robots are able to efficiently acquire novel motor skills from demonstrations of a human teacher. Following this paradigm, many successful results have been achieved, such as peg-in-hole task (Abu-Dakka et al., 2014), cleaning task (Duan et al., 2019), table tennis (Huang et al., 2016), etc.
Traditional imitation learning that only focuses on either Cartesian or joint space might become inappropriate in situations where both spaces are equally important, such as in writing or striking tasks. Consequently, hybrid imitation learning of skills in both Cartesian and joint spaces simultaneously has recently emerged. In order to further generalize the applicability of hybrid imitation learning, this paper considers the integration of obstacle avoidance in the context of hybrid imitation learning such that robots can reproduce the learned skills in a broader range of situations. Specifically, we consider the following three aspects: (1) designing a proper potential field so as to bypass obstacles. As a common technique for obstacle avoidance within PbD, the potential field formulation as well as its hyperparameters play an important role in realizing obstacle avoidance. In order to compare the performance of different potential fields, a novel imitation metric is proposed. Moreover, a kernel-based reinforcement learning algorithm is employed to determine the hyperparameters of the chosen potential field; (2) guaranteeing that joint limits are respected when adjusting trajectories in the process of avoiding obstacles. During obstacle avoidance, joint trajectories are usually modified according to the effects of the potential field. Therefore, the evolution of joint trajectories shall be constrained by bounding them within the allowable range; (3) determining proper control commands for robots such that potential human-robot interaction is safe. To do so, we propose to control the robot with a minimal intervention controller rooted in linear quadratic tracking.
The rest of the paper is organized as follows: section 2 reviews the previous work related to our problem. Section 3 presents the proposed framework for learning to avoid obstacles with minimal intervention control. Subsequently, section 4 reports the results of the toy example as well as the experiments on the iCub humanoid robot to show the effectiveness of the proposed method. Finally, conclusions and future routes of research are given in section 5.
RELATED WORK
In general, obstacle avoidance is a classical topic. Due to its great significance, it has been extensively studied in a broad range of fields not only limited to robotics but also computer graphics, computer aided design, urban planning, etc. Among the considerable works dedicated to the topic, obstacle avoidance can be roughly classified into two categories: motion planning and reactive methods.
Sampling-based motion planning algorithms normally rely on planners, such as the Probabilistic Roadmap (PRM) (Kavraki et al., 1996) and Rapidly-exploring Random Tree (RRT) (LaValle, 1998) algorithms, along with their numerous extensions. In order to facilitate collision detection of the samples, polyhedrons are usually used as proxies for robots and obstacles. Collision avoidance strategies elicited from sampling-based motion planning can usually generate globally optimal trajectories, but become computationally expensive and time consuming in the case of high-dimensional multi-body problems or narrow-passage problems. Optimization-based techniques can also be employed for obstacle avoidance. The collision-free trajectory can be obtained by optimizing a cost function formulated as a combination of obstacle cost and other indexes, such as smoothness. Various strategies for optimization could be applied to motion planning as well. For example, Zucker et al. (2013) presented a trajectory optimization procedure based on covariant descent. However, considering that gradient-based methods could get stuck in local optima, Kalakrishnan et al. (2011) proposed to instead use a derivative-free stochastic optimization approach to motion planning. The optimized trajectory is obtained by rolling out a series of noisy trajectories, and the candidate solution is updated with the received cost, with no gradient information required in the process. Also, in order to speed up the optimization process, Schulman et al. (2014) proposed to find collision-free trajectories using sequential convex optimization where collision is penalized with a hinge loss.
By contrast, reactive methods can make sure that robots can behave in response to the sensed obstacles in real time. Yet, the limitations lie in the design of priority assignment between the ongoing tasks as well as the obstacle avoidance task. Therefore, the solution is usually satisfying local conditions and thus suboptimal. In addition, there are also stability issues regarding reactive methods, as identified by Koren and Borenstein (1991).
In the framework of PbD, obstacle avoidance is usually realized with the help of potential fields, i.e., collision-free movement is generated by a repellent force obtained from a gradient of a potential field centered around the obstacle (Khatib, 1986). Kim and Khosla (1992) proposed a new formulation of the artificial potential field for obstacle avoidance using harmonic functions. A clear advantage of harmonic functions is that they can completely eliminate local minima in a cluttered environment. In the spirit of harmonic potential functions, within the context of dynamical-system-based robot control, Khansari-Zadeh and Billard (2012) proposed a real-time obstacle avoidance approach that can steer robot motions generated by the dynamical system. Since such modification happens locally, the dynamical system (DS) can still be kept globally stable, i.e., all trajectories can reach the target point. In addition, the method can handle multiple obstacles without changing the equilibrium of the original dynamics. Huber et al. (2019) further extended such methods to the case of moving convex and star-shaped concave obstacles. The impenetrability of the obstacles' hull and asymptotic stability at a final goal location were proven by contraction theory.

FIGURE 1 | Illustration of the proposed approach to obstacle avoidance within hybrid imitation learning. First, multiple demonstrations are presented to the robot and the statistical information is encoded by GMR. Subsequently, with the help of the Gaussian product between the Cartesian and joint trajectories, the inconsistency between both spaces is unified. A bio-inspired potential field is employed for obstacle avoidance, since it better preserves the fidelity to the original trajectory. The hyperparameters of the potential field are determined by reinforcement learning, with the learned trajectory driven by a minimal intervention controller.
Compared with previous work on obstacle avoidance within PbD, the contribution of our proposed method focuses on tackling the aforementioned three subproblems. To address problem (1), a novel imitation metric is provided such that the performance of different potential fields can be quantified. Based on such imitation metric, we can evaluate similarity between the trajectories modified due to obstacle avoidance as well as the original demonstrated trajectories. The hyperparameters of the chosen potential field are determined by a reinforcement learning algorithm. To solve problem (2), we parameterize the joint space into exogenous states using the hyperbolic tangent function. We show that the proposed method can guarantee that the evolution of joint trajectories is always bounded within the specified range. As for problem (3), we employ a minimal intervention control strategy based on linear quadratic tracking. The illustration of the proposed method is shown in Figure 1. In addition, a flowchart to summarize the whole procedure is shown in Figure 2.
PROPOSED APPROACH
In this section we propose an obstacle avoidance approach within PbD that aims to preserve the demonstrated trajectories. Our obstacle avoidance strategy is devised based on the principle of the artificial potential field. With the goal of preserving the demonstrated trajectories, we propose to use a bio-inspired potential field, the Fajen potential field, proposed by Fajen and Warren (2003). The Fajen potential field is built upon empirical evidence of how humans steer their motion for obstacle avoidance. The hyperparameters of the chosen potential field are determined by a reinforcement learning algorithm. Although the Fajen field has been used in previous works (Hoffmann et al., 2009; Rai et al., 2014), its merit over others was not elaborated explicitly. To this end, we contribute quantitative evidence to benchmark different types of potential fields and show that the Fajen potential field indeed outperforms others. Furthermore, we propose to rely on a novel imitation metric rather than RMSE to evaluate imitation fidelity. The suggested imitation metric is based on the technique of curve similarity analysis (Dryden, 2014). Moreover, given that trajectories will be modified by the potential field unpredictably during obstacle avoidance, there are possibilities that the joint trajectory evolution could exceed the allowable range. To address this issue, we employ Constrained Dynamic Movement Primitives (CDMPs), recently developed by Duan et al. (2018), so as to ensure that joint trajectories are always bounded within the specified range.
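To make the joint-limit guarantee concrete, the Python sketch below illustrates the kind of hyperbolic-tangent parameterization referred to above: the trajectory is generated in an unbounded exogenous state and mapped through tanh into the allowable joint range, so any modification induced by the potential field cannot push the joint outside its limits. The mapping and the names used here are illustrative assumptions for the example, not the exact CDMP formulation of Duan et al. (2018).

```python
import numpy as np

def bound_with_tanh(z, q_min, q_max):
    """Map an unbounded exogenous state z to a joint value in (q_min, q_max).

    Because tanh saturates smoothly, any evolution of z (e.g., a DMP rollout
    perturbed by an obstacle-avoidance coupling term) produces a joint
    trajectory that respects the limits by construction.
    """
    return q_min + 0.5 * (q_max - q_min) * (np.tanh(z) + 1.0)

# Example: an unbounded exogenous trajectory stays inside [-1.0, 2.0] rad.
z_traj = np.linspace(-5.0, 5.0, 11)
q_traj = bound_with_tanh(z_traj, q_min=-1.0, q_max=2.0)
assert np.all(q_traj > -1.0) and np.all(q_traj < 2.0)
```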
Trajectory Retrieval From Multiple Demonstrations
In order to endow robots with variable impedance skills, a minimal intervention controller is usually used for tracking the reference trajectories. Multiple demonstrations to the robot are required to capture the statistical information underlying the reference trajectories. Assume that both the Cartesian trajectory $\tau_x$ and the joint trajectory $\tau_j$ (each stacking positions and velocities), as well as their corresponding time index $t$, are recorded. For $M$ demonstrations, each one having length $T$, we denote the collected dataset as $\{\{t_n, \tau_{x,n}, \tau_{j,n}\}_{n=1}^{T}\}_{m=1}^{M}$. In order to extract the probabilistic properties from the multiple teachings, several techniques can be employed, such as Gaussian Mixture Models (GMM) or Hidden Markov Models (HMM) (Calinon and Lee, 2016). As an example, here we employ a GMM with $H$ components to encode the raw training data, as GMM is one of the most mature probabilistic approaches for modeling multiple demonstrations. Without loss of generality, we use $\tau$ to denote either $\tau_x$ or $\tau_j$. The GMM first estimates the joint probability distribution of $(t, \tau)$ with priors $\pi_h$, $h = 1, \ldots, H$, satisfying $\sum_{h=1}^{H}\pi_h = 1$:
$$P(t, \tau) = \sum_{h=1}^{H} \pi_h\, \mathcal{N}\!\left([t\;\; \tau^\top]^\top \mid \mu_h, \Sigma_h\right),$$
where $\mu_h = [\mu_h^{t}\;\; \mu_h^{\tau\top}]^\top$ and $\Sigma_h = \begin{bmatrix}\Sigma_h^{t} & \Sigma_h^{t\tau}\\ \Sigma_h^{\tau t} & \Sigma_h^{\tau}\end{bmatrix}$ are the mean and covariance of the $h$-th component. Furthermore, Gaussian Mixture Regression (GMR) is employed to retrieve the probabilistic trajectory (Calinon and Lee, 2016). The corresponding output with respect to a query point $t$ is formulated by a conditional probability distribution:
$$P(\tau \mid t) = \sum_{h=1}^{H} w_h(t)\, \mathcal{N}\!\left(\tau \mid \hat{\mu}_h(t), \hat{\Sigma}_h\right), \qquad (4)$$
where $w_h(t)$ are the activation functions defined as
$$w_h(t) = \frac{\pi_h\, \mathcal{N}(t \mid \mu_h^{t}, \Sigma_h^{t})}{\sum_{k=1}^{H}\pi_k\, \mathcal{N}(t \mid \mu_k^{t}, \Sigma_k^{t})},$$
with $\hat{\mu}_h(t) = \mu_h^{\tau} + \Sigma_h^{\tau t}(\Sigma_h^{t})^{-1}(t - \mu_h^{t})$ and $\hat{\Sigma}_h = \Sigma_h^{\tau} - \Sigma_h^{\tau t}(\Sigma_h^{t})^{-1}\Sigma_h^{t\tau}$. Note that (4) is usually approximated by a unimodal output distribution for robot control. By resorting to the law of total mean and variance, the approximated normal distribution $\hat{\tau} \sim \mathcal{N}(\hat{\mu}_{\tau}, \hat{\Sigma}_{\tau})$ has
$$\hat{\mu}_{\tau} = \sum_{h=1}^{H} w_h(t)\,\hat{\mu}_h(t), \qquad \hat{\Sigma}_{\tau} = \sum_{h=1}^{H} w_h(t)\left(\hat{\Sigma}_h + \hat{\mu}_h(t)\hat{\mu}_h(t)^\top\right) - \hat{\mu}_{\tau}\hat{\mu}_{\tau}^\top.$$
After applying GMR, we obtain $\hat{\tau}_x \sim \mathcal{N}(\hat{\mu}_{\tau_x}, \hat{\Sigma}_{\tau_x})$ for Cartesian trajectories and $\hat{\tau}_j \sim \mathcal{N}(\hat{\mu}_{\tau_j}, \hat{\Sigma}_{\tau_j})$ for joint trajectories, where $\hat{\mu}_{\tau_j} = [\hat{\mu}_j^\top\;\; \hat{\dot{\mu}}_j^\top]^\top$. One issue arising here is that inconsistency emerges between the Cartesian and joint constraints due to the multiple demonstrations. Such phenomena are usually referred to as competing constraints in the literature. In order to unify the constraints from both spaces, we employ the Gaussian product for the fusion of Cartesian and joint trajectories, as in Calinon and Billard (2008). To obtain the corresponding joint trajectory $q$ that satisfies the corresponding Cartesian constraints $x$, a Jacobian-based inverse kinematics technique is employed:
$$\dot{q} = J^{\dagger}\dot{x} + J_N\,\dot{q}_0,$$
where $J^{\dagger} = J^\top(JJ^\top)^{-1}$ denotes the Moore-Penrose pseudo-inverse of $J$ and $J_N = I - J^{\dagger}J$ is a null-space matrix that projects the additional secondary task $\dot{q}_0$ into the null space of the robot movement. In this case, the joint trajectories $\hat{\tau}_j \sim \mathcal{N}(\hat{\mu}_{\tau_j}, \hat{\Sigma}_{\tau_j})$ retrieved from the demonstrations are set as the secondary task and thus represent null-space movement. Upon transformation from Cartesian to joint space, the obtained probabilistic joint trajectory also satisfies a Gaussian distribution $q_C \sim \mathcal{N}(\mu_C, \Sigma_C)$, with mean $\mu_C$ and covariance $\Sigma_C$ obtained by propagating the Cartesian mean $\hat{\mu}_{\tau_x}$ and covariance $\hat{\Sigma}_{\tau_x}$ through the (linearized) inverse kinematics. To address the competing constraints, the final probabilistic trajectory $\hat{q} \sim \mathcal{N}(\hat{\mu}, \hat{\Sigma})$ is retrieved by fusion of the Cartesian and joint constraints with the help of the Gaussian product, with mean $\hat{\mu}$ and covariance $\hat{\Sigma}$ given by (Rasmussen, 2003):
$$\hat{\Sigma} = \left(\Sigma_C^{-1} + \hat{\Sigma}_{\tau_j}^{-1}\right)^{-1}, \qquad \hat{\mu} = \hat{\Sigma}\left(\Sigma_C^{-1}\mu_C + \hat{\Sigma}_{\tau_j}^{-1}\hat{\mu}_{\tau_j}\right). \qquad (11)$$
By executing the retrieved probabilistic trajectory, robots are able to satisfy both Cartesian and joint constraints simultaneously.
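To make the retrieval step concrete, the following is a minimal Python sketch of GMR conditioning and the Gaussian-product fusion described above. It assumes a GMM has already been fitted elsewhere; the function and variable names are illustrative and are not taken from the authors' implementation.

```python
import numpy as np

def gmr(t, priors, means, covars):
    """Condition a fitted GMM on the scalar time input t and return the
    unimodal approximation (mean, covariance) of the output tau."""
    H = len(priors)
    w = np.zeros(H)
    mu_h, sig_h = [], []
    for h in range(H):
        mu_t, mu_tau = means[h][0], means[h][1:]
        s_tt = covars[h][0, 0]
        s_taut = covars[h][1:, 0]
        s_tautau = covars[h][1:, 1:]
        # Activation: prior times Gaussian likelihood of t under component h
        w[h] = priors[h] * np.exp(-0.5 * (t - mu_t) ** 2 / s_tt) / np.sqrt(2 * np.pi * s_tt)
        mu_h.append(mu_tau + s_taut / s_tt * (t - mu_t))
        sig_h.append(s_tautau - np.outer(s_taut, s_taut) / s_tt)
    w /= w.sum()
    # Law of total mean and variance -> unimodal approximation
    mu = sum(w[h] * mu_h[h] for h in range(H))
    sig = sum(w[h] * (sig_h[h] + np.outer(mu_h[h], mu_h[h])) for h in range(H)) - np.outer(mu, mu)
    return mu, sig

def gaussian_product(mu_a, sig_a, mu_b, sig_b):
    """Fuse two Gaussian estimates (e.g., joint-space constraints and
    IK-projected Cartesian constraints) into a single Gaussian."""
    sig = np.linalg.inv(np.linalg.inv(sig_a) + np.linalg.inv(sig_b))
    mu = sig @ (np.linalg.inv(sig_a) @ mu_a + np.linalg.inv(sig_b) @ mu_b)
    return mu, sig
```

In practice, `gmr` would be called once per time step of the reference trajectory, and `gaussian_product` would then fuse the joint-space prediction with the joint-space projection of the Cartesian prediction.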
Learning to Avoid Obstacles
Once the probabilistic trajectories are retrieved, the robot will be required to track the reference trajectories in the presence of obstacles. We present a novel imitation metric, rather than the simple sum of squares, in section 3.2.1 in order to compare the performance of different potential fields. The joint limit avoidance issue is addressed in section 3.2.2. Subsequently, section 3.2.3 formulates the search for the optimal potential field hyperparameters as a Reinforcement Learning problem.
Imitation Metric for Potential Field
Since we expect robots to behave like humans during the process of obstacle avoidance, our choice for the potential field is the Fajen potential field, which is derived from a bio-inspired perspective (Fajen and Warren, 2003). The basic idea behind the Fajen field is to first calculate the angle between the current velocity and the direction toward the obstacle. Given this angle, the method determines how much to change the steering direction in order to keep away from the obstacle. The steering effect from the Fajen potential field is used to design the coupling term
$$p(x, \dot{x}) = \gamma\, R\,\dot{x}\,\phi\, e^{-\beta\phi}, \qquad (12)$$
where $\gamma$ and $\beta$ are hyperparameters governing the strength and the angular decay of the avoidance reaction, $R$ is a rotation matrix with axis $r$ and rotation angle $\pi/2$, $o$ denotes the position of the obstacle in Cartesian space, and $\phi$ is the angle between the velocity of the end-effector $\dot{x}$ and the vector $o - x$, which is always positive.
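As a rough illustration, a possible Python implementation of this bio-inspired coupling term is sketched below. It assumes the form $p = \gamma R\dot{x}\,\phi\,e^{-\beta\phi}$ with the rotation axis taken as the cross product of $o - x$ and $\dot{x}$; the construction of $R$ and the default gains are assumptions for illustration, not values from the paper.

```python
import numpy as np

def rotation_matrix(axis, angle):
    """Rodrigues' formula for a rotation about `axis` by `angle` radians."""
    axis = axis / (np.linalg.norm(axis) + 1e-12)
    k = np.array([[0, -axis[2], axis[1]],
                  [axis[2], 0, -axis[0]],
                  [-axis[1], axis[0], 0]])
    return np.eye(3) + np.sin(angle) * k + (1 - np.cos(angle)) * (k @ k)

def fajen_coupling(x, xd, obstacle, gamma=1000.0, beta=6.0):
    """Steering acceleration that turns the current velocity away from the obstacle."""
    d = obstacle - x
    if np.linalg.norm(xd) < 1e-9 or np.linalg.norm(d) < 1e-9:
        return np.zeros(3)
    # Angle between the end-effector velocity and the direction to the obstacle
    cos_phi = np.dot(xd, d) / (np.linalg.norm(xd) * np.linalg.norm(d))
    phi = np.arccos(np.clip(cos_phi, -1.0, 1.0))
    # Rotate the velocity by pi/2 about the axis perpendicular to (d, xd)
    r = np.cross(d, xd)
    R = rotation_matrix(r, np.pi / 2)
    return gamma * (R @ xd) * phi * np.exp(-beta * phi)
```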
As there are a number of other potential fields that can also be used for obstacle avoidance within PbD, an interesting issue is how to compare the performance of different potential fields. In order to evaluate the reproduction quality of the trajectory, imitation metrics are required. Traditionally, imitation metrics are mainly formulated as a weighted sum of squares of differences between the reproduced and the demonstrated trajectories, where the weights usually come from the variance matrix across multiple demonstrations (Calinon and Lee, 2016; Huang et al., 2018). Such an evaluation metric is inappropriate in the case of obstacle avoidance, since the trajectory shape is altered much more severely than in a straightforward reproduction.
Here, we employ a novel perspective on the formalism of imitation metrics, such that the effects of different potential fields can be fairly evaluated. Specifically, we propose to formulate the imitation similarity metric from the perspective of curve similarity analysis (Dryden, 2014). In general, the technique of curve similarity analysis has a wide range of applications, such as signal alignment, DNA matching, and signature comparison, and is very pertinent to our situation (Mitchel et al., 2018). As there are a number of curve similarity analysis methods, the Procrustes distance is used in our case as an example (Dryden, 2014).
Overall, the Procrustes distance facilitates shape analysis by removing the relative translational, scaling, and rotational components. Here we consider how to calculate the Procrustes distance $d(X_1, X_2)$ between two trajectories given as point sets $\{X_1^k\}$ and $\{X_2^k\}$ with $N$ points each. First, the translational component is removed. To this end, all the trajectory points are translated as a whole such that the mean value of all the points coincides with the origin. The mean value $\bar{X}_i$ of all the points of trajectory $i$ is calculated as
$$\bar{X}_i = \frac{1}{N}\sum_{k=1}^{N} X_i^k .$$
Similarly, the scaling component is removed by normalizing the root mean square distance between the points and the origin. The scaling factor $s_i$ of trajectory $i$ is calculated as
$$s_i = \sqrt{\frac{1}{N}\sum_{k=1}^{N}\left\| X_i^k - \bar{X}_i \right\|^2},$$
and the scale of trajectory $i$ is normalized by
$$\tilde{X}_i^k = \frac{X_i^k - \bar{X}_i}{s_i}.$$
The removal of the rotation effect is not straightforward, because the calculation of the optimal rotation matrix $W$ requires solving the optimization problem
$$W = \arg\min_{W}\sum_{k=1}^{N}\left\| \tilde{X}_2^k - W\tilde{X}_1^k \right\|^2,$$
where $\|\cdot\|$ denotes the Euclidean norm of a vector. It can be verified that the optimal rotation matrix is given by (Dryden, 2014)
$$W = U\Sigma' V^\top,$$
where $U\Sigma V^\top$ is the Singular Value Decomposition (SVD) of the matrix $\sum_k \tilde{X}_2^k \tilde{X}_1^{k\top}$. In order to make $W$ a valid rotation matrix, i.e., $\det(W) > 0$ where $\det(\cdot)$ denotes the determinant of a matrix, $\Sigma$ is modified into $\Sigma'$ by replacing its smallest singular value with the sign of $\det(UV^\top)$ and the other singular values with 1. Finally, the Procrustes distance is given by
$$d(X_1, X_2) = \sqrt{\sum_{k=1}^{N}\left\| \tilde{X}_2^k - W\tilde{X}_1^k \right\|^2}.$$
The process to calculate the Procrustes distance between two trajectories is summarized in Algorithm 1.
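The following Python sketch mirrors the procedure above (translation removal, scale normalization, SVD-based rotation alignment, residual distance). It is a minimal illustration of the described steps, not the authors' implementation.

```python
import numpy as np

def procrustes_distance(X1, X2):
    """Procrustes distance between two trajectories given as (N, d) arrays
    with the same number of points N."""
    assert X1.shape == X2.shape
    # Remove translation: center each trajectory at the origin
    A = X1 - X1.mean(axis=0)
    B = X2 - X2.mean(axis=0)
    # Remove scale: normalize the RMS distance of the points to the origin
    A /= np.sqrt((A ** 2).sum() / len(A))
    B /= np.sqrt((B ** 2).sum() / len(B))
    # Optimal rotation from the SVD of sum_k B_k A_k^T
    U, S, Vt = np.linalg.svd(B.T @ A)
    # Force a proper rotation (det > 0) by adjusting the smallest singular value
    D = np.eye(len(S))
    D[-1, -1] = np.sign(np.linalg.det(U @ Vt))
    W = U @ D @ Vt
    # Residual after optimal alignment
    return np.sqrt(((B - A @ W.T) ** 2).sum())
```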
Joint Limit Avoidance
Once a suitable type of potential field has been chosen, we are ready to modify robot trajectories for obstacle avoidance. One problem here is how to guarantee that joint limits are respected. It can be conceived that the modified trajectories are very susceptible to the strength of the potential field and a strong field could drive joint trajectories out of the allowable range.
To cope with this issue, we drive the robot's trajectories using our recently developed Constrained Dynamic Movement Primitives (CDMPs) (Duan et al., 2018). CDMPs are derived by parameterizing the original trajectory using the hyperbolic tangent function. We make modifications in the hyperbolic tangent space such that the joint trajectories will always evolve within the given bound. Formally, assume the joint limits are determined by $q_{min}$ and $q_{max}$. The feasible joint space in terms of the exogenous variable $\xi$ is
$$q = q_e \tanh(\xi) + q_o,$$
where $q_e = \mathrm{diag}\!\left(\tfrac{1}{2}(q_{max} - q_{min})\right)$ and $q_o = \tfrac{1}{2}(q_{max} + q_{min})$. Therefore, given the desired reference trajectory $q_d$, its transformation into tanh-space is given as
$$\xi_d = \mathrm{arctanh}\!\left(q_e^{-1}(q_d - q_o)\right).$$
Consequently, Dynamic Movement Primitives (DMPs) are trained in tanh-space, with the modifications from the potential field exerted therein. The DMP canonical system is described by $\tau\dot{\omega} = -\alpha\omega$, where $\tau > 0$ denotes the movement duration, $\alpha > 0$ is a scalar, and $\omega$ is the phase variable on which the forcing term $f$ depends; $\varphi_i(\omega) = e^{-h_i(\omega - l_i)^2}$ is the basis function with $h_i > 0$ and $l_i \in [0, 1]$, and $\chi_{k,i}$ is the corresponding weight. Moreover, $p$ is the coupling term that modifies the DMP trajectory according to the potential field. It should be noted that the training of the DMP follows the usual procedure, i.e., fitting the acceleration profile with Gaussian radial basis functions; here the only difference is that it happens in tanh-space. The workflow of CDMPs is illustrated in Figure 3 and the corresponding algorithm is summarized in Algorithm 2.
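A minimal Python sketch of the bounding transformation is given below; only the tanh-space mapping and its inverse are shown, with the DMP integration itself omitted. Variable names are illustrative.

```python
import numpy as np

def make_bounds(q_min, q_max):
    """Return the scaling q_e and offset q_o defining the feasible joint box."""
    q_min, q_max = np.asarray(q_min, float), np.asarray(q_max, float)
    q_e = 0.5 * (q_max - q_min)   # half-range per joint (diagonal of q_e)
    q_o = 0.5 * (q_max + q_min)   # center of the range per joint
    return q_e, q_o

def to_tanh_space(q, q_e, q_o, eps=1e-6):
    """Map a joint configuration into the unbounded exogenous variable xi."""
    z = np.clip((q - q_o) / q_e, -1 + eps, 1 - eps)
    return np.arctanh(z)

def from_tanh_space(xi, q_e, q_o):
    """Map xi back to joint space; the result is bounded in (q_min, q_max) by construction."""
    return q_e * np.tanh(xi) + q_o
```

Because tanh is bounded in (-1, 1), any trajectory integrated in xi-space, however strongly it is perturbed by the potential field, maps back into the allowed joint range.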
Reinforcement Learning of Hyperparameters
Normally, the strength of a potential field, determined by its hyperparameters, can greatly affect the performance of obstacle avoidance. If the strength is too high, the robot will react more than is needed to avoid the obstacle; although the joint limits cannot be exceeded thanks to CDMPs, the similarity with respect to the demonstrated behavior will then be very much sacrificed. On the other hand, if the strength is too low, the robot would be in danger of hitting the obstacle. In order to find the optimal hyperparameter values, a non-parametric reinforcement learning algorithm called Cost-regularized Kernel Regression (CrKR) is employed (Kober et al., 2012). CrKR is used to learn the optimal policy that maps the state, which is the position of the obstacle, to the optimal action, which is associated with the corresponding hyperparameters. At the beginning of each trial, for a query obstacle position $s_o$, the hyperparameters are sampled from the Gaussian distribution $\gamma_i \sim \mathcal{N}\!\left(\bar{\gamma}_i(s_o), \sigma^2(s_o)\right)$, with the mean given by
$$\bar{\gamma}_i(s_o) = k(s_o)^\top\left(K + \lambda C\right)^{-1}\hat{W}_i$$
and the variance, which facilitates exploration, given by
$$\sigma^2(s_o) = k(s_o, s_o) + \lambda - k(s_o)^\top\left(K + \lambda C\right)^{-1}k(s_o),$$
where $k(\cdot,\cdot)$ is a kernel function, $k(s_o) = \Phi\,\varphi(s_o)$ is the vector of kernel values between $s_o$ and the stored states, with $\varphi(\cdot)$ being basis functions and $\Phi$ stacking the $\varphi(s_i)$, $K = \Phi\Phi^\top$ is the kernel matrix, $C = \mathrm{diag}(c_1, c_2, \ldots, c_n)$ is a cost matrix, $\lambda$ is a ridge factor, and $\hat{W}_i$ is a vector containing the training examples for the hyperparameters. It should be noted that the cost element $c_i$ stored in the cost matrix $C$ is calculated as the Procrustes distance between the collision-free trajectory $\tau_i$ and the demonstrated trajectory $\bar{\tau}$, i.e., $c_i = d(\tau_i, \bar{\tau})$. The learning process terminates when the distance between the robot end-effector and the obstacle is smaller than the threshold.
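The prediction step of CrKR can be sketched as follows in Python; the kernel choice, the placement of the ridge factor in the variance, and all variable names are assumptions for illustration (see Kober et al., 2012 for the exact formulation).

```python
import numpy as np

def gauss_kernel(a, b, bw=1.0):
    return np.exp(-np.sum((np.asarray(a) - np.asarray(b)) ** 2) / bw)

def crkr_predict(s_query, states, costs, gammas, lam=0.5, bw=1.0):
    """Predict mean and exploration variance of a hyperparameter at state s_query.

    states: visited obstacle positions s_i
    costs:  cost c_i of each rollout (Procrustes distance to the demonstration)
    gammas: hyperparameter value used in each rollout
    """
    K = np.array([[gauss_kernel(si, sj, bw) for sj in states] for si in states])
    C = np.diag(costs)                       # high-cost samples are down-weighted
    k = np.array([gauss_kernel(s_query, si, bw) for si in states])
    A = np.linalg.inv(K + lam * C)
    mean = k @ A @ np.asarray(gammas)
    var = gauss_kernel(s_query, s_query, bw) + lam - k @ A @ k
    return mean, max(var, 1e-12)

# A new rollout then samples gamma ~ N(mean, var) to trade off exploration and exploitation.
```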
Minimal Intervention Control
To track the reference trajectory $\hat{\mu}$ with optimal control inputs $U^* = [u_t^\top\ u_{t+1}^\top \ldots u_{t+m-1}^\top]^\top$, a minimal intervention control strategy can be employed (Calinon et al., 2014). To start with, an optimization problem is formulated as
$$U^* = \arg\min_{U} \sum_{i=1}^{m}\left(\hat{\mu}_{t+i} - q_{t+i}\right)^\top \hat{\Sigma}_{t+i}^{-1}\left(\hat{\mu}_{t+i} - q_{t+i}\right) + u_{t+i-1}^\top R_i\, u_{t+i-1}, \qquad (26)$$
where $R_i$ is a positive definite weight matrix. To find the analytical solution $U^*$, we formulate the joint dynamics in terms of a double integrator:
$$\begin{bmatrix} q_{t+1}\\ \dot{q}_{t+1}\end{bmatrix} = A \begin{bmatrix} q_{t}\\ \dot{q}_{t}\end{bmatrix} + B u_t, \qquad A = \begin{bmatrix} I & I\Delta t\\ 0 & I\end{bmatrix}, \quad B = \begin{bmatrix} I\frac{\Delta t^2}{2}\\ I\Delta t\end{bmatrix}. \qquad (27)$$
By applying (27) repetitively, the predicted state sequence over the horizon can be written compactly as
$$Q = S_q\, x_t + S_u\, U, \qquad (28)$$
where $x_t$ stacks the current position and velocity and $S_q$, $S_u$ collect the powers of $A$ and the corresponding products with $B$. Therefore, the original optimization problem as in (26) can be rewritten compactly as
$$U^* = \arg\min_{U}\ (\hat{M} - Q)^\top \hat{\Sigma}^{-1}(\hat{M} - Q) + U^\top R\, U, \qquad (29)$$
with $\hat{M}$ and $\hat{\Sigma}$ stacking the reference means and covariances and $R$ the block-diagonal control weight. By inserting (28) into (29) and setting the derivative with respect to $U$ equal to 0, the optimal control policy $U^*$ can be calculated as
$$U^* = \left(S_u^\top \hat{\Sigma}^{-1} S_u + R\right)^{-1} S_u^\top \hat{\Sigma}^{-1}\left(\hat{M} - S_q\, x_t\right).$$
The complete procedure for the proposed approach to obstacle avoidance with minimal intervention control is summarized in Algorithm 3.
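A compact numerical sketch of this batch solution for a double-integrator model is shown below. The tracking weights are the inverted covariances of the retrieved trajectory, which is the essence of minimal intervention control; dimensions and parameter values are illustrative assumptions.

```python
import numpy as np

def minimal_intervention_controls(q0, dq0, mu, sigmas, dt=0.01, r=1e-2):
    """Batch optimal accelerations tracking a probabilistic reference.

    q0, dq0 : current joint position/velocity, shape (d,)
    mu      : reference means, shape (m, d)
    sigmas  : reference covariances, shape (m, d, d)
    """
    m, d = mu.shape
    A = np.block([[np.eye(d), dt * np.eye(d)], [np.zeros((d, d)), np.eye(d)]])
    B = np.vstack([0.5 * dt**2 * np.eye(d), dt * np.eye(d)])
    # Rolled-out dynamics: x_{t+i+1} = A^{i+1} x_t + sum_j A^{i-j} B u_j
    Sq = np.vstack([np.linalg.matrix_power(A, i + 1) for i in range(m)])
    Su = np.zeros((2 * d * m, d * m))
    for i in range(m):
        for j in range(i + 1):
            Su[2*d*i:2*d*(i+1), d*j:d*(j+1)] = np.linalg.matrix_power(A, i - j) @ B
    # Track positions only: select the position rows, weight by inverse covariance
    P = np.kron(np.eye(m), np.hstack([np.eye(d), np.zeros((d, d))]))
    W = np.zeros((d * m, d * m))
    for i in range(m):
        W[d*i:d*(i+1), d*i:d*(i+1)] = np.linalg.inv(sigmas[i])
    R = r * np.eye(d * m)
    x0 = np.concatenate([q0, dq0])
    Sp, Spq = P @ Su, P @ Sq
    U = np.linalg.solve(Sp.T @ W @ Sp + R, Sp.T @ W @ (mu.reshape(-1) - Spq @ x0))
    return U.reshape(m, d)   # accelerations u_t, ..., u_{t+m-1}
```

Segments of the reference with small covariance are tracked stiffly, while high-variance segments tolerate larger deviations at low control effort, which yields the variable impedance behavior.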
EXPERIMENTS
This section illustrates the effectiveness of the proposed approach by reporting the results of two evaluative experiments. The first experiment is a toy example where the comparison between different potential fields is conducted. With the help of the toy example, the necessity of introducing a novel imitation metric under obstacle avoidance is unveiled. Subsequently, the second experiment is devised as a transportation task under obstacle avoidance, following the proposed framework to obstacle avoidance under PbD. The real experiment is conducted on an iCub, a full-body child-size humanoid robot (Natale et al., 2017).
Toy Example
By convention, the imitation metric between the reproduced trajectory and the demonstrated one is designed as RMSE. However, in the context of obstacle avoidance, we show in the toy example that RMSE is not a proper imitation metric, as it cannot reflect the real imitation fidelity. Therefore, a novel imitation metric instead of RMSE is required to measure the imitation fidelity. As discussed earlier, we choose the imitation metric by resorting to the technique of curve similarity analysis. Specifically, the Procrustes distance is employed to replace the RMSE so as to compare the performance of different potential fields. In our toy example, the performances of the Fajen potential field as provided in (12) and the Khatib potential field (Khatib, 1986) are compared against each other. The mechanism of the Khatib potential field for obstacle avoidance is that the repulsive force becomes larger as the manipulator moves closer to the obstacle:
$$U(x) = \begin{cases}\dfrac{\eta}{2}\left(\dfrac{1}{\rho(x)} - \dfrac{1}{\rho_0}\right)^2, & \rho(x) \le \rho_0,\\[2mm] 0, & \rho(x) > \rho_0,\end{cases}$$
where $\rho(x)$ is the distance between the current position and the obstacle, $\eta$ is a gain term, and $\rho_0$ is called the threshold distance to the obstacle point. The coupling term for obstacle avoidance with the Khatib potential field is calculated by deriving the gradient, i.e., $p(x) = -\nabla U(x)$. The comparison of obstacle avoidance performance between the Khatib potential field and the Fajen potential field is shown in Figure 4. A point moves from the starting point (0, 0) to the goal point (1, 0) with an obstacle located midway at (0.3, 0). The parameters used for the Khatib potential field are chosen as $\eta = 0.12$ and $\rho_0 = 0.15$, while the parameters used for the Fajen potential field are chosen as $\gamma = 2000$ and $\beta = 20/\pi$.
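For completeness, a possible implementation of the Khatib repulsive coupling term used in this comparison is sketched below; it is a straightforward coding of the classic formulation, with the parameter values quoted in the text used as defaults.

```python
import numpy as np

def khatib_coupling(x, obstacle, eta=0.12, rho0=0.15):
    """Repulsive acceleration p(x) = -grad U(x) of the Khatib potential field."""
    diff = np.asarray(x, float) - np.asarray(obstacle, float)
    rho = np.linalg.norm(diff)
    if rho > rho0 or rho < 1e-9:
        return np.zeros_like(diff)
    # Gradient of 0.5*eta*(1/rho - 1/rho0)^2, with d(rho)/dx = diff / rho
    return eta * (1.0 / rho - 1.0 / rho0) * (1.0 / rho**2) * (diff / rho)
```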
As can be seen in Figure 4, when the Khatib potential field is employed, the area enclosed by the distorted obstacle avoidance trajectory and the demonstrated one is smaller than that of the Fajen field. Under this metric, the Khatib potential field would therefore appear to preserve higher imitation fidelity than the Fajen field. However, this metric is not reasonable, as one can identify intuitively that the obstacle avoidance trajectory under the Fajen field should share more similarity with the demonstrated one than that under the Khatib field. Indeed, when the Procrustes distance is employed as the imitation metric, the obstacle avoidance trajectory under the Fajen field has a smaller deviation from the demonstrated one.
Numerically, when evaluated with root mean square error, the distance is 0.37 for Khatib potential field and 0.41 for the Fajen field. However, by using the Procrustes distance as imitation metric, the distance becomes 3.06 for the Khatib field and only 0.92 for the Fajen field. In conclusion, Procrustes distance should be chosen over RMSE when the reproduction trajectory is modified irregularly. Besides, Fajen field should be preferred for obstacle avoidance as it preserves imitation fidelity better. Table 1 summarizes the quantitative comparison between two potential fields under two types of similarity metric. The code is available as Supplementary Material.
Transportation Task
We evaluate the proposed method with a real experiment on the iCub humanoid robot. In order to show the effectiveness of the proposed framework for obstacle avoidance within PbD, a transportation task is considered as a proof-of-concept experiment. Although the iCub is a full-body humanoid robot with 56 DoFs in total, only the 7-DoF right arm (3 DoFs in the shoulder, 3 in the wrist, and 1 in the elbow) is activated to accomplish the transportation task in our case.
The experimental set-up is as follows: a sponge is first handed over to the robot, and then a human teacher guides the activated right arm to a final location. Finally, the robot is required to reproduce the demonstrated trajectory in the presence of an obstacle positioned midway along the demonstrated trajectory. During the kinesthetic teaching phase, the robot is taught the transportation task five times. The collected dataset records both the robot Cartesian and joint trajectories. In order to encode the probabilistic information underlying the multiple demonstrated trajectories, a GMM with five components is employed to model the distributions of the demonstrated Cartesian and joint trajectories, respectively. The GMM modeling results are plotted in Figure 5, and it can be observed that trajectory segments with larger variation incur larger covariance. The probabilistic reference trajectories to control the robot are extracted with GMR. In order to resolve the inconsistency between Cartesian and joint trajectories resulting from multiple demonstrations, the trajectories from both spaces are fused by the Gaussian product as in (11). The experimental illustration is shown in Figure 6.
In order to incorporate joint limit avoidance during obstacle avoidance, the retrieved trajectories are then used to train CDMPs according to Algorithm 2. For the design of the coupling term of the CDMPs, the Fajen potential field is employed for obstacle avoidance; as shown in the toy example, it outperforms the Khatib potential field when the Procrustes distance is used as the imitation metric. During the reproduction phase, a virtual obstacle is positioned at (0.31, −0.25, 0.73) m in Cartesian space with respect to the world frame. Snapshots of the robot reproducing the transportation task with the obstacle are shown in the bottom row of Figure 4. The Cartesian obstacle avoidance trajectory (in green) and the demonstrated trajectory (in blue) are shown in Figure 7. During the execution of the obstacle avoidance, no joint violates the corresponding joint limits.
The hyperparameter values of the potential field are determined by the reinforcement learning algorithm CrKR in an off-line fashion. We learn these hyperparameters as a function of the position of the obstacle. The reward used by CrKR is formulated as the Procrustes distance between the obstacle avoidance trajectory and the demonstrated one. To run CrKR, we choose the Gaussian kernel $k(s_i, s_j) = \exp\!\left(-(s_i - s_j)^2\right)$, with the distance of a point from itself $k(s, s) = 1$ and a ridge factor $\lambda = 0.5$. During the preparation steps of the algorithm, the corresponding matrices K, C and Ŵ are initialized with 20 samples. The total number of trials for each run of the algorithm is 800. The cost variance is calculated by repeating the learning process five times. The learning results are reported in Figure 8. The learning process terminates when the threshold of the minimum distance to the obstacle is triggered. When the learning process finishes, the Fajen field parameters are obtained as γ = 1260 and β = 3.2 with respect to the specified obstacle position.
Finally, the reproduction trajectory is executed with a minimal intervention controller in order to endow the robot with variable impedance skills. The cost function of the minimal intervention controller is parameterized by R = 10 −2 I. The prediction horizon is set as 10 discrete time steps.
CONCLUSIONS AND FUTURE WORK
In this paper, we presented an approach to obstacle avoidance in the context of hybrid imitation learning. To exploit the probabilistic information underlying the trajectories, multiple demonstrations are taught to the robot. The initial trajectory is then retrieved from the human demonstration dataset by fusing Cartesian and joint constraints with the Gaussian product. Since there are various types of potential field, the Procrustes distance, rather than RMSE, is employed for benchmarking the performance of different potential fields. As a common technique in curve similarity analysis, the Procrustes distance can better reflect the imitation fidelity between the obstacle-avoidance trajectory and the original demonstrated one. It should be noted that evaluating the Procrustean imitation metric may be computationally more expensive than the root-mean-square distance. Given that the potential field would modify joint trajectories unpredictably during obstacle avoidance, joint limit avoidance is incorporated to guarantee that the evolution of the modified trajectories is always bounded within the allowable range. To this end, the novel Constrained Dynamic Movement Primitives (CDMPs) method is employed. CDMPs parameterize joint trajectories with the help of the hyperbolic tangent function. By exploiting the boundedness property of the hyperbolic tangent function, the modified joint trajectories are guaranteed to evolve within the specified range. Further, in view of the fact that the performance of obstacle avoidance is quite susceptible to the hyperparameters of the potential field, a reinforcement learning algorithm is used to find the most suitable hyperparameters. The final obstacle avoidance trajectory is tracked with a minimal intervention controller to endow the robot with variable impedance capabilities.
As a preliminary attempt to address obstacle avoidance issues under PbD, a number of topics remain to be investigated for future work. For example, the position of the obstacle is given beforehand in this paper, but it could be interesting to exploit the visual system of iCub such that the obstacle position can be autonomously determined. In addition, the proposed method could be considered for extension to the case of moving obstacles or obstacles with 3D shape.
DATA AVAILABILITY STATEMENT
All datasets generated for this study are included in the article/Supplementary Material.
|
v3-fos-license
|
2018-04-03T01:46:39.415Z
|
2002-02-15T00:00:00.000
|
30809824
|
{
"extfieldsofstudy": [
"Biology",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "http://www.jbc.org/content/277/7/5040.full.pdf",
"pdf_hash": "b353d4b1741496b290ce16939ac8f0e19f2ebdff",
"pdf_src": "Highwire",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46680",
"s2fieldsofstudy": [
"Biology"
],
"sha1": "6a7ef221bcc7d026b7753747b0ae31a55b87e53d",
"year": 2002
}
|
pes2o/s2orc
|
The Cleavage Efficiency of the Human Immunoglobulin Heavy Chain VH Elements by the RAG Complex
The human immunoglobulin heavy chain locus contains 39 functional human VH elements. All 39 VH elements (with their adjacent heptamer/nonamer signal) were tested for site-specific cleavage with purified human core RAG1 and RAG2, and HMG1 proteins in a 12/23-coupled cleavage reaction. Both nicking and hairpin formation were measured. The individual VH cleavage efficiencies vary over nearly a 30-fold range. These measurements will be useful in considering the factors affecting the generation of the immunoglobulin and T-cell receptor repertoires in adult humans. Interestingly, when these cleavage efficiencies are summed for each of the VH families, the six VH family efficiencies correspond closely to the observed profile of unselected VH family usage in the peripheral B cells of normal adult humans. This correspondence raises the possibility that the dominant factor determining VH element utilization within the 1-megabase human genomic VH array is simply the individual RAG cleavage efficiencies.
The antigen receptor repertoire is a composite of many factors (1)(2)(3). One major factor is the efficiency with which the V, D, and J elements are cleaved by the recombinase complex. This complex contains RAG1 and RAG2 proteins (recombination activating genes), along with HMG1 (or HMG2). The RAG complex binds to the heptamer/nonamer signal sequences (also called RSS 1 for recombination signal sequences) associated with each V, D, or J element. The RAG complex makes an initial nick adjacent to the heptamer of each signal and then generates a hairpin configuration at that site at the coding end terminus of the V, D, or J element (4). The hairpin formation at the coding end results in a blunt-ended double-strand break at the end of the signal (4). A single recombination event involves two elements, such as a V and a J, or a D and a J, or a V and a DJ. The two elements always have recombination signal sequences (RSS) that are different in the spacing between their heptamer and nonamer (12 or 23 base pairs), and this is known as the 12/23 rule (5). The 12/23 rule is enforced at the hairpin formation step by the RAG complex (6 -8). The two coding ends (a D and a J, or a V and a DJ, for example) are joined by the nonhomologous DNA end joining repair pathway (2,9) to create the variable domain exon that encodes a portion of the binding pocket for the antigen receptor (immunoglobulins (Ig) or T-cell receptor). The two signal ends are also joined together by nonhomologous DNA end joining to form a signal joint.
There is a consensus sequence for the heptamer (CACAGTG) and for the nonamer (ACAAAAACC). This consensus appears to be the optimal one for V(D)J recombination. However, the actual RSS associated with each V, D, and J element usually deviates considerably from the consensus. The variations affect recombination efficiency over several orders of magnitude (10,11). In addition, the terminal two or three coding end nucleotides also influence the efficiency of nicking at the adjacent signal, and this coding end effect can influence the efficiency of recombination by an additional one to two orders of magnitude (12)(13)(14)(15). Of the more than 10^9 possible combinations of heptamer and nonamer variations, only a small number (fewer than 100) of the possible variations have been tested (10,11,16). Hence, although some of the principles have been established concerning how signal and coding end sequence can influence V(D)J recombination, the recombination or cleavage efficiencies of the actual V, D, and J elements relative to one another have not been systematically determined for any of the human or murine loci. Therefore, the actual efficiencies cannot be deduced from the current literature, and direct experiments are required to determine the efficiencies that generate the repertoire of the antigen receptor loci.
Another factor that is known to influence V(D)J recombination is chromatin structure (17). There are six antigen receptor loci, and they do not undergo recombination simultaneously because of differences in chromatin structure. CpG methylation is one major factor that determines the accessibility of any vertebrate genetic locus (18 -20), and the antigen receptor loci are no exception. CpG methylation is typically accompanied by histone deacetylation, which results in a tighter association between the nucleosome and the DNA wrapped around it. When RAG cleavage assays have been done on short DNA fragments that accommodate one nucleosome, cleavage is suppressed (21)(22)(23)(24). Although chromatin structure clearly determines the RAG complex accessibility differences between the six antigen receptor loci, it has been uncertain whether such effects influence the differential V, D, or J element utilization within any one locus. Individual active antigen receptor loci that have been examined have acetylated histones throughout the locus (25), consistent with much earlier data indicating that active antigen receptor loci are hypomethylated (26,27). In neonatal mice, it has been observed that V H segments in proximity to the J H cluster are used more frequently than the distal V H segments (28 -31). This proximity preference across the murine V H array is either much less marked or not detectable in adult mice (28 -30, 32). In fetal human, it has been unclear whether or not there is a small bias in favor of the most proximal V H (30). If there is a small bias in fetal human, it might suggest that in mouse and, to a lesser extent, in human, the locus opening (chromatin change) emanates from the intron enhancer during early development and that this proximity effect dissipates with progression to adulthood. This is especially obvious in humans where it is clear that, in adults, there is no proximity preference within the Ig heavy chain locus (30,(33)(34)(35)(36).
Once the primary repertoire is generated as a result of recombination and overlying chromatin effects, positive and negative selective forces shape the repertoire into its observed profile in the peripheral blood. Here, we have examined each of the V H elements in the six human V H families for their efficiency in the initial stages of V(D)J recombination by using human core RAG complexes to cleave oligonucleotide fragments encompassing the signal and the adjacent coding end nucleotides. We find that the cleavage efficiency of each of the six families corresponds very closely to data for the nonproductive V H usage observed in the peripheral B cells of normal adult humans. This significant similarity suggests that the initial (unselected) V H repertoire in the adult human is determined to a significant extent by the recombination cleavage efficiency.

EXPERIMENTAL PROCEDURES

We polyacrylamide gel-purified the full-length form of each oligonucleotide. We then determined the concentration spectrophotometrically. Each cleavage substrate was labeled at the 5′-end of the coding flank with [γ-32P]ATP (3000 Ci/mmol) (PerkinElmer Life Sciences, Boston, MA) and T4 polynucleotide kinase (New England BioLabs, Beverly, MA) according to the manufacturer's instructions. Unincorporated radioisotope was removed by using G-25 Sephadex (Amersham Biosciences, Inc., Piscataway, NJ) spin-column chromatography. It is important to note that labeled oligonucleotides were mixed with twice the molar amount of unlabeled complementary oligonucleotides in a buffer containing 10 mM Tris-hydrochloride, pH 8.0, and 100 mM NaCl (37). The mixture was heated at 95°C for 5 min and allowed to cool down to room temperature for more than 1 h. The amount of unannealed labeled single-stranded oligonucleotide was less than 5% in all cases (and was typically at undetectable levels (less than 2%)).
Protein Expression and Purification-Human RAG expression plasmids (core RAG1: amino acids 383-1008 and full-length RAG2) were kindly provided by Dr. Sadro Santagata and Dr. Patricia Cortes. The core RAG2 (amino acids 1-383) expression vector was made by inserting the corresponding PCR fragment into the pEBG vector (38). The sequence was confirmed by dideoxynucleotide sequencing. Recombinant human RAG proteins were expressed as glutathione S-transferase (GST) fusion proteins by cotransfection of RAG1 (core) and RAG2 (core) expression vectors into the human epithelial cell line 293T. (Previous work demonstrates the similar kinetics of nicking and hairpin formation for GST versus maltose binding protein RAG fusion proteins (8).) The coexpressed human core RAG proteins were then purified with glutathione-agarose (Sigma Chemical Co., St. Louis, MO). C-terminal truncated mouse HMG1 was expressed in bacteria as a six-histidinetagged protein and purified over a nickel-nitrilotriacetic acid column (Qiagen Inc., Valencia, CA). Protein concentration was determined against known concentrations of bovine serum albumin (fraction V) on a Coomassie Blue-stained gel with a densitometer (Model GS710, Bio-Rad, Hercules, CA) and quantified with Quantity One software (Bio-Rad, Hercules, CA).
Coupled Cleavage Assay-A 5-µl reaction mixture containing 10 fmol of a 32P-labeled V H substrate, 10 fmol of unlabeled D H 4-4 (DA4), and 25 mM MOPS, pH 7.0, 2.5 mM MgCl 2 , 30 mM potassium chloride, 30 mM potassium glutamate, 1 pmol of HMG1, and 20 ng of RAGs was incubated at 37°C for 30 min. The reaction was stopped by the addition of 5 µl of formamide and immediately heated to 100°C for 5 min before plunging into ice water. At least two independent cleavages on two independent gels were done for every V H . In addition, independent annealings were done on a subset of V H segments that deviated more than expected relative to the most similar V H elements. The independent annealings were indistinguishable.
Five femtomoles of each of the two specified substrates was used (see Fig. 5, lanes 3 and 6). Ten femtomoles was used when only one substrate was involved. The difference in band intensity for equal molar amounts of substrate is simply due to labeling efficiency differences. Such labeling efficiency differences are not a complication, because equal molar amounts are used, and the conversion to nicked and hairpin products is expressed as a percentage of the substrate input in that reaction.
Denaturing Polyacrylamide Gel Electrophoresis-Reaction products were separated on 15% polyacrylamide gels containing 7 M urea in 1× Tris borate-EDTA buffer. Gels were visualized by autoradiography with a Molecular Dynamics PhosphorImager 445SI (Sunnyvale, CA) and quantified with ImageQuaNT software (version 5).
Statistical Analysis-Evaluations of the correlation between RAG nicking (or hairpin formation) and published values for nonproductive rearrangements were done as follows. The RAG nicking contribution (expressed as a percentage) for each V H family relative to the total was calculated as described in Table I. This gives the percentage that each individual V H would be expected to contribute to the repertoire. The numbers for individual V H usage in the peripheral blood nonselected repertoire are too limited; however, the data for each family are at reliable levels. Here, we summed all of the members of each human V H family for nicking and divided by the total nicking product of all 39 members. The same calculation was also done for hairpin formation. The correlation between RAG nicking and nonproductive rearrangements has a regression coefficient of 0.987 and a p < 0.001. The correlation between RAG hairpinning and nonproductive rearrangements has a regression coefficient of 0.926 and a p < 0.01. The average nicking value for each family is as follows (in order from families 1 to 6): 1.1%; 1.9%; 2.6%; 4.6%; 5.2%; and 2.6%. The average hairpin formation value for each family was obtained in the same way. Each family percentage was plotted versus the nonproductive rearrangement frequency for each of the six families. The correlation coefficient was determined, and a probability value, p, was calculated, as given in Table I.
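As a rough illustration of this family-level analysis, the calculation can be scripted as below. The nicking values are the per-family averages quoted above and the family sizes are taken from the text, while the observed nonproductive-usage percentages are left as an input to be filled in from the cited single-cell PCR studies; this sketch is for illustration only and is not the authors' analysis code.

```python
import numpy as np
from scipy import stats

def family_level_correlation(avg_value, family_size, observed_usage_pct):
    """Correlate aggregate per-family cleavage percentages with observed
    nonproductive VH family usage (all arrays of length 6, families 1-6)."""
    family_total = np.asarray(avg_value) * np.asarray(family_size)  # sum of members = mean * size
    family_pct = 100 * family_total / family_total.sum()
    r, p = stats.pearsonr(family_pct, observed_usage_pct)
    return family_pct, r, p

# Per-family average nicking (%) and family sizes, as quoted in the text
avg_nicking = [1.1, 1.9, 2.6, 4.6, 5.2, 2.6]
sizes = [9, 3, 19, 6, 1, 1]
# observed = [...]  # nonproductive usage (%) from the single-cell PCR studies (refs. 49-51)
# print(family_level_correlation(avg_nicking, sizes, observed))
```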
RESULTS
Experimental Strategy for Assessing the V H Array in Paired 12/23 Cleavage Reactions-The sequence alignment of all 39 functional human V H elements in the region relevant to V(D)J recombination (the heptamer and nonamer of the recombination signal sequence and the 15 bp of coding flank) illustrates that only five of the heptamer/nonamer signals conform to the CACAGTG/ACAAAAACC consensus (Fig. 1) (39). These five are V H 3-9, 3-43, 4-34, 4-39, and 4-59. The most common sequence of the V H elements is shown in the top line, and this sequence differs from the optimal heptamer/nonamer in the fourth position of the nonamer. Five of the 39 V H elements have heptamers that deviate from the optimal sequence, and 33 V H elements have nonamers that deviate from the optimal sequence. The effects of these deviations on recombination efficiency cannot be reliably estimated from the current knowledge of the limited number of tested signal sequence variations.
The V H elements have previously been grouped into six families based on the V H coding sequence (39). Within each of the six V H families, the 23-bp spacer region between the heptamer and the nonamer is highly homologous. The most distinct deviation is seen for the nine members of family 1, which, by comparison, have unique nonamer sequences and spacers relative to members from the other V H families.
Previous work has demonstrated that the RAG cleavage efficiency of each substrate is determined by its similarity to the optimal signal sequence (40,41) as well as by the coding end sequence adjacent to the signal (42). These biochemical cleavage efficiencies correspond very well with cellular V(D)J substrate quantitation, in those cases where equivalent substrates have been compared (12,42). We determined the RAG cleavage efficiency of each human V H element by using oligonucleotide-based DNA substrates, synthesized to correspond to the published sequences of human functional V H elements. Each V H substrate contains a 15-bp flank on the heptamer side (coding flank) and a 5-bp flank on the nonamer side. Previous studies have documented that these lengths of extension beyond the heptamer/nonamer signal are sufficient to recapitulate any surrounding sequence effects (42,43). One strand of each double-stranded oligonucleotide was labeled such that the nicked product and the hairpin product could be distinguished from the substrate on a denaturing polyacrylamide gel (42,44). Cleavage was carried out with purified human RAG proteins with Mg2+ as the divalent cation. A 12-bp RSS partner substrate is necessary for the 12/23 coupling at the hairpin formation step (8), and this must be the same partner for all of the V H substrates to allow comparison. We chose D H 4-4 (also called DA4), which is efficiently cleaved by the human RAG complex (data not shown). However, it is important to note that the nicking step is independent of the presence or absence of a partner signal (44).
Each V H substrate was assayed in replicate sets of experiments. To compare the different sets, an optimal 23-bp spacer substrate (KY36/37) was included in each set as a standard to normalize for any slight variations of cleavage efficiency. Human core RAG1 and RAG2 were used rather than the more commonly used murine core RAG1 and 2, because we were interested in establishing the relative cleavage efficiencies of the human V H repertoire.
The RAG Cleavage Efficiency of Human V H Targets Varies Markedly-The reaction time courses conformed to the expected kinetics for nicking and hairpin formation (41,44,45), and the products increased over the initial 50 min when incubated at 37°C (Fig. 2). Cleavage efficiency was determined as the percentage of the substrate that is converted to the nicked or hairpin product at the 30-min time point. Although we used initial reaction rates in our previous study (44), the 30-min time point permits greater precision when comparing 39 different substrates. The nicking and hairpin formation of a subset of the human V H elements are illustrated in Fig. 3. Each V H element was analyzed in two independent experiments in duplicate, and the degree of concordance between measurements was quite good, as reflected in the standard deviations (Fig. 1).
The nicking and hairpin formation efficiency of the V H substrates varies markedly, depending on the sequence of the substrate. This is true even for members of the same family for those cases where there are differences in the signal sequence or coding end. The difference in nicking efficiency between the highest (V H 4-34) and lowest V H (V H 1-58) is 28-fold. Three of the six V H family 4 members (4-34, 4-39, and 4-59), the one family 5 member, and six of the 19 family 3 members have the highest cleavage efficiencies (Fig. 1). Not surprisingly, all five of the V H elements with optimal heptamer/nonamer signals (V H 3-9, 3-43, 4-34, 4-39, 4-59) are among the highest efficiency RAG-nicking targets. The most common deviation of the V H elements, the C in the fourth position of the nonamer, did not markedly reduce recombination efficiency as illustrated by the fact that V H 3-30, 3-53, and 3-66 are nicked at efficiencies that are only 1.1-to 2.6-fold lower than the V H elements with optimal heptamer/nonamer signals.
V H segments with substantial deviations in the heptamer or nonamer, such as V H 3-72, and most of the members in family 1, had lower cleavage efficiencies (Fig. 1). Some of the individual V H substrates vary only slightly from the consensus, and yet have large reductions in cleavage efficiency. For example, V H 3-20 has a consensus heptamer and only a one-nucleotide deviation from the consensus nonamer, and yet it is relatively low in cleavage efficiency.
Conversely, large deviations from the consensus are not necessarily associated with large reductions. For example, V H 5-51 has three nucleotide deviations from the consensus nonamer, and yet it is cleaved efficiently by the human RAG complex. However, V H 1-58 also has three nucleotide deviations from the consensus nonamer (at positions different from those of V H 5-51), and yet its cleavage is 20-fold reduced relative to V H 5-51. These instances illustrate the lack of predictability when deviations from the consensus are present and illustrate that direct measurements of the cleavage or recombination efficiency are essential. The one to three nucleotides of the coding end that are directly adjacent to the heptamer have been previously demonstrated to affect V(D)J recombination (12)(13)(14), and this effect has been traced, at the biochemical level, to the nicking step (42). Here we see evidence of this when comparing V H 3-15 with V H 3-73 (also compare V H 4-34 with 4-59), where the only differences are not in the signal but in the coding end (Fig. 1).
Sequence variations in the spacer region or deep into the coding flank (more than three nucleotides from the nick site) generally show little effect on cleavage efficiency (compare V H 3-7 with 3-23, or 3-64 with 1-3, Fig. 1), consistent with previous findings that variations at these positions do not affect recombination efficiency in any large way (12, 15, 46 -48). Nevertheless, limited effects of spacer sequence may be observable (e.g. compare V H 3-23 or 3-15 with 3-30).
The hairpin formation efficiency also varies over nearly a 28-fold range. In general, there are no marked disparities between the nicking and hairpin formation results (see "Discussion").
The Effect of Competition between Different V H Elements-At the genomic antigen receptor locus in the cell, the RAG proteins can bind at the signal sequence of any of a number of different V H elements; hence, there is potential competition between the elements. We were interested in whether the cleavage studies that we had done would be affected by competition or whether the V H substrates would be cleaved independently. We tested this by first determining the cleavage of V H substrates when individually paired with the D H element, as described above. We then tested two V H substrates with the D H element and followed the hairpin formation of these two V H substrates in the same reaction (Fig. 4). We were able to follow the hairpin formation of two V H substrates in the same reaction, because hairpins of different sequence have different gel mobility, despite having the same length.
In the first set of experiments, we chose two V H elements that are cleaved with comparable efficiency (Fig. 4, lanes 1-3).
In the second set of experiments, we chose two V H elements that have a 5-fold difference in hairpin formation efficiency (Fig. 4, lanes 4 -6). We find that the V H substrates are cleaved in this competitive study in a manner that was indistinguishable from their noncompetitive cleavage (Fig. 4, compare lane 3 with lanes 1 and 2; also compare lane 6 with lanes 4 and 5). Therefore, the V H substrates are cleaved independently, and in vitro competition does not affect their cleavage efficiency.
Analysis of RAG Cleavage Relative to the Observed Unselected Repertoire-We were interested in comparing the data observed in vitro with the repertoire observed in the human adult peripheral blood. The nonproductive rearrangements among B cells in the peripheral blood are the most appropriate comparison, because these represent recombination events prior to any immunological selection. Other investigators have done measurements of V H usage among nonproductive rearrangements at the human heavy chain locus by using single-cell PCR (49 -51). The number of events is insufficient to evaluate each individual V H element, but comparisons can be done at the level of each family. To compare our cleavage data with those from single-cell PCR, we first calculate the cleavage efficiency of each individual V H relative to the sum of the 39 V H elements. We then add the members for each family. We find that this aggregate family nicking or hairpin formation efficiency matches the data on the usage frequency of nonproductively rearranged V H elements surprisingly well (for nicking, p < 0.001; for hairpin formation, p < 0.01; Table I).
The usage frequencies calculated from our cleavage data as well as from the in vivo nonproductive rearrangement data (49,50) are not merely a reflection of the V H family size. For example, family 1 has nine members and family 4 has only six. Yet family 4 has higher cleavage values and in vivo usage (Table I). If all V H elements were used with equal frequency and the repertoire were shaped based only on family size, one would expect family 1 to contribute 23.1% and family 4 to contribute 15.4% of the repertoire (Table I). This is not the case for either our in vitro measurements or the in vivo nonproductive repertoire measurements (49,50).
The similarity, at the family level, between our cleavage data and the observed peripheral unselected repertoire may provide a mechanistic understanding for the V H usage frequencies in vivo. These in vivo frequencies may simply reflect how well the V H elements are cleaved by the RAG complex. If so, this could be of considerable practical significance. For example, with a clearer understanding of the repertoire generation, deviations from the baseline repertoire of V H usage might be more easily recognized and may be useful as a very early indication of monoclonal gammopathies due to B cell malignancies.
That the nicking efficiencies for the V H families are very similar to the nonproductive repertoire in the peripheral blood suggests that the rate-limiting step for V(D)J recombination may be (a) the nicking step, (b) the preceding step in which the RAG complex binds the substrate, or (c) a combination of these two steps. If any of the subsequent steps (hairpin formation, hairpin opening, or any of the steps of nonhomologous end joining) were rate-limiting, then the nicking efficiencies would not be expected to so closely match the nonproductive peripheral repertoire.
Biological Significance of a Predictable Initial Repertoire-The similarity of the RAG cleavage efficiencies and the nonproductive pre-immune repertoire is interesting from the standpoint of factors that affect the V H usage on human chromosome 14. The nonproductive pre-immune repertoire could conceivably be a reflection of more factors than simply RAG cleavage efficiencies, including factors such as chromatin structure (histone acetylation and CpG methylation) and local transcription. The fact that the in vitro RAG cleavage efficiencies match those in the nonproductive pre-immune repertoire means that if factors other than RAG cleavage efficiency are anything other than negligible, then they may be offsetting each other.
The V H families 2, 5, and 6 are the clearest in suggesting that the RAG cleavage efficiencies are the dominant factor in dictating the representation in the peripheral blood repertoire. Families 5 and 6 have only one member each. V H 5-51 and V H 6-1 are used in proportion to their RAG cleavage (Table I), even though they are located 630 kb apart in the human genome. The three members of family 2 are spread across a similar distance of the V H array, and each has very similar RAG cleavage efficiencies in our experimental system. We find that the representation of family 2 in the peripheral blood also is in proportion to its cleavage efficiency (Table I). If RAG cleavage efficiency is a dominant factor determining V H element recombination, it suggests that the chromatin structure across the 1 megabase V H array does not vary dramatically, at least as it affects V(D)J recombination.
Our results are interesting in light of data from an independent line of work. When most of the human IgH array (35 functional V H ) was randomly integrated (via a yeast artificial chromosome) as a transgene into the germline of mice, the rearrangement of the V H gene families was surprisingly similar to that seen in humans (52). These studies were done on productively rearranged alleles. Hence, immunological selection in the mouse versus humans complicates the analysis. Nevertheless, in light of our work, the correspondence between the human unselected (nonproductive) repertoire and a transgene that is randomly integrated in a different species suggests that V H cleavage efficiencies are a dominant factor in shaping the repertoire.
As mentioned earlier, in neonatal mice, it has been observed that V H segments in proximity to the J H cluster are used more frequently than the distal V H segments (28 -31). In adult humans, there is no data to suggest any proximity preference within the Ig heavy chain locus (28 -30). Therefore, there is no contradiction between our suggestion that recombination signal strength could be a major factor and the murine fetal data on proximity as a major factor, because fetal mice and adult humans are quite different in this regard. This difference between mice and humans is not surprising. There are other examples where Ig repertoire diversification has been achieved by quite different mechanisms. For example, the diversification of IgH complementarity-determining region 3 involves generation of a D protein, whereas there is no D in human, and quite different mechanisms are utilized to ensure diversification of this portion of the heavy chain (53).
Evolution of the Human V H Array and the Individual Recombination Efficiencies-If the RAG cleavage efficiency is a major determinant for the frequency of usage of V H elements, then it is reasonable to assume that the signal (and coding end sequences) evolved so as to optimize the level of each V H in the repertoire as needed to handle the threat of prevailing pathogens. Hence, there may have been two levels of evolutionary pressure at the DNA sequence level, one being the sequence of the V and the other being the efficiency of the signal (and adjacent coding end) for cleavage (54). The sequence of the V H coding region determines the range of antigens bound, whereas the sequence of the coding end and heptamer/nonamer determine the abundance of that V H element in the steady-state repertoire. Insofar as that steady-state repertoire is the initial response to an invading microbe, the balance of V H elements in that repertoire is important. It is intriguing that the relative ratios of family usage in this initial Ig heavy chain repertoire may be predictable from the relative RAG cleavage efficiencies in a manner that appears uncomplicated by other factors.
|
v3-fos-license
|
2018-12-16T16:29:10.313Z
|
2018-02-11T00:00:00.000
|
55631482
|
{
"extfieldsofstudy": [
"Computer Science"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "http://downloads.hindawi.com/journals/jat/2018/1732091.pdf",
"pdf_hash": "6a51494908a50e4a952f8ebc23d1fcc2bd8bff8f",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46682",
"s2fieldsofstudy": [
"Business"
],
"sha1": "b741aab498699d31bb8704ef463e9c2a92216b57",
"year": 2018
}
|
pes2o/s2orc
|
Minimizing the Impact of Large Freight Vehicles in the City : A Multicriteria Vision for Route Planning and Type of Vehicles
The impact of freight transport in cities is significant, and correct planning and management can help reduce its considerable negative effects. Above all, special large vehicles have a greater impact than the remainder of freight vehicles, so particular attention should be paid to them. Vehicles which supply or pick up large amounts of goods at specific points throughout the city are an example of this type of vehicle. The aim of this paper is to minimize the cost of this type of freight transport from a social, economic, and environmental viewpoint. To this effect, an optimization model based on bilevel mathematical programming is proposed which minimizes the total system costs. City network model data, such as vehicle flows and travelling times, are obtained on the lower level and then used on the upper level to calculate total system costs. The model has been applied to a real case in Santander (Spain), whose final result shows the size and typology of the vehicle fleet necessary to have the least impact on the city. The larger the vehicles (i.e., the fewer trucks used), the lower the cost of the freight transport.
Introduction
Freight transport in the urban environment has a great impact on the city, that is, increased congestion, pollution, costs, and so on. Many researchers have studied the impact of different policies on freight transport [1][2][3]. In some cases, access to the city centre is restricted to reduce social, environmental, and economic impacts [4].
Different criteria can be used to choose which typology of vehicles can or cannot enter the city centre, such as weight limits in Santander [5] or vehicle emissions in the case of Rome [6]. However, there are exceptional situations where this kind of vehicle is allowed to drive through the city centre due to specific circumstances (e.g., material delivery and pick-up on a construction site). This is why, in situations where the presence of freight transport increases at a specific point, whether through an increase in vehicle frequency and/or typology, consideration should be given to reducing its impact on the city. This paper studies the impact of the presence of these vehicles, which are less common in city centres.
This paper builds upon prior research by the authors on freight transport simulation and optimization models. The aim is to go one step further and take a closer approach to the reality extant in cities; hence, unlike prior studies [7], this one researches the impact of an increase in freight transport during a specific period of time, taking into account variable vehicle capacity and fleet.
This problem can be approached as a supply chain problem when materials need to be delivered to a certain point following a specific schedule. Numerous researchers have modelled and simulated the supply chain [8]. Reiner and Trcka [9] modelled and designed the supply chain structure for a food company. Guo et al. [10] carried out a study where they had to minimize the system costs (construction, operation, information processing, and transport costs, and likewise coal tax) for a route-planner model for fresh food e-commerce companies. Others, however, have undertaken more innovative studies, such as Smart supply chains [11] or how Smart Cities affect the supply chain [12].
Furthermore, a bilevel methodology has been applied to minimize the impact of large freight vehicles. Bilevel optimization is a kind of optimization that allows one problem to be nested within another, so it can explicitly represent the mutual action between the upper level and the lower level. In addition, it allows two decision levels to be solved. For example, in the upper level, the freight systems planner makes the decisions, and, in the lower level, the private transport users make decisions based on the decisions of the first level. In essence, the bilevel model allows considering how heavy truck traffic affects traffic throughout the city. Moreover, bilevel programming can be used to analyse two different objectives and can reflect some practical problems better [13]. Bilevel optimization methodology has been used by many transport researchers to study different subjects, such as public transport optimization (e.g., best bus stop locations [14], the relationship between transport and residence [15]), and it has been used in freight transport optimization for planning the delivery of supplies to large public infrastructure works [16]. In this study, as in Romero et al. [7], the bilevel optimization takes environmental, economic, and social costs into account in the upper level, while there are other studies where only the economic cost has been considered [17].
Freight vehicles are the land transport vehicles with the greatest impact on pollution in cities; nevertheless, the movement of all vehicles must be considered regarding environmental contamination in the urban area. Transport significantly contributes to the major pollutant emissions (NOX, NMVOC, CH4, PM, and CO2), which is why they have been considered when studying the environmental impact of transport, as Romero et al. [7] did. It is worth highlighting that pollutant emissions from all transport sectors have dropped considerably since 1990, despite the general increase in vehicle movement [18]. Adams et al. [19] described existing initiatives which endeavoured to improve the quality of data explaining air quality and pollutant emissions; air quality monitoring is an example [20]. However, knowing the vehicle emission models is important when planning transport routes if we wish to reduce pollutant emissions [21]. Wang et al. [22] researched gas dispersion characteristics and particles due to traffic in Hong Kong, considering CO and PM 2.5.
Freight transport optimization models can be studied considering one or multiple viewpoints. Behrends et al. [23] explain that sustainable urban freight transport (SUFT) must take into account three points of view: social, economic, and environmental. For that reason, these three points have been considered in this paper, as well as in Romero et al. [7]. Other studies, however, have applied only one or two of these viewpoints. Yan et al. [24] designed a model whose sole aim was to minimize operation costs, excluding social and environmental costs. Nevertheless, many other studies did consider social and environmental costs. He et al. [25] built a multipurpose model for logistics network planning to minimize total logistics network costs and likewise total carbon emissions. Browne et al. [2] reviewed the initiatives implemented by local authorities to reduce the environmental and social impact of urban freight transport in cities in 4 countries (UK, Japan, the Netherlands, and France).
To summarise, this paper studies the impact of an increase in large freight vehicle movement within a city during a certain period of time, when these vehicles have a specific goods delivery/pick-up point. The study has been carried out from an economic (operating costs), social (congestion), and environmental (pollutant emissions) point of view.
This problem has been studied by other researchers. Moura et al. [16] propose an optimization-simulation model for planning the transport of supplies to large public infrastructure works located in congested urban areas. The difference lies in that the methodology developed in this paper considers different types of fleet (in size and typology) to minimize the total cost of the system, which Moura et al. do not consider. The introduction and state of the art have been presented in this section; the rest of the paper is arranged as follows: Section 2 describes the methodology used, Section 3 applies the proposed methodology to a city and presents the main results, and, finally, Section 4 sets out the main conclusions.
Methodology
This paper presents a model to optimize the management and planning of large freight vehicles (lorries) used to load and unload large quantities of goods in city centres. The model considers a number of potential routes, which are defined by the space and turning radius restrictions required by vehicles of this kind.
The purpose of the model is to determine how journeys are distributed across the different routes, and to define fleet capacity and size, so as to minimize economic, social, and environmental costs. The lorries interact with the other transport modes travelling in the city, such as cars, buses, and smaller freight vehicles (vans or light trucks), which must be considered in the modelling and calibration of the network to which the optimization model is applied. The optimization model considers the total cost of the system, that is, social costs comprising bus and car user costs, bus operation and freight vehicle costs, and likewise the environmental costs of all vehicles.
The optimization model is based on the application of a bilevel mathematical program (Figure 1) [7,16,29] to find the best alternatives from an economic, social, and environmental viewpoint. From the lower level, via the city network model, vehicle flows, access, waiting and travelling times, and so on are obtained and then used at the upper level. At the upper level, an exhaustive search is used to evaluate all the possibilities and retain those solutions that minimize the total system cost, defined as the sum of the operating, user, and environmental costs (Cop + Cu + Cma). The operating cost is Cop = CopB + CopTr, where CopB is the bus operating cost and CopTr is the truck operating cost. Bus operating costs (CopB) are made up of three factors: a cost proportional to travelled distance (CR), personnel costs (CP), and fixed costs (CF).
The total cost due to the distance travelled by the buses is CR · Σi (li · fi), where CR is the unit cost per kilometre covered by bus, li is the length of route i, and fi is the frequency of route i. Employee costs are calculated considering only the personnel who are actually working on the buses: CP · Σi (tci / hi), where CP is the hourly employee cost, tci is the time of a round trip on route i, and hi is the headway on route i.
Fixed costs are calculated with a formula that considers only the buses actually circulating: CF · Σi (tci / hi), where CF is the fixed cost per hour of bus, tci is the time of a round trip on route i, and hi is the headway on route i. The environmental costs (Cma) were calculated for the different alternatives considering 5 types of pollutants (p): NOx, NMVOC, CH4, PM2.5, and CO2, as well as the different vehicle typologies (V): petrol cars, diesel cars, buses, heavy lorries (HGV), medium lorries, and lightweight lorries.
where Cma is the total environmental cost, obtained from the amount of each pollutant p and the environmental cost of that pollutant; km cong,V is the distance travelled on the congested network per vehicle type V; km uncong,V is the distance travelled on the uncongested network per vehicle type V; Consum.cong,V and Consum.uncong,V are the fuel consumption per vehicle type V on the congested and uncongested network, respectively; a conversion factor (kg per litre of fuel) is applied per vehicle type V; and Emissions p,V is the emission of pollutant p per vehicle typology V. Firstly, the fuel consumption per alternative is obtained; from this, the emission of each pollutant type is calculated (Table 1). The emission of pollutants depends on how congested the network is, the distance travelled, and the vehicle typology [26]. Finally, the emissions are converted into monetary terms (Table 2).
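To make the cost aggregation and the upper-level exhaustive search more concrete, the following sketch (in Python) enumerates every allocation of a homogeneous truck fleet to the three routes and keeps the cheapest one. It is only an illustration of the procedure described above: the per-route unit costs are invented placeholders standing in for the congestion-dependent outputs of the lower-level network model, and the function names are not taken from the study.

```python
# Illustrative sketch (not the authors' code) of the upper-level exhaustive search:
# every allocation of a homogeneous truck fleet to the three routes is scored by
# operating + user + environmental cost. The per-route unit costs are placeholders
# for what the lower-level traffic model would return.
from itertools import combinations_with_replacement
from collections import Counter

ROUTES = ("R1", "R2", "R3")
UNIT_COST = {          # hypothetical per-truck-trip costs in euros: (Cop + Cu, Cma)
    "R1": (40.0, 12.0),
    "R2": (55.0, 14.0),
    "R3": (45.0, 22.0),
}

def total_cost(allocation):
    """Total system cost of sending allocation[r] trucks along each route r."""
    op_user = sum(n * UNIT_COST[r][0] for r, n in allocation.items())
    env = sum(n * UNIT_COST[r][1] for r, n in allocation.items())
    return op_user + env

def exhaustive_search(fleet_size):
    """Enumerate every split of the fleet over the routes and keep the cheapest."""
    best_alloc, best_cost = None, float("inf")
    for combo in combinations_with_replacement(ROUTES, fleet_size):
        allocation = Counter(combo)          # e.g. Counter({"R3": 9, "R1": 1})
        cost = total_cost(allocation)
        if cost < best_cost:
            best_alloc, best_cost = dict(allocation), cost
    return best_alloc, best_cost

print(exhaustive_search(fleet_size=10))
```

In practice the placeholder cost dictionary would be replaced by calls to the lower-level network model, which returns the flows and travel times from which operating, user, and environmental costs are computed per scenario.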
Case Study
The proposed methodology has been applied to a real case in Santander, a city in northern Spain. Santander is a medium-sized coastal city with approximately 180,000 inhabitants; approximately 75% of the city is bordered by water, so it has few access points.
A large construction project in the southeast of the city required the delivery of 180 cubic metres of building material during the rush hour. This is the reason for studying the economic, social, and environmental impact caused by an increase in demand for large freight vehicles. The building material would be transported by a homogeneous vehicle fleet, selected from several typologies characterized by their maximum capacity and speed (Table 3).
Beginning with the Santander network model data [29], three different routes were considered: the first (R1) goes along roads with two lanes per direction, except for an 800-metre tunnel at its end; the second (R2) differs from R1 only in that, instead of going through the tunnel, it skirts the city using coastal one-lane-per-direction roads; and the third (R3) travels through streets that, although they have two or three lanes per direction, are also the most congested ones in the city. See Figure 2.
Each route has two legs, which contribute to its total travel time and length: an urban one (inside Santander) and an external one (from the quarry to the outskirts of the city). First of all, model behaviour was verified by studying all possible ways of moving the vehicle fleet from the quarry to the building site via the 3 predefined routes. The vehicle fleet may be of different typologies, as already mentioned (Table 3); furthermore, 3 different fleet sizes were considered, namely 10, 15, and 20 vehicles. Application of the model yielded the operation, user, and environmental costs per scenario in monetary terms (Figure 3). The difference between scenarios is the number of vehicles travelling along each route.
This figure shows that the model behaves as expected; that is, for the same fleet size, the combined user and operator costs increase the heavier and slower the lorry is, whereas environmental costs increase when the fleet consists of smaller lorries. As might be expected, costs also increase with the size of the fleet for the same lorry typology. Moreover, as can be seen per scenario, if the majority of the fleet takes route 3 (R3) (blue dots at the top of the graph), environmental costs are higher than if routes 1 and/or 2 are taken, whereas if the majority of the fleet uses route 2 (R2), the combined operation and user costs are higher. In terms of both the combined user and operator costs and the environmental costs, the best solutions correspond to those scenarios where most of the fleet uses route 1 (red dots).
Figure 4 shows the total costs for the different fleet size and typology combinations, indicating the total average cost per combination and likewise the total maximum and minimum cost.
Getting back to our study case where the building site required delivery of 180 cubic metres during the rush hour, per vehicle typology, the solutions with lower costs are shown in Figure 4. Figure 5 shows these 3 possibilities in greater detail together with the number of vehicles per route for the average cost.
If the planner needs the solution that minimizes environmental costs, he should sweep a line parallel to the x-axis (α = 0°), starting at Cma = 0, and choose the first solution it touches, which corresponds to sending 10 heavy trucks through R1. If the objective were to minimize Cop + Cu, the process would be similar, but using a line parallel to the y-axis instead (α = 90°). In this case, we should also use 10 heavy trucks, with ten of them going through R3. If only medium or light trucks were available, to minimize Cop + Cu most of the fleet should travel through R3 or R1, respectively. Other intermediate Pareto solutions, with different emphases on the environmental or on the user and operator costs, can be found using values of α between 0° and 90°. Figure 5 shows which value of α to use if Cma and (Cop + Cu) had the same importance. Figure 6 shows, for the 3 vehicle typologies (heavy, medium, and light), the minimum, average, and maximum costs needed to move the building material from the quarry to the site, together with the number of lorries that would use each of the 3 available routes. Regarding total costs, it can be seen that using a fleet of 10 heavy trucks results in significantly lower total costs (minimum as well as average or maximum). There is little difference between using a fleet of 15 medium trucks or 20 light trucks, the former being cheaper than the latter in terms of minimum and average costs, while slightly more expensive when considering maximum costs. In conclusion, if a fleet of 10 heavy trucks is available, it is the least costly option from the environmental, economic, and social points of view.
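One possible reading of the line-sweep selection described above is a simple scalarization of the two cost axes, sketched below. This is an interpretation rather than the authors' implementation; the candidate solutions and the convention of plotting Cop + Cu on the x-axis and Cma on the y-axis are assumed for illustration.

```python
# Hypothetical sketch of the line-sweep choice over Pareto solutions. Each candidate
# is a point (Cop + Cu, Cma); a line at angle alpha (degrees from the x-axis) swept
# outward from the origin first touches the point minimizing
# sin(alpha)*(Cop + Cu) + cos(alpha)*Cma, so alpha = 0 picks the minimum-Cma solution
# and alpha = 90 the minimum-(Cop + Cu) solution.
import math

def pick_solution(candidates, alpha_deg):
    a = math.radians(alpha_deg)
    return min(candidates, key=lambda c: math.sin(a) * c[0] + math.cos(a) * c[1])

# Invented candidate solutions: (Cop + Cu, Cma) in euros.
solutions = [(5200.0, 900.0), (5600.0, 760.0), (4900.0, 1150.0)]
print(pick_solution(solutions, alpha_deg=0))   # environmental optimum
print(pick_solution(solutions, alpha_deg=90))  # operator + user optimum
print(pick_solution(solutions, alpha_deg=45))  # equal weighting of both criteria
```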
Conclusions
This paper presents a model to optimize the management and planning of special freight vehicles coming from outside the city to a specific point in the centre, minimizing their impact on those occasions when large quantities of goods are moved. The strength of the model is that it is based on bilevel mathematical programming, which makes a more realistic reflection of the practical problem possible by allowing variables representing aspects of the city network to be nested within the cost minimization problem.
The model was applied to Santander (Spain) for the case where a large quantity of building materials had to be delivered to a specific point in the city characterized by a high mobility rate of both private and public transport, due to a large building site.In addition to selecting their typology, vehicles may also be assigned to different routes to minimize total system costs.
The model provides solutions for different vehicle typologies.The most economical solution considering the total system costs, that is, user, operation, and environmental costs, would be to use 10 HGVs, 1 vehicle via route R1 and the other 9 via route R3.However, if the purpose is to find the best solution from an environmental viewpoint, then these 10 HGVs should use route R1.Nevertheless, should only lightweight vehicles be available, then the best environmental solution would be to assign 15 vehicles to route R1 and 5 to route R3; and regarding total costs the best solution would be 10 vehicles via R1, 9 via R2, and 1 via R1.
In conclusion, the proposed model shows that the management and planning of large vehicles reduce their impact on cities from a social, economic, and environmental viewpoint. The selection of both the vehicle typology and the routes to be followed is important in reducing the costs that the movement of these vehicles causes.
Figure 6: Minimum, average, and maximum cost for the three vehicle typologies.
|
v3-fos-license
|
2020-12-10T09:07:02.640Z
|
2020-12-04T00:00:00.000
|
230642031
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/2076-3417/10/23/8693/pdf",
"pdf_hash": "52b68cf3412613acd3a38153f4ec3e80868314c2",
"pdf_src": "Adhoc",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46684",
"s2fieldsofstudy": [
"Environmental Science"
],
"sha1": "8626c3292a0ad84a98816948bbb41f780952f99f",
"year": 2020
}
|
pes2o/s2orc
|
A 5-Years (2015–2019) Control Activity of an EU Laboratory: Contamination of Histamine in Fish Products and Exposure Assessment
Featured Application: This work offers an overview of the quality of fresh fish and fish products consumed in Puglia and Basilicata regions (south part of Italy), analyzing their histamine content. Indeed, the recent Italian reports released by the Italian Ministry of Health, concerning the official food control activities, indicated the biogenic amines (mainly histamine) as the analyte class with the highest percentage of non-compliant samples among chemical contaminants. The histamine concentrations detected were elaborated for each type of seafood analyzed, obtaining useful information on the overall quality of fish products. The exposure assessment was also developed. This elaboration also gives useful parameters to other scientists who wish to carry out more extensive risk assessment studies. Abstract: Histamine contamination was evaluated on 474 batches (3130 determinations) of fish products collected in Puglia and Basilicata (southern part of Italy) during the years 2015–2019, using a high-throughput two-tier approach involving a screening (ELISA test) and a confirmatory method (HPLC/FLD with o-phthalaldehyde derivatization). Histamine concentration > 2.5 mg kg −1 was detected in 51% of the total batches, with 2.5% of non-compliance. Except for two samples of fresh anchovies, all non-compliant samples were frozen, defrosted and canned tuna. Among 111 fresh tuna batches, 9 had a histamine content between 393 and 5542 mg kg −1 , and scombroid poisoning cases were observed after their consumption. Good quality of the canned tuna and ripened anchovies sold in Italy was observed. Furthermore, an analysis of the critical points of processing technology and storage practice is reported in this study, with useful considerations to minimize the histamine risk for consumers. Finally, based on these results, several considerations about risk exposure are reported.
Introduction
In the last decade, a sharp increase in the marketing and consumption of fresh fish and fish products has been observed [1,2]. EFSA [3] and WHO [4] recommend a consumption of 1-2 fish-based meals per week as fish is essential for a complete diet, due to its high content of proteins, free amino acids and other health-enhancing components such as vitamins, minerals and omega-3 fatty acids [5]. At the same time, the high level of water in the tissues (80%), a low percentage of connective tissue in the muscles and the poorly acidic pH of the tissues make fish easily subject to microbial attack and decomposition. Fraudulent treatments can also preserve the red colour of the tissues, masking deterioration and HIM formation. In the spring of 2017, more than 150 people in Spain were affected by scombroid poisoning after tuna consumption [20]. In the Puglia and Basilicata regions (south part of Italy), in the same period, several cases of intoxication after the consumption of tuna-based meals were reported. Tuna samples used for meal preparation, seized by the competent authorities and analyzed in our laboratory, were found to be non-compliant with the HIM limits fixed by the EC. These cases are discussed in the present study, which gives an overview of the quality of fresh fish and fish products consumed in the Puglia and Basilicata regions. In this study, a contribution to the evaluation of exposure to HIM from fish consumption is reported. The results obtained by analyzing fresh, canned and ripened fish products for the determination of HIM are described and elaborated as risk exposure. The analyses were carried out within the official control plans entrusted to the Istituto Zooprofilattico Sperimentale della Puglia e della Basilicata (IZS-PB) over the last five years (2015-2019), using validated and accredited analytical techniques.
Sample Collection
Data about the content of HIM in fish products were obtained from official control analyses performed on 474 batches (3130 determinations) of imported and national fish products collected in Puglia and Basilicata (Italy) in the years 2015-2019. Generally, each batch was collected by the technicians of the local Health Service and border control authorities in nine sample units, as indicated in the Commission Regulation (EC) 2073/2005 [15]. However, in specific cases (suspected poisoning episode or sampling in the retail market) only a sample unit was taken. The samples were subdivided into three categories: 294 (62%) fresh/frozen and defrosted products-yellowfin tuna (Thunnus albacares, Bonnaterre 1788), skipjack tuna (Katsuwonus pelamis, Linnaeus 1758), bluefin tuna (Thunnus thynnus, Linnaeus, 1758), anchovies (Engraulis encrasicolus, Linnaeus, 1758), mackerel (Scomber scombrus, Linnaeus, 1758), sardines (Sardina pilchardus, Walbaum 1792), herring (Clupea harengus, Linnaeus, 1758), other kind of fish as swordfish (Xiphias gladius Linnaeus, 1758), salmon (Salmo salar, Linnaeus, 1758), and other species as gadidae and cephalopods; 119 (25%) canned products (mainly tuna and mackerel), and 61 (13%) ripened products (mainly anchovies and sardines). Canned tuna samples-made with skipjack tuna or yellowfin tuna-and canned mackerel samples, in water or in oil, were stored at room temperature until analysis. The majority of tuna samples were in 80 g cans while mackerel samples were in 125 g cans. The only non-compliant sample among the canned products was commercialized in a 2500 g can. The fresh fish samples were brought to the laboratory in cold storage conditions and stored at −20 • C until analysis.
The dietary exposure estimates are usually derived from the minimum, mean and maximum residue levels found in each category of product monitored. These levels, combined with representative data about food consumption, generate three different exposure scenarios (low, average and high) that represent the likely exposure across a population [22].
During the survey, several samples with high HIM concentrations (up to 5542 mg kg −1 ) were registered. Obviously, such high concentrations are responsible for acute poisoning, and a risk assessment under a high exposure scenario loses its meaning. Consequently, the risk assessment was developed taking into account the average exposure scenario. Moreover, in order to give a more meaningful contribution to the evaluation, the HIM concentrations detected during monitoring were subdivided into 6 ranges (C ≤ 2.5 mg kg −1 , 2.5 mg kg −1 < C ≤ 10 mg kg −1 , 10 mg kg −1 < C ≤ 50 mg kg −1 , 50 mg kg −1 < C ≤ 100 mg kg −1 , 100 mg kg −1 < C ≤ 200 mg kg −1 , C > 200 mg kg −1 ) and the risk assessment was also developed taking into account the mean concentration registered in the range with the highest amount of data (most likely scenario).
All samples monitored during this survey were commercialized in Italy, so the reference data about fish consumption were obtained from the INRAN-SCAI 2005-06 survey [23][24][25], which provides the mean consumption of the Italian population for the following categories of seafood: fresh and frozen, preserved (canned and ripened fish) and overall. For each category, data on 5 subgroups of the population were available: infants (0-2 a), children (3-9 a), adolescents (10-17 a), adults (18-64 a), the elderly (65-97 a). Among these 5 subgroups, only the last 4 were taken into account for the risk assessment, since the estimates relating to the first one (infants) were not representative (n < 30) [26]. The risk assessment was elaborated using data related to the whole population and to seafood consumers, since the reference document supplied this type of information.
A special focus was reserved for canned tuna, which is the fish species mainly analyzed in this survey and, at the same time, the most consumed marine species in Europe, with an average annual consumption equal to 2.78 kg per capita in 2016, corresponding to 11% of all seafood consumed [27]. Given the Italian consumption of canned tuna, recently estimated at 2.5 kg per capita [28], the exposure assessment was also carried out for canned tuna consumption.
For estimating the exposure and the resultant risk, under a probabilistic approach, the HIM no-observed-adverse-effect level (NOAEL) was taken as reference. As reported by several official documents, studies and scientific opinions, for HIM this level corresponds to 50 mg Die −1 [29][30][31].
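As an illustration of how the exposure figures in the following sections are obtained, the short sketch below multiplies a mean HIM concentration by a daily seafood consumption and expresses the resulting intake as a percentage of the 50 mg Die −1 NOAEL. The input values are placeholders, not the INRAN-SCAI consumption data or the survey means.

```python
# Hedged sketch of the exposure calculation: the daily HIM intake (mg/day) is the
# product of the mean concentration (mg/kg) and the daily seafood consumption,
# then expressed as a percentage of the 50 mg/day NOAEL.
# The example inputs are placeholders, not the survey or INRAN-SCAI figures.
NOAEL_MG_PER_DAY = 50.0

def him_exposure(concentration_mg_per_kg, consumption_g_per_day):
    intake = concentration_mg_per_kg * consumption_g_per_day / 1000.0  # mg per day
    return intake, 100.0 * intake / NOAEL_MG_PER_DAY

intake, pct_noael = him_exposure(concentration_mg_per_kg=12.0, consumption_g_per_day=60.0)
print(f"intake = {intake:.2f} mg/day ({pct_noael:.1f}% of NOAEL)")
```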
Sample Preparation
The sample preparation was related to the type of sample analyzed. For fresh fish, the skin and bones were removed just before homogenization. Frozen fish was left to defrost at 4 • C and then prepared. Canned fish samples were dried on the absorbent paper to remove as much preserving liquid as possible (oil or water). For salted herrings and anchovies, the salt present in large grains in the sample was removed with the aid of a sharp knife and of a paper cloth. Once the salt was removed, the bones were taken off. To avoid the increase in HIM levels, the fish was kept at room temperature in the strictly necessary time for the sample pre-treatment. The knives used for fish pre-treatment were cleaned with ethanol to avoid contamination of the samples.
ELISA Test Reagents, Equipment and Procedure
Screening analyses were performed using a Neogen Veratox ® Histamine kit. All reagents used for the ELISA screening analysis were included in the commercial kit. Sample preparation was carried out according to the kit instructions with minor changes. Briefly, the pre-treated sample was homogenized using a commercial blender. One gram of each aliquot was transferred into a 15 mL polypropylene centrifuge tube and 9 mL of distilled water was added. After shaking at room temperature for 10 min, the mixture was centrifuged at 2112 g and 10 °C for 10 min; 100 µL of the aqueous supernatant was diluted with 10 mL of kit diluent buffer. A further 5-fold dilution was necessary to bring the concentration within the range of the standard curve. Thus, the sample was ready to be analyzed according to the instructions reported in the manufacturer's kit manual.
A microtiter plate spectrophotometer (Anthos HT2, GSG ROBOTIX s.r.l. (Cinisello Balsamo, Milano, Italy) was used for reading the ELISA plates at 620 nm. A calibration curve was obtained, plotting absorbance values against the concentration of standard solutions at 0, 20, 50 mg kg −1 . Sample concentration values were multiplied to the dilution factor of five. In each analytical batch, positive and negative quality control samples were included. The negative quality control was a blank fish (i.e., tuna and/or ripened anchovies) depending on the type of samples analyzed in the daily batch, while the positive quality control was the same blank tuna spiked at 100 mg kg −1 and/or the same blank ripened anchovies spiked at 200 mg kg −1 . Unknown samples, standard solutions and quality control were pipetted twice into the microplate wells. Elisa test, allowing the detection of HIM content at concentrations between 2.5 and 250 mg kg −1 , was fully validated. Details of validation are reported elsewhere [32].
Samples analyzed by the ELISA method with an HIM content greater than 77 mg kg −1 (Maximum Limit-2 × relative standard deviation of the screening method repeatability) for fresh and canned fish and 177 mg kg −1 (Maximum Limit-2 × relative standard deviation of the screening method repeatability) for processed fish were judged "suspicious non-compliant" samples and they were subsequently analyzed by an HPLC/FLD method with online derivatization with o-phthalaldehyde.
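The two-tier decision rule can be summarized by the small sketch below; the cut-off values are those stated above, while the category labels and function name are illustrative.

```python
# Sketch of the screening decision rule described above: an ELISA result above the
# action threshold (maximum limit minus 2x the repeatability RSD of the screening
# method) flags the sample as suspect non-compliant and triggers HPLC/FLD
# confirmation. Thresholds follow the text; labels and function name are illustrative.
SUSPECT_CUTOFF = {"fresh_or_canned": 77.0, "processed": 177.0}  # mg/kg

def screening_decision(elisa_mg_per_kg, category):
    if elisa_mg_per_kg > SUSPECT_CUTOFF[category]:
        return "suspect non-compliant: confirm by HPLC/FLD"
    return "compliant at screening"

print(screening_decision(90.0, "fresh_or_canned"))
print(screening_decision(150.0, "processed"))
```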
A 1000 mg L −1 stock solution of HIM was prepared in water and stored at +4 • C for up to 4 months. A 10 mg L −1 solution was freshly prepared in 5% TCA. Working standard solutions (0.04, 0.5, 1, 4, 8 mg L −1 ) were obtained by dilution with 5% TCA, just before analysis.
Next, 1.70 g of potassium phosphate monobasic, 2.04 g of potassium phosphate bibasic, and 0.49 g of sodium 1-decanesulfonate were dissolved in water for the preparation of the phosphate buffer solution, containing an ion-pair reagent. The pH value of this solution was adjusted to 5.40 ± 0.05 with concentrated hydrochloric acid and the volume was made up in a 1 L flask with water. The post-column derivatization solution was freshly prepared by dissolving 0.2 g of Thiofluor™ in about 1 mL of OD104 and was added to a solution consisting of 0.2 g of o-phthalaldehyde in about 1 mL of methanol; the mixture was adjusted to a final volume of 200 mL by OD104. This solution was stored in a brown glass bottle and, when not in use, was kept at 4 • C for up to 2 days.
Chromatographic separations were performed on an HPLC system, Agilent Technologies SL 1200 Series (Waldbronn, Germany) consisting of a binary pump provided with a micro vacuum degasser, a thermostatable autosampler, a column compartment, and a fluorescence detector.
Online post-column chemical derivatization was performed by using a commercially available system supplied by LCTech GmbH (Dorfen, Germany) and consisting of a double-piston pump (model K-120) and a thermostatable post-column reactor (model CRX 400) equipped with a 0.5 mL knitted reaction coil. Separations were performed using Phenomenex columns: Luna C8 column (250 mm × 4.6 mm i.d., particle size 5 µm) at a flow rate of 1.0 mL min −1 by isocratic elution. The mobile phase consisted of 0.01 M phosphate buffer solution, containing 0.002 M sodium 1-decanesulfonate as ion-pair reagent, at pH 5.4 (A) and acetonitrile (B) in a ratio of 80:20 (v/v). The injection volume was 2 µL and the column temperature was set at 40 • C. The output of the column was connected by a T-joint with the derivatization line, pumping at a flow rate of 0.4 mL min −1 , and carried to the reaction coil, set at the temperature of 40 • C, for the derivatization process. Fluorescence detection was performed at the excitation and emission wavelengths of 343 and 445 nm, respectively. The system was interfaced, via network chromatographic software (Agilent ChemStation), to a personal computer to control instruments, data acquisition and processing.
A 5 g portion of pre-treated sample was added to 30 mL of 5% TCA and the mixture was homogenized, in a single-step, with the aid of IKA ULTRA-TURRAX T18 basic (Werke GMBH & Co. KG, Germany). After centrifugation at 2112 g for 10 min at 10 • C and separation of the supernatant, few milliliters of 5% TCA was added to the sample extract, to obtain a final volume of 30 mL. An aliquot of 1 mL of this solution was filtered with an Anotop 10 LC filter (0.2 µm, 10 mm, Whatman) obtaining an extract stable for 3 days at +4 • C. An amount of 100 µL of this extract was diluted with 900 µL of water and then injected in HPLC. The procedure resulted in a 60-fold dilution of HIM, evaluated against a known concentration added to a real blank sample. Likewise, in the screening test, in each confirmatory batch of the method, both positive and negative quality control samples were included. Together with the screening suspect samples, a blank and a spiked sample fortified at regulatory limit (i.e., 100 or 200 mg kg −1 depending on the type of fish product analyzed) were run. Each sample unit was analyzed in duplicate and the results were calculated as an average of respective replicates. In 2017, the instrumental method (HPLC/FLD) used in this study was compared to the EN ISO 19343: 2017 [33] obtaining better or comparable analytical performances (i.e., limit of quantification and precision).
Methods Quality Control
ELISA and HPLC/FLD methods, fully validated and accredited by the Italian Organism for laboratories accreditation ACCREDIA, are currently used in our laboratory in routine analyses and their reliability is checked every year by participating in proficiency tests. This external quality control involves the analysis of two unknown lyophilized tuna muscle samples "incurred" with HIM at two different levels. The Z-scores obtained in 2019 were equal to 1.10-0.53, and 1.11-1.22 for the ELISA and HPLC method, respectively, confirming the suitability of both procedures for official control.
Data Handling and Statistical Analysis
Samples were considered contaminated if the HIM concentration was above 2.5 mg kg −1 , which is the limit of quantification of both the ELISA and HPLC/FLD methods, used respectively for screening and confirmatory analysis. In the case of sampling and analysis of nine sample units (332 of the 474 batches), the HIM content was evaluated as the mean value, replacing values below the LOQ with LOQ/2. This approach was used since it represents a balance point between the solutions that underestimate (=0) or overestimate (=LOQ) the true value, and it is reasonably precautionary in health-related studies [34]. One-way ANOVA and Student's t-test (p < 0.05) were used to compare the contamination levels of the different types of seafood investigated. Statistical handling of the results was performed using Microsoft Excel (Office version 2016 for Windows).
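A minimal sketch of this censoring rule is given below; the LOQ matches the value stated above, while the nine unit results are invented for illustration.

```python
# Sketch of the batch-level data handling: unit results below the LOQ (2.5 mg/kg)
# are replaced by LOQ/2 before averaging the nine sample units of a batch.
# The unit results below are invented for illustration.
LOQ = 2.5  # mg/kg

def batch_mean(unit_results):
    adjusted = [r if r >= LOQ else LOQ / 2.0 for r in unit_results]
    return sum(adjusted) / len(adjusted)

units = [1.0, 3.2, 2.0, 5.5, 1.8, 2.6, 4.1, 0.9, 3.0]  # nine sample units (mg/kg)
print(f"batch mean = {batch_mean(units):.2f} mg/kg")
```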
Results
The overall results are reported in Figure 1, which shows the number of total samples associated with the different concentration ranges. These ranges were set taking into account both the compliance limits m and M (as reported in the Regulation (EC) No. 2073/2005) and the different levels of fish freshness, as follows: C < 10 mg kg −1 = fish of good quality; 10 < C < 50 mg kg −1 = significant deterioration; C > 50 mg kg −1 = evidence of definitive decomposition [35,36]. HIM level > 2.5 mg kg −1 was detected in 242 of the 474 analyzed samples (51%), with 12 non-compliant responses (2.5%). HIM concentration less than 10 mg kg −1 was detected in 410 samples (87% of the total).
Figure 2 shows HIM concentration ranges in the fish categories (fresh/frozen and defrosted products, canned products, ripened products). For the fresh/frozen and defrosted category, 260 samples out of 294 analyzed showed an HIM concentration below 10 mg kg −1 , and 11 non-compliant samples out of the 294 analyzed were detected. Of a total of 119 batches of canned samples, 117 contained HIM levels below 50 mg kg −1 , with 110 samples below 10 mg kg −1 . Finally, as concerns ripened product samples (61 on a total of 474), HIM levels less than 50 mg kg −1 were detected in 55 samples. Table 1 reports the HIM content in mg kg −1 (mean, minimum and maximum) for each kind of product. From a statistical analysis, a significant difference was verified among the different seafood types, using one-way ANOVA (p < 0.05). The highest HIM levels were quantified in fresh/frozen/defrosted tuna. These levels were statistically higher than those detected in canned tuna and other fresh fish and were comparable to the concentrations quantified in ripened anchovies, fresh/frozen mackerel, canned mackerel, and fresh/frozen sardines and herrings (t-test, p < 0.05). Table 1 also reports the number and the concentration levels of non-compliant batches, ranging from 328 to 5542 mg kg −1 . Finally, the exposure assessment from seafood consumption, under an average exposure scenario (Table 2) and under the most likely exposure scenario (Table 3), is reported.
HIM Levels in Fish Samples
The screening/confirmation double approach adopted in this work was successfully applied, especially in preventive seizure cases, to check quickly and reliably the conformity of fish batches. The high sensitivity of the methods (quantification limit: 2.5 mg kg −1 ) and the good precision (RSDr < 10% for the ELISA test and RSDR < 5% for the HPLC/FLD method) allowed us to quantify HIM over the whole contamination range, generating useful data for the exposure assessment. The results showed a lower occurrence of non-compliant samples (2.5%) in comparison to data found in the literature from older monitoring campaigns, where non-compliance percentages of 5% and 4.9% were found [32,37]. Cicero et al. [38], instead, reported in a recent study a comparable rate of non-compliance, of about 2.79%. Finally, an HIM concentration of less than 10 mg kg −1 , detected in 87% of the total samples, indicated a better overall quality of the analyzed products than that reported by Petrovic et al. [39] for fish products imported into Serbia (67.03% of a total of 273 samples below 10 mg kg −1 ).
Fresh, Frozen, and Defrosted Fish
As shown in Figure 2, the fresh, frozen, and defrosted fish category, mainly represented by tuna, anchovies, and mackerel, showed a particular situation: the highest number of samples with concentrations below 10 mg kg −1 . At the same time, the highest number of "non-compliant" samples was found in this category. The non-compliances concerned nine frozen and defrosted tuna samples and two anchovy samples with concentration values from 328 to 5542 mg kg −1 , much higher than the limits set in the Regulation (EC) No. 2073/2005 (Table 1). Critical points such as inadequate freezing/defrosting cycles, incorrect treatment of the defrosted tuna before consumption, and inappropriate packaging or storage conditions at the retail market are, presumably, at the basis of these findings, in agreement with those reported by Altieri et al. and Ðordević et al. [40,41]. Furthermore, fraudulent practices in the treatment of raw tuna with red extracts containing nitrates and/or high levels of antioxidants or with carbon monoxide, as reported in recent years [20], may be other causes of the HIM presence. As shown in Table 1, in the fresh, frozen and defrosted fish category, tuna showed the highest HIM concentrations, in accordance with most of the studies reported in the literature [38,[42][43][44], confirming that tuna is more susceptible to HIM development than mackerel, anchovies, and sardines because of its high content of free histidine, its composition, and the presence of high levels of bacterial flora. As reported in Table 1 for fresh anchovy samples, an HIM content in the range 2.57-559 mg kg −1 was found, with a mean concentration of 19.6 mg kg −1 . This value was lower than that of 41.1 mg kg −1 reported by Park et al. [45]. In the same study, the authors found a mean HIM concentration of 39.3 mg kg −1 on 30 frozen mackerel lots. This value was higher than that of 7.8 mg kg −1 detected, in the present survey, in fresh/frozen mackerel samples, which also showed HIM levels ranging from 2.57 to 68.4 mg kg −1 . The HIM levels found in mackerel were also much lower than those reported by Ali et al. in Indian mackerels (Rastrelliger kanagurta) of Pakistan (144.72 ± 2.47 mg kg −1 ) [46]. Finally, our results are quite similar to those reported by Cicero et al. for Mediterranean mackerel sampled in the years 2010-2012 but were much lower with respect to the HIM values found in the same kind of product analyzed in 2014 [38].
HIM concentrations less than 10 mg kg −1 were detected in non-scombroid fish such as salmon and other species for which no limits were set in the Regulation (EC) No. 2073/2005. These findings are consistent with the literature data [47] even though salmon consumption is reported as a cause of intoxication [48,49] and recent studies showed the presence of HIM in non-scombroid species such as mahi-mahi and swordfish in fillets [50,51].
Two samples of tuna slices stored in the freezer, taken from a local restaurant and from a company canteen, collected after allergic reaction cases, showed HIM concentration levels of 4895 mg kg −1 and 5542 mg kg −1 , respectively. Restaurants, company canteens and sandwich shops are often reported as sources of scombroid poisoning cases [52]. Apart from frozen or defrosted tuna, used in restaurants for meal preparation but previously stored in inappropriate conditions and not consumed within 24 h, canned tuna is also used as an ingredient in salad, sandwiches and pizza, and when opened a long time before consumption, can be subject to bacterial contamination and HIM production as reported by Cattaneo et al. [53].
Canned Products
As previously reported in the Results section (Figure 2), on a total of 119 batches of canned samples, 98% of the samples contained HIM levels below 50 mg kg −1 , with 92% of the samples below 10 mg kg −1 .
Ninety-one of the 101 canned tuna samples were processed by Spanish fish-processing companies using raw tuna of EU and extra-EU origin. The majority of canned mackerel (9 samples out of 12 analyzed) were produced in Morocco. In this country, some critical issues in the canned product processing chain were verified over the past few years. Tunas and mackerels are often caught far from canning factories, so inadequate interim conservation can lead to bacterial spoilage and HIM formation. Once the toxin has formed, no method of fish preparation, including thermal treatment, can degrade it. Therefore, the low content of HIM in the analyzed canned samples could indicate a general improvement in the production chain of canned products, resulting from good quality raw material, correct maintenance of the fish cold chain and adequate handling in the workplace and during processing [54]. No appreciable differences in HIM contamination (p < 0.05) were noted between samples of tuna in different preserving liquids (in oil or natural), as already reported by Sadeghi et al. [55]. Good quality canned tuna sold in Italy has already been reported in the literature [32,37,38]. Our results are comparable with those reported by Silva et al. [56], with an HIM mean level for canned tuna products marketed in Brazil lower than 20 mg kg −1 . Similar results were obtained by Yesudhason et al. [57], who found HIM ranging from 1 to 22.9 mg kg −1 in 78.9% of 290 samples of canned products from Oman, with an overall mean value of 3.18 mg kg −1 . HIM levels in the range 2.6-30.4 mg kg −1 were reported by Er et al. [58] in Turkish canned tuna fish samples. Even worse, HIM content exceeding the limit of 50 mg kg −1 set by the U.S. Food and Drug Administration was found in 18.33% of canned fish samples marketed in Iran, as reported by Peivasteh-Roudsari et al. [59], who also underlined the significant difference in HIM concentrations between canned tuna in oil and in brine. Among canned tuna, the only non-compliant sample, with a concentration of 2219 mg kg −1 , was a "residual of the meal" associated with a scombroid poisoning case. This was a piece of tuna from a retail market that the seller had sold from a large open can and stored refrigerated for several days. Poisoning cases resulting from this common selling practice, already reported by Piersanti et al. [37], may be a risk for consumers' health. As shown in Table 1, the highest HIM level found among canned mackerel batches was 105 ± 13 mg kg −1 . This value, associated with a compliant sample when taking into account measurement uncertainty, was higher than the maximum values reported for canned mackerel (24 mg kg −1 and 19.1 mg kg −1 reported by Tsai et al. and Petrovic et al., respectively [39,60]).
Ripened Products
As shown in Figure 2, all ripened products, in large part processed in Albania (43% of the total), were below the regulatory limits, with concentrations lower than 50 mg kg −1 in 90% of the samples. These findings are in contrast with the trend reported by Vosikis et al. [61] and in our previous survey [32]. In the past, the presence of HIM above limits in ripened anchovies was frequently the subject of several RASFF alert notifications and a matter for Commission Decision (EC) 642/2007 [62] regarding the risk related to the consumption of fishery products imported from Albania. The low contamination level found in our survey among ripened products could be related to the more efficient control activity plans undertaken by the EU for this kind of product.
Risk Exposure
As shown in Table 2, the highest intakes were registered for adolescent males, with percentages of NOAEL up to 12.1%, obtained for consumers of fresh and frozen fish. The lowest mean intake was obtained for female children, for whom the NOAEL percentage does not exceed 8.4% among fresh and frozen fish consumers. Regarding the overall consumption of seafood, the global intake of HIM estimated for the global population follows the order: adolescents > adults > children > elderly, with NOAEL percentages ranging from 5.7% down to 4.5%. Due to their significantly lower consumption, consumers of preserved seafood registered NOAEL percentages up to 35-fold lower than those of overall fish. In Table 3 the exposure assessment from fish consumption, under the most likely exposure scenario, is reported. Proportionally (the most likely levels of HIM found in the survey were about 11-, 1.6- and 15-fold lower than the average for overall, preserved and fresh/frozen seafood, respectively), the results confirmed the intake distribution obtained under the average exposure scenario. In particular, all NOAEL percentage intakes resulted in levels lower than 1%, with the highest value, equal to 0.80%, measured for male consumers of overall seafood. The highest mean intakes were registered for adolescent male consumers, with mean percentages of NOAEL corresponding to 0.78%, 0.19%, and 0.80% for fresh/frozen, preserved, and overall seafood, respectively. Consistent with the pattern observed under the average exposure scenario, regarding the overall consumption of seafood, the global intake of HIM estimated for the global population follows the order: adolescents > adults > children > elderly, with NOAEL percentages ranging from 0.53% down to 0.42%. Analogously, consumers of preserved seafood registered NOAEL percentages significantly lower than those of overall seafood, in the order of about four-fold. This means that the HIM intake difference between preserved and overall seafood consumers is less pronounced under the most likely exposure scenario, and that no substantial risk subsists relating to HIM intake from fish consumption. Regarding the specific exposure assessment developed for canned tuna consumption, the intake under the average exposure scenario was equal to 0.03 mg Die −1 , corresponding to 0.07% of the NOAEL. The values increase slightly under the most likely scenario, becoming equal to 0.04 mg Die −1 , corresponding to 0.08% of the NOAEL. The results of the present exposure assessment are in agreement with those recently reported by Rahmani et al. in Iran [63] for canned tuna consumption, by Yesudhason et al. in Oman [57] for fresh and processed fish, and by Hariri et al. in Morocco [64] for chilled, frozen, canned, and semi-preserved fish. All these studies confirmed that, although control is needed due to possible acute poisoning phenomena, in most cases the post-catching and commercialization practices of fish are adequate, guaranteeing the good overall quality of fish.
Conclusions
A highly practical approach involving screening and confirmatory methods was used to efficiently determine the histamine content in 474 samples of fish and fish products collected in the Puglia and Basilicata regions (south part of Italy) during the years 2015-2019, in the framework of official control. Although the results showed an increasingly good quality of fish and a low exposure risk for consumers, some scombroid poisoning cases related to non-compliant tuna samples confirmed the need for constant monitoring to avoid health risks. Furthermore, future studies will be oriented to investigating the presence of unauthorized additives in HIM-contaminated tuna samples, as required by the EU Commission.
|
v3-fos-license
|
2021-05-18T13:20:29.993Z
|
2021-04-10T00:00:00.000
|
234753244
|
{
"extfieldsofstudy": [
"Computer Science"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/2073-8994/13/4/639/pdf",
"pdf_hash": "76236250f5e727cf39a7df355f983f33f45b596b",
"pdf_src": "Adhoc",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46685",
"s2fieldsofstudy": [
"Engineering"
],
"sha1": "8b7b64114aacbebd195a136fb5338524fea9ed7b",
"year": 2021
}
|
pes2o/s2orc
|
Biomechanical Symmetry during Drop Jump Landing and Takeoff in Adolescent Athletes Following Recent Anterior Cruciate Ligament Reconstruction
This study investigated asymmetry between lower extremities during the landing and takeoff phases of a vertical drop jump (VDJ) in adolescent athletes following anterior cruciate ligament reconstruction (ACLR) and examined if performance was affected by reducing jump height. Thirty-three athletes who underwent ACLR and were referred for 3D biomechanical assessment before returning to play (mean age 15.9, SD 1.3 years; 16/33 female; mean time since surgery 7.4, SD 1.2 months) completed the VDJ while kinematics and kinetics were collected using motion capture. Lower extremity symmetry was compared between phases using paired t-tests. Jump height was calculated to measure performance. Asymmetries in ankle inversion, ankle adduction, knee adduction, hip adduction, hip adduction moment, and hip rotation moment were observed in both phases. Asymmetry was also observed in both phases for sagittal moments and power integrals at the knee and ankle and total power integral, with the magnitude of asymmetry being smaller during takeoff for power absorption/generation. Jump height was related to power generation integrals during takeoff but not to the asymmetry of power generation. Since asymmetries are translated from landing through takeoff, rehabilitation should address both phases to decrease injury risk and maximize performance after return to play.
Introduction
Anterior cruciate ligament (ACL) injuries have become increasingly common in adolescent athletes in the past two decades with the number of ACL tears increasing by 2.3% per year in youth 6 to 18 years of age [1,2]. This increase has been influenced by a greater number of adolescents participating in organized, high-level sports [1]. Studies have found a number of biomechanical factors, such as dynamic knee valgus and greater side to side knee abduction moment asymmetry, that contribute to a higher risk of sustaining an ACL injury [1,[3][4][5]. Surgical reconstruction of the ACL is the preferred treatment to allow for patients to return to their prior level of activity [6,7]. McCollugh et al. found that 63% of high school athletes did return to their sport following ACL reconstruction (ACLR). However, just 43% of athletes who returned to sport felt that they were playing at their pre-injury level of performance [8]. Research has shown that athletes returning to sport after ACLR have an increased risk of sustaining a new injury to both the ipsilateral and contralateral limbs and developing early onset osteoarthritis (OA) in the reconstructed knee even with successful surgery [9,10].
Due to this increased risk of sustaining a second ACL injury, there is much debate over the objective clinical criteria used to determine when an adolescent athlete should be allowed to return to sport following ACLR [5,[10][11][12][13]. Some of the commonly used return to sport measures include time since surgery, movement analysis, and the strength and range of motion symmetry between limbs [5,14]. In addition, physical performance tests that include hopping and jumping are also often used clinically to examine knee stability and return to sport readiness [5,14,15].
The vertical drop jump (VDJ) test is one such measure commonly used as it mimics a deceleration task often seen in sport and is a way to assess limb symmetry during a single movement [11,16,17]. In addition, it has been shown that uninjured adolescents perform a VDJ task with only slight asymmetries, supporting that biomechanical symmetry is an appropriate goal for return to sport testing [18]. The landing phase of this task has been extensively researched as this is the point when most non-contact ACL injuries are thought to occur [19][20][21]. Research has shown that poor biomechanics are present in patients following ACLR in tasks such as the VDJ landing [11,19,22]. Mueske et al. found that there is decreased energy absorption, decreased sagittal plane external flexion moments at the knee and ankle, and decreased peak dorsiflexion angle during the landing phase in adolescent athletes who had undergone ACLR compared to their uninvolved side [23]. Dynamic valgus, reduced active shock absorption, and decreased flexion of the lower extremity seen during the landing phase of the VDJ task have been associated with an increased risk for ACL injury [20,[24][25][26].
The takeoff phase of the VDJ task has been less researched as it is normally used as a performance-based measure [27,28]. Little is known about the takeoff phase and how it relates to between limb symmetry following ACLR. Currently, no study has examined the biomechanical strategies utilized during both the landing and takeoff phases in the VDJ task and their implications in rehabilitation following ACLR. The purpose of this study was to investigate the occurrence and degree of asymmetries of kinetics and kinematics between the lower extremities during the landing and takeoff phases of a VDJ task in adolescent athletes following recent ACLR. We hypothesized that asymmetries seen in the landing phase will carry over to the takeoff phase, which will likely lead to decreased performance from a reduced jump height.
Materials and Methods
This retrospective study examined data from patients 13 to 18 years old who were seen in the Motion and Sports Analysis Laboratory between March 2015 and February 2018 for 3D biomechanical assessment following unilateral ACLR for a non-contact injury ( Table 1). Patients were excluded from the study if they had a history of other serious lower extremity injury, previous ACL injury, were unable to complete the VDJ, or had missing motion analysis data during VDJ landing and takeoff. Patients had not yet been cleared for return to full activity at the time of testing. Data were accessed retrospectively under either a waiver of consent or signed consent approved by the Children's Hospital Los Angeles Institutional Review Board.
Data collection was performed by two experienced pediatric physical therapists who had specialized training in sports biomechanical assessment and motion analysis. Anthropometric measurements were obtained using standard clinical procedures, and a VDJ was performed as part of a more extensive biomechanical testing protocol [29]. Prior to all biomechanical testing, the subject warmed up with approximately 5 min of treadmill running. For the VDJ task, participants were instructed to drop off a 41 cm box, land with each foot on a different force plate, and then jump straight up as high as possible, landing back on the same force plates to keep the jump vertical (Figure 1). Following two to three practice trials, three trials for data collection were performed, and all trials with useable data (a minimum of two per subject) were averaged for further analysis [30]. Three-dimensional lower extremity motion analysis data were recorded during the VDJ using an 8 to 10 camera motion capture system at 120 Hz (Nexus 2, Vicon Motion Systems Ltd., Oxford, UK) and two analog force plates at 2400 Hz (AMTI OR6-5, Advanced Medical Terminology, Inc., Watertown, MA, USA). An experienced physical therapist placed markers on the participant's trunk, pelvis, and lower extremities following a custom 6-degree of freedom (DOF) model [30]. Marker trajectories were filtered using a Woltring filter with a mean squared error of 10 mm 2 . Force plate data were filtered using a 16 Hz Butterworth filter.
Kinematic and kinetic measures were calculated during the landing and takeoff phases of the VDJ task. The landing phase was defined as lasting from initial contact to the change in direction of the vertical sacral marker velocity, and the takeoff phase from that change in direction to foot off. Kinematics were calculated based on segment orientations determined by 6DOF segment optimization in Visual3D (C-Motion, Inc., Germantown, MD, USA), and kinetics were calculated using inverse dynamics in the same software. Average joint angles, external moments and power in the sagittal, frontal and transverse planes were collected and used to calculate limb symmetry as the surgical limb minus the contralateral limb. Positive values indicate flexion, adduction, and internal rotation of the hip and knee and dorsiflexion, adduction, and inversion of the ankle, along with the corresponding external moments. The power integral was calculated during landing as the integral of power absorption (negative values) over the landing phase and during takeoff as the integral of power generation (positive values) over the takeoff phase. In both cases, the integral is reported as a positive value, indicating energy absorption during landing and energy generation during takeoff.
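The following sketch illustrates how the phase-wise power integrals and the limb symmetry measure described above could be computed from sampled joint power; the 120 Hz sampling rate and sign conventions follow the text, whereas the power traces are synthetic and the function names are not the authors'.

```python
# Hedged sketch of the power-integral and symmetry computation described above:
# energy absorption is the time integral of negative joint power over the landing
# phase, energy generation the integral of positive power over takeoff, both reported
# as positive values; symmetry is surgical minus contralateral. Data are synthetic.
import numpy as np

FS = 120.0  # motion capture sampling rate (Hz)

def power_integral(power, phase):
    p = np.asarray(power, dtype=float)
    kept = np.minimum(p, 0.0) if phase == "landing" else np.maximum(p, 0.0)
    trapezoid = np.sum((kept[1:] + kept[:-1]) / 2.0) / FS
    return abs(trapezoid)  # joules, reported as a positive value

def symmetry(surgical, contralateral):
    return surgical - contralateral

t = np.linspace(0.0, 0.25, 31)
surgical_knee_power = -400.0 * np.sin(np.pi * t / 0.25)       # W, landing absorption
contralateral_knee_power = -520.0 * np.sin(np.pi * t / 0.25)
# A negative result indicates less energy absorption on the surgical side.
print(symmetry(power_integral(surgical_knee_power, "landing"),
               power_integral(contralateral_knee_power, "landing")))
```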
Jump height (H jump ) was calculated as a measure of performance using the impulsemomentum method based on the integral of vertical ground reaction force (GRF) from peak knee flexion (PKF) to foot off [31].
Equation (1): H_jump = v_takeoff^2 / (2g), where g is the acceleration due to gravity, and the takeoff velocity is given by Equation (2): v_takeoff = ∫ (F_GRF / m − g) dt, integrated from PKF to foot off, where F_GRF is the total (left + right) vertical GRF and m is subject mass. Difference between limbs (significance of asymmetry) during each phase and difference in magnitude of asymmetry between the landing and takeoff phases were evaluated using paired t-tests. The relationship of jump height to the power generation integral at each joint and in total was evaluated using Pearson's correlation. Pearson's correlation was also used to examine the relationship between jump height and asymmetry of power generation. All statistical analysis was performed in Stata (version 14, StataCorp LLC, College Station, TX, USA) with a significance level of 0.05.
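A numerical sketch of Equations (1) and (2) is given below; the 2400 Hz force plate rate follows the text, while the ground reaction force trace and body mass are synthetic values used only to show the calculation.

```python
# Hedged numerical sketch of the impulse-momentum jump-height calculation in
# Equations (1) and (2): takeoff velocity is the integral of (F_GRF/m - g) from peak
# knee flexion (PKF) to foot off, and jump height is v^2 / (2g). The force trace and
# body mass are synthetic; a real analysis would use the recorded 2400 Hz GRF data.
import numpy as np

G = 9.81           # m/s^2
FS_FORCE = 2400.0  # force plate sampling rate (Hz)

def jump_height(f_grf_total, mass):
    """f_grf_total: summed left + right vertical GRF (N), sampled from PKF to foot off."""
    accel = np.asarray(f_grf_total, dtype=float) / mass - G
    v_takeoff = np.sum((accel[1:] + accel[:-1]) / 2.0) / FS_FORCE  # trapezoid rule
    return v_takeoff ** 2 / (2.0 * G)

mass = 60.0  # kg
t = np.linspace(0.0, 0.3, int(0.3 * FS_FORCE))
f_grf = mass * G + 900.0 * np.sin(np.pi * t / 0.3)  # push-off rising above body weight
f_grf[-int(0.02 * FS_FORCE):] = 0.0                 # flight: force drops to zero
print(f"estimated jump height: {jump_height(f_grf, mass):.2f} m")
```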
Results
Thirty-three patients (16 female; mean age 15.9, standard deviation (SD) 1.3 years) met the eligibility criteria and were included in this study (Table 1). Their mean time since surgery was 7.4 months (range 6 to 10 months). All patients had autograft reconstructions, 13 with hamstring tendon, 13 with patellar tendon, 6 with quadriceps tendon, and 1 with iliotibial band.
Kinematics and Kinetics: In the frontal and transverse planes, significant asymmetry was observed, with the surgical limb having higher values during both landing and takeoff for ankle inversion, knee adduction, ankle adduction, and hip rotation ( Table 2). The hip was less abducted on the surgical side during both landing and takeoff, but the difference only reached statistical significance during landing. The hip adduction moment and ankle adduction moment were significantly lower on the surgical side during both landing and takeoff. Knee and hip rotation moment were lower on the surgical side during both landing and takeoff, but the difference only reached statistical significance during landing. The magnitude of asymmetry was greater during landing than during takeoff for hip abduction angle and moment, ankle adduction moment, and hip rotation moment.
In the sagittal plane, asymmetry was observed during both landing and takeoff for ankle, knee, and hip angles, ankle and knee flexion moments, ankle and knee power integrals, and total power integral. Angles, moments, and power integrals were lower for the surgical compared with the contralateral side. The magnitude of asymmetry was greater during landing than during takeoff for ankle and knee angles and moments and the power integral at all joints and in total. At the hip, flexion moments were higher for the surgical limb during both landing and takeoff, but the difference was only statistically significant during landing (Figure 2).
Jump Height: Jump height, as a measure of performance, was significantly related to the power generation integral during takeoff at all joints and in total regardless of limb injury status (p ≤ 0.05) (Table 3). However, jump height was not related to asymmetry of the power integral at any joint or in total (p ≥ 0.52).
Discussion
Reduced active shock absorption and decreased flexion of the lower extremity during the landing of jumping or cutting tasks are factors that have been associated with an increased risk of sustaining an ACL injury [24,26,32]. The current study found that asymmetries in shock absorption occurred during the landing phase of the VDJ for the ankle and knee, and that energy absorption during landing was lower overall on the surgical limb compared to the contralateral side. This decreased energy absorption could contribute to inadequate shock absorption at the knee joint, which has been shown to be one risk factor associated with ACL injury during landing and cutting tasks [33,34].
Previous studies have shown that between limb asymmetries are often seen as the surgical limb offloading to the contralateral limb during the landing phase of the VDJ task, though specific asymmetric strategies differ by study [12,26,35,36]. Our study found decreased energy absorption on the surgical limb during landing which is consistent with previous research [12,23,37]. This is often thought to be a protective compensation while the surgical knee is still recovering. We also found that the surgical limb had higher values for ankle inversion, ankle adduction and hip rotation moment compared to the contralateral limb in both the landing and takeoff phases. Increased hip internal rotation angles are one of the main components of dynamic limb valgus which has been well documented as a risk factor for initial and subsequent ACL tears [3,11,21]. However, contrary to other research which has shown increased knee abduction moment as a risk factor for ACL injury, we found increased knee adduction moments on the surgical limb. This could indicate that these subjects have further protective offloading of their surgical knee which is seen clinically and has been documented in previous research, highlighting the importance of training the patient to appropriately and symmetrically load the knees in rehabilitation following ACLR [38,39]. In addition, this persistent offloading of the surgical limb shifts the load and increases the demand on the contralateral side, leading to potential overloading and contributing to the high rate of contralateral injuries following an initial ACL tear [9]. As this was not the focus of our paper, this finding deserves further investigation such as separating the groups into male and female subjects to examine if there is a sex difference or including strength or electromyography data to further examine clinical rehabilitation measures.
An avoidance of active shock absorption may suggest that the patient continues to be apprehensive and protecting the reconstructed knee or does not have adequate strength to eccentrically control their landing. Consistent with previous research, we also found asymmetries for ankle and knee flexion moments during the landing phase for the surgical limb compared to the contralateral limb [23,26,40,41]. This reduced knee flexion moment further contributes to decreased shock absorption upon landing and can cause the knee to be more at risk of reinjury since the joint experiences more shear stress and resultant ACL tears when it is in a more extended position [19]. In addition, research has shown that in this timeframe following ACLR, patients with patellar or quadriceps tendon grafts have more sagittal plane deficits and slower recovery of symmetry than those with hamstring tendon grafts [19]. Disruption of the knee extensor mechanism and eccentric knee flexion control of the knee could be a factor in decreased knee flexion moments seen in these patients. Increasing active shock absorption during landing tasks is an important component of rehabilitation and return to sport training following an ACL tear to decrease the chance of reinjury and osteoarthritis development of the surgically reconstructed knee and maximize performance of athletic tasks.
We also found that patients had decreased energy generation on the surgical limb during takeoff illustrating that offloading strategies persist in this phase of the VDJ task. While asymmetries carry over from landing to takeoff, the magnitude of asymmetry of energy transfer appears to be attenuated during takeoff. If not properly addressed during rehabilitation, offloading strategies and compensations can persist leading to weakness on the ACLR limb and can also increase patients' long-term risk for OA in the reconstructed knee by hindering normal cartilage production [33,34,42]. This persistent weakness can also lead to reduced performance and may contribute to some patients stating they do not achieve their preinjury level of play. The resolution of avoidance strategies and the resulting improved lower extremity symmetry as a criterion for return to sport clearance could produce a sounder assessment and a more confident return to sport decision and reduce the risk of subsequent ACL injuries to the ipsilateral and contralateral side. The specific metrics of these strategies can be captured with motion analysis and shows the importance of this type of assessment in a clinical setting [19].
Previous research has typically used the takeoff phase of the VDJ task as a performance metric based on either jump height or flight time [27,28,43]. We found that jump height was significantly related to the power generation integral during takeoff at all joints. However, asymmetry of the power integral between limbs was not found to be related to jump height or performance of the task. Asymmetry may not have a large impact on an athlete's performance, and other research has shown that kinematic asymmetry has no effect on running efficiency or energy expenditure [44].
The limitations of our study include the retrospective design and limited sample size. Due to the relatively small sample size, the male and female participants were not analyzed separately, despite some evidence for biomechanical differences between the sexes [45]. The biomechanical model used a single segment foot and therefore did not model separate hindfoot and forefoot or midfoot motion. Following the common convention in motion analysis, we refer to the joint connecting the shank and foot as the ankle. In this context, ankle adduction is equivalent to ankle rotation in the transverse plane. Also, patients were between 6 to 10 months post-surgery and were not yet cleared for return to sport, which may limit generalizability to other patient populations. Because the patients in this study had not yet been cleared to return to full activity at the time of testing, they would be expected to continue to improve over time in terms of lower extremity symmetry and deviations from normal landing and takeoff biomechanics [19].
Conclusions
Similar sagittal, frontal, and transverse plane asymmetries were present during both the landing and takeoff phases of the VDJ task. Because asymmetries are translated from the landing through the takeoff phase, specifically in energy absorption to generation, this may give early insight as to why some athletes do not return to their preinjury level of play and are at higher risk of sustaining a future ACL injury. Though we did not find a significant relationship between asymmetries and performance, we found that jump height was significantly related to power generation, which will not be maximized if one side produces less than maximal output. Targeting asymmetry and focusing on both landing and takeoff mechanics during rehabilitation may help to decrease the rate of injuries and maximize performance. Because premature return to sport may put athletes at an increased risk for future injury, further research is needed to establish more comprehensive criteria for return-to-sport clearance.
|
v3-fos-license
|
2021-11-17T16:21:47.171Z
|
2021-01-01T00:00:00.000
|
244163396
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.matec-conferences.org/articles/matecconf/pdf/2021/18/matecconf_iceaf2021_02007.pdf",
"pdf_hash": "e2e946d2a7433e25d44443ea1780c4cfb49eeef3",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46686",
"s2fieldsofstudy": [
"Materials Science",
"Engineering"
],
"sha1": "6f2812278a039f9d2011ffbc5e73366ea3e7c465",
"year": 2021
}
|
pes2o/s2orc
|
Alumina-Forming Austenitic (AFA) steels and aluminium-based coating on 15-15 Ti steel to limit mechanical damage in presence of liquid lead-bismuth eutectic and liquid lead
To limit corrosion of steels in contact with liquid metal (Pb or Pb-Bi), two different solutions based on the presence of alumina at the surface of the steels were selected: two Alumina-Forming Austenitic (AFA) steels and a 15-15Ti steel coated with an alumina layer. These technical options to mitigate corrosion by lead or Pb-Bi were investigated in terms of mechanical performance in liquid lead or liquid Pb-Bi. The liquid metal embrittlement (LME) sensitivity of the different materials selected for their good corrosion resistance was evaluated by tensile tests or small punch tests carried out in the presence of liquid metal, and by post-mortem analysis of the cracking and of the fracture surfaces. No LME sensitivity was observed under the tested conditions for the Al2O3-coated 15-15Ti steel or for the two AFA steels (16Ni-14Cr-2.5Al-2.5Mn-1Nb), one without and one with 2 wt.% W and 0.02 wt.% Y, in their as-received state. However, thermal aging at 650 °C promotes modifications of the microstructure, especially precipitation, and consequently LME sensitivity of the AFA steels, depending on the nature of the precipitation.
Introduction
Assessing the corrosion resistance and the mechanical behaviour of structural alloys (steels) is crucial for the durability and the safety of the Lead-cooled Fast Reactor (LFR) and Accelerator Driven Systems (ADS), in which the structural materials are in contact with liquid lead or liquid lead-bismuth eutectic (LBE). The presence of liquid lead or LBE promotes different types of corrosion of the steels depending on the oxygen content in the liquid metal: (1) at high oxygen content, oxidation of the steel with PbO formation; (2) at low oxygen content, dissolution of the steel or of a selected element, which can promote microstructure evolution [1][2][3][4][5]. Because the oxygen level in the liquid metal is kept low to avoid the formation of oxides and plugging, a protective layer at the surface of the steels has been proposed to mitigate corrosion damage: it prevents contact between the steel and the liquid metal, limiting steel/liquid metal interactions and therefore reducing corrosion. Among the solutions explored over the past 10 years [6], one of the most promising appears to be a layer of alumina on the surface of the steels, because of its stability whatever the oxygen content in the liquid metal [1]. Two types of solution have been proposed: (1) the development of an Al2O3 coating to protect the steel [7][8][9]; (2) the development of adapted steels containing aluminium to promote the in-situ formation of a stable and protective Al2O3 oxide layer [3,[10][11][12][13][14][15][16]. Note that, for the coatings, one problem could be damage of the steel in the case of surface deterioration of the coating (cracking, wear, scaling, delamination, dissolution).
Though the essential challenge for these solutions is corrosion resistance in liquid Pb and/or LBE, mechanical resistance in liquid metal must also be ensured to guarantee the lifetime of the structure under mechanical loading (due to pure mechanical stresses or to stresses induced by temperature fluctuations). Indeed, even though tough and ductile metallic alloys are selected, they may become brittle when stressed in liquid metal, thus exhibiting liquid metal embrittlement (LME), one of the liquid-metal-assisted mechanical damage mechanisms. Furthermore, for coated steel, the mechanical resistance of the substrate could be affected by the coating, not only in air but also in the presence of a liquid metal.
The presented work concerns the mechanical behaviour in liquid LBE or liquid lead of steels which have been modified for corrosion mitigation in liquid metal. In particular, it aims at evaluating if the corrosion mitigation techniques impact the susceptibility of three steels to LME: an Al2O3 coated 15-15Ti steel and two Alumina-Forming Austenitic (AFA) steels. So, the influence of the presence of the liquid metal was investigated by performing monotonic tests (Small Punch Tests and tensile tests) in air and in liquid LBE or/and liquid lead and then by cracking and fracture surfaces analyses by scanning electron microscopy (SEM). Different parameters known to impact the LME sensitivity have been considered: the temperature, the strain rate, a thermal aging.
Al2O3 coated 15-15Ti steel
15-15Ti austenitic stainless steel in the form of plates obtained by rolling was covered with a layer of aluminium oxide using the Pulsed Laser Deposition (PLD) technique in the facilities of the Istituto Italiano di Tecnologia (IIT), Milano, Italy [8,[17][18]. The Al2O3 coating had an average thickness of 1.5 µm and a roughness parameter Rz below 1 µm.
To evaluate LME sensitivity and the influence of the presence of the coating, tensile tests were performed in air and in lead for the 15-15Ti steel, and in lead for the coated 15-15Ti steel. The samples used were plate specimens with a gauge length of 9.8 mm, a thickness of 2 mm and a width in the gauge length of 1.5 mm. Since a low oxygen content in liquid lead promotes LME sensitivity of the steels by limiting the in-situ formation of an oxide layer at the steel surface [19][20], tests were carried out in liquid lead with a low oxygen content (below 10⁻⁸ wt.%) using the setup developed by Ye et al. [19] for tests in liquid LBE and adapted for tests in liquid lead. The tests were performed at 400 °C and 500 °C. Furthermore, because hardness and strain rate influence LME sensitivity, the tensile specimens were extracted in the rolling direction (noted L) and perpendicular to it (noted T), and two strain rates were considered: 5 × 10⁻⁵ s⁻¹ and 5 × 10⁻⁶ s⁻¹. For each condition, at least two tests were performed.
For all tested conditions, the behaviour of the 15-15Ti steel is ductile (Fig. 1). Contrary to what is expected for an austenitic steel, the studied 15-15Ti steel does not present strain hardening, which can be explained by the absence of heat treatment after the final cold rolling step of the steel plate. The tensile strength and the yield strength are higher than those reported in the literature for the solution-annealed 15-15Ti steel, which presents appreciable strain hardening [21]. Because high-strength materials are generally more sensitive to LME [22][23][24], the studied 15-15Ti steel is probably more sensitive to LME than in the solution-annealed state. No influence of the presence of lead on the tensile curves obtained at 400 °C and 5 × 10⁻⁵ s⁻¹ has been noted for the tested 15-15Ti steel (Fig. 1). For the direction L, no significant difference is seen between the curves obtained at the different strain rates and temperatures. But for the direction perpendicular to the rolling direction (T), in contrast with the absence of a temperature effect, the strain rate influences the tensile curves: the stresses increase as the strain rate decreases, which is unusual but can be explained by the presence of the TiC precipitates. Finally, as expected, for the direction T the uniform elongation and the elongation at rupture are lower, and the tensile strength and yield strength are higher, than for the direction L (the rolling direction).
In liquid lead, the behaviour of the coated steel (Fig. 2) is ductile and similar to the behaviour of the steel without coating. However, some differences in the stress values are noted for some conditions: direction L at 400 °C and 10⁻⁵ s⁻¹, direction T at 500 °C and 10⁻⁶ s⁻¹. After testing, the fracture surfaces were analysed by SEM. Prior to the SEM examinations, the samples tested in liquid metal were cleaned in a solution containing CH3COOH, H2O2 and C2H5OH at a ratio of 1:1:1 to remove the solidified lead. In all cases, with or without coating, a ductile fracture with the presence of dimples was observed (Fig. 3). We observed neither embrittlement of the 15-15Ti steel near the surface of the sample with the Al2O3 coating nor cracking of the coating.
The 15-15Ti steel presents ductile behaviour in air as well as in liquid lead (with a low oxygen content) under the tested conditions. This result is consistent with that of Hojna et al. [25]. The temperature and the strain rate do not significantly influence the characteristics of the test curves, except for the direction T.
The 15-15Ti steel coated with Al2O3 to limit corrosion in lead presents a ductile behaviour with a ductile fracture (Fig. 3). No LME has been observed under the test conditions. However, in some cases, the mechanical strength is somewhat lower for the coated steel as compared with the uncoated one, while the ductility is the same. The coating process therefore seems to soften the substrate while keeping the same ductility. Modifications of the microstructure of the steel are probably promoted by an increase in temperature of the steel during the Al2O3 coating process and by the fact that, after cold forming, the steel was not heat treated and so may not have a stable microstructure. Thus, although no LME sensitivity is observed, the steel coated with Al2O3 does not appear to be an optimal option with regard to the mechanical properties, which in some conditions seem to be modified not by the presence of the liquid metal or of the Al2O3 coating but by the microstructural evolutions generated during coating elaboration. Optimization of the elaboration process and heat treatments should improve this promising solution.
AFA steels
The two studied low-alloyed AFA steels, noted AFA3 and AFA8 were produced by Kanthal, part of Sandvik Group. Solidified heats obtained in a vacuum induction melting furnace were hot-worked and then annealed. The chemical composition of the two steels obtained in the form of bar is given in Table 1. AFA8 heat was produced with additions of W and Y to improve creep strength by solid solution strengthening and by precipitation hardening of Laves phase. Small punch tests (SPT) in air and in oxygen saturated liquid LBE (44 wt% Pb and 56 wt% Bi) were performed. Indeed, this test has appeared very sensitive to evidence LME [26] and is appropriated to study different conditions (environment, temperature, strain rate, microstructure state). The SPT is based on punching a flat small specimen (disc diameter around 7 mm in our case and thickness equal to 0.5 mm) by a tungsten carbide ball until fracture [26]. For tests in liquid metal, the upper surface of the specimen was in contact with the liquid LBE and was submitted to tensile loading. SPT were carried out in air and in LBE at 350 °C and 450 °C. The punch was carried out with a controlled cross-head displacement velocity of 0.5 mm/min which corresponds to an average strain rate around 5 × 10 -3 s -1 . For each condition, at least two tests were performed. The recorded load-displacement curves were corrected by a calibration to take into account the deformation of the set-up. After SPT, the fracture surfaces and cracking were analysed by SEM. Prior to the SEM examinations, the samples tested in liquid LBE were cleaned in a solution containing CH3COOH, H2O2 and C2H5OH at a ratio of 1:1:1. For each condition, two tests were carried out. In the case of large scatter, a third test was performed.
Concerning the SPT carried out for the "as received" steels (noted AFA3-AR and AFA8-AR), the load displacement curves show a typically ductile behaviour in air as in oxygen saturated LBE at 350 °C and at 450 °C (Fig. 4). Furthermore, the main crack is circular without radial cracks often suggesting embrittlement. In addition, fracture surfaces present dimples. Rarely, ductile secondary micro-cracks have been observed at the fracture surfaces for tests in LBE. No significant differences have been observed and noted between tests in air and in LBE (data obtained from the curves, cracking and fracture surface). So, in oxygen saturated LBE at 350 °C and at 450 °C, AFA3 and AFA8 steels in the as-received condition remain ductile without any evidence of LME. As it is known that slower strain rate can increase susceptibility to LME by LBE [19,20], SPT were carried out at 350 °C at a displacement speed of 0.005 mm/min. No effect of the liquid metal has been observed.
Then SPT were performed on steels aged at 650 °C to study whether eventual modifications in the microstructure due to the thermal aging lead or not to a sensitivity to LME by LBE for the two steels. So SPT at 350 °C and 450 °C in air and in LBE at 0.5 mm/min were carried out after an aging of 1008, 2088, 3028 and 5044 hours (steel state respectively noted AFA3-1008/AFA8-1008, AFA3-2088/AFA8-2088, AFA3-3028/AFA8-3028, AFA3-5044/AFA8-5044).
For the AFA3 steel, all the load-displacement curves obtained after the different aging durations show a ductile behaviour, confirmed by the observation of ductile fracture and a main circular crack (Fig. 5). The absence of significant differences between the results of tests in air and in liquid LBE suggests that the aging does not lead to LME sensitivity of the AFA3 steel. However, for some samples tested in LBE, radial cracks accompanied the main circular crack, and ductile fracture surfaces with some brittle zones were observed.
The load-displacement curves obtained for the AFA8 steel after the different aging durations, for SPT at 350 °C and 450 °C in air and in LBE, show a ductile behaviour (Fig. 6), with, in some conditions (AFA8-1008 at 450 °C, AFA8-3028 at 350 °C and 450 °C, AFA8-5044 at 450 °C), a significant decrease of the ductility and of the mechanical resistance in the presence of LBE. For the other cases, due to the scatter of the results, the evolution is not as meaningful. After SPT carried out in air, circular cracking and ductile fracture surfaces were always observed, but the circular crack was sometimes accompanied by radial cracks, and some intergranular micro-cracks were observed at the fracture surface (Fig. 6). Concerning the samples tested in LBE, either a main circular crack with a ductile fracture surface, or a circular crack with developed radial cracks and very large brittle fracture surfaces, were observed (Fig. 6). Furthermore, at least half of the AFA8 samples tested in LBE show a brittle fracture surface. To summarize, the 650 °C thermal aging does not promote any significant mechanical behaviour evolution for the AFA3 steel tested by SPT in air, but it does promote an embrittlement of the AFA8 steel, even if it is very localized. So, the aging at 650 °C leads to a modification of the mechanical properties of the AFA8 steel without promoting major embrittlement in air at 350 °C and 450 °C. The macroscopic mechanical behaviour of the AFA3 steel after thermal aging is not affected by the presence of the LBE. In most conditions, the aging at 650 °C does not promote LME sensitivity of the AFA3 steel by the liquid LBE, but some signs of LME have been observed in a minority of samples. Aging the AFA8 steel at 650 °C induces sensitivity to liquid LBE embrittlement at 350 °C and at 450 °C. The macroscopic mechanical behaviour of the AFA8 steel is ductile in air as in LBE, but liquid LBE affects the mechanical properties and behaviour: in the presence of LBE, brittle fracture of the AFA8 steel after aging at 650 °C is observed.
To understand the effect of the thermal aging the evolution of the hardness and of the microstructure has been evaluated according to the duration of the 650 °C aging. The main evolution concerns the precipitation and apparition of phases. After 1008 hours, a phase rich in Fe and Cr at grain boundaries and a phase rich in NiAl appear for the AFA3 steel. For the AFA8 steel, we observed a phase rich in NiAl after a 2088 hours thermal aging while a phase rich in Fe and Cr was detected at grain boundaries after a 5044 hours thermal aging. Contrary to what was observed for the AFA3, the precipitation for the AFA8 steel presents a heterogeneous distribution. Moreover, from a thermal aging of 1008 hours, a significant precipitation of phase rich in W is observed. Concerning the hardness, despite evolutions in grain size and precipitation, few changes are measured for the AFA8 steel according to the thermal aging duration ( Table 2). The precipitation of the phase rich in Fe and Cr leads to an increase in the hardness of the AFA3 steel. Thus, this change in the hardness of the AFA3 steel could explain the local LME sensitivity, as has already been observed and explained for T91 steel with different microstructure states [23][24]. Concerning the AFA8 steel, the LME sensitivity cannot be attributed either to the evolution in the hardness or to the presence of a phase rich in NiAl since LME was observed after a 1008 hours aging. However, precipitates rich in W appear from the first 1008 hours of thermal aging. These precipitates are suspected to be responsible for a large part of the LME sensitivity. The heterogeneity in the distribution of the precipitates rich in W partly explains the dispersion in the results of AFA8 obtained after thermal aging in the presence of LBE.
Conclusions
In this work, technical options to mitigate corrosion by lead or LBE were investigated in terms of mechanical performances in liquid lead or liquid LBE. So the LME sensitivity of different materials selected for their good corrosion resistance in lead or LBE was evaluated.
Concerning the 15-15Ti steel protected by a 1.5 µm Al2O3 coating, no LME sensitivity has been observed for the tested conditions. However, the reliability of the coating over long durations or after an impact or degradation of the surface/coating is questionable. So, as the coating is not self-forming, neither the corrosion resistance nor the absence of LME can be guaranteed.
The two tested AFA steels (16Ni-14Cr-2.5Al-2.5Mn-1Nb), one without and one with 2 wt.% W and 0.02 wt.% Y, are not sensitive to LME for the tested conditions in their as-received state. However, aging the AFA8 steel in vacuum at 650 °C for durations between 1,000 and 5,000 hours promotes a modification of the mechanical response and brittle fracture, that is, LME. This embrittlement seems to be attributable to the precipitation. Optimization of the composition and microstructure is necessary to avoid the precipitation which promotes LME of these otherwise promising steels. Indeed, AFA steels form in situ a stable and protective oxide layer from the metallic elements of the steel, limiting contact between the liquid metal and the material.
|
v3-fos-license
|
2018-12-29T01:19:26.168Z
|
2018-09-01T00:00:00.000
|
73553269
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "http://www.scielo.br/pdf/rbsmi/v18n3/1519-3829-rbsmi-18-03-0461.pdf",
"pdf_hash": "aaacc78be77004bdff64c291dcccc3203e0f193f",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46687",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "aaacc78be77004bdff64c291dcccc3203e0f193f",
"year": 2018
}
|
pes2o/s2orc
|
Mother-to-child transmission of HIV in the Southern Region of Santa Catarina, from 2005 to 2015: analysis of risk factors for seroconversion in newborns
Objectives: to analyze both frequency and risk factors for seroconversion among newborns of HIV-positive mothers to HIV. Methods: a cohort study was conducted with children residing in Southern Region of Santa Catarina. Secondary data from the notification files and medical records of newborn’s mothers of infected infants were used. The participants were all the newborns from 2005 to 2015 that were exposed to HIV through vertical transmission and attended a municipal health care center. Results: there were 104 cases of infant exposure to HIV. Seroconversion was confirmed in three cases, two of them died of AIDS during the study period. Breastfeeding (PR= 32.7; CI95%= 10.7-99.5; p= 0.002) and non-use of antiretroviral drugs during pregnancy (PR=18.2; CI95%= 2.0-163.0; p= 0.008) were risk factors for HIV seroconversion. Conclusions: seroconversion rates among neonates in Southern Region of Santa Catarina were similar to the national average. Seroconversion was associated with non-use of antiretroviral therapy during pregnancy and breastfeeding.
Introduction
The pandemic of human immunodeficiency virus (HIV) infection represents one of the most serious global health issues. From its emergence in the early 1980s to 2016, more than 78 million people have been infected with HIV, and about 39 million have died of acquired immunodeficiency syndrome (AIDS).1 According to estimates by UNAIDS, about 37 million people were living with HIV globally at the end of 2016, 2.1 million of whom were children.1,2 According to the United Nations, 49% of the pregnant women infected by HIV worldwide received antiretroviral therapy (ARV) in 2010. In 2014, ARV coverage was as high as 73% of the 1,070,000 pregnant women.1,2 In Cuba, ARV coverage among pregnant women reached 98% in 2014, and the country was considered by the World Health Organization, at that time, the first in the world free of vertical transmission, having only one to two new cases of HIV infection in children under five years of age annually.1,3 In Brazil, 92,210 HIV-infected pregnant women were notified from 2000 to June 2015, most of whom were living in the Southeast (40.5%) and Southern (30.8%) Regions. The rate of HIV detection in pregnant women has been increasing in Brazil, particularly within the 2005-2015 period.4 In 2005, the detection rate was 2.0 cases per 1,000 live births, which increased to 2.6 cases per 1,000 live births in 2014, representing a 30% increase.4 In 2014, five Brazilian states had HIV detection rates in pregnant women higher than the national average, as follows: Rio Grande do Sul (8.8 cases per 1,000 live births), Santa Catarina (5.8 per 1,000 live births), Rio de Janeiro (4.0 per 1,000 live births), Amazonas (3.8 per 1,000 live births), and Pará (2.7 per 1,000 live births). An increase in rates was also observed among the regions of the country, except for the Southeast, which presented the same rate of 2.3 cases per 1,000 live births in 2005 and 2014. In 2014, the Southern Region had the highest detection rate of all regions, approximately 2.1 times higher than the rate in Brazil.5 The AIDS detection rate among children under five years of age has been used as an indicator to monitor HIV vertical transmission. In Brazil, there was a reduction of 42.7% in the detection rate between 2005 and 2015. In the Southern Region, the detection rate decreased by 63.4% from 2006 to 2015. According to the Ministry of Health,5 the detection rate of AIDS in children under five years in Brazil was 2.5 per 100,000 people in 2015.
Although vertical transmission is determined by the number of AIDS cases among children under 5 years of age [6,7], few studies have examined seroconversion in neonates to verify the risk factors for HIV infection and what strategies should be implemented to prevent mother-to-child transmission. The aim of this study was to investigate the frequency of seroconversion and analyze its risk factors among newborns of HIV-positive mothers between 2005 and 2015 in the Southern Region of Santa Catarina, Brazil.
Methods
An epidemiological, historical cohort study was conducted in a specialized health care service (CAES, Portuguese acronym), a public health care service in Tubarão, state of Santa Catarina, Brazil, that encompasses an HIV/AIDS referral center for 18 municipalities that form the Association of Municipalities of the Laguna Region (AMUREL, Portuguese acronym), namely: Armazém, Braço do Norte, Capivari de Baixo, Grão-Pará, Gravatal, Imaruí, Imbituba, Jaguaruna, Laguna, Pedras Grandes, Pescaria Brava, Rio Fortuna, Sangão, Santa Rosa de Lima, São Ludgero, São Martinho, Treze de Maio and Tubarão. These municipalities had a total population estimated at 353,989 residents in July 2014.8 It should be noted, however, that not all HIV-exposed children are monitored in this health care service, since the municipalities of Capivari de Baixo, Laguna, and Imbituba also have specialized care centers for HIV/AIDS and other infectious diseases.
The CAES is a leading health center for infectious diseases that treats individuals with sexually transmitted infections/AIDS, tuberculosis, and leprosy. All children born to HIV-positive mothers in Tubarão must be monitored by the municipal service from the time of notification, and an active search is made in cases of non-compliance.6 The municipality of Tubarão is served by two hospitals with an obstetric center and a neonatology service, being a reference in the region. As a result, a large number of births from the AMUREL region occur there.
The study included all infants aged 0-18 months, both boys and girls, with vertical HIV exposure, born during the study period and attended at the CAES, in the AMUREL region, between 2005 and 2015. All children born to seropositive mothers were notified and followed up for seroconversion investigation. Parturients who had no prenatal care or who had not undergone HIV testing during pregnancy underwent rapid HIV testing upon admission for delivery. HIV-infected pregnant women were notified by the maternity health services. Data for this study were extracted from the compulsory notification reports of children exposed to HIV by vertical transmission filed at the CAES in Tubarão. The information was entered into an electronic database for further analysis.
The case definition in this study was based on the seroconversion confirmation during the infant follow-up period (up to 18 months of age).The following maternal variables were examined: age (in years) presented as mean, standard deviation and age group; ethnicity/skin color (white and non-white); education (0-8 and >8 years of schooling); occupation (housekeeper, salaried employee, unemployed, student); residency (Tubarão, other municipalities of the region, other municipalities of Santa Catarina) and area of residence (rural or urban); factors associated with vertical transmission, such as use of ARV during pregnancy (yes or no), type of delivery (vaginal delivery or cesarean section), and breastfeeding or cross-feeding (yes or no).The following variables were also included for the newborns: gender (male or female); ethnicity/skin color (white and non-white); peripartum prophylaxis (if ARV was used during labor); duration of ARV regimen after delivery (in weeks); and outcome (infected, not infected, deceased or lost to follow-up).Infants with HIV-confirmed infection at the end of follow-up by mother-to-child transmission had their maternal data on the last CD4 + T-Lymphocyte count and viral load during pregnancy also included in the analysis.The data not found in the records were shown as ignored.
Statistical analysis was performed using IBM SPSS Statistics® software, version 21 (IBM®, Armonk, New York, USA). In the descriptive step, categorical variables were expressed as proportions, and numerical variables as mean and standard deviation. In order to verify the association between the variables of interest and the HIV infection outcome, Fisher's exact two-tailed test was used for the categorical variables. The level of significance was set at 5%. The number of live births in the AMUREL municipalities within the study period was used to calculate the rate of seroconversion and infant exposure to HIV.8,9 The estimated total population in these municipalities was based on the 2014 estimates.8 The OpenEpi 10 software was used to calculate the relative risk, with a confidence level of 95%. This study was approved by the Research Ethics Committee of the University of Southern Santa Catarina (Protocol Nº 1,137,723), on July 15, 2015, according to recommendations of Resolution 466 of the National Health Council of December 12, 2012.
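For illustration, a relative risk (prevalence ratio) and its 95% confidence interval can be computed from a 2 × 2 exposure-outcome table as sketched below using the Katz log method; the function and the example counts are placeholders and are not the cohort data.

```python
import math

def risk_ratio(a, b, c, d, z=1.96):
    """Risk (prevalence) ratio with an approximate 95% CI from a 2x2 table.

    a: outcome present, exposed      b: outcome absent, exposed
    c: outcome present, unexposed    d: outcome absent, unexposed
    Uses the Katz log method; all cell counts must be non-zero.
    """
    rr = (a / (a + b)) / (c / (c + d))
    se_log_rr = math.sqrt(1 / a - 1 / (a + b) + 1 / c - 1 / (c + d))
    lower = math.exp(math.log(rr) - z * se_log_rr)
    upper = math.exp(math.log(rr) + z * se_log_rr)
    return rr, (lower, upper)

# Placeholder counts only -- not the actual cohort data
print(risk_ratio(3, 2, 2, 97))
```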
Results
The study encompassed 104 HIV-exposed infants. The mean age of the seropositive mothers was 27.5 years (SD = 6.4 years), ranging from 16 to 39 years.
Table 1 presents the characteristics and data regarding the gestational follow-up of the seropositive mothers who were followed up to determine HIV seroconversion of their children.
Table 2 shows the demographic and prophylactic characteristics for vertical HIV infection in newborns of seropositive mothers.
Table 3 presents the description of confirmed cases of seroconversion for HIV infection during the study period.
Seroconversion was documented in 4.8% of the exposed infants during the study period, ranging from 6.7% in 2005 to 20.0% in 2012 (Table 5). Considering the total number of live births in the AMUREL region during the study period, the HIV infection rate in pregnant women was 2.2 per 1,000 live births, whereas the rate of HIV infection in newborns was 0.1 per 1,000 live births.
Discussion
In the period between 2005 and 2015, there were 104 infants exposed to HIV, 5 of whom had confirmed seroconversion associated with breastfeeding and no use of antiretroviral drugs during pregnancy. These data corresponded to a rate of 2.2 infected pregnant women per 1,000 live births, which was similar to the national average (2.7 per 1,000), but lower than the average in the state of Santa Catarina (3.2 per 1,000 live births). In Santa Catarina, 2,779 HIV-positive exposures during gestation and delivery were reported among the 861,168 live births between 2005 and 2015. Of these, 84 children were infected with HIV, 7 died of AIDS, and 66 were lost to follow-up.11 These data corresponded to a seroconversion rate of 0.1 per 1,000 live births, the same rate found in the present study. The use of zidovudine (AZT), defined by the Pediatric Clinical Trials Group 10 protocol, reduced vertical transmission in almost 70% of cases. Currently, a triple antiretroviral regimen is recommended, together with the indication of the route of delivery depending on the plasma viremia, for less fetal involvement. Furthermore, HIV-infected mothers should avoid breastfeeding, and AZT should be administered orally to the newborn until six weeks after birth. When these recommendations are followed, vertical transmission reaches rates no higher than 1% or 2%.12,13 The use of AZT, even when administered at a late stage of pregnancy, or when administered only to the newborn after birth, reduces vertical HIV transmission, regardless of the viral load level.14 Findings from this study revealed that the non-use of ARV during gestation and breastfeeding were the factors statistically associated with seroconversion of HIV infection in the newborn. Late diagnosis, coupled with ignorance about HIV transmission routes, such as breastfeeding, may contribute to increased vertical transmission. Furthermore, a little more than half of the newborns used ARV until the recommended six-week period after birth, which revealed the mothers' failure to comply with the vertical transmission prophylaxis protocol. This behavior may be explained by the difficulty mothers of infected infants (data not shown) have in adhering to their own treatment regimens.[16][17] A small number of pregnant women living with HIV did not use ARV during gestation and delivery. This unsatisfactory coverage revealed that the health surveillance system should have a better information system to identify pregnant women who lack prenatal care7,15,16 or are unaware of their serological status before delivery. Pregnant women who come late for antenatal care (after week 16 of gestation) are less likely to receive effective prevention. When HIV infection is diagnosed during pregnancy, ART should be started immediately, and no later than week 14 of gestation, to avoid prolonged exposure to a high viral load.18,19 In the cases of confirmed seroconversion, the majority had a high viral load and a low CD4 T-lymphocyte count, which revealed immunodepression among the women, favoring mother-to-infant transmission.14,16,19 It should be noted that the date of initiation of ARV in gestation is not included in the Brazilian information system for notifiable diseases (Sinan, Portuguese acronym). The Sinan form only indicates whether the pregnant woman is or is not using ARV, without informing the initiation date, which makes it difficult to assess this aspect in relation to the infected outcome.
Seropositive pregnant women with a viral load greater than or equal to 1,000 copies/mL, or of unknown status after week 34 of gestation, should undergo cesarean section, with infusion of AZT from three hours before surgery until delivery.6,7 If a woman arrives at the hospital in early labor, AZT should be started right away and continued until delivery, avoiding umbilical cord blood and amniotic fluid collection and the use of forceps, for example.6,10,20 In the present study, cesarean section was the predominant mode of delivery. However, there was no record of maternal viral load in the peripartum. In less than 20% of cases, vaginal delivery was chosen due to the evolution of labor. In vaginal delivery, episiotomy should be avoided, and labor should be monitored using an evolution chart (partograph), avoiding repeated vaginal examinations.6,7 In 2004, a study conducted at the Clinical Hospital of the Federal University of Minas Gerais assessed 85 seropositive pregnant women who attended the health care service. More than half of the pregnant women (56.7%) received ARV as a therapeutic indication and 43.3% as prophylaxis for vertical transmission. Vaginal delivery accounted for 27.6% of total deliveries, and no vertical transmission was observed.15 In 2007, a study conducted on 389 seropositive pregnant women in Belo Horizonte, Minas Gerais, revealed that 48.6% had started treatment between week 14 and week 27 of gestation, and seroconversion was 5.7%, results similar to those of the present study.21 Global HIV 90-90-90 targets were set so that, by 2020, 90% of all people living with HIV will know their HIV status, 90% of all people with diagnosed HIV infection will receive antiretroviral therapy, and 90% of all people receiving antiretroviral therapy will have viral suppression, in order to end the AIDS epidemic by 2030.22 For that purpose, the evolution of the disease must be monitored concomitantly with the advancement of society, to assess the need for rearrangement of HIV/AIDS programs, the adequacy of public policies, protocols of coping strategies, and hospital care routines. It is also necessary to evaluate program management and the roles and responsibilities of the different levels of government to address this issue. Although there are efficient protocols for the prevention of vertical transmission, the Sinan does not provide data on seroconversion for regional, state, or national comparisons.
Our study concluded that HIV seroconversion among exposed newborns was 4.8% between 2005 and 2015. Seroconversion was associated with non-use of antiretroviral therapy during gestation and with breastfeeding, showing flaws in prenatal care. Evidently, the WHO targets for ending vertical transmission have not yet been met. Therefore, it is important to provide prenatal care to all pregnant women, with an early diagnosis of HIV infection, and to follow the prophylaxis protocol for vertical transmission.
Education programs focusing on pregnant women about mother-to-child HIV transmission may increase adherence to ARV and halt breastfeeding.
This study has some limitations, beginning with the data being collected from secondary records, with several information gaps and follow-up losses that may have influenced the data analysis. The notification forms were not properly filled in, with some blank answers, high rates of loss to follow-up or transfer out, when the case is closed without a conclusion, and double notifications within the same state, revealing the fragility of the data contained in the national bulletins.
There were some technical difficulties in collecting such data, given that, in spite of the national notification system of exposure, the records were kept on physical files. At the end of the 18-month period, the case was notified according to the outcome, without correlation between maternal and newborn HIV exposure. Therefore, the infant could be notified more than once, depending on the health service. Death and seroconversion data are only available through a direct online consultation with the Epidemiological Surveillance System in each Brazilian state.
Table 1
Gestation-related features of HIV seropositive pregnant women (n=104), attending the town's health care center. Southern Region of Santa Catarina, 2005-2015.
Table 2
Epidemiological characteristics of newborns tested for HIV seroconversion through vertical transmission (n = 104) at the town's health care center. Southern Santa Catarina, 2005-2015.
Table 3
Seroconversion description of five cases of HIV-infected newborns through vertical transmission, monitored at the town's health care center. Southern Santa Catarina, 2005-2015.
†Last record available during pregnancy.
Table 4
Factors associated with HIV seroconversion in newborns through vertical transmission, monitored at the town's health care center. Southern Santa Catarina, 2005-2015.
Table 5
Case distribution and seroconversion rate in live births, monitored at the town's health care center. Southern Santa Catarina, 2005-2015.
|
v3-fos-license
|
2019-03-17T13:06:54.683Z
|
2015-11-01T00:00:00.000
|
80240066
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "http://jrcs.sljol.info/articles/10.4038/jrcs.v20i1.10/galley/9/download/",
"pdf_hash": "337e4dfbec37ca7ba770ae3fafd5f7e80102fcdf",
"pdf_src": "Adhoc",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46689",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "18bbb0bc7a3d82da077f6f49a61a7dead5eb678a",
"year": 2015
}
|
pes2o/s2orc
|
Spinal tuberculosis paraplegia associated with anterior spinal artery infarction: a very rare presentation.
A forty five year old lady from Colombo presented with subacute onset paraplegia with sensory level up to T8. She was apparently well until two months back when she developed cough which was not productive and not associated with fever for which she received some oral antibiotics. Over the next two weeks she developed severe backache localized to thoracic region, which was aggravated by movements but not radiating to legs. There was no associated spinal deformity or kyphosis. Subsequently she noticed weakness and numbness of bilateral lower limbs which developed within two days duration while upper limbs were preserved. Same time she developed urinary and faecal incontinence. Examination revealed sensory level of T6 with impaired pain and temperature sensation below T6 level with exaggerated deep reflexes of lower limbs. Joint position sensation and vibration were intact.
Introduction
Tuberculosis remains a major cause of morbidity and mortality in developing countries. Paraplegia is the most dreaded complication of spinal tuberculosis. Here we present a patient with a clinical picture consistent with spinal tuberculosis associated with anterior spinal arterial infarction presenting as paraplegia.
Case report
A forty five year old lady from Colombo presented with subacute onset paraplegia with sensory level up to T8. She was apparently well until two months back when she developed cough which was not productive and not associated with fever for which she received some oral antibiotics. Over the next two weeks she developed severe backache localized to thoracic region, which was aggravated by movements but not radiating to legs. There was no associated spinal deformity or kyphosis. Subsequently she noticed weakness and numbness of bilateral lower limbs which developed within two days duration while upper limbs were preserved. Same time she developed urinary and faecal incontinence. Examination revealed sensory level of T6 with impaired pain and temperature sensation below T6 level with exaggerated deep reflexes of lower limbs. Joint position sensation and vibration were intact.
Initially, transverse myelitis was suspected on clinical grounds, and intravenous immunoglobulin therapy was given along with intravenous dexamethasone for five days. There was some improvement in the paraplegia but it was not sustained. A gadolinium-enhanced MRI of the brain excluded multiple sclerosis, which can be associated with transverse myelitis. Urgent magnetic resonance imaging (MRI) of the brain and whole spine revealed bilateral homogeneously enlarged lacrimal glands and a soft tissue density in the anterior epidural space extending from the C8/T1 to T6/T7 level with mild spinal cord oedema. MRI and high-resolution computed tomography of the chest showed a soft tissue density lesion in the left hilar region with multiple nodular opacities scattered in both lung fields. There was no mediastinal or hilar lymphadenopathy (Figure 1). Paraplegia associated with a dorsal spine anterior epidural soft tissue density as well as the hilar mass raised the suspicion of lymphoma or tuberculosis, while the bilaterally enlarged lacrimal glands pointed towards sarcoidosis. Serum angiotensin-converting enzyme and calcium levels were persistently low. She underwent video-assisted thoracoscopy, and a biopsy was taken from the hilar mass. Interestingly, microscopy of the lung tissue revealed multiple large foci of granulomas composed of aggregates of epithelioid histiocytes, Langhans-type multinucleated giant cells and central caseation. Ziehl-Neelsen staining showed fragmented acid-fast bacilli and the diagnosis of caseous tuberculosis was strongly suggested.
Erythrocyte sedimentation rate was 75 mm; sputum for acid-fast bacilli, sputum culture and the Mantoux test were negative. She was started on anti-tuberculosis treatment (ATT) with isoniazid, rifampicin, pyrazinamide and ethambutol. On day 10 she developed severe drug-induced hepatitis leading to hepatic encephalopathy and was admitted to the intensive care unit for observation. Once she became clinically and biochemically stable, under very close observation the ATT regimen was reintroduced gradually and was tolerated.
During the course of treatment, MRI of the whole spine was repeated ten weeks later which, to our surprise, showed clearing of the previously noted epidural soft tissue density, revealing a low intensity lesion on T1-weighted images and a high signal intensity lesion on T2-weighted images, indicating a short-segment anterior spinal artery territory infarction from the T3 to T5 level (Figures 2 and 3).
The lesion was darker than the gray matter in contrast to the substance of the cord. Thrombophilic screening proved negative. 2D echocardiography excluded any cardiac source of embolism. Tuberculous endarteritis leading to obliteration of the anterior spinal artery, which is a rare cause of spinal tuberculosis paraplegia, was diagnosed. With antituberculosis treatment and continuous physiotherapy she gradually gained some sustained improvement, becoming able to move the legs, but movement of the lower limbs against gravity was not achieved, and the sensory impairment and incontinence persisted.
Discussion
Around 10% of patients with extrapulmonary tuberculosis may have bone and joint involvement, and half of them will have tuberculosis of the spine. [1] Among patients with spinal tuberculosis, the incidence of neurological involvement is 10-46%. [2] Spinal tuberculosis is common among children and young adolescents but may occur in any age group. Paraplegia is the most dreaded complication of spinal tuberculosis.
Among the patients with active spinal tuberculosis, paraplegia may be caused by direct involvement of meninges and spinal cord by the tuberculous infection or by direct mechanical pressure on the spinal cord by tubercular abscess, caseous granulation tissue and debris or by intrinsic changes in the spinal cord such as inflammatory oedema. Infective thrombosis or endarteritis of spinal vessels leading to infarction of the spinal cord is a very rare cause of paraplegia. [3] Granulomatous endarteritis of anterior spinal artery is a rare specific manifestation of spinal tuberculosis. In our patient the infection seemed to have spread from the hilar caseous lesion via haematological seeding to the arteries of spinal cord. Infarction of spinal cord is an unusual but important cause of paraplegia and it is caused by endarteritis, periarteritis or thrombosis of an important tributary to the anterior spinal artery caused by inflammatory reaction [4].
Spiller WG (1909) first described anterior spinal artery syndrome as comprising pain at the level of the lesion, disturbance of pain and temperature sensation, paraparesis or tetraparesis, and urinary incontinence [5]. Currently, MRI is considered the modality of choice for confirming the diagnosis of anterior spinal artery syndrome in suspected patients. [6] Relative to normal spinal cord, the lesions of anterior spinal artery infarction appear hyperintense on T2-weighted images and hypointense on T1-weighted images, and the lesions are usually seen anteriorly in the spinal cord. [6] A finding of vertebral body infarction contiguous to a cord signal abnormality on MRI is a definite indicator of ischemia and a useful confirmatory sign if present; however, this is established in only 4 to 35 percent of patients, and its absence does not exclude spinal cord infarction. [7] We believe this patient suffered from an infarction of the spinal cord because of the typical clinical features on presentation, the absence of cord compression, and the exclusion of other known neurological diseases. A typical loss of motor neuron function with dissociated sensory impairment below the level of the lesion pointed to anterior spinal artery syndrome [8] as the result of an infarction occurring in the region supplied by this artery. MRI during the acute stage revealed soft tissue intensity in the anterior epidural space with mild spinal cord oedema. Ten weeks later, repeat MRI revealed a low intensity lesion on T1-weighted images and a high signal intensity lesion on T2-weighted images, indicating a distinct focus of short-segment anterior spinal artery territory infarction from the T3 to T5 level. A similar finding was reported in patients with spinal cord ischemia. [9] This unique case emphasizes that obliterative tuberculous endarteritis leading to anterior spinal artery occlusion should be considered in the differential diagnosis of any paraplegia, especially in countries with a high incidence and prevalence of tuberculosis.
Efficacy and Side Effects of Deferasirox and Deferiprone for Thalassemia Major in Children
Thalassemia major (TM) is an inherited disease caused by defective or absent hemoglobin chain synthesis. Regular chelation therapy is necessary to reduce excess iron in several organs of TM patients. The most commonly used chelating agents are deferasirox and deferiprone. However, information regarding their effectiveness and side effects in the Indonesian pediatric population with TM is limited. This study was conducted to assess the effectiveness and side effects of deferasirox and deferiprone in pediatric patients with TM. This was an observational study with prospective analysis conducted during April-August 2015. We included pediatric patients with TM who visited a hospital in Bandung, Indonesia, using a consecutive sampling method. Thirty-two subjects were divided into two groups, i.e., a deferasirox and a deferiprone group. Review of medical records and interviews were performed for each participant. Effectiveness was defined as a reduction in ferritin level. Side effects were assessed using the Naranjo scale. Data were analyzed using the Mann-Whitney test, Wilcoxon test and Chi-square test. A P value < 0.05 defined statistical significance. We found that deferasirox was more effective than deferiprone for the treatment of TM in pediatric patients, with fewer side effects. The use of deferasirox as an iron chelating agent is recommended for patients with TM.
Introduction
Thalassemia major (TM) is an inherited disease caused by defective or absent hemoglobin chain synthesis. TM genes are particularly frequent among people of Mediterranean, Middle Eastern, and South East Asian origin, including Indonesia. The World Health Organization (WHO) reported that about 7% of the world's population are TM carriers and approximately 300,000 to 500,000 babies are born with TM every year. 1 TM patients should undergo monthly transfusions during their lifetime in order to maintain their hemoglobin level in the range of 9-10 g/dl. However, transfusion cannot prevent the accumulation of iron (approximately 200 mg per transfusion) in various organs. Therefore, the use of a chelating agent is necessary to reduce excess iron in these organs. 2 Chelation therapy can be started once TM patients have undergone 10-20 transfusions or the ferritin value has reached 1,000 ng/mL. The side effects generated by each chelating agent can be anticipated by monitoring liver and kidney function periodically. The chelating agents most commonly used by thalassemia patients are deferasirox and deferiprone. 3 Deferasirox is used for treating iron toxicity by binding trivalent iron and forming a stable complex that is eliminated via the kidneys. Deferiprone is also a chelating agent, one which is more selective for iron. 4 Chelation therapy requires compliance and thorough attention to the proper way of administration; the desired therapeutic effect can be reached with such actions. 5 Information regarding the effectiveness and side effects of deferasirox and deferiprone in the Indonesian pediatric population with TM was limited.
This study aimed to assess the efficacy and safety of deferasirox and deferiprone in Indonesian patients with TM.
Methods
The population of this study was TM pediatric patients attending the outpatient clinic at Dr. Hasan Sadikin General Hospital, Bandung, during 2015. We included pediatric patients who had received blood transfusions > 10 times or had a serum ferritin level > 1,000 ng/mL, received deferiprone or deferasirox therapy, were aged between 6-12 years, and had complete medical records, i.e., the monthly results of serum creatinine and ureum and the quarterly results of ferritin, SGOT, and SGPT. We excluded patients who used a combination of deferasirox and deferiprone therapy. 6 This was an observational study with a prospective cross-sectional analysis. The study was conducted by reviewing medical records and interviewing the patients from April until August 2015. From each medical record, we extracted data regarding name, sex, age, volume of transfusion, serum ferritin, creatinine, urea, SGOT, and SGPT. Interviews were conducted using the Naranjo scale to assess side effects based on patient-reported outcomes. 5 Data were analyzed using Student's t-test, chi-square, Mann-Whitney, and Wilcoxon tests. P < 0.05 defined statistical significance.
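The comparisons named above can be sketched in a few lines; the numbers below are hypothetical placeholders (the real values come from the hospital medical records), and the scipy functions are chosen only because they mirror the tests the study names, not because they reproduce the authors' actual workflow.

```python
import numpy as np
from scipy import stats

# Hypothetical ferritin values (ng/mL) before and after therapy for each group
deferasirox_before = np.array([2800, 3100, 2500, 4000, 3600, 2900])
deferasirox_after  = np.array([2100, 2600, 2000, 3300, 3000, 2400])
deferiprone_before = np.array([2700, 3300, 2600, 3900, 3500, 3000])
deferiprone_after  = np.array([2650, 3200, 2700, 3800, 3400, 2950])

# Within-group change (paired, non-parametric): Wilcoxon signed-rank test
w_dfx = stats.wilcoxon(deferasirox_before, deferasirox_after)
w_dfp = stats.wilcoxon(deferiprone_before, deferiprone_after)

# Between-group comparison of ferritin reduction: Mann-Whitney U test
reduction_dfx = deferasirox_before - deferasirox_after
reduction_dfp = deferiprone_before - deferiprone_after
mw = stats.mannwhitneyu(reduction_dfx, reduction_dfp, alternative="two-sided")

# Association between chelating agent and a categorical outcome
# (e.g., side effect reported yes/no): chi-square on a 2x2 table (hypothetical counts)
table = np.array([[4, 12],   # deferasirox: side effect yes / no
                  [9,  7]])  # deferiprone: side effect yes / no
chi2, p, dof, expected = stats.chi2_contingency(table)

print(w_dfx, w_dfp, mw, p)  # P < 0.05 treated as statistically significant
```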
Results and Discussion
Thirty-two participants were included in this study. Participants were divided into two groups based on the type of chelation therapy used, i.e., deferasirox or deferiprone. General characteristics of the subjects can be found in Table 1. Based on the chi-square analysis, there was no association between gender, age, or transfusion volume and the type of chelating agent used. The initial dose of deferiprone was 75 mg/kg of BW per day, while that of deferasirox was 20 mg/kg of BW per day. A previous study showed that the use of these doses is effective in binding and excreting excess iron. 2 The reduction of ferritin in the deferasirox group was significantly different from that in the deferiprone group (Table 2). The mean ferritin reduction with deferasirox was 45.76 (Table 2). This finding was comparable with a previous study. However, patients treated with deferiprone had less myocardial iron burden due to its ability to remove cardiac iron. The combination of these two drugs could potentially result in more favourable outcomes. [6][7][8][9][10] Patient adherence is one of the major issues in the treatment of TM patients with chelating agents. One of the factors driving poor adherence is an inconvenient dosing schedule; the use of a once-daily dosage of deferasirox is more likely to result in improved patient adherence to medication. 11 In this study, the assessment of side effects was performed by interviewing participants using the Naranjo scale, which comprises 10 questions related to the adverse events experienced by the patients during therapy. We found that more patients in the deferiprone group reported side effects. The use of chelating agents was significantly associated with increasing levels of ureum and creatinine (Table 4). In TM patients, the incomplete formation of erythrocytes can result in anaemia, a condition that may alter kidney function. 13 A previous study showed that long-term use of chelating agents may induce hepatotoxicity. 12 In this study, the parameters related to liver function, such as SGOT and SGPT, were not significantly different between the two groups (Table 4). This might be explained by the relatively similar duration of chelating agent use among the participants. Patients with pre-existing liver disease should use chelating agents with caution. 12
Polymorphisms of the hepatic genes known to be involved in deferasirox excretion may also influence the development of hepatotoxicity. 12
Conclusion
Deferasirox was more effective than deferiprone for the treatment of TM in pediatric patients, with fewer side effects. The use of deferasirox as an iron chelating agent is recommended for patients with TM.
Light Bending and Stability Analysis in Weyl Conformal Gravity
Employing the method recently proposed by Rindler and Ishak, the bending of light is calculated to second order, which reveals the exact Schwarzschild terms as well as the effects arising from the parameters of the Mannheim-Kazanas solution of Weyl conformal gravity. Next, using the approach of autonomous dynamical systems, the stability of circular motion of massive and massless particles in this spacetime is investigated. The main results justify why the Rindler-Ishak method is to be preferred over textbook methods when asymptotically non-flat spacetimes are concerned. It turns out that there is no stable circular radius for light motion in the considered solution.
I. Introduction
Classical Einstein's general relativity theory (EGRT) has been nicely confirmed within the weak-field regime of solar gravity and binary pulsars. Certainly it continues to remain one of the cornerstones of modern physics. However, it must be said in all fairness that, within the ambit of classical EGRT, serious challenges still exist. For instance, observations of flat rotation curves in the galactic halo still lack a universally accepted satisfactory explanation. The most widely accepted explanation, based on EGRT, hypothesizes that almost every galaxy hosts a large amount of nonluminous matter, the so-called gravitational dark matter [1], consisting of unknown particles not included in the particle standard model, forming a halo around the galaxy. This dark matter provides the needed gravitational field and the required mass to match the observed galactic flat rotation curves. The exact nature of either dark matter or dark energy remains largely unknown, except that the former has to be attractive on the galactic scale and the latter repulsive on the cosmological scale. These requirements lead us to explore alternative theories, such as Modified Newtonian Dynamics (MOND) [2,3], the braneworld model [4], and scalar models [5]. A prominent candidate is Weyl conformal gravity, which keeps intact the weak-field successes of EGRT and potentially resolves the dark matter/dark energy problem without hypothesizing them. By itself, Weyl conformal gravity seems quite as elegant as other theories because it is based on conformal invariance with an associated 15-parameter largest symmetry group. An interesting solution in this theory is the Mannheim-Kazanas (MK) metric [6], which has successfully interpreted galactic flat rotation curves without invoking the elusive dark matter. The MK solution contains two arbitrary parameters, γ and κ, that are expected to play prominent roles on the galactic halo and cosmological scales, respectively. The fit to galactic flat rotation curves requires γ > 0, with a numerical value of the order of the inverse Hubble radius [6c]. Therefore, it is expected that γ > 0 would lead to an enhanced bending of light due to the non-luminous halo over the usual Schwarzschild bending due to the luminous galactic mass. This enhancement is consistent with the observed attractive halo gravity.
Interestingly, κ cancels out of the light path equation, and one might be led to believe that κ has no role in light bending. Textbook methods of calculating the bending using that path equation would then lead to a diminished bending, which conflicts with observation. The purpose of the present article is to justify why the Rindler-Ishak method [7] has to be preferred over textbook methods when asymptotically non-flat spacetimes are concerned. The method not only gives the needed enhanced bending due to the attractive halo gravity but also leads to a new additional effect right in the first-order bending of light. Next, we proceed to investigate the stability of circular orbits of massive and massless particles via the approach of dynamical systems, which also suggests that γ > 0. All the results are summarized at the end.
II. Geodesic Equation
The Weyl action is given by
$$I_W = -\alpha_g \int d^4x \,(-g)^{1/2}\, C_{\lambda\mu\nu\kappa} C^{\lambda\mu\nu\kappa}, \qquad (1)$$
where $C_{\lambda\mu\nu\kappa}$ is the Weyl tensor and $\alpha_g$ is the dimensionless gravitational coupling constant. Variation of the action with respect to the metric gives the field equations. We can immediately confirm that the Schwarzschild solution (γ = κ = 0) is indeed an exterior solution to the theory, so that the successes of the solar-system tests are already embedded into Weyl gravity. An interesting solution of the field equations is the MK metric, given by [6] (vacuum speed of light $c_0 = 1$, unless restored):
$$ds^2 = -B(r)\,dt^2 + \frac{dr^2}{B(r)} + r^2\left(d\theta^2 + \sin^2\theta\, d\phi^2\right), \qquad B(r) = 1 - 3\beta\gamma - \frac{\beta(2 - 3\beta\gamma)}{r} + \gamma r - \kappa r^2,$$
where $\beta$, $\gamma$ and $\kappa$ are constants. The numerical values $\kappa \approx 10^{-56}\ \mathrm{cm}^{-2}$ and $\gamma \approx 3.06 \times 10^{-30}\ \mathrm{cm}^{-1}$ are determined from the fit to galactic flat rotation curve data [6c].
From the geodesics of this metric, we get the path equation for a test particle of rest mass $m_0$ on the equatorial plane $\theta = \pi/2$, where $h$ denotes the angular momentum per unit test mass. Due to the conformal invariance of the theory, geodesics for massive particles would in general depend on the conformal factor $\Omega^2(x)$, but here a fixed conformal frame has been considered, not the other conformal variants of the metric. For a photon, $m_0 = 0$ implies that $h \to \infty$, and one ends up with the conformally invariant orbit equation in which κ makes no appearance. In the Schwarzschild-de Sitter (SdS) metric, such a cancellation has been noted for long [8]: the cosmological constant Λ does not appear in the light path differential equation, and hence it is believed that Λ does not influence light bending [9][10][11][12][13]. Here we find that the cancellation of κ occurs despite the presence of κ in the metric. Exactly as in the SdS case, one would now expect the bending of light to be the same, to any order, with or without κ. However, Rindler and Ishak [7] have shown that this need not be the case. They argued that "the differential equation and its integral are only half of the story. The other half is the metric itself, which determines the actual observations that can be made on the orbit equation. When that is taken into account a quite different picture emerges: the cosmological constant Λ does contribute to the observed bending of light." This argument also finds support in the fact that the effect due to κ must appear via consideration of the full metric in the calculation of physically observable effects, such as the bending of light rays.
III. Bending of light rays
Although the MK metric is different from the SdS metric, it will be shown that the influence of κ still appears in the bending provided the calculations are done using the Rindler-Ishak method. The light path equation at zeroth order is Eq. (7), whose solution is the straight line $r_0 \sin\phi = R$ parallel to the x-axis, where $R$ is the distance of closest approach to the origin (just the perpendicular distance). Following the method of small perturbations [14], we derive the solution up to second order and, after imposing the appropriate asymptotic condition, rewrite it in the form of Eq. (9). The method of Rindler and Ishak [7] is based on the invariant formula for the cosine of the angle ψ between two coordinate directions d and δ. Differentiating Eq. (9) with respect to φ and inserting the result into this invariant formula, Eq. (10) yields ψ or, in a more convenient form, tan ψ. The one-sided bending angle is given by ε = ψ − φ, and we evaluate ψ at φ = 0. Putting the values from Eqs. (14) and (15) into Eq. (13) and expanding in powers of the small angle ψ₀ to second order, we obtain the final expression, Eq. (17). The roles of γ and κ are quite evident in the result. It is found that the contribution of κ to the bending is exactly the same as that found in Ref. [7], while the exact first- and second-order Schwarzschild terms agree with those derived by Bodenner and Will [14]. The result shows that κ does influence the bending even though the trajectory equation (6) does not contain κ. For κ = 0, the total Schwarzschild bending is found to be enhanced by a halo contribution associated with γ, which is quite consistent with the attractive halo gravity.
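For orientation, the invariant-angle construction of Rindler and Ishak [7] referred to above measures the angle ψ between the photon direction $d$ and a coordinate direction $\delta$; written for a static, spherically symmetric metric with metric function $B(r)$ (the MK form here), its standard form is
$$\cos\psi = \frac{g_{ij}\, d^{i}\delta^{j}}{\sqrt{g_{ij}\, d^{i} d^{j}}\,\sqrt{g_{kl}\,\delta^{k}\delta^{l}}}, \qquad \tan\psi = \frac{\sqrt{B(r)}\; r}{\left|dr/d\phi\right|}, \qquad \epsilon = \psi - \phi,$$
with $\epsilon$ the one-sided bending angle evaluated at $\phi = 0$.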
Next, an entirely new effect has been noticed: the last term in Eq. (17) contains a coupling between the parameters, giving rise to a small dimensionless factor that adds a constant to unity. This leads to a Weyl-gravity modification of the observed first-order bending itself. To get an idea of the magnitude involved, let α arcsec be the total first-order bending in solar gravity, where γ₀ denotes the first post-Newtonian parameter. The prediction from EGRT gives α = 4m/R = 1.7504 arcsec. Putting this in the above and assuming γ ≈ 10⁻³⁰ cm⁻¹ with the solar mass parameter ≅ 3 × 10⁵ cm, we get a value γ₀ ≈ 1 − 1.5 × 10⁻²⁷. The currently estimated value is γ₀ = 2 × (0.99992 ± 0.00014) − 1 [15], which is close to 1 up to an accuracy of 10⁻⁴. Note that the second post-Newtonian correction demands an accuracy of the order of 10⁻⁶, but its measurement is already beset with some technical difficulties, though not insurmountable (see Refs. [14][15][16]). Naturally, the accuracy of 10⁻²⁷ demanded by matching γ from the rotation curve data with that from solar gravity is technologically unattainable even in the far future.
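To get a rough feel for where the MK parameters matter, a short numerical check (keeping only the dominant terms of the metric potential, 2β/r, γr and κr², and using illustrative masses that are assumptions, not values quoted in this paper) compares the three terms at solar-system and galactic radii:

```python
# Order-of-magnitude comparison of the MK metric terms 2*beta/r, gamma*r and kappa*r^2.
# gamma and kappa are the fitted values cited in the text; the masses are illustrative.
gamma = 3.06e-30   # cm^-1
kappa = 1.0e-56    # cm^-2
beta_sun    = 1.5e5        # cm, ~GM_sun/c^2 in geometric units (assumed)
beta_galaxy = 1.5e5 * 1e11 # cm, a ~1e11 solar-mass galaxy (assumed)

for label, beta, r in [("1 AU around the Sun",    beta_sun,    1.5e13),
                       ("10 kpc around a galaxy", beta_galaxy, 3.1e22)]:
    print(f"{label}: 2*beta/r = {2*beta/r:.2e}, "
          f"gamma*r = {gamma*r:.2e}, kappa*r^2 = {kappa*r**2:.2e}")

# The gamma*r term is negligible at solar-system scales but becomes comparable
# to the Newtonian term at galactic radii, which is why gamma is associated with
# the halo contribution to rotation curves and to the enhanced light bending.
```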
Effects of medium-chain fatty acids as alternatives to ZnO or antibiotics in nursery pig diets
Abstract The objective of this experiment was to evaluate the effects of medium-chain fatty acids (MCFA) on nursery pig performance in place of ZnO and carbadox. In this trial, 360 weanling pigs (DNA 200 × 400; 5.4 ± 0.07 kg BW) were fed for 35 d, with 6 pigs/pen and 10 replicate pens/treatment. Upon weaning, pigs were weighed and allotted to pens based on BW in a completely randomized design to one of six treatment diets: 1) Negative control (no added ZnO or carbadox); 2) Control + 3,000 ppm ZnO in phase 1 and 2,000 ppm ZnO in phase 2; 3) Control + 50 g/ton carbadox; 4) Control + C6:C8:C10 MCFA blend; 5) Control + Proprietary Oil Blend (Feed Energy Corp.); and 6) Control + monolaurate blend (FORMI GML from ADDCON). Treatment diets were fed through two dietary phases and a common diet fed through phase three. Pigs and feeders were individually weighed on a weekly basis to determine average daily gain (ADG) and average daily feed intake (ADFI). From days 0 to 19, pigs being fed the ZnO or Carbadox diets had the greatest ADG. These pigs had significantly higher (P < 0.05) ADG than pigs fed the control or Feed Energy Proprietary Oil Blend, whereas pigs fed the C6:C8:C10 blend or FORMI GML diets had similar (P > 0.05) ADG compared with those fed carbadox. These effects were primarily driven by feed intake, which was greatest (P < 0.05) in pigs fed ZnO and carbadox. Treatment diet had a marginally significant effect (P = 0.078) on G:F. Increased day 19 BW (P < 0.05) was observed for pigs fed ZnO and carbadox compared with the negative control, whereas other treatments were intermediate. Additionally, blood data and fecal scores were collected throughout the trial. On day 21, pigs fed ZnO or carbadox had higher (P < 0.0001) glucose values than those fed the Feed Energy Proprietary Oil Blend, with other diets being intermediate, showing potential health benefits of carbadox. Although ZnO resulted in higher glucose values, it may also contribute to hepatic issues. Although replacing ZnO and carbadox with MCFA did not result in significant changes in gut microflora, it did affect fecal consistency by softening the feces during the treatment period. Overall, these results show that ZnO and carbadox are valuable additives to help maximize growth performance in early stages of the nursery. Some MCFA products, like FORMI GML, may result in similar performance, whereas others restrict it. Thus, additional research is needed to study the effectiveness of MCFA to replace ZnO or feed-based antibiotics.
INTRODUCTION
The postweaning period is typically a time of health challenge and limited growth performance. Pigs can be stressed from being placed in a new environment, and immature digestive systems can result in reduced feed intake and feed efficiency. Additionally, increased risk for intestinal health problems can often be prevalent with diarrhea stemming from bacterial sources (Pluske, 2013). Antimicrobial agents have been utilized for decades to treat these conditions and ultimately improve nursery pig health and growth performance. For example, supplementation of pharmacological levels (2,000 to 3,000 ppm) of ZnO is a common practice to reduce postweaning diarrhea (Liu et al., 2018). Additionally, feed-based antibiotics, such as carbadox, are widely used additives in swine diets, especially during the nursery stage when newly weaned pigs are subject to enteric diseases and reduced feed intake. Controlled research has shown that including antibiotic growth promoters, like carbadox, can increase growth rate and feed efficiency in weanling pigs by 16.4% and 6.9%, respectively (Cromwell, 2002). Despite these benefits, concerns with potential antibiotic resistance and antibiotic residue in animal products have surfaced (Bager et al., 2000;Gallois et al., 2009). Additionally, the use of pharmacological levels of ZnO has posed environmental concerns due to increased excretion of zinc in swine waste utilized as fertilizer (Jondreville et al., 2003). That said, their use is strictly regulated by the FDA to avoid the risk of potential residues and to maintain environmental and consumer health. With these regulations increasing and a rise in consumer pressure to eliminate the use of feed-based antibiotics in swine production, this leaves swine producers searching for antimicrobial replacements that can yield the same positive outcomes, while avoiding any negative consequences ( Landers et al., 2012;Center for Disease Control, 2013). One potential alternative is thought to be medium-chain fatty acids (MCFA). MCFA are saturated fatty acids with carbon chains 6 to 12 atoms long and consist of caproic (C6), caprylic (C8), capric (C10), and lauric (C12) acids that naturally occur in triglycerides of various feed ingredients. Their ability to be easily digested allows them to be utilized by the pig for growth, or by cells within the pig's gut to improve development and overall health (Zentek et al., 2011). Since MCFA are directly absorbed into circulation and easily oxidized by the liver, they can also serve as a very rapid energy source for pigs during stressful times (Babayan, 1987;Lee et al., 1994).
Their inclusion in swine diets has been demonstrated to reduce the risks of viruses in swine feed, and Cochrane et al. (2018) described their ability to replace 400 g/ton chlortetracycline in phase 2 nursery diets. Additionally, controlled research has reported increased growth performance when MCFA are fed in mid-to-late-nursery diets, even in the absence of health challenges (Thomson et al., 2018;Thomas et al., 2018). However, field research has shown mixed results, especially when feeding begins in early nursery. Therefore, the objective of this study was to evaluate the effectiveness of three different MCFA combinations as replacements for ZnO and carbadox on growth performance, fecal consistency, fecal dry matter, and blood parameters during the nursery phase.
MATERIALS AND METHODS
All experimental procedures adhered to guidelines for the ethical and humane use of animals for research according to the Guide for the Care and Use of Agricultural Animals in Research and Teaching (FASS, 2010) and were approved by the Institutional Animal Care and Use Committee at Kansas State University (IACUC #4036.20).
Animal Housing, Dietary Treatments, and Experimental Design
A total of 360 weanling pigs (DNA 200 × 400; 5.4 ± 0.07 kg BW; approximately 21 d of age) were used in a 35-d experiment with 6 pigs per pen and 10 replicate pens per treatment. Upon weaning, pigs were individually weighed and allotted to pens based on BW to one of six dietary treatments: 1) Negative control (no added ZnO or carbadox); 2) Control + 3,000 ppm ZnO in phase 1 and 2,000 ppm ZnO in phase 2; 3) Control + 50 g/ton carbadox; 4) Control + C6:C8:C10 MCFA blend; 5) Control + Proprietary Oil Blend (Feed Energy Corp.); and 6) Control + monolaurate blend (FORMI GML from ADDCON). Diets were isocaloric, with choice white grease used to balance the energy level. Diets were fed in three phases: phase 1 from days 0 to 7; phase 2 from days 7 to 21; and phase 3 from days 21 to 35 ( Figure 1). Phase 3 was a common diet fed to all pigs. All diets were made at the O.H. Kruse Feed Mill (Kansas State University, Manhattan, KS) and were fed in pellet form in phase 1 and in meal form in phases 2 and 3 of the nursery. Diets were also blinded and analyzed for proximate analysis and fatty acid profile at Midwest Laboratories (Midwest Laboratories, Omaha, NE). Target conditioning temperature for pelleting was ~51.7 °C for 30 s, with target hot pellet temperature ~71.1 °C. Pelleting parameters were die size of 3/16″ x 1 1/4 ″ (L/D = 6.0), 1.560 lb/h production rates, and approximately 72 °F ambient temperature.
Pigs were housed in a controlled environment nursery facility (Kansas State University Swine Research and Teaching Center, Manhattan, Kansas) with six pigs per pen. Each pen (1.52 × 1.52 m) included a 4-hole dry self-feeder and a cup drinker to provide all pigs ad libitum access to feed and water.
Sample Collection, Analyses, and Calculations
All pigs and feeders were weighed on a weekly basis to determine average daily gain (ADG) and average daily feed intake (ADFI). Whole blood samples were collected on days 0 and 21 and submitted to the Kansas State University Veterinary Diagnostic Laboratory (Kansas State University, Manhattan, KS) for complete blood panel, serum chemistry, and hepatic profile. Additionally, fecal swabs were taken from the same three pigs in each pen on days 0, 7, 14, 21, 28, and 35. Three fecal samples from the same pen were pooled for subsequent analysis for fecal microflora and antimicrobial resistance. Fecal scoring was conducted by two independent, trained scorers on days 0, 1, 2, 7, 14, 19, 28, and 35 to categorize the consistency of piglet feces per litter. A numerical scale from 1 to 5 was used: 1, hard pellet-like feces; 2, a firm formed stool; 3, a soft moist stool that retains shape; 4, a soft unformed stool; and 5, a watery liquid stool. Additionally, diets were analyzed for fatty acid profiles to determine the levels of C6:0, C8:0, C10:0, and C12:0 fatty acids (method 996.06, AOAC, 2007).
Data were analyzed using the GLIMMIX procedure of SAS (SAS Institute, Inc., Cary, NC) with pen as the experimental unit and room as a random effect. Results were considered significant if P ≤ 0.05 and marginally significant if 0.05 < P ≤ 0.10.
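A rough Python analogue of this model (a sketch with made-up pen-level ADG values, not the SAS GLIMMIX code the authors ran) fits a fixed treatment effect with a random intercept for room:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Illustrative pen-level data; the real analysis used SAS GLIMMIX with pen as
# the experimental unit and room as a random effect, which this only approximates.
treatments = ["control", "ZnO", "carbadox", "FORMI"]
rooms = ["R1", "R2", "R3", "R4"]
df = pd.DataFrame([
    {"room": room, "treatment": trt, "adg": adg}
    for room, vals in zip(rooms, [(0.20, 0.26, 0.25, 0.24),
                                  (0.21, 0.27, 0.26, 0.23),
                                  (0.19, 0.25, 0.24, 0.22),
                                  (0.22, 0.28, 0.27, 0.25)])
    for trt, adg in zip(treatments, vals)
])

# Fixed effect of dietary treatment, random intercept for room
fit = smf.mixedlm("adg ~ C(treatment)", data=df, groups=df["room"]).fit()
print(fit.summary())  # effects judged significant at P <= 0.05,
                      # marginally significant at 0.05 < P <= 0.10
```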
Nursery Pig Growth Performance
In the first week postweaning, pigs fed diets containing carbadox had greater (P < 0.05) ADG than those fed the MCFA or Feed Energy Proprietary Oil Blend. Feed intake was greater (P < 0.05) when pigs were fed diets supplemented with ZnO compared with those with the MCFA or Feed Energy Proprietary Oil Blend. This led to a marginally significant impact of diet on G/F from days 0 to 7, with the greatest feed efficiency occurring in pigs fed carbadox or FORMI GML and the poorest feed efficiency in pigs fed the MCFA blend. Although the FORMI GML product resulted in similar performance as ZnO and carbadox, the other MCFA products had adverse impacts. These findings somewhat refute research done by Hong et al. (2012) that describes the ability of MCFA to increase ADFI for the first two weeks following weaning when compared with diets containing antibiotics. Similarly, Rodas et al. (1990) found that MCFA inclusion at a rate of 20 to 60 g/kg could increase ADG and G/F in weanling pigs shortly after supplementation. A primary reason to describe this is the piglet's ability to effectively absorb and use MCFA (Odle et al., 1989). More specifically, Odle et al. (1999) explain that MCFA are able to be absorbed without hydrolysis by lipase, and they enter the liver faster; thus, they are hydrolyzed quicker and digested easier. However, the discrepancies between the results of these experiments and the current study warrant further research to evaluate MCFA impact on feed intake and feed efficiency during the first week postweaning.
Figure 1. Impact of dietary treatment on fecal score (scored on a 1-to-5 scale: 1, hard pellet-like feces; 2, firm formed stool; 3, soft moist stool that retains shape; 4, soft unformed stool; 5, watery liquid stool).
In phase 2 (days 7 to 19), pigs fed diets containing ZnO had greater (P < 0.05) ADG than those fed either the control or diets containing the MCFA or Feed Energy Proprietary Oil Blends. This was due to pigs consuming the ZnO diet having greater (P < 0.05) feed intake than those fed the MCFA or Feed Energy Proprietary Oil Blends and greater (P < 0.05) G/F than pigs consuming the control diet. Similarly, Cemin et al. (2018) showed that weanling pigs fed added ZnO had increased ADFI and enhanced growth performance. Yet, research conducted by Mellick et al. (2019) showed increased ADFI from inclusion of an MCFA blend. This contrasts with the results found in the current study, suggesting that further evaluation of feeding MCFA during mid-nursery is needed to better understand how different fatty acid blends and commercial products can affect growth performance (Table 1).
During the entire treatment period (days 0 to 19), pigs fed diets containing ZnO or carbadox had greater (P < 0.05) ADG than those fed control diets or diets containing the Feed Energy Proprietary Oil Blend. Other controlled research has demonstrated that the use of antibiotics, like carbadox, results in increased growth performance. In fact, Puls et al. (2018) found similar results when feeding two antibiotic feeding programs, one of which consisted of carbadox, and comparing them to nonmedicated control diets. Their results also displayed an increase in ADG, but no significant effect on feed efficiency. However, in the current experiment, there was a substantial feed intake improvement (P < 0.05) in pigs fed diets containing ZnO compared with those fed control diets or the MCFA or Feed Energy Proprietary Oil Blends, but there was no overall difference in feed efficiency during the treatment period.
As expected, there were no discernable differences (P > 0.10) in pigs fed common diets during phase 3 (days 19 to 35). However, there was sufficient difference in early growth performance to cause significant differences in both ADG and ADFI overall (days 0 to 35). Although all treatments had pigs starting with the same average weight, up to 0.32-kg difference in body weight was observed among treatments just 1-wk postweaning. By the end of the 35-d experiment, pigs fed diets containing ZnO or carbadox were at least 1.05 or 0.73 kg heavier than those fed control diets or diets containing the MCFA or Feed Energy Proprietary Oil Blends.
This study shows that ZnO and carbadox are valuable additives to help maximize performance in the early nursery period. These findings coincide with other research that states ZnO can promote growth performance during the postweaning period when included at pharmacological levels (Sales, 2013). However, this research also demonstrates that some lipid-containing feed additives, such as FORMI GML, may result in similar performance as ZnO and feed-based antibiotics. Yet, it was also determined that other MCFA products may actually reduce feed intake and subsequent growth when included in early nursery diets. Thus, when comparing the results of this study to others within this field, findings are variable. The current experiment showed that the FORMI GML product can yield similar growth performance as ZnO and carbadox, but it is unknown what specific mode of action allowed this product to perform in such a way. One possibility could be the specific MCFA profile in FORMI GML. The analyzed feed samples suggest that there was some variation in levels of MCFA in each diet, which could have affected the efficacy of each product in this scenario. Controlled research by Gebhardt et al. (2017) described that a blend of MCFA in nursery pig diets can result in improvement in growth performance; however, the effects of MCFA depend on the type and inclusion rate. Improvements in nursery pig growth performance were observed by Gebhardt et al. (2017) by including 0.50% C6 or C8 with 0.25 to 1.50% of a 1:1:1 blend of C6, C8, and C10. Therefore, further research is needed to study specific MCFA concentrations and how they affect growth performance at different inclusion levels (Table 2).
Blood Parameters
Day 0 blood data were collected and analyzed as a baseline for comparison. Although discrepancies were detected (P = 0.0351) for day 0 bicarbonate concentrations, by day 21 these values became similar (P = 0.0372). On day 21, pigs fed ZnO or carbadox had higher (P < 0.0001) glucose values than those fed the Feed Energy Proprietary Oil Blend, with other diets being intermediate, and feeding ZnO resulted in higher (P < 0.0001) alkaline phosphatase concentrations compared with all other treatments. Differences in day 21 urea nitrogen and anion gap were marginally significant (P = 0.056 and P = 0.070, respectively). No significant impact (P > 0.10) was found for day 21 concentrations of creatinine, protein, albumin, globulin, phosphorus, sodium, potassium, chloride, sorbitol dehydrogenase, creatine kinase, or bilirubin. These findings indicate that carbadox may provide a health benefit to pigs. Although the ZnO diet produced higher blood glucose values, it may also contribute to hepatic issues. Other diets remained intermediate. Further research is necessary to better comprehend the effects of MCFA on blood serum chemistry and hepatic profiles (Table 3).
Fecal Consistency and Gut Microflora
Initial fecal scoring on day 0 of the experiment showed similar fecal scores for all pigs at placement. However, on days 1, 2, 7, 14, and 19, pigs fed the ZnO and carbadox treatments had significantly lower fecal scores (P < 0.05) when compared with those fed the control diet or diets containing the MCFA blend, Feed Energy Proprietary Oil Blend, or FORMI GML.
Decoding the Consumer Mindset: Exploring the Role of E-WOM, Online Experiences and Brand Trust in KKV One-Stop Shop's Purchase Intention
This quantitative study investigates the influence of e-WOM, online shopping experiences and brand trust on purchase decisions at KKV one-stop shopping store, a prominent e-commerce platform specializing in one-stop shopping. By examining how these factors shape consumer mindsets and purchase decisions, the study aims to contribute to the growing body of knowledge in consumer behavior and e-commerce research, particularly in the context of one-stop shopping platforms. Primary data were collected through a questionnaire distributed through social media platforms using the Google Form application. The study used a purposive sampling technique, and the final sample consisted of 121 respondents. The results show that e-WOM, online shopping experience and brand trust have positive and statistically significant effects on purchase decisions. Specifically, favorable e-WOM communication, positive online shopping experiences and high levels of brand trust increase the likelihood that consumers will purchase from KKV one-stop shopping store. The findings highlight the importance of cultivating positive e-WOM, providing seamless online shopping experiences and fostering brand trust as strategies to improve consumer purchase decisions in the e-commerce space.
Introduction
The exponential growth of e-commerce and the increasing prevalence of online shopping have significantly transformed the retail landscape 1. As consumers become more technologically savvy and digitally connected, their purchase decisions are heavily influenced by various online factors. Among these factors, electronic word-of-mouth (e-WOM), online experiences, and brand trust have emerged as crucial determinants shaping consumer mindsets and behavior 2. E-WOM, defined as the exchange of product or service information among consumers via digital platforms, has become a powerful force in influencing purchase decisions 2. The ubiquity of social media, online reviews, and consumer forums has enabled consumers to access a wealth of peer-generated information, shaping their perceptions, attitudes, and purchase intentions. Previous studies have demonstrated the significant impact of e-WOM on consumer decision-making processes, highlighting its ability to amplify or diminish brand reputation and sales 2. Concurrently, online experiences have emerged as a pivotal factor in shaping consumer behavior. The seamless integration of digital technologies into the shopping journey has heightened consumer expectations for personalized, convenient, and engaging experiences 3. From intuitive website design and user-friendly interfaces to efficient order fulfillment and responsive customer service, online experiences profoundly influence consumer satisfaction, loyalty, and repurchase intentions 4. Moreover, brand trust has become a cornerstone of successful e-commerce strategies. In the digital realm, where physical interactions are limited, consumers heavily rely on the perceived trustworthiness, credibility, and reliability of brands 2. Fostering brand trust through transparent communication, data privacy practices, and consistent delivery of high-quality products and services can significantly impact consumer decision-making processes and long-term brand equity 5. Today's consumer has more choices than ever before. In this vast marketplace, with competing price points, varying fulfilment speeds and a huge range of products, retailers who can offer the best of everything have a significant advantage over those whose products are more niche. As more retailers expand their inventory to include more categories, the appeal of one-stop shopping will continue to grow 6. In the context of one-stop shopping stores, which offer a diverse range of products and services under a single digital platform, understanding the interplay between e-WOM, online experiences, and brand trust becomes paramount 1. These stores, such as KKV one-stop shopping store, have gained immense popularity due to their convenience, variety, and competitive pricing. However, the complex nature of these platforms, encompassing multiple product categories and diverse consumer segments, poses unique challenges in decoding consumer mindsets and preferences 7. Previous research has explored the individual impacts of e-WOM, online experiences, and brand trust on consumer behavior, but few studies have comprehensively examined their combined influence within the context of one-stop shopping stores 8. This gap in the literature highlights the need for a deeper understanding of how these factors collectively shape consumer mindsets and influence purchase decisions in this rapidly evolving retail environment. By investigating the roles of e-WOM, online experiences, and brand trust in shaping consumer mindsets and purchase decisions at KKV one-stop shopping store, this study aims to contribute to the burgeoning body of knowledge in consumer behavior and e-commerce research. The findings of this research have the potential to yield valuable insights for retailers, marketers, and e-commerce platforms, enabling them to develop strategies that resonate with consumer preferences, foster trust and loyalty, and ultimately drive sustainable business growth in the dynamic and competitive milieu of online retailing.
Literature Review
The advent of e-commerce and the proliferation of online shopping have catalyzed a paradigm shift in consumer behavior, prompting extensive scholarly inquiry into the factors shaping purchase decisions in the digital realm.Extant literature has delved into the roles of electronic word-of-mouth (e-WOM), online experiences, and brand trust as critical determinants influencing consumer mindsets and purchase intentions.
Electronic Word-of-Mouth (e-WOM)
The concept of electronic word-of-mouth (e-WOM) has garnered significant attention from researchers in recent years.e-WOM refers to the exchange of product or service information among consumers through digital platforms, such as social media, online reviews, and consumer forums5,6,8,9.Previous studies have highlighted the profound influence of e-WOM on shaping consumer perceptions, attitudes, and purchase intentions10, 11.As consumers increasingly rely on peer-generated content for decision-making, e-WOM has emerged as a critical factor in the e-commerce landscape, with the ability to significantly impact brand reputation, product evaluations, and sales performance.
Online Experiences
The online shopping experience has become a pivotal determinant of consumer satisfaction and loyalty in the digital age.Researchers have explored various aspects of online experiences, such as website design, user interface, order fulfillment processes, and customer service interactions12.Numerous studies have demonstrated that delivering exceptional online experiences can foster customer loyalty, drive repeat purchases, and cultivate long-term brand advocacy.As consumers expect seamless, personalized, and engaging interactions that transcend traditional brick-and-mortar retailing, the quality of online experiences has become a critical differentiator for e-commerce businesses 13 .
Brand Trust
Brand trust has emerged as a cornerstone of successful e-commerce strategies, particularly in the absence of physical interactions and tangible product experiences. A growing body of literature has investigated the role of brand trust in shaping consumer confidence, brand equity, and purchase behavior 13,14,15. Researchers have consistently emphasized the importance of transparent communication, robust data privacy practices, and consistent delivery of high-quality products and services in fostering brand trust and influencing consumer decision-making processes 12. In the digital realm, where consumers heavily rely on perceived trustworthiness and credibility, brand trust has become a crucial factor in driving long-term success for e-commerce businesses. By examining the roles of e-WOM, online experiences, and brand trust in shaping consumer mindsets and purchase decisions at KKV one-stop shopping store, this study aims to contribute to the growing body of knowledge in consumer behavior and e-commerce research, particularly in the context of one-stop shopping platforms 16.
Based on the existing literature, the study hypothesizes that positive e-WOM communication, favorable online shopping experiences and high levels of brand trust positively influence consumers' purchase decisions at the KKV one-stop shopping store. Specifically, the following hypotheses are proposed: H1: E-WOM has a positive and significant effect on KKV one-stop shopping purchase decisions. H2: Online shopping experience has a positive and significant effect on KKV one-stop shopping purchase decisions. H3: Brand trust has a positive and significant effect on KKV one-stop shopping purchase decisions.
By empirically testing these hypotheses, the study aims to shed light on the complex relationships between e-WOM, online shopping experience, brand trust and purchase decisions in the context of KKV one-stop shopping.The findings may have practical implications for e-commerce practitioners, helping them to develop strategies that harness the power of e-WOM, optimize online shopping experiences and cultivate brand trust, ultimately influencing consumer purchase decisions and driving business growth in the dynamic e-commerce landscape.
Research Design and Approach
To investigate the influence of e-WOM, online shopping experiences, and brand trust on purchase decisions at KKV one-stop shopping store, a quantitative research design was employed.Specifically, a cross-sectional survey methodology was adopted, which is a commonly used approach in consumer behavior and ecommerce research to capture consumer perceptions, attitudes, and behaviors at a specific point in time 17 .
Sampling and Data Collection
The target population for this study consisted of individuals who had made at least one purchase from KKV one-stop shopping store. A non-probability purposive sampling technique was utilized to recruit respondents. Primary data were collected through a self-administered online questionnaire distributed via social media platforms using the Google Form application, which is a widely adopted method for data collection in contemporary consumer research. The final sample consisted of 121 respondents, which is an acceptable sample size for studies investigating consumer behavior and e-commerce-related phenomena 18.
Survey Instrument
The survey instrument was developed based on an extensive review of relevant literature and wellestablished scales from previous studies.The questionnaire comprised several sections, including demographic information, online shopping behavior, and multi-item scales for measuring e-WOM, online experiences, brand trust, and purchase intentions.E-WOM was measured using a scale adapted from El-Baz et al. 19 assessing factors such as the influence of online reviews, recommendations from peers, and social media discussions on purchase decisions.The online experience scale was derived from the work of Yu et al. 20 capturing elements such as website usability, product information quality, order fulfillment efficiency, and customer service responsiveness.Brand trust was measured using a scale developed by Ha 15 , which evaluates aspects such as perceived credibility, reliability, and transparency of the KKV one-stop shopping store brand.Finally, purchase intentions were assessed using a scale adapted from Sari et al. 21measuring factors like the likelihood of making future purchases, recommending the store to others, and overall satisfaction with the shopping experience.Before data collection, the survey instrument underwent pilot testing with a small sample of respondents to ensure clarity, comprehensibility, and face validity.Necessary modifications were made based on the feedback received from the pilot study.
Data Analysis
The collected data were analyzed using various statistical techniques, including descriptive statistics, correlation analysis, and multiple regression analysis.Descriptive statistics were used to summarize the sample characteristics and provide an overview of the responses.Correlation analysis was employed to examine the relationships between e-WOM, online experiences, brand trust, and purchase intentions.Multiple regression analysis was then conducted to determine the relative influence of the independent variables (e-WOM, online experiences, and brand trust) on the dependent variable (purchase intentions).The statistical analyses were performed using IBM SPSS Statistics software.
Multiple linear regression test
Regression analysis, which is used to measure the strength of the relationship between two or more variables, also shows the direction of the relationship between the dependent and independent variables.This analysis is used to predict the value of the dependent variable if the value of the independent variable increases or decreases, and to determine whether the independent variable is positively or negatively related.
Table 1: Multiple regression analysis
Based on the results of these tests, the multiple linear regression equation relating e-WOM (X1), online shopping experience (X2) and brand trust (X3) to purchase decisions (Y) is as follows: Y = 2.932 + 0.278 EWOM + 0.471 OSE + 0.466 BT + e
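As an illustration of how such an equation is applied and estimated (the respondent scores below are invented; only the coefficients come from the fitted equation above, and the analysis in the study itself was run in IBM SPSS, not Python):

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical Likert-type scores for three respondents
# (columns: e-WOM, online shopping experience, brand trust)
X = np.array([[4.0, 3.5, 4.2],
              [2.5, 3.0, 2.8],
              [4.8, 4.5, 4.9]])

# Coefficients as reported in the fitted equation above
b0, b_ewom, b_ose, b_bt = 2.932, 0.278, 0.471, 0.466
predicted_purchase = b0 + X @ np.array([b_ewom, b_ose, b_bt])
print(predicted_purchase)

# Re-estimating coefficients of this kind from raw questionnaire data
# (random placeholders here) uses ordinary least squares, the same model SPSS fits:
rng = np.random.default_rng(0)
X_raw = rng.uniform(1, 5, size=(121, 3))               # 121 respondents, 3 predictors
y_raw = 2.9 + X_raw @ [0.28, 0.47, 0.47] + rng.normal(0, 0.5, 121)
ols = sm.OLS(y_raw, sm.add_constant(X_raw)).fit()
print(ols.params)                                       # intercept and three slopes
```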
Result and Discussion
The purpose of this study was to investigate the role of electronic word-of-mouth (e-WOM), online shopping experiences and brand trust in shaping consumer purchase intentions at the KKV One-Stop Shopping Store.The findings provide valuable insights into the factors that influence consumer decision making in the context of one-stop shopping platforms.First, the results indicate that e-WOM has a positive and significant effect on purchase intention (β = 0.278, p = 0.014).This finding is consistent with previous studies that have highlighted the influential role of e-WOM in consumer decision making 22 .In the age of digital connectivity and social media, consumers increasingly rely on peer-generated content, online reviews and recommendations from their social networks when making purchase decisions.Positive e-WOM can significantly increase consumer confidence, reduce perceived risk and ultimately influence their purchase intention 22 .Therefore, it is imperative for one-stop shopping stores such as KKV to actively monitor and manage e-WOM channels, encourage positive wordof-mouth and promptly address potential negative feedback.Secondly, the results demonstrate that online shopping experiences have a positive and significant impact on purchase intentions (β = 0.471, p = 0.001).This finding aligns with previous research emphasizing the importance of delivering exceptional online experiences in fostering consumer satisfaction and loyalty 23 .In the context of one-stop shopping stores, where consumers engage with a diverse range of products and services within a single platform, seamless online experiences become crucial.Factors such as website usability, product information quality, order fulfillment efficiency, and responsive customer service can significantly influence consumer perceptions and ultimately shape their purchase intentions 22,23 .One-stop shopping stores should prioritize enhancing their online platforms, ensuring intuitive navigation, comprehensive product information, and efficient order management processes to create a positive and engaging shopping experience for consumers.
Thirdly, the study confirms that brand trust exerts a positive and significant effect on purchase intentions (β = 0.466, p = 0.000).This finding is consistent with existing literature that underscores the pivotal role of brand trust in driving consumer decision-making, particularly in the online shopping environment 24 .In the absence of physical interactions and tangible product experiences, consumers heavily rely on the perceived credibility, reliability, and transparency of the brand when making purchase decisions.By fostering brand trust through transparent communication, robust data privacy practices, and consistent delivery of highquality products and services, one-stop shopping stores can build consumer confidence and enhance their propensity to make purchases 24 .KKV one-stop shopping store should prioritize establishing and maintaining a strong brand identity built on trust, which can serve as a competitive advantage in the crowded ecommerce marketplace.
Overall, the findings of this study contribute to the growing body of knowledge in consumer behavior and ecommerce research, particularly in the context of one-stop shopping platforms.By understanding the interplay between e-WOM, online experiences, and brand trust, businesses operating in this dynamic retail environment can develop effective strategies to resonate with consumer preferences, foster loyalty, and ultimately drive sustainable business growth.
Conclusion
In conclusion, this study has provided valuable insights into the role of electronic word-of-mouth (e-WOM), online shopping experience and brand trust in shaping consumer purchase intentions at the KKV one-stop shopping store.The results show that favorable e-WOM communication, positive online shopping experiences and high levels of brand trust have significant positive influences on consumers' likelihood to purchase from the one-stop shopping platform.These findings underscore the importance of cultivating positive e-WOM, delivering seamless online experiences and fostering strong brand trust as key strategies for e-commerce companies, particularly in the context of one-stop shopping stores.By addressing these critical factors, one-stop shopping platforms can effectively decode consumer mindsets, resonate with their preferences and drive sustainable business growth in the rapidly evolving digital retail landscape.
Outcomes of Multi-Trauma Road Traffic Crashes at a Tertiary Hospital in Oman: Does attendance by trauma surgeons versus non-trauma surgeons make a difference?
Objectives: Trauma surgeons are essential in hospital-based trauma care systems. However, there are limited data regarding the impact of their presence on the outcome of multi-trauma patients. This study aimed to assess the outcomes of multi-trauma road traffic crash (RTC) cases attended by trauma surgeons versus those attended by non-trauma surgeons at a tertiary hospital in Oman. Methods: This retrospective study was conducted in December 2015. A previously published cohort of 821 multi-trauma RTC patients admitted between January and December 2011 to the Sultan Qaboos University Hospital, Muscat, Oman, were reviewed for demographic, injury and hospitalisation data. In-hospital mortality constituted the main outcome, with admission to the intensive care unit, operative management, intubation and length of stay constituting secondary outcomes. Results: A total of 821 multi-trauma RTC cases were identified; of these, 60 (7.3%) were attended by trauma surgeons. There was no significant difference in mortality between the two groups (P = 0.35). However, patients attended by trauma surgeons were significantly more likely to be intubated, admitted to the ICU and undergo operative interventions (P <0.01 each). The average length of hospital stay in both groups was similar (2.6 versus 2.8 days; P = 0.81). Conclusion: No difference in mortality was observed between multi-trauma RTC patients attended by trauma surgeons in comparison to those cared for by non-trauma surgeons at a tertiary centre in Oman.
Trauma injuries represent a significant global burden, particularly in middle- and low-income countries; the World Health Organization (WHO) has estimated that approximately five million people die every year from injuries, most often due to road traffic crashes (RTCs), violence or burns. 1 The physical effects of such injuries can be classified as either immediate, secondary (i.e. within hours of the event) or delayed (i.e. long-term complications). Most trauma-related deaths occur either at the scene of the injury, en route to, or within hours of arrival at a healthcare facility. 1 Therefore, pre-hospital and initial hospital-based trauma systems focus on reducing the immediate effects of injuries on mortality and morbidity by promptly providing essential life-saving trauma interventions, such as securing a patient's airway, maintaining adequate ventilation and controlling bleeding. 1 The WHO has advocated for the establishment of trauma care systems globally in order to mitigate the mortality and morbidity of trauma injuries. 2 Moreover, the American College of Surgeons recommends that trauma surgeons be present upon the initial arrival of a seriously injured patient to the emergency department. 3 Oman has one of the highest rates of RTCs globally; in 2015, there were approximately 6,279 RTCs and this type of accident was the direct cause of 675 deaths and 3,624 injuries. 4 Moreover, the RTC-related fatality rate in 2013 was 30.4 per 100,000 people annually, compared to the global yearly average of 18 per 100,000 people. 5,7,8 Other complex behavioural issues related to modernisation, such as increased use of mobile phones while driving, may also be partially responsible for the high rate of RTCs in Oman. 6,9 In some cases, traffic enforcement and legal authorities have failed to keep up with rapid modernisation, resulting in more lenient and less than optimal law enforcement. The trauma system in Oman consists of a pre-hospital emergency system and the Emergency Medical Services (EMS) run by the Public Authority of Civil Defence and Ambulances. 10,11 Hospitals in Oman vary in size and resources, ranging from rural health centres staffed by junior non-specialist doctors to tertiary hospitals with qualified trauma and non-trauma surgeons.
The Sultan Qaboos University Hospital (SQUH) in Muscat is the only national tertiary hospital in the country with board-certified trauma surgeons trained in trauma and critical and acute care surgeries, excluding emergency craniotomies.10,11 At SQUH, the Emergency Department receives an average of 900 trauma patients annually.11 Trauma teams consist of a team leader (either a trauma or non-trauma surgeon), an anaesthetist, an emergency physician, residents in emergency or general surgery and allied health personnel. However, while some of the trauma teams are led by board-certified trauma surgeons, others are led by non-trauma surgeons for whom the scope of emergency procedures is limited to abdominal damage control. It is therefore not clear whether the presence of board-certified trauma surgeons affects the outcome of injured patients. This study aimed to assess differences in outcomes among multi-trauma patients injured in RTCs who were attended by board-certified trauma surgeons compared to those attended by non-trauma surgeons.
Methods
This retrospective study took place in December 2015 and utilised the same cohort as that of a previously published study.11 The electronic medical records of all RTC multi-trauma patients admitted to the Emergency Department of SQUH between January and December 2011 were reviewed. Data for all cases were collected, including the demographic characteristics of the patient and whether they were attended by a board-certified trauma surgeon or a non-trauma surgeon.11 Additionally, information regarding patient outcome, length of hospital stay and injury details, severity and management was recorded.11 The primary outcome was in-hospital mortality, with secondary outcomes comprising admission to the Intensive Care Unit (ICU), surgical interventions and length of hospital stay. Data were analysed using the Statistical Package for the Social Sciences (SPSS), Version 22.1 (IBM Corp., Armonk, New York, USA). Differences between variables were initially determined using a univariate analysis. For continuous variables, a Student's t-test or Mann-Whitney U test was used, whereas a Yates' chi-squared test or Fisher's exact test was used for categorical variables, as appropriate.12 A P value of <0.05 was considered statistically significant. A general linear multivariate regression analysis was performed to determine whether the primary and secondary outcomes were the same between the two groups, after controlling for variables with potential confounding effects, such as age, gender, ethnicity, time of injury, admission over the weekend, triage status, Injury Severity Score (ISS) and the presence of a head injury. This study received ethical approval from the Ethics Committee of the Ministry of Health in Oman.
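As a rough illustration of the analysis pipeline described above, the sketch below shows how the univariate tests and the confounder-adjusted comparison could be coded in Python. The input file and column names are hypothetical, and a logistic model is substituted for the binary mortality outcome in place of the general linear model named by the authors.

```python
# Hypothetical re-creation of the univariate and adjusted analyses described above.
# The CSV file and column names are illustrative only.
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

df = pd.read_csv("rtc_trauma_2011.csv")  # hypothetical extract of the 821 cases
trauma = df[df["attended_by_trauma_surgeon"] == 1]
non_trauma = df[df["attended_by_trauma_surgeon"] == 0]

# Continuous variables: Student's t-test (or Mann-Whitney U if non-normal)
_, p_age = stats.ttest_ind(trauma["age"], non_trauma["age"], equal_var=False)
_, p_los = stats.mannwhitneyu(trauma["length_of_stay"], non_trauma["length_of_stay"])

# Categorical variable: chi-squared with Yates' correction (Fisher's exact for sparse tables)
table = pd.crosstab(df["attended_by_trauma_surgeon"], df["mortality"])
chi2, p_mort, dof, _ = stats.chi2_contingency(table, correction=True)
print(f"age p = {p_age:.3f}; LOS p = {p_los:.3f}; mortality p = {p_mort:.3f}")

# Adjusted comparison of in-hospital mortality, controlling for the listed confounders
model = smf.logit(
    "mortality ~ attended_by_trauma_surgeon + age + C(gender) + C(ethnicity) "
    "+ C(weekend_admission) + C(triage_status) + iss + C(head_injury)",
    data=df,
).fit(disp=False)
print(model.summary())
```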
Results
A total of 821 multi-trauma RTC cases were admitted during the study period.11 Of these, 60 (7.3%) were attended by trauma surgeons and 761 (92.7%) were attended by non-trauma surgeons. The average age of patients attended by trauma surgeons was similar to that of patients attended by non-trauma surgeons; however, significantly more of the trauma patients attended by trauma surgeons were male compared to those attended by other surgeons (85.0% versus 65.8%; P = 0.01). A similar proportion of patients in both groups were of Omani ethnicity. Almost one-third of the patients in both groups were admitted to the hospital during weekends and approximately two-thirds were transported to the hospital by the EMS.
The majority of cases in both groups had an ISS of 0-15. However, 26.7% of cases attended by trauma surgeons had head injuries compared to 8.7% of cases attended by non-trauma surgeons (P <0.01). Nevertheless, when patients with severe head injuries were compared using the Abbreviated Injury Score (AIS) for injuries of >3 AIS severity, there was still no difference in outcome between the two groups. After controlling for potential confounders in the regression analysis, no significant differences were noted with regards to in-hospital mortality between patients attended by trauma surgeons and those attended by non-trauma surgeons (P = 0.35), even when stratified by head trauma AIS or ISS scores (P = 0.42 and 0.50, respectively). In addition, no significant difference was observed with regards to the length of stay between the two groups (2.8 versus 2.6 days; P = 0.81). However, patients attended by trauma surgeons were significantly more likely to be intubated (relative risk [RR]: 13
Discussion
The burden of trauma in Oman has been steadily rising over the last few decades, thus highlighting the need for a well-established trauma system.6,11,14-17 However, the trauma system in Oman has not kept pace with modernisation and rapid population growth in the country. Furthermore, a non-holistic approach has led to a lack of integration of trauma care services within the existing healthcare system; for example, as the pre-hospital trauma care system has developed faster than existing hospital systems, there can be a decline in care once the patient is transferred to a medical facility. At present, SQUH is the only facility in the country with qualified trauma surgeons who have undergone structured training. However, within established hospital trauma systems, it remains to be seen whether all trauma cases require a trauma surgeon. In settings where a trauma system is still in its infancy, such as Oman, determining whether this factor affects outcomes can potentially guide policy-makers in the hiring of additional human resources, if necessary, and in anticipating training requirements.
In the current study, the outcomes of multitrauma RTC patients attended by trauma surgeons were compared with those of patients cared for by non-trauma surgeons at SQUH.The primary practice of the non-trauma surgeons was elective general surgery and subspecialties other than trauma; however, they were all certified in the Advanced Trauma Life Support ® (American College of Surgeons, Chicago, Illinois, USA) training course and were aware of the written resuscitative protocols for trauma care set by the hospital's trauma committee.In addition, all nontrauma surgeons were included within the trauma case schedule and had a similar amount of exposure to trauma patients.No criteria currently exist at SQUH to decide which trauma patients should be attended by trauma surgeons; the presence of a trauma surgeon is instead determined by their call schedule, which covers a minimum of two days a week.
The findings of the present study indicated that multi-trauma RTC patients attended by trauma surgeons at SQUH had similar mortality rates to those cared for by non-trauma surgeons. In addition, the severity of injuries was similar between the two groups. These findings would suggest that, in hospitals with established trauma systems, trauma patients may not always require a trauma surgeon, as this factor did not play a significant role in improving patient outcomes. However, an interesting finding of the present study was that trauma patients attended by trauma surgeons were significantly more likely to undergo surgical interventions than those attended by non-trauma surgeons. This finding probably reflects the confidence of trauma surgeons in operative trauma management compared to a potentially more conservative approach among non-trauma surgeons.
The findings of the present study are in line with some of those reported in the literature. A recent study from a rural trauma centre in the USA similarly found no difference in the mortality rate of trauma patients attended by trauma surgeons versus those attended by other surgeons.18 Podnos et al. also reported no difference in mortality among 1,427 patients at a level I trauma centre cared for either by trauma specialists or general surgeons.19 However, other researchers have obtained different results; Haut et al. reported significantly better outcomes among patients with severe head injuries treated by full-time trauma surgeons compared to those cared for by part-time trauma surgeons.14 In the present study, analysis of the outcomes of patients with head injuries did not indicate statistically significant differences. This finding may have been the result of grouping together all types of head injuries, including moderate head injuries. In addition, no significant results were found when stratifying patients by ISS category. Employing full-time surgeons dedicated exclusively to trauma surgery and surgical critical care, such as those employed in the setting described by Haut et al., is not yet feasible in Oman.14 Currently, there are no centres in Oman busy enough to support a surgical practice dedicated exclusively to trauma care; this could account for the differences in findings reported by this study. Moreover, it may be that the volume of multi-trauma patients seen by non-trauma surgeons at SQUH allows them sufficient opportunities to improve their skills to the level of specialised trauma surgeons. Indeed, Smith et al. have demonstrated an inverse relationship between patient volume and mortality rates at trauma centres in the USA.20 Konvolinka et al. also affirmed that increased surgeon experience with seriously injured patients was associated with improved outcomes, while Haut et al. reported that surgeons with vastly different levels of training could safely provide trauma care and obtain equivalent patient outcomes.15,21 Based on these findings, it seems that more emphasis should be placed on building a cohesive trauma system rather than focusing on capacity-building individual components within the system. As such, all components in a hospital trauma system require equal development to ensure that a satisfactory level of trauma care is provided, including institutional policies governing clinical and operational processes, round-the-clock availability of a structured trauma team, a dedicated trauma admitting unit, adequate training and qualification standards for healthcare providers involved in trauma care and the provision of essential equipment and services. Education and the application of evidence-based protocols and guidelines should also be prioritised among non-trauma surgeons in Oman. Another important component is the expansion of the available pool of trauma surgeons; however, the extent to which this is needed remains debatable. Nevertheless, trauma surgeons may act as advocates for better trauma care at the national level.
The current study has several limitations which may have affected the results. First, the study design was retrospective and the cohort was from a single institution. Second, the Glasgow Coma Scale of the patients at admission was not assessed, thus precluding further analysis of patients with severe head injuries. This limitation was minimised as much as possible by stratifying outcomes based on AIS categorisation; nevertheless, patients with head injuries may still have significantly improved outcomes when cared for by trauma surgeons. Third, the low rate of penetrating trauma injuries in the present study population may have resulted in a less defined outcome difference between the two groups. As such, it is possible that a more significant difference would have been evident had the study population been larger and included more severe trauma cases. A multicentre study is recommended for more accurate results. Finally, this study focused primarily on mortality and did not investigate morbidity, for which the presence of an attending trauma surgeon may potentially affect patient outcomes.
Conclusion
No significant difference was noted in the mortality rates of multi-trauma RTC cases attended by trauma surgeons compared to those attended by non-trauma surgeons.These findings indicate that addressing only one component of a trauma system (i.e. the presence of trauma surgeons) is not sufficient to achieve better patient outcomes.As such, better outcomes for trauma patients in Oman may potentially be achievable by developing all components of a trauma system to ensure that it is both effective and cohesive.
Table 1: Demographic and injury characteristics of multi-trauma road traffic crash cases admitted to the Sultan Qaboos University Hospital, Muscat, Oman (N = 821)
Table 2: Outcomes of multi-trauma road traffic crash cases admitted to the Sultan Qaboos University Hospital, Muscat, Oman (N = 821)
|
v3-fos-license
|
2018-10-16T05:50:24.810Z
|
2018-10-01T00:00:00.000
|
52966612
|
{
"extfieldsofstudy": [
"Medicine",
"Materials Science"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/1996-1944/11/10/1932/pdf",
"pdf_hash": "2459a754fe522af631cfaa71e78d222a170b6ecc",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46700",
"s2fieldsofstudy": [
"Materials Science",
"Engineering"
],
"sha1": "ca0cfee54c6fb3faafe9c85c9beed5ba1c36e8ae",
"year": 2018
}
|
pes2o/s2orc
|
On the Improvement of Thermal Protection for Temperature-Responsive Protective Clothing Incorporated with Shape Memory Alloy
This study explored the application of shape memory alloy (SMA) springs in a multilayer protective fabric assembly for intelligent insulation that responds to thermal environment changes. Once the SMA spring was actuated, clothing layers were separated, creating an adjustable air gap between the adjacent fabric layers. The impacts of six different SMA arrangement modes and two different spring sizes on thermal protection against either a radiant heat exposure (12 kW/m²) or a hot surface exposure (400 °C) were investigated. The findings showed that the incorporation of SMA springs into the fabric assembly improved the thermal protection, but the extent to which the springs provided thermal protection depended on the arrangement mode and spring size. The effectiveness of reinforcing the protective performance using SMA springs depended on the ability of the clothing layers to expand an air layer. Regression models were established to quantitatively assess the relationship between the air gap formed by the SMA spring and the thermal protective performance of clothing. This study demonstrated the potential of the SMA spring as a suitable material for the development of intelligent garments that provide additional thermal protection and thus reduce the number of clothing layers needed for traditional thermal protective clothing.
Introduction
Thermal hazards are common dangers faced by firefighters, industrial workers in the metallurgical and energy industry, military personnel, and race car drivers. Intense heat transfer from the thermal environment to human skin may result in severe skin burn injuries. Prevention of skin burn injuries associated with exposure to thermal hazards has long been identified as the primary function of thermal protective clothing (TPC) [1]. It is reported that there were an estimated 29,130 injuries to firefighters during fire ground operations in the U.S. in 2015 [2]. Therefore, TPC with excellent thermal protective performance is essential to protect the body from external heat exposure hazards.
To improve the thermal protective performance of clothing, new types of textile fibers need to be developed, as they are the basic raw materials needed to manufacture clothing. Adding layers or thickness to fabrics is also an effective way to improve the thermal insulation of protective clothing [3,4], thereby decreasing the heat transfer through thermal environments. However, clothing weight and bulkiness add additional physiological burden that can lead to heat stress problems [5,6] and induce discomfort in stressful conditions typically associated with occupations such as firefighting [7]. Multilayered and thick fabrics can also store a lot of thermal energy during exposure, which may result in great heat discharge and the occurrence of after-exposure skin burns [8,9]. In recent years, incorporating aerogel [10,11] and phase change materials (PCMs) [12,13] into TPC have interested researchers as these materials can improve the thermal protection provided by traditional protective clothing. Despite their notable advantages, these systems have several limitations, including the large stiffness and high evaporative resistance caused by the aerogel, and the burning behavior, durability problems, increasing evaporative resistance and clothing weight caused by the PCMs [14].
Ideal TPC should provide adequate thermal insulation only when thermally challenged by heat and flame, and offer less thermal insulation concordant with comfort requirements under normal conditions. This goal can be achieved by introducing an expanding air gap between the layers of TPC when the temperature changes. The use of shape memory alloys (SMAs) is a novel method to meet this requirement [15,16]. SMAs have been used as actuators in many mechanical processes for some time [17]. The alloys are trained to return to a given shape when an actuation temperature is reached. The early work related to the incorporation of SMA into TPC was conducted by Congalton [18]. An SMA flat conical spring with an actuation temperature of 50 °C was incorporated into a three-layer fabric assembly, and when the thermal challenge occurred, the spring was activated to form a substantial, insulating air gap between the fabric layers. It was found that the application of SMA showed good potential to improve the thermal protective performance of clothing. Yates [19] also studied the performance of TPC with SMA stitched in the clothing pockets. In this study, one wire of shape memory material was fashioned into two conjoined loops held in the center by a clip, forming a butterfly shape. It was demonstrated that the SMA showed a significant improvement in performance compared with traditional clothing without the SMA. Park et al. [20] analyzed the thermal protective performance of protective clothing when four different attachment methods and two different sewing methods of SMA were applied. The results showed that the attachment methods had limited impact on the thermal insulation and that a wave type stitch was better than a square one when SMA springs were attached onto the intelligent turnout gear for firefighters. Recently, Ma et al. [21] investigated the thermal protection of fireproof fabrics with SMA springs under hot surface contact and demonstrated the effectiveness of such a dynamic adaptive structure in improving thermal protection of fabric systems.
Taken together, these results show the enormous potential of SMA as a suitable material for additional protection in TPC. However, there has been little agreement on the effects of SMA arrangement mode and size. Does the thermal protective performance increase in absolute terms as the number of SMA rises? Or does SMA size have a significant effect on the improvement of thermal protection? The answers to these questions will help in the design of suitable SMA to improve the thermal protective performance of clothing. Since thermal hazard attacks and skin burn injuries are frequently reported by firefighters, an investigation was made on firefighter's protective clothing in this study. SMA springs were incorporated into the selected fabric assemblies as representatives of firefighter turnout suit materials. The effects of SMA arrangement mode and size on the thermal insulating properties of fabrics were explored and the behaviors under two thermal conditions (radiant heat exposure and hot surface contact) were studied. The goal of this study was to reduce severe burn injuries experienced by firefighters in this and similar occupations by gaining a better understanding of the mechanisms underlying heat transfer when SMA springs are incorporated. The research findings might also provide new insights into the development of temperature-adaptable protective clothing.
SMA Springs
In this study, SMA springs, made of a copper-based alloy wire (1.5 mm diameter), were developed. This copper-based alloy is one of the main types of SMAs [18] and is relatively cheap to use, making the temperature-responsive protective clothing economical to produce. Each spring was a flat coil at low temperatures (Figure 1a), and quickly changed to a cone shape when the temperature was above 45 °C (Figure 1b). The effectiveness of reinforcing the firefighter protective clothing using the SMA springs should depend on the ability of the protective clothing to expand and form an air layer. It has been reported that 90% of the air gap sizes observed for single layer protective clothing are less than 40 mm [22,23]. Moreover, Congalton et al. [18] suggested that the air gap size introduced by the SMA should be lower than 35 mm. Based on the above reported facts, the maximum height of the spring after full deformation should be controlled under 35 mm. In this study, two sizes of SMA spring (code-named "No-cut" and "Cut") were used, and their fully deformed heights approximated 32 mm and 16 mm, respectively. The No-cut coil had an outer diameter of 28 mm and an inner diameter of 14 mm, and the Cut one had an outer diameter of 21 mm and an inner diameter of 14 mm. The weight of each No-cut spring was 5 g and the weight of the Cut one was 2.2 g.
Design of Fabric Assemblies with SMA Springs
Firefighter turnout gear typically consists of a flame-resistant outer shell, a moisture barrier, and a thermal liner. The detailed specifications of the selected fabrics within these layers are displayed in Table 1. The testing fabric assembly in this study was 15 cm × 15 cm and placed with an overlying order from above of a thermal liner, a moisture barrier, and an outer shell. The SMA springs were sewn between the moisture barrier and the thermal liner by using thermal resistance threads to form an air gap between these two fabric layers when the deformation of coil was activated. Six different arrangement modes of the SMA spring were tested (refers to Figure 2): (a) CON-the control group with no spring; (b) One-one spring was located at the center of the fabric specimen; (c) Two Diag-two springs were positioned diagonally, having a 10 cm distance (approximated half the fabric's diagonal) between them; (d) Two Para-two springs were paralleled in the central line of the fabric and the space between them approximated half the length of the fabric (8 cm); (e) Three Diag-three springs were positioned diagonally. One was in the center and the distance to the other two springs was 8 cm; (f) Three Tria-three springs arranged at the vertices of an equilateral triangle. The distance between each spring was 9 cm. Since heat transfer through the central point of a fabric would be measured in the following thermal protective performance tests, the One and the Three Diag with a spring exactly located in the corresponding measuring point were further selected to be treated with different SMA spring sizes. In addition, the Two Diag that had a similar spring layout with the Three Diag was also selected to examine the effect of the spring size. That is, the One, Two Diag and Three Diag not only differed by spring arrangement modes, but also had different spring sizes (No-cut and Cut).
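To make the six layouts easier to picture, the sketch below computes illustrative spring coordinates on the 15 cm × 15 cm specimen. Only the spacings quoted above are taken from the study; the orientation of each pattern on the fabric (which diagonal, which centre line) is an assumption.

```python
# A geometric sketch of the six arrangement modes on the 15 cm x 15 cm specimen.
# Exact orientations are assumptions; only the spacings quoted in the text are used.
import math

SIDE = 15.0                     # specimen edge length in cm
CENTER = (SIDE / 2, SIDE / 2)

def along_diagonal(dist_from_center):
    """Point on the main diagonal at a signed distance (cm) from the fabric centre."""
    d = dist_from_center / math.sqrt(2)
    return (CENTER[0] + d, CENTER[1] + d)

arrangements = {
    "CON": [],                                                        # no spring
    "One": [CENTER],
    "Two Diag": [along_diagonal(-5.0), along_diagonal(5.0)],          # 10 cm apart
    "Two Para": [(CENTER[0] - 4.0, CENTER[1]),
                 (CENTER[0] + 4.0, CENTER[1])],                       # 8 cm apart
    "Three Diag": [along_diagonal(-8.0), CENTER, along_diagonal(8.0)],# 8 cm to centre
    # Equilateral triangle with 9 cm sides, centred on the specimen (orientation assumed)
    "Three Tria": [
        (CENTER[0] + 9.0 / math.sqrt(3) * math.cos(a),
         CENTER[1] + 9.0 / math.sqrt(3) * math.sin(a))
        for a in (math.pi / 2, math.pi / 2 + 2 * math.pi / 3, math.pi / 2 + 4 * math.pi / 3)
    ],
}

for name, points in arrangements.items():
    print(name, [(round(x, 1), round(y, 1)) for x, y in points])
```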
Test Conditions and Protocols
Protective performance of the fabric assembly with SMA springs was evaluated by bench-scale tests. Fabric assemblies with SMA springs incorporated within them were preconditioned for at least 24 h in a standard climatic chamber of 20 ± 2 °C and 65 ± 4% relative humidity prior to the test. To determine the thermal protective performance of fabric assemblies under exposures to thermal hazards, a set of thermocouples was installed at fabric locations to measure the temperature distribution through fabric layers. Type-T thermocouples (Omega Engineering, Norwalk, CT, USA; accuracy: ±0.5 °C) with a wire diameter of 0.274 mm were used. Thermocouples locations and their attachment to protective fabrics were determined according to the research of Keiser et al. [24]. Two thermocouples were placed at the center of the external surface of the moisture barrier and the thermal liner, respectively. Another thermocouple was placed at the center of internal surface of the thermal liner. After fabric assemblies were prepared with thermocouples, three fabric layers were sewn together at one pair of the diagonal corners of the fabric assembly to simulate the restriction force caused by the stitches of clothing, or the fabric deformation due to body movement [21]. Then either a radiant heat exposure or a hot surface contact was simulated to perform the thermal protective performance tests.
Radiant heat exposures (RHE)
To perform the laboratory simulation of RHE, the thermal protective performance (TPP) tester (Mode 701-D-163-1, Precision Products LLC, Richmond, VA, USA) was employed (Figure 3a). In this method, radiant heat was generated by a bank of nine translucent quartz infrared lamps placed horizontally beneath a specimen. A heat flux of 12 ± 0.3 kW/m² was produced to simulate the low-level RHE that is frequently encountered by firefighters [4]. The RHE duration was determined after a preliminary experiment and was set at 70 s to ensure that the temperature on the internal surface of the thermal liner could reach 44 °C (see below).
Hot surface contacts (HSC)
The TPP of fabrics under HSC exposure was measured according to a modified ASTM F 1060 (West Conshohocken, PA, USA), as described in Sumit's research [25] (Figure 3b). The specimen of the fabric assembly was placed horizontally in contact with a hot surface plate of electrolytic copper (Precision Products LLC, Richmond, VA, USA). The temperature of the hot surface was controlled at 400 °C and the exposure duration of the fabric was set at 20 s.
During either a RHE test or a HSC test, the external surface of the outer shell was subjected to thermal exposures (see Figure 3). The data from the thermocouples were collected throughout the exposure duration by using a data acquisition system (National Instruments, NI 9213, Austin, TX, USA). The sampling rate of the data acquisition system was two samples per second for temperature measurements. Three samples of each fabric assembly were tested. The sequence of test specimens with different arrangement modes and SMA spring sizes was randomized.
TPP Analysis Method
Except for the evaluation of temperature histories obtained by thermocouples, the TPP of the fabric assembly under exposures to RHE and HSC was also determined by using the time to reach a sensor temperature rise of 12 °C or 24 °C in accordance with ISO 6942:2002 [26]. The temperature on the internal surface of the thermal liner was used to calculate the time t44 and t56, i.e., the time to reach the temperature of 44 °C and 56 °C, respectively. In addition, the final temperature throughout the test (Tfa) was also examined. Therefore, the thermal insulating properties of the fabric assembly with different SMA springs were especially compared by these three indices.
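As a concrete illustration of these three indices, the minimal sketch below (not the authors' code) reads t44, t56 and Tfa off an inner-surface thermocouple trace sampled at the 2 Hz acquisition rate mentioned above; the example trace values are made up.

```python
# Minimal sketch: extract t44, t56 and Tfa from a thermocouple trace sampled at 2 Hz.
from typing import List, Optional

SAMPLE_INTERVAL_S = 0.5  # data acquisition rate: two samples per second

def time_to_threshold(temps: List[float], threshold_c: float) -> Optional[float]:
    """Elapsed time (s) at which the trace first reaches threshold_c, or None if never."""
    for i, temp in enumerate(temps):
        if temp >= threshold_c:
            return i * SAMPLE_INTERVAL_S
    return None

def tpp_indices(temps: List[float]) -> dict:
    """Compute the three indices used in the paper for one inner-surface trace."""
    return {
        "t44_s": time_to_threshold(temps, 44.0),
        "t56_s": time_to_threshold(temps, 56.0),
        "Tfa_C": temps[-1],
    }

# Illustrative trace (made-up values): slow rise over a 70 s radiant exposure
example_trace = [32.0 + 0.2 * i for i in range(141)]
print(tpp_indices(example_trace))
```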
Statistical Analysis
Descriptive statistics (means and standard deviations) were calculated for all dependent variables: t44, t56 and Tfa. All the statistical analyses were processed using SPSS 21.0 software (SPSS Inc., Chicago, IL, USA). A one-way analysis of variance (ANOVA) was used to explore differences of the dependent variables due to the spring arrangement or the spring size. Post hoc analyses were performed using a least significant difference (LSD) test to assess the parameters that displayed significant differences in the ANOVA analysis.
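A hedged sketch of this procedure is given below: a one-way ANOVA across arrangements followed by Fisher's LSD pairwise comparisons based on the pooled within-group variance. The group values are placeholders rather than measured data.

```python
# Sketch of a one-way ANOVA followed by Fisher's LSD comparisons; values are illustrative.
import itertools
import numpy as np
from scipy import stats

groups = {                       # e.g., t56 values (s) per arrangement -- placeholders only
    "CON": [36.1, 36.6, 36.5],
    "One": [70.0, 69.5, 70.2],
    "Three Diag": [68.8, 69.9, 70.1],
}

f_stat, p_anova = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")

if p_anova < 0.05:
    # Fisher's LSD: pairwise t-tests using the pooled within-group mean square error
    all_values = np.concatenate(list(groups.values()))
    k, n = len(groups), len(all_values)
    mse = sum(((np.array(v) - np.mean(v)) ** 2).sum() for v in groups.values()) / (n - k)
    for (na, va), (nb, vb) in itertools.combinations(groups.items(), 2):
        diff = np.mean(va) - np.mean(vb)
        se = np.sqrt(mse * (1 / len(va) + 1 / len(vb)))
        p = 2 * stats.t.sf(abs(diff / se), df=n - k)
        print(f"{na} vs {nb}: diff = {diff:.2f} s, p = {p:.4f}")
```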
In the discussion section, fitting analyses were used to derive the correlations between the size of the open air gap and the TPP of the fabric assembly. We tried the linear function, polynomial function, power function and exponential function to conduct the fitting analyses, because these regression functions have been successfully used to establish the relationships between TPP and other independent variables, e.g., fabric weight, fabric thickness, fabric air permeability, and absorbed energy of the skin [23,27]. The goodness of the fitting was examined by using the coefficient of determination (R²), and the best fitting equation was determined when R² was higher than 0.7.
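The sketch below illustrates this model-selection step under the stated R² > 0.7 criterion. The candidate functions follow the list above, while the (air gap, t44) pairs are hypothetical placeholders.

```python
# Sketch of the fitting procedure: fit several candidate functions to (air gap, TPP index)
# pairs and retain those with R^2 > 0.7. The data points below are placeholders.
import numpy as np
from scipy.optimize import curve_fit

air_gap_mm = np.array([0.0, 9.0, 16.0, 22.0, 27.0, 32.0])   # hypothetical air gap sizes
t44_s = np.array([3.5, 6.0, 12.5, 14.0, 16.0, 17.5])         # hypothetical protection times

models = {
    "linear":      lambda x, a, b: a * x + b,
    "quadratic":   lambda x, a, b, c: a * x**2 + b * x + c,
    "power":       lambda x, a, b: a * np.power(x + 1.0, b),  # +1 avoids 0**b at zero gap
    "exponential": lambda x, a, b, c: a * np.exp(b * x) + c,
}

def r_squared(y, y_hat):
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1.0 - ss_res / ss_tot

for name, f in models.items():
    try:
        params, _ = curve_fit(f, air_gap_mm, t44_s, maxfev=10000)
        r2 = r_squared(t44_s, f(air_gap_mm, *params))
        print(f"{name:12s} R^2 = {r2:.3f} ({'accepted' if r2 > 0.7 else 'rejected'})")
    except (RuntimeError, ValueError):
        print(f"{name:12s} did not converge")
```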
Performance under RHE
In this section, temperature profiles obtained for the No-cut SMA springs were presented first as examples of how heat transfers through the fabric assembly under the RHE condition. The effects of the SMA arrangement mode as well as the SMA size on TPP under the RHE condition were analyzed thereafter. Figure 4 shows the temperature distribution through the fabric assembly with different arrangements of No-cut SMA springs under the RHE condition. For the temperature on the external surface of the moisture barrier (Figure 4a), all the curves representing the different SMA arrangements exhibited similar change trends. The temperature increased quickly during the first 40 s of exposure, and then increased very slowly as the exposure continued, showing a final temperature in the range of 235-260 °C. Please note that the temperatures observed for the fabric assemblies with SMA springs incorporated within them were always higher than those without SMA springs (i.e., CON). For the temperature on the external surface of the thermal liner (Figure 4b), a continuous rise was observed for the CON. However, slower increases or even quasi-steady states were observed for the fabric assemblies that had SMA springs incorporated within them after 45 s of exposure. In comparison to the temperatures on the external surfaces of the fabric layers, the temperature on the internal surface of the thermal liner increased slowly and the change trends could be classified into three groups: high, medium, and low levels (Figure 4c). The highest temperature level was observed for the CON, showing 98 °C at the end of the exposure, whereas the lowest temperature level was found for the One and the Three Diag (ending temperature of approximately 43 °C). The temperatures for the Two Diag, Two Para and Three Tria arrangements stayed at the medium level and showed very similar values throughout the test.
Effect of SMA Arrangement under RHE
The data for three different dependent variables (t44, t56 and Tfa) are displayed in Table 2 to examine the effect of the SMA arrangement on TPP under the RHE condition. It was shown that all indices t44, t56 and Tfa were highly dependent on the SMA arrangement (p < 0.001). The fabric assembly with any SMA arrangement had a significantly higher t56 but lower Tfa compared with the CON (p < 0.05). The index t56 was only 36.4 s for the CON, while it increased by 23-81% due to the presence of the Two Diag, Two Para and Three Tria arrangements. The thermal protection time was improved much more when the One and the Three Diag were respectively incorporated with the fabric assembly, such that temperatures at the internal surface of the thermal liner did not reach the criterion of 56 °C after 70 s of RHE. In addition, the data from Tfa indicated that the One and the Three Diag were most effective in reducing the final temperature from 97.6 °C to a range of 42-44.9 °C.
Table 2 note: EA = the effect of the SMA arrangement; *** p < 0.001, ** p < 0.01. Testing samples sharing a superscript letter (a, b) do not differ significantly from each other (p > 0.05); otherwise, significant differences were determined between samples using LSD post hoc tests (p < 0.05). CON = control group without any coil; One = one coil; Two Diag = two coils positioned diagonally; Two Para = two coils positioned in parallel; Three Diag = three coils positioned diagonally; Three Tria = three coils arranged at the vertices of an equilateral triangle.
Effect of Spring Size under RHE
The results of t44, t56 and Tfa for different spring sizes under the RHE condition are displayed in Figure 5. Regardless of the SMA arrangement, t56 and Tfa were significantly changed as the spring size increased (p < 0.05). For example, t56 for the CON was 36.4 s, and it was substantially increased to approximately 45 s for the Two Diag with either a Cut or a No-cut size. In addition, t56 was further raised to over 70 s when both the Cut and No-cut sizes of the One and the Three Diag were incorporated within the fabric assembly. For the One arrangement, the significant effect of the spring size was observed from the indices t44 and Tfa (p < 0.05), showing that t44 was almost doubled and Tfa decreased by 13.7 °C when the spring size changed from the Cut to the No-cut. For the Three Diag arrangement, the significant effect of the spring size was only detected from t44 (p < 0.05), exhibiting a 20% increase as the spring size increased from the Cut to the No-cut. However, for the Two Diag arrangement, there were no significant differences in t44, t56 and Tfa between the two different sizes of SMA springs.
Performance under HSC
Similarly, to examine the performance of fabric assemblies with SMA springs incorporated within them under the HSC condition, three aspects including the temperature profiles, the effect of SMA arrangement, and the effect of SMA size are respectively shown in this section. Figure 6 presents the temperature distribution through the fabric assemblies with different arrangements of No-cut SMA springs under the HSC condition. It can be seen from Figure 6a that all temperatures on the external surface of the moisture barrier increased immediately in the first 2 s of direct contact with the hot surface. The temperature then changed only slightly before reaching a second quick rise during the following exposure. However, unlike the situation in Figure 4a, where the CON under the RHE condition always had the lowest temperature on the external surface of the moisture barrier, under HSC this temperature ranked in the middle, showing higher values than the Two Diag and Three Tria but lower values than the One and Three Diag. In Figure 6b, all temperatures on the external surface of the thermal liner showed similar change trends. Temperature rose for the first 2 s of exposure, subsequently reached a relatively steady state during 2-12 s of exposure, and then increased sharply again until the end of exposure. As shown in Figure 6c, the temperature on the internal surface of the thermal liner was much lower than the temperatures on the external surfaces of the moisture barrier and the thermal liner. Temperatures observed for the One and the Three Diag increased more slowly throughout the exposure, showing an ending temperature of approximately 45 °C. However, temperatures for the other conditions rose initially to a range of 53-57 °C, and then decreased to 47-52 °C.
Effect of SMA Arrangement under HSC
As shown in Table 3, the indices t44 and Tfa were significantly affected by the SMA arrangement under the HSC condition (p < 0.001). The frequent appearance of NR in the t56 column demonstrated that, except for the CON and the Two Diag, fabric assemblies with the other arrangements of SMA did not reach 56 °C after 20 s of exposure to the HSC condition. The available data indicated that t44 and t56 under the HSC were much lower compared to those under the RHE. For example, the maximum t44 listed in Table 3 was 17.7 s, which accounted for only 27% of that in Table 2. There was no significant difference in t44 among the CON, Two Diag, Two Para and Three Tria arrangements (p > 0.05). However, t44 increased nearly four times, from 3.5 s to a range of 17.1-17.7 s, when the One and the Three Diag were incorporated with the fabric assemblies. The index Tfa observed for the CON was 54.1 °C, which was significantly reduced by the five different arrangements of SMA springs (p < 0.05). The difference in Tfa was particularly noticeable when the CON was compared to the One and the Three Diag. However, no significant difference was observed between the One and the Three Diag.
Table 3 note: EA = the effect of SMA arrangements; *** p < 0.001; NS = no significant difference was observed at p > 0.05. Testing samples sharing a superscript letter (a, b, c) do not differ significantly from each other (p > 0.05); otherwise, significant differences were determined between samples using LSD post hoc tests (p < 0.05). CON = control group without any coil; One = one coil; Two Diag = two coils positioned diagonally; Two Para = two coils positioned in parallel; Three Diag = three coils positioned diagonally; Three Tria = three coils arranged at the vertices of an equilateral triangle.
Figure 7 illustrates the TPP of the fabric assemblies with different sizes of SMA springs under the HSC condition. There were significant effects of SMA spring size on t44, t56 and Tfa (p < 0.05). For instance, the Cut size of the One and the Three Diag had a t44 of approximately 12.5 s, which was nearly three times higher compared with the CON. In addition, their No-cut size had a much higher t44, reaching 17.5 s. When the Cut size of the Three Diag was incorporated within the fabric, a significant 12% decrease from 54 °C to 47.5 °C was found in Tfa. This decrease expanded to 17% when the No-cut size of the Three Diag was used. However, there was no significant difference in Tfa between the Cut and the No-cut size of the Two Diag (p > 0.05). Moreover, t44 and t56 were respectively 5.9 s and 9.8 s for the Cut size of the Two Diag, which was significantly higher than those of the No-cut size (p < 0.05).
Figure 7. Comparisons of the thermal insulating property of the fabric assembly incorporated with two different sizes of SMA springs under the HSC condition (Note: * = significant difference was observed at p < 0.05. When t56 = 20 s, temperatures did not reach 56 °C throughout the hot surface contact. CON = control group without any coil; One = one coil; Two Diag = two coils positioned diagonally; Three Diag = three coils positioned diagonally).
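As a quick arithmetic cross-check of the percentage changes quoted above, using only the temperatures stated in the text:

```python
# Quick arithmetic check of the reported Tfa reductions under HSC (values from the text).
con_tfa = 54.0           # Tfa for the control assembly, deg C (rounded as in the text)
cut_three_diag = 47.5    # Tfa with the Cut Three Diag springs
print(f"Cut Three Diag: {(con_tfa - cut_three_diag) / con_tfa:.0%} reduction")  # ~12%
# A 17% reduction, as reported for the No-cut Three Diag, corresponds to roughly:
print(f"No-cut Three Diag Tfa ~= {con_tfa * (1 - 0.17):.1f} deg C")
```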
Discussion
From the temperature curves, it was found that under both RHE and HSC conditions the CON resulted in higher temperatures on the external and internal surfaces of the thermal liner in comparison to the fabric assemblies with any SMA arrangement incorporated within them. This was because the SMA springs, with an actuation temperature of 45 °C, expanded to form a steady air gap between the moisture barrier and the thermal liner when the fabric assembly was exposed to thermal hazards. The heat conductivity of still air is 0.027 W/m/°C, which approximates one sixth of the fiber's [28]. Heat radiation/conduction from the thermal environment to the fabric layers was greatly impeded and heat loss by convection through the fabric layers increased due to the presence of the air gap, and thus the temperature of the thermal liner decreased. However, the results in Figure 4a showed that any arrangement of SMA springs under the RHE condition increased the external surface temperature of the moisture barrier. This could be explained by the fact that the activation of the SMA spring compressed the moisture barrier and decreased the distance between it and the radiant heating source, resulting in a higher temperature at the outside of this layer. Interestingly, Figure 6a showed that the external surface temperature of the moisture barrier for the CON under the HSC condition was higher than that of the Two Diag and the Three Tria but lower than that of the One and the Three Diag. This observation could be attributed to the arrangement of the SMA spring and the location of the thermocouple measuring the temperature. Both the One and the Three Diag had an SMA spring located in the center of the thermal liner, pushing the temperature measuring point on the moisture barrier closer to the hot surface and thus increasing the external surface temperature of this fabric layer. However, the Two Diag and the Three Tria arrangements did not have any spring located in the center of the fabric layer, and the temperatures on their moisture barrier surfaces were not enhanced by the activation of the SMA spring. Tables 2 and 3 clearly showed that the SMA arrangement had significant influences on the TPP of the fabric assemblies, as indicated by the dependent variables t44/t56 and Tfa, under two different exposure hazards. The HSC condition resulted in a considerably lower t44 than the RHE condition, demonstrating that direct physical contact between the fabric and a 400 °C hot surface caused greater heat transfer and less time to reach the criterion of 44 °C compared to the electromagnetic waves radiated from the quartz infrared lamps under the RHE condition. This quick heat transfer under the HSC condition caused the three stages observed in the temperature profile in Figure 6c. An initial sharp increasing phase in the first 2 s of heat exposure was owing to the large temperature gradient between the fabric and the hot surface at the beginning of exposure. This large temperature gradient drove the heat transfer at a fast rate; consequently, the observed temperature increased rapidly. A second, stable phase lasting to about 12 s was due to the activation of the SMA spring. When the SMA spring was activated to open an air gap, heat convection through the air gap gradually counteracted the heat absorption of the fabric, and an equilibration occurred in heat transfer.
Upon further exposure, heat absorption of the fabric exceeded the heat convection, thereby producing a subsequent increase in fabric temperature. The incorporation of any arrangement of SMA spring within the fabric assembly could improve the TPP to some extent, since the temperature at the end of exposure (Tfa) was found to be notably reduced due to the use of SMA springs under both thermal exposures. The positive effect of the SMA spring under exposures to the RHE and HSC conditions was consistent with that reported in Congalton's study [18], in which the effectiveness of SMA springs was demonstrated as the fabric was exposed to radiant heat produced by a cone calorimeter. Under both RHE and HSC conditions, it was found that incorporation of the One and the Three Diag arrangements was the most effective way of reducing the internal temperature of the thermal liner (shown in Figures 4c and 6c), increasing the thermal protection times t44/t56, and decreasing the final temperature Tfa. The different contributions of the SMA arrangements to the TPP could be attributed to the open air gaps formed by the deformation of the SMA springs.
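To put the insulation argument on a quantitative footing, the back-of-the-envelope sketch below applies Fourier's law, q = k·ΔT/L, with the still-air conductivity quoted above. The temperature drop across the gap is an assumed illustrative value, and only conduction is considered (convection and radiation are ignored), so the numbers indicate scale rather than measured fluxes.

```python
# Back-of-the-envelope check (conduction only) of why the expanded air gap insulates.
K_AIR = 0.027          # W/(m.K), still air, value cited in the text

def conductive_flux(k_w_mk: float, delta_t_k: float, thickness_m: float) -> float:
    """Steady-state conductive heat flux (W/m^2) across a slab of the given thickness."""
    return k_w_mk * delta_t_k / thickness_m

delta_t = 150.0        # assumed temperature drop across the gap, K (illustrative)
for gap_mm in (1.0, 16.0, 32.0):   # collapsed layers vs. Cut vs. No-cut spring heights
    q = conductive_flux(K_AIR, delta_t, gap_mm / 1000.0)
    print(f"{gap_mm:5.1f} mm air gap -> ~{q:8.0f} W/m^2 by conduction alone")
```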
To further examine the air gap effect, the details of the air gap distribution in the different SMA arrangements are displayed in Table 4. The air gap between the moisture barrier and the thermal liner was uneven. The location where the spring was inserted had the biggest air gap, and the air gap decreased as the distance to the spring increased. Both the One and the Three Diag had one spring precisely located at the center of the fabric, creating an air gap there with a full size of 32 mm. Therefore, heat transfer through the central area was greatly impeded, and as a result temperatures there were largely decreased. However, for the Two Diag, Two Para and Three Tria arrangements the center of the fabric had no spring, showing smaller air gap sizes (27-31 mm) there. It is suggested that SMA springs should be inserted at appropriate places in the protective clothing, especially where thermal exposure usually strikes directly. For example, the chest and forearm are suitable places because these parts directly face a heat source when firefighters are holding a hose to extinguish a fire. Please note that the Three Diag had two more springs within the fabric layer compared with the One, but this did not lead to a significantly higher thermal insulation when we examined the data of t44, t56 and Tfa. This seemed to indicate that increasing the number of SMA springs did not provide additional thermal insulation in this study. This might be attributed to the limitations of the single-point temperature measurement method. As shown in Table 4, the Three Diag had a more uniform air gap than the One. This uniform air gap would provide better thermal insulation during a full-scale test, in which an intact piece of protective clothing and a larger insulation area are measured, rather than one point in the middle of the small fabric sample tested in this study. This result should be confirmed by further studies. The use of multipoint temperature measurement or infrared thermal camera technology, which would provide a two-dimensional visible image of the temperature profile, is suggested in future works to examine the potential of SMA springs as a suitable material for the development of intelligent garments.
Table 4. Air gap distribution for the different SMA spring arrangements.

Arrangement | Air gap distribution characteristics
One | The center has the biggest air gap (32 mm); the corner with no fixation has an air gap of approximately 9 mm.
Two Diag | The locations where the springs are inserted have the biggest air gaps, but the center has a smaller air gap (27 mm). The corner with no fixation has an air gap of approximately 22 mm.
Two Para | The locations where the springs are inserted show the biggest air gaps, but the center has a smaller air gap (30 mm). The center of the edge in line with the two springs has a 26 mm air gap, while the center of the other edge has a 15 mm air gap.
Three Diag | The air gap along the line of the springs is nearly uniform. The spring at the center creates the biggest air gap of 32 mm, but the springs closer to the fixed corner create only a 14 mm air gap. The corner with no fixation has an air gap of approximately 24 mm.
Three Tria | The air gap in the equilateral triangle area formed by the springs is nearly uniform. The air gap at the center is about 31 mm, and the air gap at the corner with no fixation is approximately 26 mm.

Note: CON = control group without any coil; One = one coil; Two Diag = two coils positioned diagonally; Two Para = two coils positioned in parallel; Three Diag = three coils positioned diagonally; Three Tria = three coils arranged at the vertices of an equilateral triangle.
The results from Figures 5 and 7 showed that the TPP of the fabrics was affected by the SMA spring size under both RHE and HSC exposures. For the One and the Three Diag, a considerable increase (approximately 100-400%) was observed in t 44 even when a Cut size spring was incorporated within the fabric layers. This finding is consistent with previous research showing that thermal protection against thermal exposures can be enhanced remarkably even when only a small air gap of 6-12 mm is added between the fabric layers [28,29]. The No-cut size of the One and the Three Diag resulted in a significantly higher t 44 than the Cut size under both RHE and HSC conditions. This was because the No-cut springs were taller after they fully deformed than the Cut springs. This additional height increased the size of the air gap introduced between the moisture barrier and the thermal liner, thereby improving the thermal insulation of the fabric assembly.
However, under the RHE condition no significant differences in t 44, t 56 and T fa were detected between the Cut and No-cut sizes of the Two Diag. The likely reason that a significant effect of the SMA size was not observed for the Two Diag lies in the limitations of this spring arrangement. As can be seen from Table 4, the Two Diag had no spring located at the center of the fabric. The additional height of its No-cut springs directly increased the air gap where the two springs were inserted, but the air gap in the central area of the fabric could not increase as much as that produced by the One and the Three Diag. This observation demonstrates that the impact of SMA spring size is affected by the arrangement mode, which can be related to the air gap shape produced by the different SMA arrangements. Interestingly, under the HSC condition the No-cut size of the Two Diag had significantly lower t 44 and t 56 than the Cut size. This result is the opposite of that found under the RHE condition, where the No-cut spring size increased the thermal protection time t 44. A possible explanation is heat conduction through the SMA spring itself. In comparison with the RHE condition, physical contact under the HSC condition resulted in greater heat transfer between the fabric and the hot surface. This greater heat transfer raised the temperature of the spring and thereby increased heat conduction from the copper-based spring to the fabric. Thus, the positive effect of the larger spring size in reducing heat transfer was offset by the increased heat conduction through the spring. The problem of heat conduction through the SMA spring was also mentioned in a previous study, which reported that conductive heat travelling through the spring is detrimental to thermal protection [18]. This seems to indicate that, compared with the RHE condition, the HSC condition may add the risk of heat conduction through the metal SMA spring. An insulating disc with low thermal conductivity might be used to attach the springs to the fabric and thus decrease heat conduction to the skin [18]. In addition, this heat conduction deserves particular attention under the HSC condition because, in real-life situations, multilayer protective clothing is easily compressed when firefighters bump into a heated structural wall, crawl on hot ground, or hold a heated object. In these situations the compressed protective clothing not only restricts the capacity of the air gap to open but also enhances heat conduction through the metal spring, both of which limit the benefits of the SMA spring. It is therefore suggested that SMA springs should not be placed in the elbow, knee, and back areas of the protective clothing, where the fabric layers are frequently compressed against hot surfaces.
The above discussion demonstrates that the advantages of the SMA spring are achieved by providing additional air gaps between adjacent fabric layers when the SMA is activated by thermal exposure, and that the degree of this advantage depends on the size of the opened air gap. It was therefore necessary to explore the specific correlation between air gap size and the TPP of the fabric assembly. Regression analysis is suitable for estimating such relationships for TPP [23,27]. In this study there were several NR entries in the t 44 and t 56 columns of Tables 2 and 3, meaning that these temperatures did not reach the criteria of 44 °C and 56 °C; the NR-labeled data were not suitable for the fitting analysis. Thus, only the index T fa, for which every datum was available, was chosen for the fitting analysis.
The relationships between the air gap size and T fa under the RHE and the HSC conditions are displayed in Figures 8 and 9, respectively. Strong negative exponential correlations with the air gap size were observed (RHE condition: R² = 0.7781; HSC condition: R² = 0.933). Equations (1) and (2) indicate that the final temperature of the fabric decreased with increasing air gap size produced by the SMA spring. This means that the increase in opened air gap size resulting from the different SMA arrangements and spring sizes had a positive effect on the TPP of the fabric assembly. This is consistent with previous studies conducted on conventional TPC without SMA springs [28,29]: if the conventional protective clothing has a larger garment size, heat transfer through the clothing is decreased owing to the larger air gap between the clothing and the skin.
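As an illustration of this kind of regression, the sketch below fits a decaying-exponential relationship between air gap size and T fa with SciPy. The explicit forms of Equations (1) and (2) are not reproduced above, so the model form T_fa = A·exp(−B·L) + C and the data values below are assumptions for illustration only, not the study's fitted equations or measurements.

```python
# Sketch of an exponential regression between air gap size and final fabric
# temperature T_fa, assuming the form T_fa = A*exp(-B*L) + C. The air-gap
# sizes and temperatures below are illustrative placeholders, not study data.
import numpy as np
from scipy.optimize import curve_fit

def tfa_model(L_airgap_mm, A, B, C):
    """Assumed exponential relationship between air gap size (mm) and T_fa (deg C)."""
    return A * np.exp(-B * L_airgap_mm) + C

# Placeholder observations: air gap size (mm) vs. final fabric temperature (deg C)
L_obs = np.array([0.0, 9.0, 14.0, 22.0, 27.0, 31.0, 32.0])
tfa_obs = np.array([95.0, 78.0, 71.0, 63.0, 59.0, 56.0, 55.0])

popt, _ = curve_fit(tfa_model, L_obs, tfa_obs, p0=(50.0, 0.05, 50.0))
residuals = tfa_obs - tfa_model(L_obs, *popt)
r_squared = 1.0 - np.sum(residuals**2) / np.sum((tfa_obs - tfa_obs.mean())**2)
print(f"A={popt[0]:.2f}, B={popt[1]:.4f}, C={popt[2]:.2f}, R^2={r_squared:.3f}")
```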
To further understand the effect of the air gap, heat transfer through the air gap was examined. When the air gap is formed by the actuation of the SMA springs, energy is transferred through the gap by both conduction/convection and radiation. Assuming one-dimensional heat transfer through a plane fluid layer in an inclined channel, the heat flux across the air gap (q) is given by [30]:

q = q_{cond/conv,airgap} + q_{rad,airgap} (3)

where q_{cond/conv,airgap} is the heat flux by convection across the air gap and q_{rad,airgap} is the heat flux by radiation across the air gap. The heat flux by convection between two adjacent fabric layers separated by an air gap is given as [30]:

q_{cond/conv,airgap} = h_{c,airgap} (T_a − T_b) (4)

where T_a and T_b are the temperatures of the two adjacent clothing layers forming the air gap and h_{c,airgap} is the convective heat transfer coefficient between the clothing layers, which is [30]:

h_{c,airgap} = Nu k_air / L_airgap (5)

where Nu is the Nusselt number, k_air is the thermal conductivity of air, and L_airgap is the size of the air gap. According to Torvi's study [30], the radiation heat flux across the air gap (Equation (6)) depends on the Stefan-Boltzmann constant σ, the emissivities ε_a and ε_b of the two clothing layers adjacent to the air gap, and the radiation view factor F of the two adjacent fabric layers; F is given in [31] (Equation (7)) as a function of the air gap size and the fabric layer thickness L_fab. It can be seen from Equations (5) and (7) that the convective heat transfer coefficient h_{c,airgap} and the radiation view factor F decrease as the air gap increases, thereby reducing both convection and radiation across the air gap. It can be concluded that the air gap size, which was influenced by the SMA spring arrangement and spring size, has a positive effect on TPP because a larger air gap reduces both the convective heat transfer coefficient and the view factor between the fabric layers.
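A minimal numerical sketch of this heat flux balance is given below. It follows Equations (3)-(5) directly; the radiative term uses a standard two-surface grey-body exchange with an assumed view factor F, since the explicit view-factor expression of [31] is not reproduced above, and the Nusselt number, emissivities and temperatures are illustrative inputs rather than values from the study.

```python
# Heat flux across the air gap between the moisture barrier and the thermal
# liner: q = h_c*(T_a - T_b) + radiative exchange (Equations (3)-(5)).
# Nu and the view factor F are supplied as inputs; F = 0.8 below is only an
# assumption, as the view-factor formula of reference [31] is not reproduced here.
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def h_c_airgap(nu, k_air, L_airgap):
    """Equation (5): convective coefficient across an air gap of size L_airgap (m)."""
    return nu * k_air / L_airgap

def q_airgap(T_a, T_b, L_airgap, nu=1.0, k_air=0.026, eps_a=0.9, eps_b=0.9, F=0.8):
    """Total heat flux (W/m^2) across the air gap: conduction/convection + radiation."""
    q_conv = h_c_airgap(nu, k_air, L_airgap) * (T_a - T_b)      # Eq. (4)
    # Two-surface grey-body exchange with view factor F (assumed standard form).
    denom = (1 - eps_a) / eps_a + 1.0 / F + (1 - eps_b) / eps_b
    q_rad = SIGMA * (T_a**4 - T_b**4) / denom
    return q_conv + q_rad                                       # Eq. (3)

# Example: 32 mm air gap, hot side at 200 degC (473.15 K), cool side at 60 degC (333.15 K)
print(q_airgap(T_a=473.15, T_b=333.15, L_airgap=0.032))
```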
Conclusions
In this study, SMA springs were incorporated into a three-layer firefighter turnout suit material in an endeavor to develop thermally responsive protective clothing that actuates to create an extra air gap between the fabric layers under exposure to thermal hazards. The effects of the SMA arrangement mode and spring size on the TPP of the clothing were investigated under RHE and hot surface exposure. The incorporation of SMA springs within the fabric assembly could improve the TPP, but the extent to which the springs provided thermal protection depended on the arrangement and the size of the SMA springs. The effectiveness of reinforcing firefighter protective clothing with SMA springs depended on the capacity of the clothing to expand and form an air layer. Regarding the arrangement, the One and the Three Diag with No-cut springs were shown to be the most effective in enhancing thermal insulation, because each had one spring located exactly at the center of the fabric, creating the biggest air gap there. It is suggested that SMA springs be incorporated into the vulnerable parts of protective clothing where thermal exposure typically strikes directly (e.g., the chest and arms).
Regarding the spring size, thermal insulation increased considerably even when a Cut size spring was incorporated within the fabric assembly, and it improved further when the No-cut size was used. One exception was that under the HSC condition the No-cut size of the Two Diag significantly decreased the thermal insulation compared with its Cut size. This indicates that the HSC condition may add the risk of heat conduction travelling through the copper-based spring, whereas the benefit of the larger spring size was pronounced under the RHE condition. In addition, relationships between air gap size and TPP under the two kinds of thermal exposure were successfully established using exponential equations: TPP increased with the size of the opened air gap. One way to improve the TPP of intact protective clothing is therefore to use a larger spring size to produce a bigger air gap. A rational arrangement of the SMA springs is also essential to control the location and the shape of the air gap: the preferred locations for the produced air gap are the directly exposed areas of the protective clothing, and increasing the number of SMA springs increases the uniformity of the air gap. The results of this study will guide the engineering of temperature-responsive protective clothing with SMA.
|
v3-fos-license
|
2017-12-16T02:57:22.967Z
|
2011-05-19T00:00:00.000
|
37064052
|
{
"extfieldsofstudy": [
"Physics"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://angeo.copernicus.org/articles/29/875/2011/angeo-29-875-2011.pdf",
"pdf_hash": "4ddfec3b1535687958f844cfd0edf860e3811dd6",
"pdf_src": "ScienceParseMerged",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46702",
"s2fieldsofstudy": [
"Physics"
],
"sha1": "4ddfec3b1535687958f844cfd0edf860e3811dd6",
"year": 2011
}
|
pes2o/s2orc
|
Modelling of spacecraft spin period during eclipse
The majority of scientific satellites investigating the Earth magnetosphere are spin stabilized. The attitude information comes usually from a sun sensor and is missing in the umbra; hence, the accurate experimental determination of vector quantities is not possible during eclipses. The spin period of the spacecraft is generally not constant during these times because the moment of inertia changes due to heat dissipation. The temperature dependence of the moment of inertia for each spacecraft has a specific signature determined by its design and distribution of mass. We developed an “eclipse-spin” model for the spacecraft spin period behaviour using magnetic field vector measurements close to the Earth, where the magnetic field is dominated by the dipole field, and in the magnetospheric lobes, where the magnetic field direction is mostly constant. The modelled spin periods give us extraordinarily good results with accumulated phase deviations over one hour of less than 10 degrees. Using the eclipse spin model satellite experiments depending on correct spin phase information can deliver science data even during eclipses. Two applications for THEMIS B, one in the lobe and the other in the lunar wake, are presented.
Introduction
The period of a spinning spacecraft is usually determined with a sun sensor. It depends on the moment of inertia of the spacecraft, which in turn depends on the temperature of the spacecraft body. The temperature of the spacecraft is falling during eclipse and the spacecraft contracts, diminishing its moment of inertia and hence the spin period, until a final equilibrium is reached or the spacecraft exits the eclipse and heats up and expands again. The temperature dependence of the moment of inertia is mainly determined by the wire antennae and booms because of their extension and low thermal inertia. The spacecraft is an irregular object with non-homogeneous thermal properties, and a theoretical calculation of the change of moment of inertia due to temperature change is difficult. Since the sun sensor doesn't deliver information during eclipses, the spin-locked instruments and the vector measurements cannot work properly. We developed a method to determine the behaviour of the spacecraft spin period during eclipse using magnetic field measurements in the magnetosphere. The method requires eclipse measurements of the magnetic field in a region where either a model magnetic field is available or where the ambient magnetic field has a relatively constant direction. The model can subsequently be applied to reconstitute the spin period for eclipses outside the magnetosphere even when magnetic field measurements are not available.
Description of the method
The modelling of the spin during eclipse is based on the assumption that each spacecraft has a specific signature in the temperature dependence of its moment of inertia. The model intends to describe this signature analytically. The spin period can be determined from a magnetic field component in a rotating spacecraft system.
The method involves the following steps:

- determination of the spin period using a spinning component of the magnetic field in the spacecraft frame,
- correction of the spin period with the changes of the direction of the magnetic field in the spin plane determined from the IGRF magnetic field model, and recording of this information into an eclipse data base,
- superposed epoch analysis of the eclipse data and fit of an analytical expression to the data, and
- checking the model with randomly selected eclipse data to determine the errors of the model.
The first two steps of the method were developed using CLUSTER data, but then applied systematically to two years of data collected by the five spacecraft of the THEMIS mission.
Spin period determination from the magnetic field
The magnetic field is measured both on CLUSTER and THEMIS spacecraft with flux-gate magnetometers (FGM); the magnetometers are described in detail in Balogh et al. (2001) and Auster et al. (2008). They are placed on booms situated in the spacecraft spin plane. In order to determine the spin period we used one of the spin-plane components of the calibrated magnetic field in the spacecraft frame. The magnetic field direction is assumed to be constant during a spin. The experimental measurement was decomposed into intervals of data containing "one period sine" curves by finding the zeros and taking windows slightly larger (2 measurement points outside the interval) than two subsequent crossings of the zero line in the same direction. A sine with linearly varying amplitude, which takes into account the change of the magnetic field magnitude during a spin, was fitted to each data interval.
The "t" variable is the time and a i , (i = 0-3) are the four parameters of the fit; the period a 2 of the sine function, named T B in the following, estimates the spin period of the spacecraft (T ).Since the data have different sampling rates and the spacecraft different rotation periods, the window lengths in seconds are also different.The method is applicable to any spinning spacecraft where the sampling rate of the magnetometer (fs) is high enough that the number of measurement points per spin (n = fsT ) is larger than 4, the number of the parameters in B FIT .This condition is generally satisfied by most of the spinning spacecraft: for THEMIS T ∼ 3 s and the sampling rate is 4 or 8 Hz for the eclipse time periods resulting in at least 12 data points/spin; for CLUSTER fs = 22 or 67 Hz and T ∼ 4 s.The accuracy is increasing with increasing number of measurement points.
Correction of the experimental FGM spin period
Figure 1 exemplifies the method for two time intervals for the THEMIS spacecraft, one without eclipse (a) and one with eclipse (b). In the upper panels the FGM spin period T_B (black points) and the sun sensor spin period T_S (red) are overplotted. The two lower panels show the spin periods measured by the two instruments relative to the reference value T_Sref: ΔT_S = T_S − T_Sref (red), and in black the corrected FGM spin periods ΔT_Bc = T_Bc − T_Sref, obtained by using the relative rotation of the magnetic field as described below. The reference value T_Sref has been determined as the median of T_S over a time interval at least 20 min prior to the eclipse in order to avoid penumbra effects.
The FGM periods (T_B) show a discrepancy when compared with the sun-sensor determination of the spin period (T_S) in the intervals when both measurements are available. The discrepancy between T_B and T_S comes from the failure of our assumption that the magnetic field direction is constant during a spin. The assumption may be right in the lobes, but close to the Earth the direction changes. These changes can be determined from the magnetic field model and may reach 10 degrees per spin for THEMIS (T_S ≈ 3 s).
The determined spin periods T_B depend on the direction of rotation of the field relative to the rotation of the spacecraft. The time derivative of the clock angle in the spin plane is used to make the correction. The magnetic field computed in GSE, by using IGRF and Tsyganenko96, was transformed to a non-spinning spacecraft frame using the attitude. The spin-plane components of the model field were spline-interpolated to the T_B times. The clock angle α, i.e. the angle of the projection of the magnetic field vector in the spin plane, was computed as the arctangent of the ratio of the two spin-plane components in the non-spinning spacecraft frame. The corrected FGM spin periods have been checked against the sun-sensor periods in time intervals without eclipse. The blue curve in the upper panel of Fig. 1 shows the derivative of the clock angle, dα/dt, that is used to correct the FGM spin periods, T_Bc = T_B − dα/dt; now T_Bc agrees fairly well with T_S. The bottom panel shows the plots of the differences to the reference spin period: ΔT_S (red) and ΔT_Bc. Depending on the region where the measurement is performed, the spread of T_Bc can be relatively large and/or the magnetic field model inaccurate. For the modelling of the spin behaviour we have to select regions relatively close to Earth, where the field is strong and stable. At the same time we need a large range of eclipse lengths to have a model describing all experimentally possible cases. For this we select the lobes, where the field has an almost constant direction.
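The sketch below illustrates one way to realize this clock-angle correction: the model-field clock angle is spline-interpolated to the T_B times and its rotation rate is removed from the apparent spin rate. Expressing the correction through rotation rates, and the sign convention used, are assumptions of this sketch; the text above quotes the correction simply as T_Bc = T_B − dα/dt.

```python
# Correcting the FGM-derived spin periods for the rotation of the model field's
# clock angle in the spin plane (apparent rate = true spin rate plus field
# rotation rate is assumed here; the sign convention is an assumption).
import numpy as np
from scipy.interpolate import CubicSpline

def clock_angle(bx_despun, by_despun):
    """Unwrapped clock angle of the field projection in the spin plane (non-spinning frame)."""
    return np.unwrap(np.arctan2(by_despun, bx_despun))

def correct_spin_periods(t_B, T_B, t_model, bx_model, by_model):
    """Interpolate the model clock angle to the T_B times and remove its rotation rate."""
    alpha = CubicSpline(t_model, clock_angle(bx_model, by_model))
    dalpha_dt = alpha(t_B, 1)                  # rad/s at the FGM-period times
    omega_apparent = 2.0 * np.pi / T_B         # rad/s seen in the spinning frame
    omega_true = omega_apparent + dalpha_dt    # assumed sign convention
    return 2.0 * np.pi / omega_true            # corrected periods T_Bc
```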
Superposed epoch analysis of eclipse data and model fit
A data base containing all eclipse data for the five THEMIS spacecraft from 2007 to 2009 has been built. One difficulty was related to the determination of the start and stop times of the eclipses due to the effect of penumbrae. Two methods have been investigated: one based on theoretical calculations using the orbit data and the second on experimental sun sensor data; the second proved to be more accurate.
A data file has been recorded for each eclipse where FGM data and model field data were available. The data file contained the time series of the sun-sensor periods (T_S) and the corrected FGM periods (T_Bc). Two reference values are needed to perform a superposed epoch analysis, one in the timeline (EStart) and one in the spin period (T_Sref, defined in Sect. 2.2). The eclipse start time, EStart, is defined as the sun pulse time delivered by the sun sensor about 15 s before entering the eclipse. These values are subtracted from the time (t) and from the spin period values (T), respectively.
The numbers of files in the data base were 130, 20, 84, 177 and 152 for THEMIS A, B, C, D and E from a total of 231, 45, 213, 319 and 324, respectively, because not every eclipse had FGM coverage. The majority of the eclipses are close to the Earth, where the magnetic field strength is high and the duration of the eclipses is less than 30 min.
Figure 2 shows an interval of data of THEMIS B from 7 March 2009, where the spacecraft was in a long eclipse (delimited by the dotted vertical lines). The top panel contains spin periods relative to the reference value: the sun-sensor relative periods ΔT_S (thick red line), which take a constant fake value in the eclipse, and the values determined from FGM data (ΔT_B, blue data points). Overplotted in black, starting at EStart, are all ΔT_Bc for the close-to-perigee eclipses in the year 2008.
All these FGM spin periods were used to define an empirical eclipse-spin model (ESM) which appears as a thin red line in the upper panel of Fig. 2.
The spread of the data points for these short (<30 min) eclipses is small compared to the blue dots for the long eclipse. Long eclipses are infrequent; they occur when the spacecraft are mostly far away in the tail, where the magnetic field component in the spin plane is small. Sometimes the field is disturbed by wave activity and hence T_B cannot be determined with sufficient accuracy to be used for fitting a model.
The continuous decay of the spin period with time in the eclipse is interrupted by a relatively constant part that we name the "shoulder" and mark with black arrows in Figs. 2, 5 and 6. A similar feature due to the same physical process appears during the warming-up phase after re-entry into sunlight and is specific to all THEMIS and CLUSTER spacecraft. It is determined by the spacecraft design, particularly the contraction and dynamics of the wire booms (Cherchas, 1971; Lai, 1979).
The fitting function was selected to reflect the physical processes supposed to produce the change in the spin period. The physics behind Eq. (2) is presented in Sect. 2.4.
The fit uses the function of Eq. (2), derived in Sect. 2.4, in which a_i (i = 0-3) are the free parameters and t is the time in eclipse.
The fitting of the analytical function (2) is done in the two intervals I and II (top panel of Fig. 2) separated by the shoulder. The eclipse-spin model is given by a piecewise function (3) containing the results of the fit.
where ΔT_sh = (ΔT^I_FIT(t_sh) + ΔT^II_FIT(t_sh))/2. The center time of the shoulder, t_sh ≈ 30 min, is derived from the experimental observations. The values of τ_1 and τ_2 result from the continuity conditions ΔT^I_FIT(t_sh − τ_1) = ΔT_sh and ΔT^II_FIT(t_sh + τ_2) = ΔT_sh.
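A sketch of how the piecewise model (3) can be assembled from two already-fitted branch functions is given below; the shoulder value and the continuity conditions follow the description above, while the root-finding brackets and function names are assumptions for illustration.

```python
# Piecewise eclipse-spin model of Eq. (3): branch I before the shoulder, a
# constant shoulder value around t_sh, and branch II afterwards. The branch
# functions dT_fit_I and dT_fit_II are assumed to have been fitted already;
# tau1 and tau2 follow from the continuity conditions. The bracketing interval
# for the root search (0..search seconds) is an assumption of this sketch.
import numpy as np
from scipy.optimize import brentq

def build_esm(dT_fit_I, dT_fit_II, t_sh=30.0 * 60.0, search=600.0):
    """Return a callable eclipse-spin model dT_ESM(t), t in seconds since eclipse start."""
    dT_sh = 0.5 * (dT_fit_I(t_sh) + dT_fit_II(t_sh))
    tau1 = brentq(lambda tau: dT_fit_I(t_sh - tau) - dT_sh, 0.0, search)
    tau2 = brentq(lambda tau: dT_fit_II(t_sh + tau) - dT_sh, 0.0, search)

    def dT_esm(t):
        t = np.asarray(t, dtype=float)
        return np.where(t < t_sh - tau1, dT_fit_I(t),
               np.where(t > t_sh + tau2, dT_fit_II(t), dT_sh))
    return dT_esm
```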
The fitted parameters for THEMIS B are given in Table 1. Furthermore, we distinguish a penumbra region before the entry into eclipse and before re-entry into sunlight; this part is not modelled, but it is taken care of by smoothly joining T_ESM with T_S before and after the eclipse. The three lower panels in Fig. 2 illustrate the use of the original fake spin period (red curves) and of the model spin period (black curves) for despinning the magnetic field for the data of 7 March 2009. They contain the three components of the magnetic field in a despun spacecraft coordinate system in which the z-axis is aligned with the spin axis and the x-z plane contains the direction toward the Sun. The lowest panel shows in blue the field magnitude, which is very constant. This is a very good case for determining the spin periods from FGM data since the magnetic field also has an almost constant direction parallel to x; B_Y and B_Z are close to zero.
The physics behind the fit function
The fit function (2), which has been used to approximate the spin period behaviour in eclipse, is not an empirically found function. Instead, the following physical considerations lead to its development. The spin period of the rotating spacecraft is proportional to the moment of inertia, which is proportional to the square of the distance of the mass elements to the rotation axis. The temperature loss depends on the temperature of the spacecraft components conforming to Boltzmann's law and is proportional to the fourth power of the absolute temperature. The contraction in the radial direction is dominated by the contraction of the wire antennae and of the booms and depends linearly on the temperature loss. A few steps in the derivation of the fitting function are presented below.
When the spacecraft enters eclipse, the thermal budget will be mainly influenced by the emission of heat with a power proportional to θ^4, where θ denotes its temperature. The heat energy content of the spacecraft will decrease with a rate proportional to the first derivative of θ. Hence, the following differential equation will hold:

dθ/dt = −c_1 θ^4 (4)

c_1 is here an unknown constant, which includes the Stefan-Boltzmann constant, the spacecraft's heat capacity, as well as a measure of its area-to-mass ratio. The differential equation is easily solved by:

θ(t) = (c_2 t + c_3)^{-1/3} (5)

Here c_2 is proportional to c_1 and c_3 denotes a constant of integration. Equation (5) describes the decrease in temperature of the spacecraft. If the wire boom length is assumed to behave proportionally to the temperature, the moment of inertia I of the wire booms, which is basically proportional to the square of their length, should show the following time dependence:

I(t) = (c_4 t + c_5)^{-2/3} (6)

c_4 and c_5 are, again, unknown constants, which now also include a value for the mass of the wire booms and the constant of proportionality between temperature and wire boom length. Since the spacecraft conserves its angular momentum during the eclipse, the product I ω, or the ratio I/T, should remain constant. Here ω and T denote the angular spin frequency and spin period of the spacecraft, respectively. Hence, the spin period behaviour should be analytically well described by the formula:

T(t) = (c_6 t + c_7)^{-2/3} (7)

where c_6 and c_7 also contain the angular momentum. For the computation of the spin period behaviour only the change in this period is considered. At t = 0 (beginning of eclipse time) ΔT should be 0; hence, we added −1/∛(c_7²) to Eq. (7). We also included in the final approximating function a scaling factor (c_8) and a constant offset (c_9), which is particularly important when the second branch (extended spin period model) is calculated, since the decay in period does not start from T_Sref (corresponding to ΔT = 0) but from some shorter period (negative ΔT). Thus we obtain the final fit function (2):

ΔT_FIT(t) = c_8 [(c_6 t + c_7)^{-2/3} − c_7^{-2/3}] + c_9 (2)
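The sketch below implements this physics-based fit function and fits one branch with SciPy. The explicit parameterization (constants c_6-c_9 entering as shown) is reconstructed from the derivation above and may differ in detail from the function actually used; the starting values are illustrative.

```python
# Physics-based fit of the spin-period change during eclipse, using
#   dT_fit(t) = c8 * ((c6*t + c7)**(-2/3) - c7**(-2/3)) + c9,
# which follows from theta(t) = (c2*t + c3)**(-1/3) and I ~ theta^2.
# The parameterization is a reconstruction (an assumption), not mission code.
import numpy as np
from scipy.optimize import curve_fit

def dT_fit(t, c6, c7, c8, c9):
    return c8 * ((c6 * t + c7) ** (-2.0 / 3.0) - c7 ** (-2.0 / 3.0)) + c9

def fit_branch(t, dT_obs):
    """t: seconds since eclipse start; dT_obs: corrected FGM periods minus T_Sref."""
    p0 = (1.0, 1000.0, 1.0, 0.0)   # illustrative starting values
    popt, _ = curve_fit(dT_fit, t, dT_obs, p0=p0, maxfev=20000)
    return popt
```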
Model check and error analysis
The eclipse-spin model (ESM) derived in Sect. 2.3 can be applied to every eclipse without determining the spin period from the magnetic field. The ESM was checked against all eclipse data between March 2008 and July 2009. The final error φ, an accumulated phase deviation, was estimated by adding up the spin periods during the eclipse and comparing with the first sun pulse time after the eclipse. The change of the period in the eclipse is approximately 0.005 s after 30 min. This corresponds to 600 spins, since the THEMIS spacecraft completes a rotation in 3 s. Adding up the errors in the spin periods when no correction is performed, the Sun direction will be out of phase by one spin, or 360°, after 30 min. This is visible in the two middle panels of Fig. 2, where incorrectly despun B_X and B_Y (red lines) make an apparent 360-degree rotation during the first 30 min and a total of 14 rotations during the 3 h 14 min of the eclipse. Figure 3 shows the errors in terms of phase deviation for the eclipses between March 2008 and June 2009 versus the duration of the eclipses. This interval has been chosen to cover the main phase orbits for THEMIS B, i.e. after the boom deployment and before the injection of the spacecraft into the transfer orbits for its journey to the Moon. During these months the orbit of THEMIS B did not significantly change in shape, but rotated around the Earth due to the Earth's motion around the Sun; the tail seasons causing long eclipses are included for 2008 and 2009.
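The accumulated phase deviation can be evaluated as sketched below: the modelled spin periods are summed over the eclipse and compared with the elapsed time implied by the sun-pulse times bracketing it. The function and variable names are illustrative assumptions, not the mission processing code.

```python
# Accumulated phase deviation of the eclipse-spin model: sum the modelled spin
# periods over the eclipse and compare with the elapsed time between the last
# sun pulse before and the first sun pulse after the eclipse. A deviation of
# 360 degrees corresponds to being off by one full spin.
import numpy as np

def accumulated_phase_error(sunpulse_before, sunpulse_after, model_periods):
    """model_periods: modelled spin periods (s) for the successive spins spanning the eclipse."""
    predicted_elapsed = np.sum(model_periods)          # s, over the modelled spins
    true_elapsed = sunpulse_after - sunpulse_before    # s, from the sun-pulse times
    mean_period = np.mean(model_periods)
    dphi = 360.0 * (true_elapsed - predicted_elapsed) / mean_period
    return (dphi + 180.0) % 360.0 - 180.0              # fold into (-180, 180] degrees
```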
When applying the ES model to despin the magnetic field for the long eclipse (black curves in the two middle panels of Fig. 2), the accumulated phase error φ is less than 1°. The model was determined based on this case (blue cross marked with the black arrow in Fig. 3), so an error analysis of other applications of the model is of interest.
Most of the eclipses do not last longer than one hour. The black crosses mark the eclipses close to perigee, i.e. in large magnetic fields. In these cases the modelled spin periods give us extraordinarily good results with accumulated phase deviations of less than 10 degrees. Short eclipses for which the model application was not successful are marked in red. In these three cases the spin period had been changed by the use of the spacecraft thrusters shortly before eclipse start. Hence, the reference spin period T_Sref could not be obtained accurately. This results in a constant offset between the model spin period and the true spin period, which ultimately leads to a large accumulated phase deviation. The green crosses in Fig. 3 mark the long eclipses observed in March 2008 and the blue crosses those from March and April 2009. The accumulated phase deviation for 2009 is below 40 degrees for eclipses not longer than 3 h. The same conclusion applies to the eclipses of 2008 not longer than 2 h. An interesting feature of Fig. 3 is the increasing separation of the phase difference curve φ with increasing eclipse duration for the two years 2008 and 2009. Obviously, the second model branch, which has been determined from a 2009 eclipse, leads to better results for long eclipses of this year: the phase deviation increases drastically for 2008 eclipses longer than 2 h. This means that the spin period behaviour was different in 2008 and 2009. One reason may be the fuel content of the spacecraft, which was larger in 2008. The release of fuel will probably have slightly changed the moment of inertia of the spacecraft over the course of time. The crosses marked with arrows correspond to the first eclipses in March 2008 and 2009 of the series of eclipses marked in the two respective colors. The penumbra phase was particularly long in these cases, since the spacecraft only skimmed the region in full shadow and did not fly right through the umbra. From this statistical analysis we can conclude that for short eclipses the accumulated phase error is below 10°, while it is up to 40° for eclipses lasting less than 2 h. A possibility to improve the application is to adapt the model to each individual eclipse. This can be done by distributing the phase deviation at the re-entry into sunlight over all the periods during the eclipse and reconstructing the sun-pulse times. The phase deviation after this adapted model application is close to zero (see Sect. 4).
The effect of the Earth albedo radiation on the spin period
Whenever the spacecraft comes close to the Earth, the infrared radiation from the Earth's surface becomes important for its thermal budget. Heating the spacecraft up increases its moment of inertia and, hence, its spin period. The radiation from Earth may be due to the reflection of sunlight (albedo) or simply thermal radiation of previously stored energy (on the night side). The Earth's radiation on the night side and the cooling of the spacecraft in eclipse have opposite effects on the spin period behaviour. Although it seems obvious that in eclipse the spacecraft will overall cool down, the question remains how much slower this cooling process evolves in the presence of the additional terrestrial radiation.
In order to quantify this effect we performed the following statistical study of THEMIS B spin period data determined by the sun sensor for the interval March 2008 to July 2009. The interval was divided into subintervals including only times at which the last eclipse had ended at least 12 h before and the next one lay at least 10 min in the future. This ensures that the spacecraft has had enough time to recover its natural spin period in thermodynamic equilibrium after eclipse, such that our analysis is not affected by the spacecraft cooling in shadow. For each of these subintervals the median of the spin periods of THEMIS B was computed, restricted to times when the spacecraft was more than 5 R_E (Earth radii) away from the center of the Earth. These are the reference spin periods to which we can compare the ones measured closer to Earth. The change in spin period, ΔT_S, with respect to these reference periods is shown for all the subintervals in Fig. 4.
As can be seen, the effect is more important the closer the spacecraft approaches the terrestrial surface. For the minimum perigee distance to the surface of about 0.2 R_E (corresponding to a radial distance of 1.2 R_E to the center of the Earth) the change in spin period can be at most 0.0005 s. The effect is drastically diminished when the spacecraft is further away from the Earth. For instance, at 2.5 R_E geocentric distance ΔT_S does not exceed 0.0002 s.
The color of the data points displays the local time position of the (temporally) closest perigee pass to the corresponding spin period considered. The closest approaches to the terrestrial surface correspond to the perigee passes around local noon (light blue and green dots), where we would expect the radiation from Earth to be largest due to the direct reflection of sunlight in addition to the larger thermal radiation from the surface, which diminishes during night time. Both effects (closer spacecraft and larger radiation) enhance the increase in spin period. Close-in perigee passes around local midnight are excluded from the analysis because perigee lies in the umbra. Noon/afternoon (green dots) and midnight local time (red) passes can be compared at a distance of 2 R_E from the Earth's center. The increase in the spin period produced by the Earth radiation is <10^-4 s, whereas the decrease produced by the cooling in near-Earth eclipses is of the order of 5×10^-3 s, so terrestrial radiation does not have an important contribution to the spin period change and has been neglected.
Application of the ES model in the Earth tail
We illustrate in Fig. 5 the application of the ES model to the longest eclipse (3 h 45 min) encountered by THEMIS B, on 11 March 2009. The red line in the top panel is, similar to Fig. 2, the difference of the spin period delivered by the sun sensor to the reference value prior to eclipse (ΔT_S). The limits of the eclipse [EStart, EEnd] are marked by the two vertical dashed lines. The thin black line is the ESM. The two lower panels show the three components of the magnetic field (x: blue, y: green, z: red) in a despun spacecraft frame, z aligned with the spin axis and x toward the Sun. In the middle panel the fake constant value has been used for despinning, while in the lower panel the model values (T_ESM) were used. The modelled spin periods T_ESM are taken between EStart and EEnd. EStart denotes the time 15 s prior to the eclipse onset, as flagged in the data; EEnd is the time of the eclipse end, from which time on sun sensor measurements are again available. The model T_ESM may be adapted to a particular eclipse by adding a small linear drift δ = d(T_ESM)/dt, yielding a modified spin period of T_ESM + δ (t − EStart). The free parameter δ is used to reduce the final phase deviation iteratively to zero.
By using a linear interpolation between the last measured spin period and the first modelled one we ensure a smooth transition from the sun sensor measurements into the model. After EEnd, the first 60 s of measured spin period data are linearly extrapolated backwards to times before EEnd until the point where this straight-line extension and the modelled spin period function cross each other. Before this point in time the spin period is obtained from the model; afterwards the linear extrapolation bridges the last gap of eclipse time. This takes into account that the spacecraft is heated up again shortly before EEnd due to entering the penumbra region. Inside this region sun sensor measurements are still not available, but the spin period increases again considerably. Furthermore, we ensure with the linear extrapolation a smooth transition from the model to the sun sensor measurements at eclipse end.
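A sketch of the per-eclipse adaptation described above is given below: a small linear drift δ is added to the modelled periods and chosen so that the accumulated phase deviation at eclipse end vanishes. The root-finding bracket and the interface are assumptions for illustration, not the actual processing code.

```python
# Adapting the ESM to one particular eclipse: add a small linear drift delta to
# the modelled periods, T_ESM(t) + delta*(t - EStart), and choose delta so that
# the sum of the adapted periods matches the elapsed time between the bracketing
# sun pulses. The bracket [-1e-5, 1e-5] s/s for the root search is an assumption.
import numpy as np
from scipy.optimize import brentq

def adapt_drift(t_spins, model_periods, sunpulse_before, sunpulse_after):
    """t_spins: start times of the modelled spins (s); returns the drift delta (s per s)."""
    true_elapsed = sunpulse_after - sunpulse_before

    def residual(delta):
        adapted = model_periods + delta * (t_spins - t_spins[0])
        return np.sum(adapted) - true_elapsed

    return brentq(residual, -1e-5, 1e-5)
```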
Application of the ES model in the lunar wake
The crucial question is to which extent the model can be applied to reconstruct the correct spin periods and sun pulse times during eclipses where the true spin period and spacecraft rotation phase can no longer be derived from a terrestrial magnetic field model. Since two of the THEMIS spacecraft are flying to the Moon to be inserted into lunar-centered stable orbits, we have had the unique opportunity to test the model with data from the recent lunar flyby of the THEMIS B spacecraft, which took place on 13 February 2010; see Fig. 6. The spacecraft entered lunar eclipse at about 08:53 UT. The end of the eclipse took place roughly 47 min later at about 09:30 UT. The duration of the eclipse was hence larger than the first branch of our model. Had this been a terrestrial eclipse under the circumstances considered for our statistical analyses, a resulting phase deviation of the order of 10 degrees would have been expected.
The accumulated phase error is in this case 27.4 degrees, which is a low value in absolute terms but quite a high value considering the length of the eclipse of only 47 min. The explanation may be the fuel consumption during the time elapsed since the eclipses used to define the model. After adapting the model to cancel out the final phase deviation, we expect the error in phase to maximize at the middle of the eclipse without exceeding half the value measured before adaptation. Although a minor deviation in phase may still be present in the despun data, the improvement is considerable: the model helped to recover the flyby data and make them usable for scientific purposes. A detailed study of solar wind transient features and the lunar wake structure during the flyby event, based on a comparison of the eclipse-spin-model despun data with results of hybrid simulations, can be found in Wiehle et al. (2011).
Discussion and conclusions
The spacecraft spin behaviour during eclipse is characteristic for each spacecraft and reflects the dependence of its moment of inertia on temperature.
An "eclipse-spin model" for the spacecraft spin period behaviour can be developed by using the magnetic field vector measurements in a spinning frame.The magnetic field must be either close to Earth where its change in direction can be deduced from a model magnetic field or have a constant direction and a slowly varying magnitude in order to be usable for modelling.After defining an eclipse-spin model, it can be applied to eclipses in other regions of the magnetosphere or outside of it without the need of magnetic field measurements.The ES model compensates for the lack of experimental spin phase information from the sun sensor, such that satellite experiments, depending on correct spin phase information, can deliver science data even during eclipses.
The application of the method to CLUSTER and THEMIS data confirms the validity of the assumptions and gives good results for the spin period reconstitution. This method can be applied to any spinning spacecraft carrying a vector magnetometer after a number of eclipses have been recorded in the magnetosphere. The ESM defined for THEMIS will bring valuable support to the ARTEMIS mission by making it possible to use the on-board instruments in the lunar shadow.
Fig. 2. Eclipse-spin model (ESM) and its application to despin magnetic field data. The eclipse time interval is delimited by the dashed lines. Top panel: difference of the spin period to the reference value before eclipse: ΔT_S (thick red line) from the Sun sensor, ΔT_B (black and blue points) from magnetometer measurements, and ΔT_ESM (thin red line in the eclipse) eclipse-spin model. I and II denote the intervals of short and long eclipses and the arrow points to the shoulder in between. Lower three panels: magnetic field in the spacecraft system; red: despun with constant T_S, black: despun with T_ESM. The magnitude of the field B (blue) is shown in the bottom panel together with the component along the spin axis B_Z (black).
Fig. 3. Accumulated phase error, φ in degrees, versus eclipse length for the THEMIS B eclipses between March 2008 and July 2009. Black: all eclipses up to about 1 h duration for 2008 and 2009; blue: long eclipses in 2009; green: long eclipses in 2008; red: bad results due to manoeuvres. The arrows point to the long eclipses with a very long stay in penumbra.
Fig. 4. Spin period change (ΔT_S) due to Earth radiation versus radial geocentric distance to the Earth. The colours show the local time (in hours) of the perigee crossing.
Fig. 5. ES model application to a long eclipse for THEMIS B on 11 March 2009. Top panel: spin period differences to T_Sref; middle panel: magnetic field despun with uncorrected spin times; bottom panel: magnetic field despun with corrected spin times using the ESM. The eclipse time interval is delimited by the dashed lines.
Fig. 6. ES model application for the THEMIS B lunar flyby, 13 February 2010. Top panel: spin period differences to T_Sref; middle panel: magnetic field despun with uncorrected spin times; bottom panel: magnetic field despun with corrected spin times using the ESM. The eclipse time interval is delimited by the dashed lines.
Table 1. Model parameters for THEMIS B.
|
v3-fos-license
|
2019-03-17T13:07:25.342Z
|
2017-01-01T00:00:00.000
|
79596472
|
{
"extfieldsofstudy": [
"Psychology"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "http://www.hsj.gr/medicine/sleep-disorders-and-its-effect-on-community.pdf",
"pdf_hash": "163424dfa59fd0832db0271d1bd9a9cd38fbce88",
"pdf_src": "MergedPDFExtraction",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46704",
"s2fieldsofstudy": [
"Medicine",
"Psychology",
"Sociology"
],
"sha1": "d70556f576bdaf1478cfcfe46787d52a88265329",
"year": 2017
}
|
pes2o/s2orc
|
Sleep Disorders and its Effect on Community
The main aim of this paper is to analyse the effect of sleep disorders on the community. This study is quantitative. A questionnaire was designed and distributed among students in the high and intermediate sections of a school in the city of Riyadh. The sample of this study consisted of 100 students. The researcher visited the school on December 10 and 11 to measure the impact of sleep disorders through a questionnaire measuring the effects of lack of sleep, then made the students aware of the benefits of sleep and of the physical and psychological harms of sleep deprivation, gave an educational lecture, and distributed a brochure containing the benefits of sleep and the harmful effects of its lack. SPSS 21 was used to analyse the data. The results of the study indicated that lack of sleep limits a person's ability to think and solve problems effectively, which means that people who stay awake for a long time are less able to learn at an effective level. Lack of sleep affects thinking and can limit the ability to accurately interpret events, making it difficult to respond correctly to situations in which effective, intelligent decision-making is required. Lack of sleep, even for one night, can lead to swelling of the eyes and pale skin.
Introduction
Sleep disorders are widespread health problems that reduce quality of life, increase risks for psychiatric and medical disease and raise health care utilization and costs among affected individuals worldwide. A subset of patients with sleep problems seeks care from sleep specialists, but most such patients are seen in primary care settings where they are likely to receive suboptimal sleep-problem management. As noted by Gottschalk and Flocke, during a typical primary care visit the provider has only 10 to 15 min per patient to manage an average of two to three major medical problems that carry significant risk of morbidity and mortality; this leaves very little time to address whatever nonspecific sleep/wake complaints patients might present. Moreover, primary care providers often have limited knowledge of sleep disorders medicine. As such, sleep disorders may either go unrecognized or be improperly treated. Thus, many sleep-disordered patients seen in primary care settings fail to be properly diagnosed and to receive effective, evidence-based therapies [1].
The main aim of this paper is to analyze the effect of sleep disorders on the community. This paper also seeks to achieve the following objectives: 1. To measure the impact and harm of sleep disorders in a group of our society (students in the intermediate and high school sections of Al-Arqam National Schools in Riyadh). Target group: students aged 13-18 years at Al-Arqam National Schools in Riyadh.
2. To identify the negative effects of lack of sleep and the associated health troubles (physical, psychological, and academic).
Literature Review
Sleep is a behavioral state of perceptual disengagement from, and unresponsiveness to, the environment, accompanied by characteristic electroencephalographic changes and having a rapidly reversible potential to return to the state of vigilance [4]. In the Romanian medical dictionary, sleep is defined as a periodic and reversible physiological state characterized by somatic inactivity and a relative and temporary suppression of consciousness, accompanied by a more or less important abolition of sensitivity and the inhibition of vegetative functions [4].
Sleep disorders are now more widely recognized as warranting specific clinical attention. Prevalence rates of sleep disturbances vary depending on the age group surveyed and the criteria used for inclusion. Estimates from primary care settings indicate that 10-30% of respondents experience significant sleep disturbances [5], while community studies note prevalence rates of up to 37% [6]. A community survey [7] of 987 parents of elementary school-aged children reported the following problems related to sleep behaviors: bedtime resistance (27%), difficulty with morning wakening (17%), complaints of fatigue (17%), delayed sleep onset (11%), and night-time wakenings (7%). Rates are even higher in studies examining clinical child populations, with restless sleep (43%) and night waking (47%) affecting a substantial number of children [8]. Despite the relatively high prevalence rates and potentially negative outcomes of disturbed sleep, adequate assessment of sleep problems is rarely conducted in primary care settings [9].
Methodology
This study is quantitative.
Analysis and Results
The results of the study aimed at identifying sleep disorders and their impact on society will be presented in this section (Tables 1-18).
Discussion
Table 9. Relationship between satisfaction with the amount of sleep and drinking enough fluids daily. The Chi-Square value is statistically significant.
Table 16. Relationship between satisfaction with the amount of sleep and a rate of less than 90%. The Chi-Square value is statistically significant.
The paper aimed to study the effect of sleep disorders on the community. The results of the study indicated that there is a relationship between the number of hours of sleep and the satisfaction of the sample members with their sleep, with a statistically significant Chi-Square value. Statistically significant Chi-Square values were also found for the relationships between satisfaction with the amount of sleep and each of the following: fatigue; difficulty concentrating; difficulty remembering; sleep affecting the respondent negatively; feeling sleepy; suffering from a chronic disease; taking medication continuously; drinking enough fluids daily; being a smoker; suffering from insomnia; suffering from snoring; suffering from frequent nightmares; sleep disturbances affecting studies; the number of hours slept a day; a rate of less than 90%; finding it difficult to make a decision; and suffering from frequent (nervous) loss of control. Sleep disorders, including insomnia, involve difficulty sleeping as well as prolonged sleep for long hours and are among the most common medical problems. People who suffer from insomnia wake up feeling inactive and uncomfortable, which affects their performance during the day. Insomnia not only affects energy levels and mood, but also harms health, the quality of work performance, and quality of life.
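As an illustration only (the study itself ran crosstabs in SPSS 21), the sketch below shows how one of the reported chi-square tests of independence could be reproduced in Python; the contingency counts and variable labels are invented for the example and are not taken from the paper.

```python
# Illustrative chi-square test of independence, mirroring the crosstab analyses above.
# The counts are hypothetical; the paper's actual data are not reproduced here.
import numpy as np
from scipy.stats import chi2_contingency

# Rows: satisfied / not satisfied with the amount of sleep
# Columns: reports fatigue / does not report fatigue (invented counts)
crosstab = np.array([[12, 33],
                     [38, 17]])

chi2, p_value, dof, expected = chi2_contingency(crosstab)
print(f"Chi-Square = {chi2:.2f}, df = {dof}, p = {p_value:.4f}")
if p_value < 0.05:
    print("The relationship is statistically significant at the 5% level.")
```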
Conclusion
Sleep is necessary for the processes associated with learning. Lack of sleep limits a person's ability to think and solve problems effectively, which means that people who stay awake for long periods cannot learn at an effective level. Lack of sleep affects the ability to think and can limit the ability to interpret events accurately, which makes it difficult to respond correctly to situations that require effective decision-making.
|
v3-fos-license
|
2018-04-03T02:07:01.278Z
|
2014-01-01T00:00:00.000
|
25533003
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/21642850.2014.897624?needAccess=true",
"pdf_hash": "542a045d33fd5168500a2d44a23f8cbb844012fe",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46705",
"s2fieldsofstudy": [
"Psychology"
],
"sha1": "542a045d33fd5168500a2d44a23f8cbb844012fe",
"year": 2014
}
|
pes2o/s2orc
|
Life stress as a determinant of emotional well-being: development and validation of a Spanish-Language Checklist of Stressful Life Events
Objectives: To develop a screening instrument for investigating the prevalence and impact of stressful life events in Spanish-speaking Peruvian adults. Background: Researchers have demonstrated the causal connection between life stress and psychosocial and physical complaints. The need for contextually relevant and updated instruments has been also addressed. Methods: A sequential exploratory design combined qualitative and quantitative information from two studies: first, the content validity of 20 severe stressors (N = 46); then, a criterion-related validity process with affective symptoms as criteria (Hopkins Symptom Checklist (HSCL-25), N = 844). Results: 93% of the participants reported one to eight life events (X = 3.93, Mdn = 3, SD = 7.77). Events increase significantly until 60 years of age (Mdn = 6). Adults born in inland regions (Mdn = 4) or with secondary or technical education (Mdn = 5) reported significantly more stressors than participants born in Lima or with higher education. There are no differences by gender. Four-step hierarchical models showed that life stress is the best unique predictor (β) of HSCL anxiety, depression and general distress (p < .001). Age and gender are significant for the three criteria (p < .01, p < .001); lower education and unemployment are significant unique predictors of general distress and depression (p < .01; p < .05). Previously, the two-factor structure of the HSCL-25 was verified (Satorra–Bentler chi-square, root-mean-square error of approximation = 0.059; standardized root-mean-square residual = 0.055). Conclusion: The Spanish-Language Checklist of Stressful Life Events is a valid instrument to identify adults with significant levels of life stress and possible risk for mental and physical health (clinical utility).
Literature review Introduction
Despite the relevance of stressful events in health and well-being, there is no valid instrument to investigate their prevalence and impact on Spanish-speaking populations of Latin America. This research aims at developing a screening instrument to evaluate the occurrence of severe stressful life events and provide initial evidence of its clinical utility. With a sequential exploratory strategy, the two studies presented here account for culture and diversity in a non-Western context.
They integrate qualitative and quantitative information obtained from two independent samples (Creswell, 2003;Yin, 2006). This paper is divided into four sections and one appendix with the protocol of the Spanish-Language Checklist of Stressful Life Events (SL-SLE). 1 The first section presents a brief overview of current research on stressful events as a determinant of health, particularly in multi-cultural and developing countries. Special attention is paid to the social readjustment model of Holmes and Rahe (1967) because its rationale and items are used to develop a checklist of life stressors for the Peruvian context. The second and third sections correspond to Studies 1 and 2 (i.e. content and psychometric validities); they include methods and results subsections. Additionally, Study 1 describes the rational identification of items, while Study 2 presents the confirmatory factor analysis (CFA) of the criteria used to validate the SL-SLE (Hopkins Symptom Checklist (HSCL-25); Derogatis, Lipman, Rickels, Uhlenhuth, & Covi, 1974). The fourth section discusses the findings and further research with the SL-SLE.
Stressful life events
The concept of stress involves the environment, an organism, time and outcomes. Each component adds to the possible variability of the stress experience; thus research emphasizes different aspects according to its objectives (Monroe, 2008). Research on stressful events focuses on natural contexts, appeals to recollections and examines the relations between life events and the psychobiological mechanisms activated, or the possible negative outcomes.
In the last decades, research on stressful life events has shown its relevance and challenges. Firstly, challenges arise from the fissure between a hypothesized objective environmental stress and the subjective appraisals that precede the response. Complex cognitive appraisals focus on how to overcome, prevent or accept the stressful situation (Folkman & Lazarus, 1986;Lazarus & Folkman, 1987). Therefore, individual variability may question a shared criterion to evaluate and compare life stressors. Secondly, confounding is a sensitive issue in the measurement of stress and psychopathology (Lazarus, DeLongis, Folkman, & Gruen, 1985;Luthar & Zigler, 1991). While the latter point has been systematically controlled (Paykel, 2001), different research methodologies have dealt with the challenges of individual variability and cognitive appraisals.
Interestingly the two most used approaches depart from a shared principle: the cognitive evaluation of a factual stressor will determine to a great extent its consequences. On the one hand, the work of Horowitz, Wilner, and Alvarez (1979) inspired the development of sophisticated methods to assess the perceived stress of any event. On the other hand, the social readjustment model of Holmes and Rahe (1967) provided a method of life stress quantification and established fixed parameters to rate and compare 43 severe life stressors. The authors asked a group of adults (acting like judges) to rate life events according to the amount of readjustment needed to go back to previous homeostasis. An event might be subjectively experienced as positive or negative (controllable or uncontrollable, unexpected or expected), but the comparison and rating process was based on the evaluation of resources needed to face it (readjustment). 2 Although the use of differential weights for stressors has to be cautious (Skinner & Lei, 1980), the model Social Readjustment Rating Scale (SRRS) shows that it is possible to establish certain standards of life stress comparison. Today the scale is used widely and its predictive validity is demonstrated with regard to specific psychopathologies (Woods, Racine, & Klump, 2010) and physical diseases (Mujakovic et al., 2009). After more than three decades of use and revisions, the SRRS is a commonly used method in life stress research (Hobson & Delunas, 2001;Monroe, 2008;Paykel, 2001).
Life events research: psychopathology, risk and vulnerabilities
There is well-documented evidence connecting stressful life events with poor psychosocial adjustment and physical distress, either as recent life events or as cumulative events along life (Foster, 2011;Hammen, 2005;Paykel, 2001;Shonkoff & Garner, 2012). Moreover, Scully, Tosi, and Banning (2000) rejected empirically the hypothesis that mainly undesirable and uncontrolled events are at the base of this association. Among others, stressful life events have been associated with anxiety, post-traumatic stress disorder (PTSD), depression, suicide, aggression, addictions, bulimia and psychosis in children, adolescents or adults.
Occasionally, stressful life events are used as a measure of environmental risk. However, researchers are aware that certain environmental risks are stable social or cultural conditions. Epps and Jackson (1993) summarize risk factors as stable conditions in four domains: family, socio-political, cultural and economic contexts. In addition, stressful events have to be differentiated from vulnerabilities. Today, vulnerabilities are mainly studied as biological conditions (genes, hormones and neurological elements) or as early traumatic experience related to attachment and first relations in developmental research (Masten & Narayan, 2012;Shonkoff & Garner, 2012). In both cases, vulnerabilities are moderators of the impact of life events on well-being.
The occurrence of specific events (e.g. sexual violence) or the accumulation of stressful events is used as a predictor of psychopathology; and usually, their impact is investigated in short periods of time (weeks or months after the event). There are no standard rules for the time elapsed or for the number of severe life stressors needed to predict physical or psychological outcomes. However, research has demonstrated that the increase of severe life stressors leads to the presence of symptoms. Risch et al. (2009) carried out a meta-analysis of 14 studies where the increment of life events (1-3 or more) predicts depression in adults. Caspi et al. (2003) found similar results for young adults (up to 26 years old) and reported 1-3 stressful life events in 86% of a national sample.
The number and characteristics of the life stressors change considerably in multi-trauma or complex trauma research. Here, methods and designs account for circumstances such as war, asylum, forced migration (Terheggen, Stroebe, & Kleber, 2001) or environmental disasters (Masten & Narayan, 2012) where the number of life stressors is notably higher. For instance, in war contexts people suffer mostly up to 11 severe events along their life, although a small group reported up to 27 events (Neuner et al., 2004). In Mexico, Guatemalan refugees reported 8 life stressors on average, and a range of 1-19 events along life (Sabin, Lopes, Nackerud, Kaiser, & Varese, 2003). Additional challenges are posed by research on trans-generational and cumulative risk factors, for instance, studies of abuse and maltreatment in children (Knutson, 1995) or dysfunctional patterns of attachment (Geenen & Corveleyn, 2013). The accumulation of stressors is a bigger threat to cognitive, affective, physical or social well-being (Moen & Erickson, 1995), and it poses additional issues for conventional psychological diagnosis (e.g. PTSD) and treatment (Courtois, 2004).
Poverty and diversity in the study of stressful life events
There is a lack of research about life stress and symptoms in Latin America, and there are strong criticisms of the unrevised use of life stress instruments and diagnosis criteria in other non-Western societies (Eyton & Neuwirth, 1984;Gonzales de Rivera y Revuelta & Morera, 1983).
In developing countries or impoverished communities, the study of life stress has to account for social exclusion. Children and adolescents exposed to poverty face deficiencies in health (malnutrition and health services) and education (quality of schooling), lack of stimulation, poor maternal health, fragile connections to home and school, child labor, religious, ritual or military services, and parents' underemployment (Garmezy, 1993;Grigorenko et al., 2007;Hernandez, 2002;Kotch et al., 1995). Community violence is a complex stressor in contexts of urban poverty. Frequently, it is studied as crime (witnessed and experienced victimization), social insecurity, unemployment, single parenting, and lack of infrastructure and services (O'Donnell, Schwab-Stone, & Muyeed, 2002). Here, the challenge for life events research is to differentiate them from demographic characteristics, because of progressive adaptation and possible diffuse symptoms (Rosenthal & Wilson, 2003).
Diversity has broadened the study of life stress and well-being in psychology (Frable, 1997;Stewart & McDermott, 2004). There is an increasing tendency to include gender, ethnicity, education, and social and civil status in life stress research (Bruner et al., 1994;Hobson & Delunas, 2001;Reyes & Acuña, 2008). In Mexican samples, Acuña (2012) and Bruner et al. (1994) found that women, undergraduate students, single participants and those with basic education assigned higher scores to the life events, while Hobson, Kamen, and Szostek (1998) found differences in the evaluation of the stressors by gender and income in the USA. Ethnicity influences both appraisals and frequency of life events: minorities (African, Hispanic or Native Americans) have more in common than Caucasian middle-class groups of the USA due to migration, living conditions, family and community ties (Pine, Padilla, & Maldonado, 1985). Moreover, ethnicity connects to race-related traumatic experiences that act not as differentiated life-threatening events. Racial discrimination within the family is a strong element of internal vulnerability (Sorsoli, 2007) and a trans-generational shared experience (Tummala-Narra, 2007). However, commonly, racism is not considered a traumatic experience or a stressful life event and its consequences in mental health are underestimated.
Sex difference is a profuse field of study in psychology, but the study of gender as a critical category to understand sex-related differences is scarce (Stewart & McDermott, 2004). Broad reviews conclude that there are specific risks, life events and vulnerabilities that lead to negative outcomes for women and men. In the first case, research found consistent evidence connecting stressful events to affective disorders (mainly depression; Hammen, 2005), while in the second case, studies have focused on suicide and poor health (Foster, 2011;Joiner, Brown, & Wingate, 2005). The higher prevalence of depression in women is the most studied sex difference; however, sex-related biological factors cannot account for it, and yet, the meaning of life events are under-explored. For instance, marital status could be risk or protective factor for women depending on cultural values; life events affecting emotional ties have stronger impact on women (Bebbington, 1998;Bebbington et al., 1998). Sex-differentiated roles take in hierarchical status, values, attributions, satisfaction and retributions, thus women's social roles may act as specific affective and social stressors or protection.
In summary, several decades of research have left a main task: to develop valid instruments able to collect retrospective information (accounting for memory and cognitive bias) and to account for the stressors' relevance in their contexts. Research has to turn its focus from dysfunctional outcomes to the modifying factors that break the causative chain between life stress and emotional and physical distress (Paykel, 2001). Therefore, the main goal of the following studies is to develop and to validate a screening instrument to evaluate the occurrence and impact of stressful life events in Peru.
Study 1: development and content validity of the SL-SLE
The aim of Study 1 is to develop a short list of contextually relevant and severe stressful life events for Peruvian adults. The study uses a multi-method approach with expert sampling (Haynes, Richard, & Kubany, 1995).
Rational selection of items, survey and group interview
First, 28 (of 43) items of the SRRS were selected based on 6 studies: the original SRRS research (Holmes & Rahe, 1967) and 3 revisions, the update of items and national norms (Hobson & Delunas, 2001;Hobson et al., 1998) and an empirical study to address content criticisms (Scully et al., 2000). Because of their misrepresentation of women and ethnic minorities, two studies with Spanish-speaking groups (Mexico) were included: Acuña (2012) and Bruner et al. (1994). Here, subgroups are compared by gender and socio-economic status (SES) using the original items (translated and validated).
There are clear differences in the 20 first positions of the original list of Holmes and Rahe (1967) and the updated version of Hobson et al. (1998). Only original items 1 ('Death of a spouse') and 4 ('Jail term') kept the same position; items originally rated as 7, 10, 12, 15, 18 and 20 changed 23-39 positions. Besides, studies conducted with the original items after 33 and 27 years in the USA (Scully et al., 2000) and Mexico (Bruner et al., 1994) showed important changes in the rating positions.
About the contents, the updating studies of Hobson and Delunas (2001) and Hobson et al. (1998) provide essential information to adapt the items. Firstly, they included 5 new items in the first 20 positions: 'being a victim of crime', 'being the victim of police brutality', 'infidelity', 'experiencing domestic violence/sexual abuse' and 'surviving a disaster' (rated as 8, 9, 10, 11 and 16 in the new national sample). Secondly, two original items (13, 'sex difficulties' and 19, 'change in number of arguments with spouse') were replaced by items about family, parenting and sexuality. Thirdly, two original items related to work (15, 'business readjustment' and 18, 'change to different line of work') were replaced by more precise items and were rated in less relevant positions (33, 'changing employers/careers'; 43, 'changing positions -transfer, promotion'; 45, 'changing work responsibilities'). Other changes are the following: original items 3 and 9 ('separation' and 'reconciliation') were merged and rated in the 12th position; original item 4 ('jail term') was broadened to include 'other institutions'; original item 16 makes explicit the kind of financial 'change' ('problems/difficulties', now rated 14th), while the new version of item 20 does not specify the amount of money (now rated 47th, 'home mortgage'). Once the 28 items were selected, the survey and group interview were conducted.
Participants
Forty-six Peruvian adults answered a survey and a group interview. They were sorted in 2 groups of women (15 of middle and 11 of low SES) and 2 groups of men (13 of middle and 7 of low SES). Participants of low SES are volunteers of a subsidized non-formal education program for adults living in inland regions. Participants of middle SES are residents of Lima and have pursued professional private education. Participants are 25-60 years old (M = 41.2; SD = 9.2); they are born in Lima (n = 25) or in regions of Peru (n = 21). They have mainly bachelor or postgraduate education (n = 25, 77.80%), but 22.2% of the group (n = 10) has secondary or technical education.
Materials
The survey was designed (1) to rate the relevance of 28 life stressors (three-point scale), (2) to assess their accuracy and readability (dichotomous: yes/no) and (3) to suggest new items or modifications. The instrument also collected socio-demographic information. Participants were asked to act as a 'group of experts' in order to evaluate the list of life events as significant (or not) according to their experiences or the experiences of others. Instructions included the definition of a 'relevant stressful event of adult life' as an 'event (positive or negative) of great importance in adult life, in which people must use significant amounts of energy, time or resources to adapt to it or to overcome it'.
The group interview was designed as a collective and visual evaluation of the stressors. Based on their individual answers to the survey, participants had to locate the events, written on A4 paper, in a 'line of severity' drawn on a blackboard. They had to reach a certain consensus, discuss and explain their appraisals. The evaluator followed an interview guide.
Data analysis
Together, qualitative and quantitative information outline the content validity process of the SL-SLE (Creswell, 2003;Haynes et al., 1995).
Quantitative data (survey): Inter-rater reliability (Cronbach's α) and cut-off values were calculated for the total group and for each subgroup (by gender and SES). Cut-off values (means plus one standard error) were used to identify the most relevant stressors (among the 28 selected life events) in the total group and in each subgroup. Analyses were done with SPSS (IBM version 22).
Qualitative information (survey's open-ended questions and group interviews): It was used to evaluate the linguistic accuracy of the items and to suggest modifications or additional stressors. Group interviews also provided information regarding the appraisals of the stressors. The four group interviews were tape-recorded and transcribed. Coding and analysis were supported by NVivo (QSR version 10). The unit of analysis was themes marked in words, sentences or groups of sentences (Esin, 2011). The analysis informs the most severe life stressors and provides an in-depth understanding of their impact on adults' lives.
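A minimal sketch of the quantitative computations described above is given below. It assumes a ratings matrix of 46 raters by 28 events on the 1-3 relevance scale, uses randomly generated ratings, and adopts one plausible reading of the cut-off rule (grand mean of the item means plus one standard error), so it illustrates the procedure rather than reproducing the authors' SPSS analysis.

```python
# Sketch of the Study 1 quantitative analysis: inter-rater reliability (Cronbach's
# alpha with raters treated as "items") and a cut-off value (mean + 1 standard error).
# The ratings are randomly generated; the cut-off formula is an assumed reading.
import numpy as np

rng = np.random.default_rng(0)
ratings = rng.integers(1, 4, size=(46, 28)).astype(float)  # 46 raters x 28 events, scale 1-3

def cronbach_alpha(x):
    """Cronbach's alpha across the columns of x (rows are observations)."""
    k = x.shape[1]
    item_variances = x.var(axis=0, ddof=1).sum()
    total_variance = x.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

# Inter-rater reliability: transpose so that events are rows and raters act as "items"
alpha = cronbach_alpha(ratings.T)

item_means = ratings.mean(axis=0)                        # mean relevance per life event
standard_error = item_means.std(ddof=1) / np.sqrt(len(item_means))
cutoff = item_means.mean() + standard_error              # assumed cut-off rule
highly_relevant = np.flatnonzero(item_means >= cutoff) + 1

print(f"alpha = {alpha:.2f}, cut-off = {cutoff:.2f}, highly relevant items: {highly_relevant}")
```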
Results
On average, the groups used 16.5 minutes to answer the survey and 95 minutes in the group interview. The quantitative rating process of the 28 life stressors reached an inter-rater reliability of 0.80 (standardized Cronbach's α) for the complete group (N = 46) and within-group values from α = .72 to .86. Table 1 shows the 18 items rated as highly relevant in the total group (N = 46) and four subgroups. The total group pointed out 15 life events as highly stressful (cut-off value for the total group = 2.49). Men agreed on 12 life events, while women and groups divided by SES agreed on 13 life events. Three items rated above a subgroup's cut-off point were included in the final list: 'Death of close friend' (included by women), 'Reconciliation with spouse or partner' (included by men) and 'Obtaining a major loan or home mortgage' (included by the low SES participants). Ten less relevant events were not included in the final list. 3 Qualitative information (survey's open-ended questions and group interviews) increased the contextual validity of the items (changes in writing style and contents). Participants said that ' … auto accident' must specify the severity of the incident and that 'Death of close family member' must clarify that it is about 'parents, children or siblings'. Participants considered it mandatory to eliminate 'robbery' from item 9, because it is not comparable with a violent assault or rape. Finally, to equate with other items and to avoid gender bias, the item 'Getting married' was phrased as 'Getting married/living together'.
The groups agreed to include two additional stressors: 'Conflicts or violence in the family' and 'Not to get justice from the state'. The first one brought together contents such as forced migration or separation from children or elderly relatives because of migration, money or inheritance conflicts, maltreatment of children and wives, infidelity, 'interference of relatives' in family affairs, children starting to live independently and 'psychological problems' within the family. The stressor 'Not to get justice from the state' included contents such as inefficient, bad (or 'atrocious') laws of the country, bad judges, denied access to higher education, social conflicts and social insecurity. Table 2 exemplifies the subjective evaluations of the 10 life events rated as highly severe in the group interviews.
When asked about the most stressful events, a participant said 'everything is about feelings, because everything is related to our families, our close environment' (Man, low SES). Events referring to the loss of freedom or independence (i.e. jail, illness, accident, finances) elicit appraisals about individual choice, 'life project' and personal expectations, although the impact on relatives' lives is also mentioned (e.g. events 2, 3, 5 and 8 in Table 2). In fact, the anticipated impact of a personal stressor on one's own family seems to be experienced as an additional stressor or the worst emotional cost of a life event. In several cases, participants emphasize the impossibility of changing the situation, either just after the event (i.e. disaster, disease, crime) or because it is a stable condition that has to be accepted (i.e. death).
Notes to Table 1: Women n = 26; men n = 20; low SES n = 18 and middle SES n = 28. (a) Item rated below group's cut-off value. (b) First item rated equal to or above group's cut-off value.
The 10 most relevant stressors are negative life events, although the total list included four positive events (e.g. Getting married) and three events that could be considered neutral (e.g. Obtaining a major loan or home mortgage). Table 2 shows that participants evaluate the impact of the stressors in terms of the time needed to recover after the events (e.g. events 1, 4, 9 and 10 in Table 2) or the unexpected use of inner resources (e.g. events 7 and 8). Clearly, the most stressful events are uncontrolled and unforeseen, and they are experienced as changes involving loss or harm.
As a result, the protocol of the SL-SLE was elaborated. It included 20 stressful life events listed in random order. Events are described independently, as a single occurrence, and sex bias was eliminated. Physical or psychological symptoms and personal changes that could be related to previous stressors were not included. The list contains events about family, work and social life; they might be considered either positive or undesirable, as well as controllable or uncontrollable. It is expected that they reflect the experience of diverse segments of Peruvian multi-cultural society. Instructions asked participants to mark the events experienced along life. The final version of the instrument is in the appendix.
Table 2 (excerpts). 'If you have family, until you get another job, who will support the household financially? When you have a wife, children who are studying, imagine they will ask you for money, it's frustrating' (Woman, low SES). 'You wonder why, what happened, what did you do wrong … I've seen people faint, cry, not understanding why' (Man, low SES). (9) Being a victim of crime (assault, robbery, rape, etc.): 'The issue of violence, being a victim of a violent act, it is much harder for men because you never foresee being in that situation, you are more vulnerable' (Man, middle SES); 'Two years ago I was assaulted, my tendon, knee and arm had to be plastered, I lost my job, I had many debts … it hurts even today, it was very difficult … I have overcome it little by little' (Woman, low SES). (10) Being involved in an auto accident: 'The auto accident is serious psychologically for anyone, even if it is not a severe crash, it stays with you, it happened to me, when I see a car approaching I have that feeling' (Woman, middle SES).
Study 2, clinical utility: criteria-related validation process
The aim of Study 2 is to obtain a valid instrument to evaluate the prevalence and impact of stressful life events in Peruvian adults. The concurrent validity of the SL-SLE is established with psychological symptoms (HSCL-25) as the related criterion (Cramer & Howitt, 2004).
Participants
A community sample of 844 adult residents of Lima (older than 18 years) participated in this validation stage. They were contacted through a private university (pre- and postgraduate students), a municipality (social promoters and public servants) and an NGO (volunteers and grassroots leaders). Their individual consent was obtained. Sixty percent of the participants were women; participants' mean age was 29.76 (SD = 11.54) and they were mainly born in Lima (n = 541, 65.7%).
Spanish-Language Checklist of Stressful Life Events
The instrument includes 20 positive and negative life stressors (Study 1) and also collects sociodemographic information (gender, age, place of birth, education and employment). Participants were asked to put a check mark next to the events experienced along life. SL-SLE total score is calculated as the unweighted sum of answers to the items (1 = had ever been experienced; 0 = had not). It is expected that higher number of events reflects an increase in life stress (continuous variable, 0-20).
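The scoring rule can be written in a couple of lines. The sketch below assumes hypothetical column names (sle_01 … sle_20) and three invented respondents, and simply illustrates the unweighted sum described above.

```python
# Minimal sketch of SL-SLE scoring: unweighted sum of 20 dichotomous items
# (1 = event experienced along life, 0 = not). Column names are hypothetical.
import pandas as pd

responses = pd.DataFrame(
    {f"sle_{i:02d}": [1, 0, 1] for i in range(1, 21)}  # three invented respondents
)
item_columns = [c for c in responses.columns if c.startswith("sle_")]
responses["sle_total"] = responses[item_columns].sum(axis=1)  # possible range 0-20
print(responses["sle_total"])
```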
Data analysis
Mplus6 was used to estimate model parameters and run the CFA of the HSCL-25. Descriptive and inferential analyses were performed with SPSS (IBM version 22). Descriptive statistics of each life stressor and meaningful differences between subgroups were explored (non-parametric analyses). Four-step hierarchical regression analyses tested the criterion-related (concurrent) validity of the SL-SLE. 6 There is no evidence of the measurement model of the HSCL-25 in Peru or Latin America, thus its two-factor structure was tested (CFA; anxiety and depression, composed of 10 and 15 indicators, respectively). Robust maximum-likelihood estimation (MLM) and the Satorra-Bentler chi-square (SB χ²) were used in order to account for the non-normality of the data. Then the reliability of the scales was confirmed (Cronbach's α).
Modeling with hierarchical regression analyses explored the association between life stress and psychology symptoms. The continuous variables were log transformed, age was mean centered and HSCL-25 outliers were controlled. Three four-step regression models differentiate systematically the effect of gender, age, place of birth (step 1), education (step 2), employment (step 3) and life stress (step 4, SL-SLE) in predicting three-symptom scores (effect indicators): HSCL anxiety, depression and total score. Adjusted R 2 was compared between participants with an SL-SLE total score up to 8 life events (N = 751, three models) and all the participants (N = 811, three models). Table 3 shows the frequencies and percentages of each life stressor in the total group (N = 844) and groups by gender and place of birth.
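By way of illustration, the sketch below implements a four-step (blockwise) OLS regression of the kind described, using statsmodels on synthetic data. The variable names (gender, age_c, born_lima, educ_low, unemployed, sle_total, hscl_total) and the generated values are assumptions for the example, and the original models were fitted in SPSS, so this is a procedural sketch rather than a reproduction of the authors' analysis.

```python
# Sketch of a four-step hierarchical regression: predictors are entered in blocks and
# the adjusted R-squared and F-change are compared across steps. Data are synthetic.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 200
df = pd.DataFrame({
    "gender": rng.integers(0, 2, n),        # hypothetical coding: 1 = woman
    "age_c": rng.normal(0, 10, n),          # mean-centred age
    "born_lima": rng.integers(0, 2, n),
    "educ_low": rng.integers(0, 2, n),      # secondary/technical education
    "unemployed": rng.integers(0, 2, n),
    "sle_total": rng.integers(0, 9, n),     # SL-SLE total score (0-8 range used here)
})
df["hscl_total"] = 1.5 + 0.05 * df["sle_total"] + rng.normal(0, 0.3, n)

steps = [
    ["gender", "age_c", "born_lima"],                                        # step 1
    ["gender", "age_c", "born_lima", "educ_low"],                            # step 2
    ["gender", "age_c", "born_lima", "educ_low", "unemployed"],              # step 3
    ["gender", "age_c", "born_lima", "educ_low", "unemployed", "sle_total"]  # step 4
]

previous = None
for i, block in enumerate(steps, start=1):
    model = sm.OLS(df["hscl_total"], sm.add_constant(df[block])).fit()
    line = f"Step {i}: adj. R2 = {model.rsquared_adj:.3f}"
    if previous is not None:
        f_stat, p_value, _ = model.compare_f_test(previous)  # F change vs. previous step
        line += f", F change = {f_stat:.2f} (p = {p_value:.3f})"
    print(line)
    previous = model
```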
SL-SLE descriptive information
The four most frequent events in the total sample are the most frequent events across the subgroups (they change only one position). More variability is observed in the frequency of events reported for those who are not born in Lima: not any of the seven most frequent events in the total group keep the same position in the first generation of migrants in Lima. Table 3 shows also the relevance of the items qualitatively added to the list of stressors (Study 1). They appear ranked in position numbers 6 and 13.
HSCL-25 factor validity
First, participants (30) with more than one missing response in any HSCL item were excluded - Little's missing completely at random test: χ²(504, N = 841) = 572.030, p = .019. One missing completely at random value was imputed in 40 cases (4.9% of the sample) with a single expectation-maximization process (Scheffer, 2002). Then, inter-item correlations were verified in order to proceed with the CFA.
The two-factor model of the HSCL-25 fits the data well in the Peruvian sample. The overall fit indices for the CFA are S-B χ²(274) = 1047.786, S-B χ²/df = 3.824, p < .001, and scaling correction factor for MLM: 1.431. Following the general criteria of Hu and Bentler (1999), the root-mean-square error of approximation (RMSEA) is below 0.06 (RMSEA = 0.059; 95% CI RMSEA = 0.055-0.063) and the standardized root-mean-square residual (SRMR) is below 0.08 (SRMR = 0.055). RMSEA corrects for parsimony and provides a test against the perfect model, thus it is essential in judging a model with 25 indicators. However, and similarly to results reported by Al-Turkait, Ohaeri, El-Abbasi, and Naguy (2011) in an Arab sample, the criteria for the comparative fit index (CFI) and the Tucker-Lewis Index (TLI) (above 0.90) were not met (CFI = 0.830; TLI = 0.814). The regression weights (factor loadings) are all significantly different from 0 (p < .001, two-tailed). The covariance paths between the factors (anxiety and depression) are significantly different from 0 (p < .001, two-tailed).
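For readers who want to run a comparable check outside Mplus, the sketch below shows a two-factor CFA in Python with the semopy package; the indicator names (a1…a10, d1…d15) are hypothetical, the data frame is assumed to hold the 25 HSCL items, and semopy's default maximum-likelihood objective does not implement the robust MLM/Satorra-Bentler correction used in the paper, so the fit indices would not match exactly.

```python
# Hedged sketch of a two-factor CFA for the HSCL-25 using semopy (not the authors'
# Mplus setup). 'data' is assumed to be a pandas DataFrame with columns a1..a10, d1..d15.
import semopy

model_description = """
anxiety    =~ a1 + a2 + a3 + a4 + a5 + a6 + a7 + a8 + a9 + a10
depression =~ d1 + d2 + d3 + d4 + d5 + d6 + d7 + d8 + d9 + d10 + d11 + d12 + d13 + d14 + d15
anxiety ~~ depression
"""

def fit_cfa(data):
    model = semopy.Model(model_description)
    model.fit(data)                      # default ML estimation (no robust correction)
    stats = semopy.calc_stats(model)     # fit statistics such as chi2, RMSEA, CFI, TLI
    return model, stats

# Example usage (assuming 'data' holds the item responses):
# model, stats = fit_cfa(data); print(stats.T)
```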
Modeling SL-SLE
Six four-step hierarchical multiple regression analyses were carried out with HSCL anxiety, depression and total score as dependent variables. Ten independent variables were organized in four steps to distinguish their capacity to account for the outcomes' variance. The main variable of interest, the SL-SLE total score, was introduced at the fourth step to explore its predictive capacity above the others. The independent variables are not highly correlated, with the exception of stressful life events and age. The collinearity statistics (i.e. tolerance and variance inflation factor (VIF)) are all within accepted limits - the lowest level of tolerance is 0.535, while the highest level of VIF is 1.871.
All the regression models for the dependent variables show significant predictive capacity (adjusted R 2 , with F parameters at p = .000). Analysis including participants with one to eight stressful life events along life (n = 751) showed stronger adjusted R 2 than models tested with all the participants (n = 811). 9 Thus, the first set of analyses has better predictive capacity 10 ( Table 4).
The hierarchical multiple regression analysis revealed that each of the four steps contributes to the regression models (ANOVA, p < .001). Stressful life events (step 4) increase significantly and consistently the prediction of general symptoms, anxiety and depression (p = .000), followed by step 1 (demographic characteristics) in the prediction of general distress and anxiety (p < .000) and depression (p < .05).
Step 2 (education) significantly increases the prediction of general distress and depression (p < .05). The increase in this contribution (adjusted R² and F change) is not significant only at step 3 ('occupation' as a block of variables), for any criterion.
When all the independent variables are compared in step 4 of the regression models (standardized beta weights, β in Table 4), stressful life events remain the most important predictor of psychological symptoms. They uniquely explain 18% of the variation in general distress, 14% of the variation in anxiety and 17% of the variation in depression. Gender and age have a unique predictive capacity for the three dependent variables. The direction of the associations shows that women and younger participants have stronger probabilities of showing higher scores on the three symptom scales. Lower levels of education (secondary and technical) and unemployment (unemployed participants and housewives) are also unique predictors of general distress and depression.
Discussion
Results show that the SL-SLE is an empirically supported and evidence-based instrument to investigate the prevalence and impact of stressful life events in Peruvian adult population (Holmbeck & Devine, 2009). Although further research should extend this conclusion, the SL-SLE shows satisfactory psychometric characteristics and capacity to identify adults at risk of developing symptoms of anxiety and depression.
In accordance with the literature, the content and psychometric validity processes of the SL-SLE resulted in negative (13), positive (4) and neutral (3) events, as well as uncontrollable (9) and controllable (11) events. These events correspond to diverse domains of adult life (relationships, work, family and community life) and they may activate internal and external resources in the individual. The quantitative study provided initial evidence of risk associated with the number of events (especially for age groups and birth place). However, whether or not a person will go through psychosocial or physical impairments will depend on diverse conditions, such as previous life experiences, psycho-biological characteristics, the accumulation of stressors and, mainly, the subjective evaluation of the event and its consequences. The SL-SLE does not assess objective stress; it aims at providing a shared criterion to identify potentially harmful life events for the emotional well-being of adults in Peru.
The relevance of life events related to family and community ties has been studied as a cultural characteristic of Latin American (Hernandez, 2002) and Peruvian (Elsass, 2001) populations. The new items developed for the SL-SLE reveal that family and social contexts are simultaneously a domain of meaningful personal experience and a source of stress and vulnerability.
The study provides normative information about Peruvian adults' life stress. These findings can lead us to some conclusions and further paths of research. In accordance with the literature, the number of stressors does not differ for men and women, and there is a significant increase of life events by age (from 3 to 6 events between 18 and 60 years of age). In both cases, patterns of life events may be explored to better understand the influence of gender and age in life stress. Interestingly, the life event 'change of residence (migration)' was excluded from the group of highly relevant stressors (content validity study). However, the validation process showed that being a migrant in Lima determines an experience of numerous and stressful life events. Although migration may entail a transition out of poverty, especially for young adults (Crivello, 2009), internal and external migration in Peru is a phenomenon associated with lack of social opportunities. It comprises the departure from contexts of severe exclusion (i.e. services of health or education) to the insertion in contexts of urban poverty. Recently, the differential and severe effect of these stressors has been studied as pre-and post-migration stress and has been empirically connected to mental health symptoms in Peruvian migrants (Lahoz & Forns, 2013). Further research is needed to understand in depth the short-and long-term impact of migration for adults' wellbeing in Peru or other Latin American contexts.
The exploration of SL-SLE total scores revealed unforeseen results: there is a small group of participants (4.9%) who declare an unusual number of stressors along their lives: from 9 to 16. This amount of stressors is not only labeled as outliers by psychometric procedures (Hoaglin & Iglewicz, 1987) but it is comparable to the number of stressors reported in contexts of war and forced migration (refugees) (Neuner et al., 2004). Contrary to what could be expected, the inclusion of this highly stressed group weakened the predictive power of life stress on psychopathology (hierarchical models with N = 811 and Pearson's r). These preliminary findings are consistent with current research on multi-traumatized groups. They not only experience unusual amounts of life stressors but also show patterns of complex physical and psychological outcomes. The first ones are mainly connected to cumulative experiences of child abuse, neglect, social violence and disasters, while the outcomes are a challenge for diagnostics and treatment (Courtois, 2004). Further research is needed to clarify the specific characteristics of these participants, their vulnerabilities, possible negative outcomes as well as resources developed to face the repetitive occurrence of severe stressors.
The most important result to emerge from the data is the significant and consistent predictive capacity of stressful events, assessed with the SL-SLE, on psychopathology symptoms. Modeling with hierarchical multiple regressions demonstrates its capacity to identify systematic variations of symptom scores as a function of distal and complex background characteristics (Bryk & Raudenbush, 1987). Two conditions of the study make these findings especially challenging. Firstly, the SL-SLE did not explore a recent or fixed period of time. Commonly, studies focus on proximate events and connect them to reactive episodes of anxiety (mainly PTSD) or depression (for instance, after a significant loss) (Kessler, 1997;Silove et al., 2007). In this study, cumulative stressors and the uncontrolled time between events and outcomes could obscure their impact on well-being. Secondly, as suggested by Luthar and Zigler (1991), this research does not control the presence and intensity of the dependent variables (psychopathology) by focusing on clinical samples. On the contrary, it kept a community-based approach, looking for a broad and diverse group of participants. This study aimed at developing an accurate instrument for research in natural contexts, prioritizing the cultural and social diversity of the target population.
In accordance with research in Western and non-Western contexts, education and employment, along with stressful life events, consistently predict dysfunctional responses and greater risk. Clearly, indicators of social disadvantage (lower education, employment insecurity or unemployment) play a key role in psychosocial well-being of individuals or in the accumulation of stressors (migration).
Age and gender are especially challenging for further research. Age is the most consistent and powerful socio-demographic predictor of mental health (anxiety, depression and general distress), thus it is important to investigate youngsters' life conditions and inner resources as a risk factor in the Peruvian context. Women show greater risk of depression and general distress. This is consistent with previous findings reported in the introduction, and it also provides some paths of analysis for gender. Social roles and expectations ascribed to women might be affecting their well-being. Roles such as family or community caregiving (especially for migrant and poor women), single parenting or social leadership could be associated with symptom manifestation (Ventevogel et al., 2007) or subjective distress (Morote, 2011). Distal risk factors (i.e. child neglect/abuse) are associated with women's depression (Bifulco, Brown, Moran, Ball, & Campbell, 1998), and the combination of distal risks, such as maternal loss (Tennant, 1988) and early absence of one's own mother (Kotch et al., 1995), with current stressors is a major predictor of women's depression. It is important to contextualize women's regulation of emotions because these are not purely individual processes, but culturally constructed patterns of adaptation with different consequences for women's health (Butler, Lee, & Gross, 2007).
Finally, although the results provide evidence of the accuracy of the HSCL-25 scales of anxiety and depression, further research should expand our understanding of somatic symptoms in the expression of emotional distress. In this study, symptoms such as headaches, trembling, faintness, dizziness or weakness, and difficulty falling asleep showed unexpected patterns of correlations with their scales. Cross-cultural clinical research has shown that some expressions of mental distress are culturally dependent. More precisely, unusual expressions of psychosomatic symptoms have been identified in Middle Eastern (Tinghög & Carstensen, 2010), Asian (Pernice & Brook, 1996;Terheggen et al., 2001) and African countries (Kaaya et al., 2002). In Peru, research has already shown the necessity of specific methods to assess subjective and somatic distress in traumatized Quechua groups, expressed for example as worrying memories, headaches, stomach or chest pains, convulsions and general weakness. Traditional psychometric instruments of depression, anxiety or PTSD may find little support in non-Western contexts, while research including culturally relevant outcomes and adapted instruments showed consistent explanatory power (Tremblay et al., 2009).
The sampling method may be considered as a limitation of Study 2. However, as explained, accessibility and diversity were the main criteria to reach the participants, thus several recruiting strategies were used to improve the sample composition. For instance, different institutions (i.e. a university, local and metropolitan municipalities and an NGO) were contacted, and the demographic characteristics of the sample were inspected during the data collection. As a result, group comparisons are possible with the validation sample though generalization has to be cautious.
The impact of severe stressors on the development of symptoms is more obvious when time is limited to 1, 5 or even 10 years. However, the final format of the SL-SLE does not exclude the possibility to restrict the time assessed. This might be a convenient choice in further research, responding to specific objectives or target groups.
Prospective-longitudinal studies would also broaden the clinical utility of the SL-SLE. The capacity of the SL-SLE to predict affective symptoms, PTSD or physical conditions could be tested across a series of time points. The increase of stressful events in fixed periods of time is not only related to affective symptoms, but its interaction with genetic conditions can also be demonstrated (Caspi et al., 2003). The SL-SLE used as a screening instrument (to gather information at an early stage of a relevant risk condition) could help prevent or monitor treatments in diverse physical and psychological conditions. Finally, further research could also include clinical samples or valid diagnostic instruments in order to screen for psychopathology. The promising results of the HSCL-25 CFA allow trustworthy use of the HSCL-25 in Peru.
In conclusion, these findings represent an initial validation of a useful instrument to evaluate the occurrence and impact of relevant life stressors for adults in a Latin American sample. Satisfactory results were obtained to support its capacity to identify adults at psychosocial risk. This study also provides a springboard for the study of life stress in natural contexts and for the search for connections with possible negative or positive outcomes. Among others, promising applications of the SL-SLE are the exploration of distinctive patterns of stressors for men, women, migrants, disadvantaged groups and people with an unusual accumulation of stressors along life. The usefulness of the SL-SLE has to be proved in the prevention and management of the physical and mental consequences of life stress. The exploration of moderating factors of the impact of stressful life events is a promising area of further research.
Notes
2. In Study 1, the quantitative rating process of events and the qualitative appraisals of their severity followed the rationale of the model of Holmes and Rahe (1967).
3. The items are 'illness of close family member', 'assuming responsibility for sick or elderly loved one', 'change in work hours or conditions', 'retirement', 'change in number of arguments with spouse', 'sex difficulties', 'change to different line of work', 'no health insurance', 'trouble with boss' and 'change in living conditions (migration)'.
4. Subsample numbers of participants for each category do not always add up to 844 due to missing information.
5. The translation into Spanish used was made by the Harvard Program in Refugee Trauma and published by Oficina en México del Alto Comisionado de las Naciones Unidas para los Derechos Humanos (2007).
6. Reliability and dimensionality reduction are inappropriate analyses to validate a checklist of life events (Shalowitz, Berry, Rasinski, & Dannhausen-Brun, 1998; Tremblay et al., 2009).
7. The effect size r was calculated by dividing Z (−3.793) by the square root of N (824).
8. The effect size r was calculated by dividing the chi-square value H(2) = 59.138 by N − 1 (825).
9. Results with N = 811: HSCL total score (n = 754, adjusted R² = 0.64, F(10, 743) = 6.714, p = .00), anxiety (n = 753, adjusted R² = 0.71, F(10, 742) = 6.714, p = .00) and depression (n = 754, adjusted R² = 0.51, F(10, 743) = 5.022, p = .000).
10. Interestingly, strong direct associations (r) were found between the SL-SLE total score and the HSCL-25 scales (means), but only until the SL-SLE total score reaches eight events (N = 751, 93% of the sample). R² of stressful life events (SLE) (0-8) with the HSCL total score is 0.484, with anxiety 0.376 and with depression 0.411. When the small group of participants with 9-16 life events (n = 59) is included, the association is not significant: R² of SLE (0-16) with the HSCL total score is 0.051, with anxiety 0.023 and with depression 0.084.
|
v3-fos-license
|
2018-12-20T19:09:40.612Z
|
2017-01-01T00:00:00.000
|
73591518
|
{
"extfieldsofstudy": [
"Business"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://conbio.onlinelibrary.wiley.com/doi/pdfdirect/10.1111/conl.12244",
"pdf_hash": "a2746112a252efb127af28e94e8f0e598beac12c",
"pdf_src": "MergedPDFExtraction",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46707",
"s2fieldsofstudy": [
"Environmental Science",
"Economics"
],
"sha1": "9d1426e7400d1b33ecee20f9390296e6c9ecc470",
"year": 2017
}
|
pes2o/s2orc
|
A Critical Comparison of Conventional, Certified, and Community Management of Tropical Forests for Timber in Terms of Environmental, Economic, and Social Variables
Tropical forests are crucial in terms of biodiversity and ecosystem services, but at the same time, they are major sources of revenue and provide livelihoods for forest-dependent people. Hopes for the simultaneous achievement of conservation goals and poverty alleviation are therefore increasingly placed on forests used for timber extraction. Most timber exploitation is carried out unsustainably, which causes forest degradation. Two important mechanisms have emerged to promote sustainable forest management: certification and community-based forest management (CFM). We synthesize the published information about how forest certification and CFM perform in terms of environmental, social, and economic variables. With the caveat that very few published studies meet the standards for formal impact evaluation, we found that certification has substantial environmental benefits, typically achieved at a cost of reduced short-term financial profit, and accompanied by some improvement to the welfare of neighboring communities. We found that the economic and environmental benefits of CFM are understudied, but that the social impacts are controversial, with both positive and negative changes reported. We identify the trade-offs that likely caused these conflicting results and that, if addressed, would help both CFM and certification deliver the hoped-for benefits. (Author's abstract)
1 Supporting methods
Literature review
To carry out the qualitative literature review, we followed the search protocol recommended for systematic reviews (Pullin & Stewart 2006). The goal was to compare forest variables under two different management regimes, or before and after management implementation. To find relevant publications, we used the literature search engine Google Scholar (https://scholar.google.com/, search performed in April and May 2015) with the following search terms: community OR joint OR open access OR certification OR Forest stewardship council AND forest management AND tropical OR Africa OR Asia OR South America AND impact OR effect AND social OR economic OR environment.
This search returned 38,100 results, which we first sorted by relevance; we then scanned the 1,000 most relevant titles, after which the relevance of search results became too low to justify further processing. Next, for the titles identified as potentially relevant, we read the abstracts and identified studies that measured one or more specific forest values under one of the following management regime combinations: i) FSC-certified industrial vs. conventional industrial; ii) community managed vs. open access (no specific management); iii) FSC-certified industrial vs. FSC-certified community managed; iv) FSC-certified community managed vs. community managed.
We excluded purely theoretical and modeling studies and studies based solely on Corrective Action Requests (CARs) by forest certification bodies, if they did not verify on the ground whether the CARs were fulfilled. We included meta-analyses and systematic reviews if they calculated overall effect size, and highlighted them as such, and did not further include the individual studies on which the reviews were based. We did not include reviews without an overall effect size, but we did use the individual case studies on which these reviews were based. We included only studies from natural tropical and subtropical forests, excluding Australia.
From each study, we extracted the following information: i) variable group (environmental, social, economic); ii) variable (e.g. animal biodiversity, health and safety of logging crews, or harvest costs; Table 1 in main text); iii) management regimes compared; iv) continent and country and; v) outcome of the comparison. We went through all selected studies three times: first time listing all potentially extractable variables, with short descriptions, and the outcomes of the comparisons. We then drew a final list of variables ( Table 1 in the main text), which grouped the existing variables into more general categories, and went through the studies again, to verify whether their results fitted to the new categories. During the writing process, and consultation workshop with representatives from the industry (Precious Woods, A.G.), the Forest Stewardship Council, and Non-Governmental Organizations, several new studies were incorporated into the study, which warranted a third, final check of all included studies. The extraction of information from studies was carried out by ZB and FH, and all studies were cross-checked by both ZB and FH.
We were only able to extract information on whether one management regime was better, same, or worse for a particular variable, but not by how much, as many studies did not quantify the outcomes. Therefore, our review is only qualitative, because we cannot tell whether an improvement reported by one study is equivalent to an improvement in the same variable in a different study. We also emphasize that not all the individual comparisons used are independent, as some studies contributed multiple comparisons. Also, the studies are geographically clustered and were carried out with different degrees of rigor and therefore do not deserve the same weight.
Variables and stakeholders
For forests leased by companies as concessions from the state or a community, the main stakeholder with respect to the economic variables is the logging company, which we presume aims to maximize profits. For forests managed directly by communities, the main stakeholder in terms of economic variables is usually a community enterprise (Humphries et al. 2012). For community enterprises, profit maximization may not be the principal goal -job creation or social capital building might be equally important. Community enterprises may therefore be judged successful even if no profits are generated (McDaniel 2003;Humphries et al. 2012).
Another issue that we do not consider is that some profits from private companies and community enterprises percolate upwards and contribute to national incomes through taxes, royalties, or increased investments (not quantified by any study). Similarly, some corporate profits may benefit local inhabitants through direct payments or general welfare subsidies such as schools, roads, or health services.
The core social value can be described as the welfare of local communities. No study captures all aspects of welfare: the definition and measurement of quality of life is considered a major challenge for scientists and policy makers alike. There are, however, several variables that are presumed to contribute to welfare (Table 1). In this study, we consider welfare to be the opposite of poverty (Hensbergen et al. 2011). The existing literature contains very few social variables that reach beyond local communities.
The environmental variables most often measured relate to carbon sequestration and biodiversity conservation (Table 1). These variables are interrelated and valued principally by the international community (Kuijk et al. 2009). Very few studies measure environmental variables valued specifically by local communities, and these indicators often fail to inform local decision making (Garcia and Lescuyer 2008) (Sheil et al. 2010).
2 Supporting data and reference
2.1 Full references of studies included in the analysis
Numbering corresponds to Figure 1 in the main text.
2.2 Database of comparisons of environmental, economic, and social outcomes for different tropical forest management regimes
|
v3-fos-license
|
2021-10-16T15:14:50.417Z
|
2021-10-14T00:00:00.000
|
244580845
|
{
"extfieldsofstudy": [
"Materials Science"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://downloads.hindawi.com/journals/geofluids/2021/7153015.pdf",
"pdf_hash": "f11fb8e36bfc2d5c9294d90dceba7448e6c06d0f",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46708",
"s2fieldsofstudy": [
"Materials Science"
],
"sha1": "0300a54bec59e962314849b53eec690f6e080d2b",
"year": 2021
}
|
pes2o/s2orc
|
The Effects of Precrack Angle on the Strength and Failure Characteristics of Sandstone under Uniaxial Compression
Characterization of the mechanical properties of cracked rock masses is essential for ensuring the long-term stability of the engineering environment. This paper aims to study the relationship between the strength characteristics of a specimen and the angle of its precrack, as well as the interaction of cracks under uniaxial compression. To this end, two sets of sandstone specimens, containing a single precrack and three precracks respectively, were built using the PFC software. For the former case, both the peak strength and elastic modulus increase to a peak value as the crack angle α gets closer to the forcing (loading) direction. For the latter case, the strength first increases and then levels off as the crack angle α gets closer to the forcing direction, and the elastic moduli are barely affected. For the specimens containing a single precrack, their crack numbers increased approximately in a one-step or two-step stair pattern with increasing axial strain, whereas for the specimens containing three cracks, their crack numbers all showed a multistep growth trend. Furthermore, the failure mode of the specimen is closely related to the precrack angle. However, if the precrack distribution does not affect the original crack propagation path, it will hardly affect the mechanical properties of the specimen.
Introduction
Rock is widely distributed on the earth's surface. As a natural material, it inevitably contains defects such as cracks, which are induced by thermal stress, erosion, earthquakes, human engineering disturbances, etc. [1][2][3]. As an example, at Hornelen, western Norway, sandstone and conglomerate fill a fault-enclosed basin, about 70 × 30 km, which is the remains of a once larger basin. The basin sediments are about 100~200 m thick and consist of continuous transversal cycles of beds about 2 m thick. The cracks and joints there, caused by long-term exposure to low temperatures and ocean erosion, are extremely well developed [4][5][6][7], as shown in Figure 1. The existence of cracks not only reduces the material strength of the rock but also accelerates the damage process, which poses safety hazards to the construction of slopes and underground projects [2,5,7,8]. Therefore, it is of great significance to study the strength characteristics of cracked specimens and the interaction of multiple cracks within a specimen.
The mechanical properties of defected rock masses have been a hot topic in the field of geotechnical engineering, and rich results have been achieved [2,[9][10][11]. Based on the number of pre-existing defects, current research can be grouped into two categories. The first type focuses on the rock mass, where the number of precracks reaches hundreds to thousands [5,6,[12][13][14]; the second type focuses on laboratory specimens, where the number of predefects is generally less than four [15][16][17].
For the first type of research, due to the large size of the specimen, the current research mainly focuses on the location of rock damage [12], the fracture surface roughness [13,18], and the specimen heterogeneity [5,6]. Only a few studies have looked into the strength characteristics of specimens [2]. Shi et al. [2,7] investigated the correspondence between crack distribution modes and rock mechanical properties, as well as the strength damage theory. However, the number of distributed cracks involved in the above studies is excessive; the crack propagation is thus affected by too many factors. As a result, it is hard to identify the influence of the crack interaction on the strength characteristics of the specimen.
For the second type of research, predefects are mainly made by hydraulic cutting (in experiments) or by deleting particle elements (in numerical simulations). The elastic modulus, compressive strength, shear strength, and failure mode of the specimen were analyzed by changing its shape and size [16,17], the confining pressure [19], or the angle [20][21][22], as well as the combination and number of predefects [10,11]. These studies are of great significance for understanding the mechanical properties of defected rock, although the prefabricated cracks are large defects: the width of the cracks exceeds 2 mm [10,14]. Strictly speaking, therefore, the object of such research is fissured rock mass rather than the cracked rock mass commonly observed in nature [5,7]. The mechanical properties of cracked rock mass are obviously not equivalent to those of fissured rock mass, and research on cracked specimens is extremely insufficient. Moreover, current research on multi-fissured rock masses focuses only on the combinations of fissures and lacks a comparative analysis, so it is very hard to understand the specific impact of a fissure on the mechanical properties of a specimen [10,23,24].
In this paper, two sets of sandstone specimens, containing a single crack and three cracks respectively, were built using the PFC software. The relationship between the strength characteristics of the specimen and the angle of the precrack, as well as the interaction of cracks under uniaxial compression, was studied.
Numerical Model of Cracked Sandstone Specimen
2.1. Particle Flow Code (PFC). The PFC 2D software is very convenient for realizing crack prefabrication and is outstanding at simulating the mechanical properties and failure process of rock and soil media [23]. Due to these advantages, PFC 2D was selected for the simulation in this study. In the software, the medium is characterized by particles and the bonds between them, where the particles are simulated as rigid bodies of unit thickness. Two types of bonding effects suitable for this simulation are selected, namely, the contact bond and the parallel bond, as shown in Figure 2. The contact bond reflects the normal and tangential interactions (forces) between particles (see Figure 2(a)), while the parallel bond transmits both force and moment (see Figure 2(b)). It is widely accepted that these two kinds of bonds both exist in the interior of rock and soil [7], so both are used in this paper.
Calibration of Sandstone Mesoscopic Parameters.
To ensure the credibility of the simulation, the model parameters must first be determined. In PFC, the medium is characterized by particles and bonds, so it is necessary to determine mesoscopic parameters that reflect the physical and mechanical properties of the particles and bonds. Due to limitations in observation techniques, these parameters can hardly be obtained through laboratory tests. For uniaxial compression simulation with PFC, the "trial and error" method is usually used to calibrate the mesoscopic parameters of the specimen; the workflow is shown in Figure 3, where m_i is the Hoek-Brown strength parameter [27]. In this method, the full stress-strain curve and the corresponding failure mode of a representative specimen are first obtained through laboratory tests; next, a numerical model is established, and parameters such as the stiffness, elastic modulus, and tensile and cohesive strengths are adjusted until the numerical curve is roughly consistent with the experimental curve; finally, the parameters are fine-tuned until the failure mode of the numerical specimen is consistent with that of the experimental specimen [27]. In this paper, the uniaxial compression tests on sandstone specimens were performed on the MTS815 test machine of the State Key Laboratory for Geomechanics and Deep Underground Engineering, China University of Mining and Technology, as shown in Figure 4. The laboratory specimen is 50 mm by 100 mm (diameter and height), and the loading was displacement-controlled at a rate of 0.002 mm/timestep [2]. An intact sandstone model with 31190 particles was established using PFC 2D. The size and loading strategy of the sandstone model are consistent with those of the laboratory test. The parameters of the numerical specimen were calibrated using the "trial and error" method. Model results are compared with the experimental data: the stress-strain curve and failure mode of the specimen are shown in Figures 5 and 6, respectively.
As shown in Figures 5 and 6, the full stress-strain curve and the failure mode of the numerical specimen are qualitatively consistent with those of the experimental specimen. Note that the simulation curves deviate from the experimental ones in the prepeak stage, because the laboratory specimen exhibits an obvious compaction stage before the peak. To the best of our knowledge, this stage cannot be simulated by any numerical software, including PFC [2,14]. Currently, there are two main ways to cope with this problem. The first is to keep the peak strength and peak strain consistent with those of the actual specimen, which may leave a difference in the elastic modulus [7,[28][29][30]. The alternative is to keep the elastic modulus and the peak strength consistent with those of the actual specimen, which may lead to a significant difference in the peak strain [14].
Figure 3: The "trial and error" method parameter checking process of the PFC model [27].
Considering the study of rock strength to be the primary focus of this research, the first approach was chosen. Furthermore, the relative errors of peak strength and peak strain are 1.7% and 3.8%, respectively. The simulation results qualitatively agree with the experimental results, and the simulation parameters truly reflect the mechanical characteristics of the laboratory specimen.
The microscopic parameters of the intact sandstone specimen determined by the "trial and error" method are listed in Table 1.
Numerical Model of Sandstone Specimen with a Single Crack or Three Precracks
In PFC 2D, a crack, as a planar and finite-sized discrete element, is characterized by a segment with two vertex end points. Crack prefabrication is realized through the Discrete Fracture Network (DFN). In the DFN module of the PFC software, the input parameters for each crack are its length, angle, and center point; the crack width is negligible [23,[31][32][33][34]. In order to study the relationship between the strength characteristics and the angle of the precrack, as well as the interaction of the cracks, two sets of specimens containing a single crack and three cracks were established, as shown in Figure 7.
It can be seen from Figure 7(a) that each specimen in the first group contains one precrack, with the crack angle set to 0°, 30°, 60°, 90°, 120°, and 150°, respectively. The lower left corner of the specimen is taken as the coordinate origin, and the x and y coordinates of the crack center point are 25 mm and 50 mm, respectively. In the second group, two extra fixed-angle precracks, denoted ② and ③, were added to the specimens of the first group. Cracks ② and ③ both have an angle of 45°, and their center points are located at (25 mm, 75 mm) and (25 mm, 25 mm), respectively, as shown in Figure 7(b). In addition, the lengths of the precracks in Figure 7 are all 25 mm. The smooth joint model was used to describe the mechanical properties of the cracks. The parameters used for the model are listed in Table 2 [2]. It can be seen from the table that the existence of cracks weakens the cohesion on both sides of the crack surface.
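To make the precrack layout concrete, the sketch below (plain Python/NumPy, not PFC2D FISH code) computes the two end vertices of each precrack from the center point, angle, and length given above. The coordinates and angles are the ones listed for the specimens; the endpoint formula is ordinary plane geometry rather than anything taken from the PFC manual, and the function name is illustrative.

```python
import numpy as np

def crack_endpoints(center_mm, angle_deg, length_mm=25.0):
    """Return the two end vertices of a straight precrack.

    center_mm : (x, y) of the crack midpoint in mm
    angle_deg : inclination measured from the horizontal (x) axis
    length_mm : total crack length (25 mm for all precracks here)
    """
    cx, cy = center_mm
    theta = np.deg2rad(angle_deg)
    dx = 0.5 * length_mm * np.cos(theta)
    dy = 0.5 * length_mm * np.sin(theta)
    return (cx - dx, cy - dy), (cx + dx, cy + dy)

# Crack 1: center (25, 50) mm, angle varied per specimen.
for alpha in (0, 30, 60, 90, 120, 150):
    a, b = crack_endpoints((25.0, 50.0), alpha)
    print(f"crack 1, alpha={alpha:3d} deg: {a} -> {b}")

# Cracks 2 and 3 in the three-crack specimens: both at 45 deg,
# centered at (25, 75) mm and (25, 25) mm respectively.
for name, center in (("crack 2", (25.0, 75.0)), ("crack 3", (25.0, 25.0))):
    a, b = crack_endpoints(center, 45.0)
    print(f"{name}: {a} -> {b}")
```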
Strength Characteristics of the Cracked Sandstone Specimen
The full stress-strain curves of the cracked specimens are shown in Figure 8, and the extracted variation of the strength with the precrack angle is shown in Figure 9. Theoretically, the specimens with crack angles of 120° and 60°, as well as the specimens with crack angles of 150° and 30°, are not essentially different, so the elastic moduli of these specimens are almost the same, as shown in Figure 8(a). However, the strengths of the specimens with crack angles of 120° and 60° are quite different, which might be due to the dispersion of the particle and bond distribution inside the specimen [35][36][37].
The full stress-strain curves and strength values of the specimens with three precracks are shown in Figure 8. In particular, for the specimens with crack angles of 60°, 90°, and 120°, the difference in their strengths is negligible. Extra uniaxial compression experiments were done on the double-cracked specimen (including only cracks ② and ③, see Figure 9). The results showed that the difference between the strengths of the three-precrack specimens and the double-cracked specimen is very small, which indicates that for the specimens with three precracks, the influence of crack ① on the mechanical properties of the specimen can be ignored when the angle of crack ① is in the range of 60° to 120°.
In addition, for both the specimens with a single precrack or three precracks, the smaller the angle between precrack ① and the horizontal direction is, the more fluctuation the full stress-strain curve presents, as shown in Figure 8.
New Crack Propagation of the Precracked Sandstone Specimen
3.2.1. Initial Crack Propagation. The initial crack propagation of the specimen containing a single precrack is shown in Figure 10. It can be seen that for the specimen with the precrack angle of 0°, new cracks emerge initially in the middle and ends of the precrack, and the development of the new cracks in the middle of the precrack is far quicker than that at the end of the precrack.
For the specimens with precrack angles of 30° and 60°, new cracks emerge initially at the ends of the precracks, showing a clear wing expansion. For the specimen with the precrack angle of 90°, new cracks are randomly distributed within the specimen, which indicates that a precrack with an angle of 90° does not cause stress concentration inside the specimen. This is because, under the uniaxial loading condition, the strain and stress distributions of the specimen are uniform on any horizontal section before the specimen is significantly damaged. The crack distributions of the specimens corresponding to Figures 10(e) and 10(f) are symmetrical to those of the specimens corresponding to Figures 10(b) and 10(c), respectively; so are the initial crack propagation conditions, and they are therefore not presented here. Furthermore, it can be clearly seen that as the precrack angle increases from 0° to 90°, the temporal development of the new cracks shows a downward trend, as shown in Figure 10.
Figure 9: Correspondence between UCS of specimens and precrack angles (specimens with one precrack, three precracks, no precrack, and two parallel precracks at 45°).
Figure 10: Initial crack propagation of the specimens containing a single precrack.
The initial crack propagation of the specimens containing three precracks is relatively more complicated, as shown in Figure 11. In general, the new cracks are located at the ends of the precracks, whereas the initial crack distributions at the C-end of crack ② and the F-end of crack ③ remain almost unchanged. The change of the angle of crack ① mainly affects the initial crack propagation of crack ①, the D-end of crack ②, and the E-end of crack ③.
When the angles of crack ① are 0°, 120°, and 150°, the ends of crack ① are closer to the D-end of crack ② and the E-end of crack ③. The two ends of crack ① coalesce with the D-end of crack ② and the E-end of crack ③, as shown in Figures 11(a), 11(e), and 11(f). For the specimens with crack angles of 30° and 60°, the growth of the initial crack at each crack end is less affected by crack ① because the ends of crack ① are far from the ends of cracks ② and ③, as shown in Figures 11(b) and 11(c).
When the angle of precrack ① is 90°, the internal stress concentration within the specimen is induced by cracks ② and ③. Compared with the one-crack specimen (see Figure 10(e)), the distribution of new cracks in the specimen is no longer uniform: new cracks from precracks ② and ③ penetrate through precrack ①, and no new cracks propagate from the ends of crack ①.
Failure Modes.
The final failure modes of specimens containing a single precrack and three precracks are presented in Figures 12 and 13. The final failure modes of the specimens vary substantially with the precrack angle α.
For the single-precrack specimen with a crack angle α of 0°, the failure mode is mostly vertical splitting failure. Three vertical cracks, extending from the two ends and the middle of the precrack, cut the specimen into strips. Moreover, many accumulated cracks are located at the ends of the precrack, denoted by the yellow ellipses in Figure 12(a). For the specimens with crack angles of 30° and 60°, failure is caused by the gradual expansion of the new cracks along the ends of the precracks. Quite a few new cracks are closely located at the precrack ends, as highlighted by the yellow ellipses in Figures 12(b) and 12(c). There are few new cracks generated in the vertical direction of the precracks, as shown by the blue ellipses in Figures 12(b) and 12(c). This agrees with the finding of Shi et al. [2] that a nonvertical crack will form a stress-shielding circle with a diameter equal to its own length. For the specimens containing precracks with angles of 120° and 150°, the failure modes are the same as those of the specimens with precrack angles of 60° and 30°, respectively, and will not be repeated here. For the specimen with the precrack angle of 90°, the effect of the precrack on the failure mode of the specimen is negligible. The failure of the upper right corner of the specimen is very similar to that of the intact specimen (see Figures 6(b) and 12(d)).
Figure 11: Initial crack propagation of the specimens containing three precracks.
Figure 12: Failure mode of the specimens containing a single precrack.
Figure 13: Failure mode of the specimens containing three precracks.
For the specimens with three precracks, when the angle of precrack ① is 0°, due to the stress shielding effect of the precracks, there are basically no new cracks that emerged in the area between the adjacent precracks. As shown in Figure 13(a), the ends of the three precracks penetrate through each other, which results in the cutting failure of the specimen [38,39]. For the specimen with the precrack angle of 30°, the new cracks mainly occurred in the middle of the specimens due to the dense and uniform distribution of the precracks in this area. For the specimen with the precrack angles of 60°, 90°, and 120°, precracks ② and ③ penetrated through precrack ①, and the new cracks mainly concentrated at the C-end of crack ② and the F-end of crack ③. The failure modes of these three specimens are very similar. The failure modes of the specimens with the precrack angles of 150°and 30°are similar, and the concentrated cracks are mainly distributed at the junction of the A-end of crack ① and the D-end of crack ②, as well as the junction of the B-end of crack ① and the E-end of crack ③.
Crack Number Evolution of the Precracked Sandstone Specimens
New cracks keep emerging during the loading process. The evolution of the number of new cracks (NNC) during loading is shown in Figure 14. In general, the evolution of NNC exhibits a stair-step tendency, i.e., it increases abruptly as the axial strain reaches certain values. The NNC evolution of the single-precrack specimens experiences a one-step (precrack angles of 90° and 120°) or two-step (precrack angles of 0°, 30°, 60°, and 150°) increase. It can be seen from Figure 14(a) that for the specimens containing a single precrack, the number of new cracks increased approximately in a one-step stair shape (precrack angles of 60°, 90°, and 120°) or a two-step stair shape (precrack angles of 0°, 30°, and 150°) with increasing axial strain. Notably, the maximum abrupt increase in NNC occurs at different axial strains for different precrack angles: the corresponding axial strain increases as the precrack angle increases up to 90° and declines thereafter. For the specimens with three precracks (see Figure 14(b)), the evolution of NNC shows multistep growth, which can be attributed to the fluctuations of the full stress-strain curves of the specimens before and after the peak (see Figure 8(b)). Interestingly, the NNCs corresponding to the final failure of the specimens with three precracks are around 4000 with extremely small deviation. For the specimens with a single precrack, when the crack angles are 0°, 30°, 60°, and 150°, the final NNCs are close to 4000 as well. However, when the precrack angles are 90° and 120°, the final NNCs reach up to 7500. By extracting the final NNC and UCS of the specimens (see Figures 14 and 9), it was found that the final NNC increases with the UCS, as shown in Figure 15.
Discussion
The analysis of Figure 9 in Section 3.1 shows that when the angle α of precrack ① is between 60° and 120°, the effect of precrack ① on the mechanical properties of the specimen can be ignored, which is very interesting and worthy of further study.
The initial crack propagation of the double-crack specimen (see Figure 16(a)) and the triple-crack specimens (see Figures 16(b)-16(d)) are shown in Figure 16. The existence of precrack ① inside the three-crack specimens has little effect on the initial crack growth. The D-end of crack ② and the E-end of crack ③ tend to penetrate in both the double-crack and the triple-crack specimens, and crack ① itself, as the penetration path of crack ② and crack ③, only promoted this process, especially for the specimens whose angles of crack ① are 90°and 120°. Therefore, there is very little difference in the crack distribution (including precracks and newly generated cracks, see the yellow dotted lines in Figure 16) inside the specimens, and the bearing structure of the specimens is very similar, as shown in Figure 16. Figure 17 shows the failure modes of the double-crack specimen (see Figure 17(a)) and the three-crack specimens (see Figures 17(b)-17(d)). The failure modes of the four specimens in Figure 17 are highly similar. There are many newly generated cracks in the upper left and lower right corners of the specimens (see the yellow ellipses in Figure 17). In addition, the Y-shaped expansion fissures in the upper right and lower left corners are symmetrically distributed with respect to the center point of the specimens (see the yellow dotted lines in Figure 17). In summary, the 4 main rock blocks generated after the failure of the specimen in Figure 17 are almost identical.
Figure 17: Comparison of the failure modes of the double-crack and triple-crack specimens.
For a specific loading condition, the existence of cracks may not necessarily weaken the strength characteristics of the specimen. From Figures 9, 16, and 17, it can be found that if a precrack does not affect the original crack propagation path (fracture process), it will hardly affect the mechanical properties of the specimen.
Conclusions
In this paper, the relationship between the strength characteristics of the specimen and the angle of the precrack, as well as the interaction of cracks under uniaxial compression, was studied. Two sets of sandstone specimens, containing a single precrack and three precracks respectively, were built using the PFC software. The main conclusions are as follows:
(1) For the one-crack specimens, the peak strength and elastic modulus continuously increase as the crack angle α aligns more closely with the forcing (loading) direction. For the three-crack specimens, a similar pattern was observed for the strength, i.e., higher strength as α gets closer to the forcing direction; however, this increase stabilizes once the angle between the crack and the forcing direction is smaller than 30°. The elastic modulus of these specimens appears to be unaffected by the precrack angle.
(2) For the specimens containing a single precrack, the crack numbers increased approximately in a one-step or two-step stair pattern with increasing axial strain, whereas for the specimens containing three cracks, the crack numbers all showed a multistep stair growth trend with axial strain.
(3) The failure mode of the specimen is closely related to the precrack angle. However, the existence of cracks does not necessarily weaken the strength characteristics of the specimen. If a precrack does not affect the original crack propagation process (fracture process), it will hardly affect the mechanical properties of the specimen.
Data Availability
The data used to support the findings of the study are available from the corresponding author upon request.
Conflicts of Interest
All authors declare that they have no conflict of interest or financial conflicts to disclose.
|
v3-fos-license
|
2016-05-12T22:15:10.714Z
|
2016-02-24T00:00:00.000
|
14074869
|
{
"extfieldsofstudy": [
"Medicine",
"Biology"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fnana.2016.00011/pdf",
"pdf_hash": "9376d3f48fe962e8c6ac1278128aa1bb2d16cab8",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46709",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"sha1": "fd72ad934fc442e8bff79f66df3c19f20c6bc907",
"year": 2016
}
|
pes2o/s2orc
|
Quantitative and Qualitative Analysis of Transient Fetal Compartments during Prenatal Human Brain Development
The cerebral wall of the human fetal brain is composed of transient cellular compartments, which show characteristic spatiotemporal relationships with intensity of major neurogenic events (cell proliferation, migration, axonal growth, dendritic differentiation, synaptogenesis, cell death, and myelination). The aim of the present study was to obtain new quantitative data describing volume, surface area, and thickness of transient compartments in the human fetal cerebrum. Forty-four postmortem fetal brains aged 13–40 postconceptional weeks (PCW) were included in this study. High-resolution T1 weighted MR images were acquired on 19 fetal brain hemispheres. MR images were processed using in-house software (MNI-ACE toolbox). Delineation of fetal compartments was performed semi-automatically by co-registration of MRI with histological sections of the same brains, or with the age-matched brains from Zagreb Neuroembryological Collection. Growth trajectories of transient fetal compartments were reconstructed. The composition of telencephalic wall was quantitatively assessed. Between 13 and 25 PCW, when the intensity of neuronal proliferation decreases drastically, the relative volume of proliferative (ventricular and subventricular) compartments showed pronounced decline. In contrast, synapse- and extracellular matrix-rich subplate compartment continued to grow during the first two trimesters, occupying up to 45% of telencephalon and reaching its maximum volume and thickness around 30 PCW. This developmental maximum coincides with a period of intensive growth of long cortico-cortical fibers, which enter and wait in subplate before approaching the cortical plate. Although we did not find significant age related changes in mean thickness of the cortical plate, the volume, gyrification index, and surface area of the cortical plate continued to exponentially grow during the last phases of prenatal development. This cortical expansion coincides developmentally with the transformation of embryonic cortical columns, dendritic differentiation, and ingrowth of axons. These results provide a quantitative description of transient human fetal brain compartments observable with MRI. Moreover, they will improve understanding of structural-functional relationships during brain development, will enable correlation between in vitro/in vivo imaging and fine structural histological studies, and will serve as a reference for study of perinatal brain injuries.
INTRODUCTION
In the developing human brain, the genesis of cerebral cortex takes place in transient fetal compartments (His, 1904;O'Leary and Borngasser, 2006;Rakic, 2006;Judaš, 2007, 2015;Bystron et al., 2008). It occurs through precise spatiotemporal gene expression of cell proliferation, cell migration, morphogenesis, dendritic differentiation, synaptogenesis, apoptosis, and myelination (Kang et al., 2011;Pletikos et al., 2014). Although corticogenic events take place in more than one fetal compartment, it was shown that some compartments have a predominant role as sites for particular neurogenetic events. On the boundary of the fetal cerebral ventricles, cells proliferate within the ventricular and subventricular zones, producing neurons and glia through mitotic divisions of cortical progenitors (for review see Bystron et al., 2008). Adjacent to the subventricular zone, the migration of postmitotic cells and the growth of axons occur in the intermediate zone (His, 1904;Rakic, 1972;Kostovic and Rakic, 1990;Bystron et al., 2008). Moving outwards, subplate compartment and marginal zone (at the surface of the developing telencephalon) are critical sites for early synaptic interaction (Molliver et al., 1973;Kostovic and Molliver, 1974;Kostovic and Rakic, 1990). In addition, due to its rich extracellular matrix the subplate compartment is also of great importance for guidance of axons (Molliver et al., 1973;Kostovic and Molliver, 1974;Kostovic and Rakic, 1990). Finally, the cortical plate, situated between subplate compartment and marginal zone, is the main locus of post-migratory cortical neuron differentiation (Mrzljak et al., 1988(Mrzljak et al., , 1992Marín-Padilla, 1992). Compared to other primates, apart from its large size, the human brain during development shows a very prominent subplate compartment (Kostovic and Rakic, 1990;Judaš et al., 2013;Hoerder-Suabedissen and Molnár, 2015) and an enlarged subventricular zone (Kriegstein et al., 2006;Bystron et al., 2008;Rakic, 2009;Rakic et al., 2009). The prominent subplate and subventricular zones have been related to the greater number of neurons and connectivity combinations in humans (Kostovic and Molliver, 1974;Kostovic and Rakic, 1990;Judaš et al., 2013;Hoerder-Suabedissen and Molnár, 2015). However, the growth trajectories of these transient human fetal brain compartments have not been completely characterized.
Modern magnetic resonance imaging (MRI) methods allow excellent opportunities to follow the development of transient fetal compartments in vitro (Kostović et al., 2014; Radoš et al., 2006; Widjaja et al., 2010) and even in vivo (Maas et al., 2004; Judaš et al., 2005; Prayer et al., 2006; Perkins et al., 2008; Miller and Ferriero, 2009; Rutherford, 2009; Habas et al., 2010a; Kostović et al., 2014). In MRI, T1 (longitudinal) and T2 (transverse) relaxation times rely on water protons, more specifically, on the mobility of water within the tissues, which changes dramatically during brain development. Thus, the MRI characteristics of telencephalic structures are not easily comparable between the fetal and adult brain (Radoš et al., 2006). Developmental histogenesis is characterized by transient changes in the cellular and extracellular composition of neural tissue and is reflected as changes in MRI T1/T2 signal intensity within specific developmental phases (Kostović et al., 2014; Radoš et al., 2006). In the developing brain, the composition and density of cells, the amount of un/myelinated axonal fibers, and the water percentage within the extracellular matrix result in an inversion of relative MRI signal intensities between the "future cortex" and the prospective white matter (Radoš et al., 2006; Kostović et al., 2014). Yet, using MRI correlated with histology, it is still possible to define transient compartments and spatiotemporal indicators of fetal cerebral cortical development (Radoš et al., 2006; Widjaja et al., 2010; Kostović et al., 2014). As seen on MRI images, from 13 PCW onwards the cerebral wall displays five laminar compartments (Radoš et al., 2006) that vary in MRI T1 signal intensity and can be easily distinguished. These are:
1. Ventricular zone (VZ) and ganglionic eminence (GE), which are composed of tightly packed proliferative cells (as seen, for example, in Nissl stained sections). VZ surrounds the entire surface of the ventricular walls and displays a spatiotemporal pattern in intensity of cell proliferation. It increases in thickness, with a peak thickness approximately around 23 PCW, and afterwards reduces to a one-cell-thick ependymal layer (for review see Bystron et al., 2008). Due to the densely packed cell content, VZ and GE are characterized in T1 MR images by high signal intensity (Radoš et al., 2006).
2. Subventricular zone (SVZ), which appears 1 week before the cortical plate (for review see Bystron et al., 2008). Similar to the VZ, the SVZ is marked by densely packed dividing cells. Nevertheless, while the VZ reduces in thickness during the mid-fetal period, the SVZ continues to increase in thickness (Zecevic et al., 2005). Approximately around 11 PCW, the SVZ is divided into inner and outer layers by tangentially oriented fibers (periventricular fiber rich zone) (Smart et al., 2002).
Despite numerous studies utilizing different imaging modalities (McKinstry et al., 2002; Maas et al., 2004; Perkins et al., 2008; Huang et al., 2009; Trivedi et al., 2009; Habas et al., 2010b; Takahashi et al., 2012; Makropoulos et al., 2014), quantitative data describing the precise growth trajectories of transient embryonic compartments are unfortunately still fragmentary (Kostović et al., 2014; Radoš et al., 2006; Huang et al., 2009, 2013; Kostovic and Vasung, 2009; Widjaja et al., 2010; Huang and Vasung, 2014).
Previous MRI studies of developmental changes in total brain volume (Habas et al., 2010a,b;Kuklisova-Murgasova et al., 2011;Makropoulos et al., 2014) used coarse segmentation of gray and white matter and were not able to relate transient fetal compartments to corticogenic events. One of the reasons is that the fetal transient compartments change dynamically throughout fetal development, as previously mentioned, and show different spatiotemporal relationships with cortical histogenesis (Kostović et al., 2014).
Here we provide quantitative data on the transient fetal compartments using a normative cohort of fetal human brains. The main goal of this study was to generate new volumetric MRI parameters for the analysis of transient fetal compartments, defined on the basis of reliable histological references, as it was shown that some of these parameters are useful for the delineation of cortical growth phases and their correlation with spatiotemporal gene regulation (Kang et al., 2011;Pletikos et al., 2014). Our study had two additional specific goals: (i) to use quantitative analysis of the developmental evolution of transient developmental compartments, especially the subplate compartment, in order to better understand the role of these compartments in later stages of brain development, (ii) to test the hypothesis that the voluminous transient subplate compartment in late human fetal brain is related to extraordinary richness of growing and "waiting" fronts of cortico-cortical axons (Kostovic and Rakic, 1990). These quantitative data are necessary for studies of structural and functional consequences of intrauterine and perinatal injuries of developing human cortex (Volpe, 2009) and for understanding the selective vulnerability of these fetal neural compartments (Kostovic et al., 2014).
Materials
Forty-four postmortem brains of human fetuses and prematurely born infants were included in the current study. Human fetuses were obtained in accordance with the Croatian federal law following the medically/legally indicated abortions or spontaneous abortions performed at the School of Medicine, University of Zagreb. Premature infants were obtained after the routine autopsy procedure. The procedure for the human autopsy material was approved and controlled by the Internal Review Board of the Ethical Committee at the School of Medicine, University of Zagreb. Consent for postmortem examination was obtained from each parent.
Only the brains of fetuses (<28 PCW) without any sign of pathology (as reported by routine pathology examination) and without known genetic abnormalities were included in the study. The brains of prematurely born infants (>28 PCW) were included if the cause of death was attributed to the sudden infant death syndrome or a respiratory disease.
The age of the fetuses and prematurely born infants was estimated on the basis of their crown-rump lengths (CRL; O' Rahilly and Müller, 1984), greatest length (caliper length without inclusion of the flexed lower limbs), and pregnancy records. In order to provide accurate age estimation of fetuses and of prematurely born infants, their age was expressed as weeks from conception (PCW), (Olivier and Pineau, 1962). After examination and age estimation the skull was removed in order to prepare the postmortem brains for MRI scanning. We invested major effort, in collaboration with the pathology department, to remove the skull without or with minimal damage to the brain tissue. Nevertheless, we could not avoid small damages of brain tissue (in three cases) or brain shape distortions.
In order to broaden the quantitative and qualitative MRI analysis, and to provide a histology-based atlas, postmortem brains were divided into four groups (Figure 1):
(a) Group I (Quantitative MRI analysis): Nineteen human brain hemispheres from 14 postmortem brains (aged 13-40 PCW) were used for high-resolution quantitative MRI analysis (Table 1).
(b) Group II (Quantitative MRI-histology analysis): From Group I, we selected five brain hemispheres (aged 13, 16, 24, 26, and 40 PCW) for histological processing. The inclusion criteria were: I. time of fixation, II. developmental phase, and III. absence of tissue damage. (In Table 1, the hemispheres of the same brains are marked with †, and the hemispheres that were histologically processed (Group II) are marked in italic.)
(c) Group III (Qualitative MRI analysis): Ten brains (aged 11, 16, 20, 20, 21, 22, 25, 32, 37, and 40 PCW) were selected and scanned with MRI in order to provide additional T1-weighted MRI properties of transient fetal compartments as a quality check. T1-weighted MRI images were acquired in order to build neuroanatomical coordinate guidelines for delineation of brain structures and to confirm the spatio-temporal MRI properties of the postmortem brain during different developmental stages.
(d) Group IV (Histology analysis): As an anatomical reference, needed for specimens that were not histologically processed, we used 20 histologically processed brains [Nissl, PAS, and AChE stained sections of fetal brains aged 13-40 PCW that are part of the Zagreb Neuroembryological Collection (Kostovic et al., 1991; Judaš et al., 2011)]. Specimens were selected to serve as age-matched controls for delineation of MRI neuroanatomical structures.
MRI Acquisition
Ex vivo brains or separate hemispheres, with postmortem time ranging from few hours to 10 h maximally, were fixed by immersion in 4% paraformaldehyde in 0.1 M phosphate buffer, pH 7.4, and were used to obtain MR images by using the highfield 3.0T MRI device (Siemens Trio Tim). The fixation period ranged from a few weeks to a few years. As we wanted to have a uniform set of MRI signal intensity data that could be comparable between brains, one of our major concerns was the alteration of the MRI signal intensity of the brain that occurs due to the tissue fixation per se. In addition, formalin fixation is one of the factors that change the microstructure of the tissue, affecting and reducing the difference between the gray and white matter water. As those differences are the key to tissue discriminability using MRI, the standard three-dimensional spoiled gradient-echo (3-D GRE) sequence (magnetization-prepared rapid acquisition gradient echo -MPRAGE) failed to adequately discriminate the transient fetal compartments needed for 3D quantitative and qualitative analyses. In order to acquire the high spatial resolution and high-contrast T1-weighted postmortem fetal brain MR images, suitable for quantitative 3D MRI analysis, we had to modify commercially available VIBE sequence (volumetric interpolated brain examination (Rofsky et al., 1999). Having in mind the known challenges of postmortem MRI scanning, we have adjusted the MRI acquisition timing parameters taking into account the differences between behavior and microenvironment of water protons in the living and the formalin-fixed developing brains. Thus, we have reduced the FOV, increased the resolution and number of excitations, and finally modified the TE and TR as well as the flip angle. Finally, the parameters used for MRI acquisition were following: repetition time (TR) 14.5 ms, echo time (TE) 5.4 ms, number of excitations (NEX) 5, flip angle of 12 • , acquisition time ∼1.5 h per brain and section thickness ranging from 0.3 to 0.5 mm depending on the age. All brains were scanned using the wrist small-flexi eight-channel surface coil. The matrix size and the field of view were adjusted in order to obtain an isotropic spatial resolution of at least 0.3×0.3×0.3 mm 3 for 13 PCW old fetal brains, and 0.5 × 0.5 × 0.5 mm 3 for the fetal brains older than 15 PCW. Variability in fixation time can result in differences in MRI signal intensities between brains, however, signal intensity differences within brains were sufficient to distinguish and delineate telecephalic structural changes resulting from microstructural events Widjaja et al., 2010). In addition, according to Tovi and Ericsson, changes in T1 due to fixation occur rapidly but stabilize after 3-4 weeks (Tovi and Ericsson, 1992), which was the minimum fixation period used in our study to ensure tissue stability and comparability between samples.
Histology
Five histologically processed brains, aged 13, 16, 24, 26, and 40 PCW (Table 1), were selected as major representatives of specific phases of prenatal brain development (Kostovic and Vasung, 2009). After the MRI acquisition, these brains were embedded in paraffin and serially sectioned with a coronal slice thickness of 15-20 µm. Cresyl violet and Periodic Acid Schiff-Alcian Blue (PAS-AB) were used for tissue staining. The PAS histochemical staining was conducted for the visualization of acid-sulphated glycoconjugates (Vacca et al., 1978), which provided us a "gold" standard for visualization of the subplate compartment, known to have an abundant extracellular matrix (ECM). In addition, neighboring Nissl-stained celloidin sections were used as guidance for delineation of the cytoarchitectonic boundaries and cellular compartments of the human fetal telencephalon. AChE stained sections (acetyl-cholinesterase histochemistry) were used for the brains that arrived at our Institute within 24 h after death. This was done with the goal of showing growing thalamocortical afferents and the external capsule (Kostovic and Goldman-Rakic, 1983), which have been recognized as a relatively constant border between the subplate and intermediate zone. Images of histological sections were captured using a charge-coupled device (CCD) camera or a Nikon scanner (Figures 2, 3) and were processed using Adobe Photoshop.
FIGURE 1 | A diagram showing the four groups of subjects, taken from the Zagreb Neuroembryological Collection, that were included in our study. Quantitative MRI measurements were obtained on the fetal brains marked with the pink rectangle (Groups I and II).
Image Pre-Processing
We have adapted and calibrated MR imaging tools, initially developed at the MNI (Montreal Neurological Institute) for processing of adult brains, and we have developed a semiautomated pipeline for processing postmortem fetal brain MR images. First, the images were manually cropped to minimize the field of view. The images were afterwards resampled at isotropic voxel sizes of 0.15 mm (age ≤13 PCW) or 0.25 mm (age ≥15 PCW). MRI signal intensity nonuniformity, resulting from field non-homogeneities, was corrected using the N3 method (Sled et al., 1998) with a small spline distance of 5 mm. A tissue mask was obtained by thresholding above background.
Tissue classification based on qualitative MRI and histology
Due to the specific anatomical organization of the human fetal brain, modifications to available processing tools designed for adult and postnatal brains are required to extend existing MRI analysis to prenatal brain. One of the major reasons for modifications, as mentioned above, are the age-dependent changes in T1 MRI signal intensity of transient fetal laminar compartments and inversion of relative signal intensities between "cortex" (high T1 signal intensity) and prospective white matter (moderate T1 signal intensity). In order to process the fetal and premature infant postmortem brains, we have developed a pipeline combining existing automatic MNI tools with several steps requiring semi-automatic and/or manual editing.
An initial tissue classification is performed using the artificial neural network (ANN) algorithm with manually selected tag points for each of the tissue classes (I = background, II = formalin, III = "cortical plate", IV = "prospective white matter with basal ganglia"). At least 100 points per tissue class are taken for a reliable estimation of its mean intensity and variance (Tohka et al., 2004). In comparison to the adult brain, in the fetal brains the partial volumes of CP-formalin in deep sulci, with white matter-like intermediate intensities, were often misclassified as white matter. This was caused by the narrowness of the sulci and the inversion of T1 intensities (CP: high T1 signal intensity; formalin and SP: low T1 signal intensity). Consequently, the segmented images needed to be manually corrected to account for partial volume effects of formalin in narrow sulci, for correction of artifacts (tissue damage), and for masking out unwanted tissues that were attributed to the background (brain stem, pons, mesencephalon, and cerebellum). Using the Display module (MNI toolkit), semiautomatically classified tissues were manually corrected and the narrow sulci were painted in order to extract the cortical plate surface.
After semi-automatic classification, tissues labeled as class IV (prospective white matter with basal ganglia) were extracted and manually painted introducing the five new tissue classes, namely: IV = subplate, V = intermedial zone, VI = proliferative compartments, VII = subcortical gray matter, and VIII = diencephalon. The example of tissue classification on coronal slices can be seen in Figures S1-S3. Although visible, SVZ does not show a continuous 3D appearance on MRI. Therefore, the SVZ was partially classified as VZ (inner subventricular zone), and as an IZ (outer subventricular zone that could be easily traced after the appearance of T1 hypointense periventricular fiber rich zone). After 35 PCW, we could not continuously distinguish the SP from IZ, although we have observed regional differences in MRI signal intensity. Thus, after 35 PCW, SP and IZ were measured together and were classified as one compartment (IV + V) called "fetal white matter." Volumes of the semi-automatically segmented fetal compartments were calculated by multiplying the number of voxels with the voxel unitary volume. In order to account for the effect of minimal tissue shrinkage caused by formalin fixation [(shrinkage of 2.7-3.5% as reported by Boonstra et al., 1983;Schned et al., 1996)], we have calculated absolute but also relative volume ratios for tissue classes III-VIII. Relative volumes were expressed as a percentage of the total telencephalic volume of the hemisphere (not including the diencephalon) or as a percentage of the cerebral volume of the hemisphere (including the diencephalon).
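The volume bookkeeping described above reduces to counting labeled voxels. The following NumPy sketch illustrates the idea; the class numbering and voxel resolution follow this section, but the integer label encoding, array names, and placeholder segmentation are illustrative assumptions rather than the authors' actual pipeline.

```python
import numpy as np

# Assumed integer label codes, following the class numbering in the text
# (III = cortical plate, IV = subplate, V = intermediate zone,
#  VI = proliferative compartments, VII = subcortical gray matter,
#  VIII = diencephalon).
labels = {3: "cortical plate", 4: "subplate", 5: "intermediate zone",
          6: "proliferative compartments", 7: "subcortical gray matter",
          8: "diencephalon"}

voxel_size_mm = 0.25                 # isotropic resampling used for >= 15 PCW
voxel_vol_mm3 = voxel_size_mm ** 3

seg = np.random.randint(0, 9, size=(200, 200, 200))  # placeholder segmentation

# Absolute volume = voxel count * unitary voxel volume.
abs_vol = {name: np.count_nonzero(seg == code) * voxel_vol_mm3
           for code, name in labels.items()}

# Relative volume, expressed as a percentage of the telencephalic volume
# (all classes except the diencephalon, as described in the text).
telencephalon_mm3 = sum(v for name, v in abs_vol.items() if name != "diencephalon")
rel_vol = {name: 100.0 * v / telencephalon_mm3
           for name, v in abs_vol.items() if name != "diencephalon"}

for name, v in abs_vol.items():
    extra = f"({rel_vol[name]:.1f}% of telencephalon)" if name in rel_vol else ""
    print(f"{name}: {v:.1f} mm^3 {extra}")
```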
Surface extraction and registration
Manual pre-alignment of fetal hemispheres in MRI scans to stereotaxic space was a prerequisite for registration of surfaces and outer surface extraction. Each brain hemisphere was first manually registered to a stereotaxic space defined on an adult human template (ICBM152). Registration of the brain hemisphere to ICBM152 stereotaxic space was performed in Register, a GUI module developed at the MNI. For registration of fetal scans with adult templates, we manually defined 10 anatomical tag points on fetal brain scans with their corresponding counterparts on the ICBM152 model. Each brain hemisphere was then co-registered to the adult model using three translations, three rotations, and one scaling option.
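The tag-point registration described above amounts to finding a 7-parameter similarity transform (three translations, three rotations, one uniform scale) that maps fetal landmarks onto their ICBM152 counterparts. The sketch below shows one standard way to estimate such a transform from paired points, the Umeyama least-squares solution; it is offered only as an illustration of the underlying math, not as the procedure implemented in the MNI Register tool, and the landmark arrays are synthetic placeholders.

```python
import numpy as np

def similarity_transform(src, dst):
    """Least-squares similarity transform (Umeyama) mapping src -> dst.

    src, dst : (N, 3) arrays of corresponding tag points.
    Returns scale s, rotation R (3x3), translation t (3,) such that
    dst_i ~= s * R @ src_i + t.
    """
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    mu_s, mu_d = src.mean(0), dst.mean(0)
    src_c, dst_c = src - mu_s, dst - mu_d

    cov = dst_c.T @ src_c / len(src)          # 3x3 cross-covariance
    U, D, Vt = np.linalg.svd(cov)
    sign = np.sign(np.linalg.det(U @ Vt))     # guard against reflections
    S = np.diag([1.0, 1.0, sign])

    R = U @ S @ Vt
    var_src = (src_c ** 2).sum() / len(src)
    s = np.trace(np.diag(D) @ S) / var_src
    t = mu_d - s * R @ mu_s
    return s, R, t

# Example with synthetic "tag points" (10 landmarks, as in the text).
rng = np.random.default_rng(0)
fetal_tags = rng.uniform(-30, 30, size=(10, 3))      # fetal-space landmarks (mm)
true_R, _ = np.linalg.qr(rng.normal(size=(3, 3)))    # random orthogonal matrix
if np.linalg.det(true_R) < 0:                        # keep it a proper rotation
    true_R[:, 0] *= -1
adult_tags = 2.5 * fetal_tags @ true_R.T + np.array([10.0, -5.0, 3.0])

s, R, t = similarity_transform(fetal_tags, adult_tags)
print("estimated scale:", round(s, 3))                # ~2.5
```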
The extraction of inner surfaces from CP and SP, and the CP-pial boundary was fully automated (MacDonald et al., 2000;Kim et al., 2005) and was based on the previously described segmented images. Surfaces were extracted by hemisphere, with 81920 triangles and 40962 vertices. The gyrification index (GI, the ratio of total to exposed area of the pial surface) was evaluated at the pial surface of the cortical plate.
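As a concrete reading of the GI definition used above (total pial area divided by exposed area), the short sketch below assumes a triangulated surface stored as vertex and triangle arrays and uses the convex hull as a simple stand-in for the "exposed" envelope; this is one common operationalization of exposed area, not necessarily the exact one used by the authors' toolbox.

```python
import numpy as np
from scipy.spatial import ConvexHull

def mesh_area(vertices, triangles):
    """Total surface area of a triangle mesh (sum of triangle areas)."""
    v = np.asarray(vertices, float)
    tri = v[np.asarray(triangles)]                      # (n_tri, 3, 3)
    cross = np.cross(tri[:, 1] - tri[:, 0], tri[:, 2] - tri[:, 0])
    return 0.5 * np.linalg.norm(cross, axis=1).sum()

def gyrification_index(vertices, triangles):
    """GI = total pial area / exposed (convex hull) area."""
    total = mesh_area(vertices, triangles)
    exposed = ConvexHull(np.asarray(vertices, float)).area  # 3D hull surface area
    return total / exposed

# Toy usage with a placeholder mesh; a real pial surface would have
# 40962 vertices and 81920 triangles per hemisphere, as in the text.
verts = np.random.rand(500, 3)
hull = ConvexHull(verts)
print(gyrification_index(verts, hull.simplices))        # ~1.0 for a convex surface
```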
Given the rapid growth and change in fetal brain morphology between 13 and 40 weeks of gestation, it was not possible to define a global static surface model to use for surface registration, as can be done with adult brain. Fetal surfaces were instead longitudinally registered by age with reference one to another. The oldest subjects, near 40 PCW, were registered to the reference adult template (ICBM152), thus defining the latest standard stereotaxic space (Robbins, 2004;Lyttelton et al., 2007;Boucher et al., 2009). On the younger brains, the longitudinal surface registration was driven by matching borders of lobes (or regions) as manually delineated on the extracted "inner" surfaces.
Regional and lobar segmentation was performed manually on the inner surfaces of the CP. The surfaces were first divided into 6 lobes [frontal lobe, parietal lobe, occipital lobe, temporal lobe, outer ring of the limbic lobe (gyrus fornicatus), and insular lobe]. The outer ring of the limbic lobe was further split into the parahippocampal gyrus and the cingulate gyrus, resulting in 7 segmented regions in total (Figure 4). After longitudinal alignment of the surfaces, pathlines can be traced in time along the registered vertices of the individual surfaces to observe growth patterns of CP and SP both globally and regionally. For the segmentation of SP and CP, we have used anatomical borders on the inner cortical plate surfaces, i.e., gyri and sulci, which have been used for adult cortical surface segmentation, as described by von Economo and Brodmann (Brodmann, 1909; von Economo and Koskinas, 1925). The CP and SP were segmented only after identifying the primary sulci in the fetal brain (Kostovic and Vasung, 2009). The anatomical borders used for surface segmentation were as follows (Figure 4).
Frontal lobe
The Sylvian fissure (SF) forms early in fetal development (9-13 PCW). The circular sulcus of insulae (CSI) forms during early fetal and mid-fetal development. During late mid-fetal development, the central sulcus (CS) begins to appear and can be identified continuously in the rostro-caudal direction (Kostovic and Vasung, 2009). Thus, CS, SF, and CSI provide anatomical borders of the frontal lobe at the lateral aspect. On the medial aspect, the frontal lobe extends to the cingulate sulcus, which is continuous in appearance already in the mid-fetal phase. The parolfactory sulcus does not appear before the early preterm phase, so from the rostral end of the cingulate sulcus we have extrapolated the line that most resembles the adult parolfactory sulcus (connecting the rostral end of the cingulate sulcus with the substantia perforata anterior), dividing the subcallosal area from the frontal lobe.
FIGURE 4 | Extracted lateral and medial cortical plate surfaces in 25 (upper row), 30 (middle row), and 40 (bottom row) PCW old fetal brains. Regional and lobar segmentation was performed manually and the surfaces were divided into 6 lobes: frontal lobe (violet), parietal lobe (red), occipital lobe (green), temporal lobe (beige or jungle green), outer ring of the limbic lobe [gyrus fornicatus encompassing: parahippocampal gyrus (celadon green) and cingulate gyrus with subcallosal area (orange)], and insular lobe (yellow).
Temporal lobe
As there is no sulcus delimiting the temporal from occipital lobe in fetal brain, we have extrapolated line connecting the Sylvian fissure with occipitotemporal incisures on the lateral aspect. On the medial aspect, the temporal lobe was delimited by collateral sulcus.
Occipital lobe
The borders of the occipital lobe were defined as follows: on the lateral aspect the extrapolated lines connecting the occipitotemporal incisures and parieto-occipital fissure with the Sylvian fissure, on the medial aspect occipitotemporal incisures and parieto-occipital fissure.
Parietal lobe
On the lateral aspect, the parietal lobe is delimited by the central sulcus and extrapolated line connecting the parieto-occipital fissure with the Sylvian fissure. On the medial aspect, the parietal lobe is delimited by the parieto-occipital fissure and cingulate sulcus.
Insular lobe
The circular sulcus of the insula provided a clear border between the insula and the frontal, parietal, and temporal lobes.
(i) Gyrus cynguli and area subcallosa
Delimited by the cingulate sulcus dorsally and the callosal sulcus ventrally.
(ii) Parahippocampal gyrus, uncus, and substantia perforata anterior
The collateral sulcus and rhinal sulcus were defined as borders between the gyrus parahippocampalis and the remaining temporal lobe. The border between the isthmus of the gyrus cinguli and the gyrus parahippocampalis was an extrapolated line connecting the most inferior part of the splenium of the corpus callosum with the parieto-occipital fissure.
Telencephalic measurements
The thickness of each CP and SP was defined at the vertices of each surface. Thickness was measured by taking the absolute distance between corresponding vertices on each surface. It was blurred, on the surface in its native space, with a 5 mm kernel for brains <20 PCW and 10 mm kernel for brains >20 PCW (fwhm) (Boucher et al., 2009). This was done in order to increase signalto-noise ratio. We have used smaller values for blurring than those applied in adults (20-30 mm) because of the size of the fetal brains.
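A minimal sketch of this measure is given below: thickness is the Euclidean distance between linked vertices, and the subsequent surface blurring is approximated here by simple iterative neighbor averaging. The study used heat-kernel blurring at 5-10 mm FWHM; the averaging loop below is only a rough, assumed stand-in, and the mesh adjacency structure is an illustrative assumption.

```python
# Per-vertex thickness between corresponding vertices, plus a crude on-surface smoother.
import numpy as np

def vertex_thickness(inner_vertices, outer_vertices):
    """Absolute distance between corresponding vertices of the two surfaces (mm)."""
    return np.linalg.norm(outer_vertices - inner_vertices, axis=1)

def smooth_on_surface(values, neighbors, n_iter=10):
    """neighbors: list of index arrays, one per vertex (mesh adjacency)."""
    v = values.copy()
    for _ in range(n_iter):
        v = np.array([0.5 * v[i] + 0.5 * v[nb].mean() for i, nb in enumerate(neighbors)])
    return v
```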
Measurements of the surface area, volume, and average thickness of the CP (13-40 PCW) and SP (21-30 PCW) were taken for each segmented lobe and region. The surface areas (mm²) and volumes (mm³) of the CP and SP in different regions and lobes were calculated by first evaluating elemental areas and volumes at the vertices, then summing these measures over the vertices defining each lobe or region. For an elemental area, the area of a surface triangle is distributed equally (weight 1/3) to each of its three vertices. Similarly, an elemental volume is calculated from the volume of the prism formed by the linked vertices of each triangle pair between the two surfaces. As for volumetric measures, the lobar volumes are expressed as an absolute value or as a percentage they occupy in the specific fetal compartment (CP or SP).
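The per-vertex bookkeeping described above can be sketched directly in code: each triangle's area is split equally among its three vertices, and lobar totals are sums over the vertices assigned to a lobe. Variable names and the lobe-label array are assumptions for illustration.

```python
# Elemental per-vertex areas and lobar surface-area sums for a triangulated surface.
import numpy as np

def vertex_areas(vertices, faces):
    """Per-vertex elemental area: one third of the area of every adjacent triangle."""
    a, b, c = vertices[faces[:, 0]], vertices[faces[:, 1]], vertices[faces[:, 2]]
    tri_area = 0.5 * np.linalg.norm(np.cross(b - a, c - a), axis=1)
    per_vertex = np.zeros(len(vertices))
    for k in range(3):                               # distribute weight 1/3 to each corner
        np.add.at(per_vertex, faces[:, k], tri_area / 3.0)
    return per_vertex

def lobar_surface_area(vertices, faces, lobe_labels, lobe):
    """Sum elemental areas over the vertices assigned to one lobe or region."""
    return vertex_areas(vertices, faces)[lobe_labels == lobe].sum()
```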
Statistical analysis was performed using the software SPSS R and Matlab R . Detailed description of each analysis is provided in the results.
Relationship between Postconceptional Weeks and Total Volume of Transient Fetal Compartments
We have used the Spearman correlation in order to test the correlation between age and volume of transient fetal compartments. As expected we found significant positive correlations between age (in PCW) and absolute volume of the hemisphere (r s = 0.953, N = 19, p < 0.01), cortical plate (r s = 0.937, N = 19, p < 0.01), subplate compartment for the period from 13 to 30 PCW (r s = 0.935, N = 17, p < 0.01), intermediate zone (r s = 0.897, N = 19, p < 0.01), volume of subcortical gray matter (r s = 0.963, N = 19, p < 0.01), and diencephalon (r s = 0.955, N = 19, p < 0.01). The absolute volume of proliferative compartments showed significant positive correlation with developmental age from 13 to 25 PCW (r s = 0.917, N = 12, p < 0.01). After 25 PCW, when the peak of volume is reached, the volume of proliferative compartments showed negative correlation with developmental age, that is, a rapid decline (r s = −0.774, N = 7, p = 0.04).
Although it is known that these compartments increase in their volume with age, until now there were no reports on what hemispheric percentage these compartments occupy at the given postconceptional age. For that purpose we took into account the entire volume of the telencephalic hemisphere and we have expressed the volume of each compartment as a percentage of the total volume of the telencephalic hemisphere. We have used the Spearman correlation in order to test the correlation between age and relative volume of transient fetal compartments. The only significant correlations were found between age and relative volume of proliferative compartments (a negative correlation with r s = −0.931, N = 19, p < 0.01) and relative volume of subplate compartment between 13 and 30 PCW (a positive correlation with r s = 0.877, N = 17, p < 0.01). This suggests that the relationship between age and percentage of the hemisphere occupied by certain transient fetal compartment may not be linear.
Therefore, in order to reveal the nature of a relationship between age and relative volumes of transient fetal compartments, we fit non-linear models (second-order polynomial, exponential, and Gaussian), using Matlab. For every fit, we chose between the three functional forms based on the adjusted r 2 value. In all cases, these models provided better fits than a simple linear model. The best-fit parameter values are shown in (Figure 5).
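A hedged sketch of this model comparison is shown below: each candidate form (2nd-order polynomial, exponential, Gaussian) is fit to (age, relative volume) pairs and the winner is chosen by adjusted r². The starting guesses and data arrays are placeholders, not values from the study, and the original analysis was done in Matlab rather than Python.

```python
# Fit three candidate growth models and keep the one with the highest adjusted r^2.
import numpy as np
from scipy.optimize import curve_fit

def adjusted_r2(y, y_hat, n_params):
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    n = len(y)
    return 1.0 - (1.0 - r2) * (n - 1) / (n - n_params - 1)

# candidate functional forms with rough, assumed starting guesses
MODELS = {
    "2nd-order polynomial": (lambda x, a, b, c: a * x**2 + b * x + c, [0.1, -1.0, 50.0]),
    "exponential":          (lambda x, a, b: a * np.exp(b * x), [100.0, -0.1]),
    "gaussian":             (lambda x, a, mu, s: a * np.exp(-(x - mu)**2 / (2 * s**2)), [50.0, 30.0, 10.0]),
}

def best_fit(age_pcw, relative_volume):
    """Return (model name, (adjusted r^2, fitted parameters)) of the best-fitting form."""
    results = {}
    for name, (f, p0) in MODELS.items():
        try:
            popt, _ = curve_fit(f, age_pcw, relative_volume, p0=p0, maxfev=10000)
        except RuntimeError:                    # fit did not converge
            continue
        results[name] = (adjusted_r2(relative_volume, f(age_pcw, *popt), len(p0)), popt)
    return max(results.items(), key=lambda item: item[1][0])
```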
The relative volume of the cortical plate was best predicted from age by using a best-fit 2nd degree polynomial curve (Figure 5A, V_cortical_plate = 0.10 * PCW^2 − 4.75 * PCW + 82.59; adjusted r^2 = 0.74). For the prediction of the relationship between the relative volume of the subplate compartment and age (only between 13 and 30 PCW), a 2nd degree polynomial curve was found to be the most adequate fit (Figure 5B, V_subplate = −0.15 * PCW^2 + 8.21 * PCW − 67.32; adjusted r^2 = 0.87). The relative volume of the intermediate zone was also best predicted from age by using a best-fit 2nd degree polynomial curve (Figure 5D, V_intermediate_zone = 0.08 * PCW^2 − 3.37 * PCW + 55.84; adjusted r^2 = 0.60). An exponential model was the most appropriate for predicting the relationship between the relative volume of proliferative fetal compartments and age (Figure 5C, V_proliferative = 194.78 * e^(−0.15 * PCW); adjusted r^2 = 0.84). Finally, in order to assess the relationship between the relative volume of the diencephalon (percentage of total volume of telencephalon and diencephalon of one hemisphere) and age, we have used best-fit
Changes in Thickness of Cortical Plate and Subplate during Prenatal Brain Development
Mean thickness of CP and SP in segmented lobes and regions was measured in 10 brains (21-40 PCW, Figure 6) since we have detected the appearance of the primary sulci at this time [described also by Chi et al. (1977)].
We did not find significant correlation between PCW and mean cortical plate thickness of seven segmented cortical plate areas (Figure 6). Moreover, curve fitting did not reveal any satisfactory model (low r 2 ) describing age dependent changes in the mean cortical plate thickness of segmented regions. This might be due to the small sample size, below resolution submillimeter changes in cortical thickness during prenatal brain development, or due to the changes in cortical thickness that do not have lobar predominance (Figures 6, 7). Spatio-temporal changes of cortical plate thickness across all vertices throughout the hemisphere have been calculated in all subjects and can be seen in Figure 7 (upper row).
Since we could not detect the subplate compartment continuously at 40 PCW, it was approximated to 0 mm. We have found significant positive correlation between PCW (in the period between 13 and 30 PCW) and mean subplate thickness of five segmented subplate areas (Figure 6; r = 0.884, N = 8, p = 0.004 for the parietal lobe thickness, r = 0.828, N = 8, p = 0.011 for the occipital lobe thickness, r = 0.73, N = 8, p = 0.04 for the frontal lobe thickness, r = 0.774, N = 8, p = 0.024 for the cingulate gyrus thickness, and r = 0.821, N = 8, p = 0.012 for the temporal lobe thickness). The average thickness of the insula and parahippocampal gyrus did not show significant correlation with PCW. Mean subplate thickness reached its maximal value at 30 PCW in all segmented areas. Spatio-temporal changes of subplate thickness across all vertices throughout the hemisphere can be seen in Figure 7 (bottom row).
Regional Surface Growth of Cortical Plate during Prenatal Development
The total surface area of cortex, from 13 to 40 PCW, showed significant and strong positive correlation with age (r s = 0.98, N = 19, p < 0.01). In addition, the level of gyrification (Figure 7), calculated as gyrification index, was also significantly correlated with PCW (r s = 0.5, N = 19, p = 0.03).
Coordinated Changes of Cortical Plate and Subplate Volumes during Prenatal Human Brain Development
Spearman correlation coefficients were computed between all regional and lobar volumes of cortical plate and all regional and lobar volumes of subplate across eight subjects, each at a different age (from 21 to 30 PCW), yielding a 14 × 14 correlation matrix and corresponding p-values (Figure 9). The significance level was set at 0.05, and the p-values were adjusted for multiple comparisons using False Discovery Rate. This resulted in 22 significant correlations ( Figure 9C). Positive correlations between cortical plate of frontal lobe and parietal lobe, between frontal lobe and occipital lobe, between frontal lobe and temporal lobe, and between temporal lobe and parietal lobe are significant across ages (Figure 9A, asterisks, Figure 9C). Furthermore, we have also found significant positive correlations across ages between the volume of subplate of frontal lobe and parietal lobe, frontal lobe and occipital lobe, frontal lobe and parahippocampal gyrus, parietal lobe and temporal lobe, and between subplate of parietal lobe and parahippocampal gyrus (Figure 9A, asterisk, Figure 9C).
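The correlation-matrix analysis described here can be sketched compactly: a Spearman correlation matrix over the 14 regional volumes followed by Benjamini-Hochberg control of the false discovery rate at 0.05. The data matrix (subjects × 14 regional volumes) is assumed, not supplied, and the original analysis was performed in SPSS/Matlab rather than Python.

```python
# 14 x 14 Spearman correlation matrix across subjects with Benjamini-Hochberg FDR control.
import numpy as np
from scipy.stats import spearmanr

def correlation_with_fdr(volumes, alpha=0.05):
    """volumes: (n_subjects, 14) array; columns assumed to be 7 CP then 7 SP regional volumes."""
    rho, p = spearmanr(volumes)                  # full 14 x 14 rho and p-value matrices
    iu = np.triu_indices_from(p, k=1)            # unique region pairs only
    pvals = p[iu]
    order = np.argsort(pvals)
    m = len(pvals)
    thresh = alpha * np.arange(1, m + 1) / m     # Benjamini-Hochberg step-up thresholds
    passed = pvals[order] <= thresh
    k = np.max(np.nonzero(passed)[0]) + 1 if passed.any() else 0
    significant = np.zeros_like(p, dtype=bool)
    sig_idx = order[:k]                          # reject the k smallest p-values
    significant[iu[0][sig_idx], iu[1][sig_idx]] = True
    significant |= significant.T
    return rho, p, significant
```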
As expected the volume of the subplate of frontal, occipital and parietal lobe showed significant positive correlation with the volume of the cortical plate of the same lobes (Figure 9A, asterisk, Figure 9C). Moreover, the volume of the subplate of frontal lobe showed significant positive correlation also with the cortical plate volume of parietal and occipital lobe, while the volume of the subplate of parietal lobe showed significant correlation with cortical plate of frontal and temporal lobes ( Figure 9C).
DISCUSSION
In this study we provide quantitative data on individual transient fetal compartments, such as thickness, as well as total volume, surface area, and gyrification of human brain during development. Using MRI aligned to histological sections, we show growth trajectories of corticogenic regions during human mid-fetal and late fetal periods of cerebral development. These findings are consistent with general embryological data and previous knowledge on timing of intrauterine corticogenic events in humans (Kostovic and Rakic, 1990;Bystron et al., 2008;Kostovic and Vasung, 2009;Kostović and Judaš, 2015). We have observed significant positive correlation between developmental age and absolute volume of cortical plate, intermediate zone, subcortical gray matter, and diencephalon but also between developmental age and subplate from 13 to 30 PCW, and developmental age and proliferative compartments from 13 to 25 PCW. However, the percentage of hemispheric volume occupied by transient fetal compartments did not show correlation with age, except for relative volumes of proliferative compartments, which showed a negative relationship with age, and relative volumes of subplate compartment, which showed a positive relationship with age from 13 to 30 PCW. These results indicate the importance of these transient compartments during the reorganization of the prenatal human brain. Thus, we have obtained quantitative indicators of transient corticogenic compartments, which are useful for better neurobiological interpretation of existing and future developmental MRI data.
Volume of Transient Fetal Compartments as an Indicator of Intensity of Histogenetic Events
The results of this study demonstrate that precise histological delineation of transient fetal compartments based on different histological, histochemical, and cytological methods (Kostovic and Rakic, 1990;Widjaja et al., 2010;Huang et al., 2013) discloses reliable anatomical landmarks for corresponding MR images (Figures 2, 3) and allows volumetric measurements of individual transient compartments. The proliferative (ventricular and subventricular) compartments decrease in size and hemispheric percentage occupied after 25 PCW (Figure 5) indicating the cessation of neurogenesis and switch to gliogenesis (Bystron et al., 2008). However, the developmental neurological interpretation of growth curves for the relative volumes of other transient compartments is not as straightforward.
Growth curves and changes in thickness of the cortical plate reported here are likely difficult to interpret due to the dynamic addition of neurons to increasingly superficial positions of cortex (Rakic, 1982, 1988), regional differences in lamination of isocortical and alocortical (limbic) regions, changes in columnar (vertical) organization, and prominent dendritic growth. MRI studies of fractional anisotropy (FA) indicate microstructural changes of the cortical plate (CP) during development (McKinstry et al., 2002; Takahashi et al., 2012). All these changes may contribute to the growth of CP volume during the first half of gestation, when CP is recognizable as a cell-dense band showing homogenous MRI signal intensity. Nevertheless, during the late mid-fetal and preterm period, when Brodmann (Brodmann, 1909) identified the basic six-layer lamination, the so-called ground typus, CP shows lamination on histological sections and even on T1w MRI images (Kostovic et al., 2008). However, the final cytoarchitectonic features are not achieved until 3 years of age (Judaš and Cepanec, 2007). Parallel with the lamination of the CP, changes occur in the organization of vertically aligned embryonic columns, which are composed of young migratory neurons (Rakic, 1988, 1995; McKinstry et al., 2002). Although our results show significant correlation between volume of the cortical plate and age, the relationship between age and the relative volume of telencephalon occupied by cortical plate is not straightforward. The cortical plate occupies the highest percentage of telencephalon in the early development (up to 20 PCW) and during the last trimester (after 30 PCW). This might be explained by the fact that from 20 to 30 PCW the subplate compartment displays a growth spurt. Furthermore, although the growth of CP until 20 PCW can be attributed to addition of neurons, it is very likely that the increase in volume of CP after 25 PCW is not caused by significant addition of new neurons (Bystron et al., 2008; Rakic, 2009; Rakic et al., 2009), although some late migratory neurons (Sanai et al., 2011) may contribute to the late developmental volume of the CP (Kostović and Judaš, 2015). Similarly, the addition of glial cells (Dobbing and Sands, 1973) is also not a massive event in the CP (Mrzljak et al., 1988, 1992).
FIGURE 9 | (A) Correlation matrix (Spearman's correlation coefficient - color code on the right) between all seven regional volumes of cortical plate and subplate across eight subjects, each at a different age (21-30 PCW). Significant correlations are marked with *. (B) Uncorrected p-values for the correlation coefficients. (C) Significant correlations (FDR-adjusted p-value < 0.05), where a black matrix entry indicates significance. Regional volumes of the cortical plate are marked with numbers 1-7 [parietal lobe (1), occipital lobe (2), frontal lobe (3), insula (4), cingulate gyrus (5), temporal lobe (6), and parahippocampal gyrus (7)]. Regional volumes of the subplate compartment are marked with numbers 8-14 [parietal lobe (8), occipital lobe (9), frontal lobe (10), insula (11), cingulate gyrus (12), temporal lobe (13), and parahippocampal gyrus (14)].
While the growth of dendrites of principal cortical neurons is accelerated after the ingrowth of thalamocortical afferents (Mrzljak et al., 1988, 1992), around 24-26 PCW (Molliver et al., 1973; Kostovic and Rakic, 1990; Kostović and Judaš, 2010), the relocation of thalamocortical fibers from the subplate to the CP most likely influences the shape of the SP and CP volume growth curves and their thickness during late gestation.
Finally, all these factors may partly explain why we did not find a significant difference in mean cortical thickness between segmented regions of the cortical plate (Figure 6). This could also be due to undetectable sub-millimeter discrete differences of the immature cortex, or to changes in cortical thickness that are not detectable with our segmentation. Detailed vertex-based analysis revealed that between 16 and 21 PCW the first regions of the cortical plate to become thickest are regions around the central sulcus (Figure 7, upper row). Afterwards, cortical plate thickening displays central-to-frontal and central-to-occipital gradients (Figure 7, upper row). Moreover, a recent study from Huang et al. (2013) revealed that the time courses of FA drop are distinct in different brain regions during the first two trimesters of prenatal development. According to the authors, the FA drop during the first 20 PCW is most pronounced in the frontal cortical areas (Huang et al., 2013), which coincides with cell differentiation, cessation of neuronal migration, dendritic and axonal growth, synapse formation, and cell adhesion (Bystron et al., 2008). Thus, our results are in line with those reported previously in the literature.
Subplate Compartment
Delineation of SP from the intermediate zone during mid gestation was not problematic due to the presence of the external capsule situated at the deep border of SP (Kostović et al., 2014) (Figures 2, 3: red dotted lines). SP is recognizable in MRI due to the hydrophilic extracellular matrix (Judaš et al., 2005; Radoš et al., 2006; Widjaja et al., 2010). The delineation of the deep boundary of SP can be challenging during late gestation due to the formation of gyral white matter (Kostović et al., 2014). As well, the superficial border of SP at the interface between SP and CP is difficult to delineate during early stages due to the formation of a second CP (Kostovic and Rakic, 1990). Thus, the changing histological and histochemical properties at the interface between SP and CP, and SP and white matter (Kostovic and Rakic, 1990), certainly influence our measurements.
Despite these factors, it is evident that the volume of SP increases with age between 13 and 30 PCW, reaching the maximum around 30 PCW in most areas, occupying up to 45% of the entire telencephalic volume (Figures 5, 6), and being almost 4 times thicker than CP (Figure 6). The maximal size of SP during this period may reflect an increased amount of "waiting" cortical afferents within SP, which form transient synapses before continuing into cortex (Rakic, 1977; Kostovic and Rakic, 1990; Kostović and Judaš, 2007). After penetration of thalamocortical fibers into the CP, between 24-28 PCW (Kostović and Judaš, 2010), an additional convergence of associative and commissural fibers also waits in SP before entering the CP (Judaš, 2002, 2010; Kostović and Jovanov-Milošević, 2006). Supporting evidence for this possibility is that cortical areas with absence of callosal input, such as primary visual cortex (area 17), contain thin SP while prestriate cortex shows thick SP (Kostovic and Rakic, 1984). The SP is more prominent in associative cortical areas (Figure 7, bottom row), which are strategically arranged in perisylvian cerebral territories. This supports an original hypothesis that, evolutionarily, SP is related to the increased number of corticocortical connections (Kostovic and Molliver, 1974; Kostovic and Rakic, 1990; Judaš et al., 2013). Moreover, sequential ingrowth of fibers into the subplate, followed by the waiting period within the subplate, and the final relocation to the cortical plate suggests that during the peak period of fiber ingrowth the volumes of the subplate and cortical plate within the same areas should be related. We therefore expected to find a positive correlation between subplate volume and cortical plate volume in different anatomical regions during this developmental period (21-30 PCW) (Figure 9). The volume of the cortical plate of the frontal, occipital, and parietal lobes showed positive correlation with the volume of the subplate of the same regions (Figure 9, asterisks), indicating related growth of these transient fetal compartments.
Macroscopic Development and Microscopic Histogenetic Changes
The cerebral cortex is expanded in humans largely due to an increased number of cortical columns rather than increased cortical thickness (Rakic, 1995). However, growth of the human cerebral cortex is not homogeneous in space or time. During the last trimester of human gestation, telencephalic volume and surface area expand immensely, especially in frontal, parietal, and temporal areas (Figure 8). The vast expansion of CP occurs after the majority of neurons are born and situated in their final laminar positions, and embryonic columns are formed. This raises the question of the substrates underlying the intensive CP growth during the last third of gestation that we and others have observed (Retzius, 1900; His, 1904; Dobbing and Sands, 1973; O'Rahilly and Müller, 1984; Garel, 2004; Grossman et al., 2006; Trivedi et al., 2009; Habas et al., 2010a; Clouchoux et al., 2012; Lefèvre et al., 2015). During this phase, the dendrites of pyramidal neurons mature (Mrzljak et al., 1988, 1992; Marín-Padilla, 1992), cortico-cortical afferents arrive (Kostovic and Rakic, 1984; Kostović and Jovanov-Milošević, 2006; Kostović et al., 2014), and glia are generated (Bystron et al., 2008). The dendrites of pyramidal neurons develop rapidly after 26 PCW (Mrzljak et al., 1988, 1992) and significantly contribute to cortical volume (Petanjek et al., 2008). Development of gyral white matter (Kostović et al., 2014), which is related to the diminishment of SP (Figure 7, bottom row) and the formation of gyral convolutions (Figure 7), is also a crucial factor in the morphogenesis of the late fetal cortex (Kostovic and Rakic, 1990; Kostović et al., 2014). Patterns of cortical convolutions are unique to each individual human brain (Lohmann et al., 1999). These individual patterns and the majority of gyri and sulci emerge during prenatal and early postnatal development (Connolly, 1950; Chi et al., 1977), reflecting cortical maturation as well as ingrowth of cortical afferents (Goldman and Galkin, 1978; Goldman-Rakic and Rakic, 1984). The appearance of gyri and sulci that we have observed, with the first appearance of deep fissures (sylvian, parieto-occipital, and calcarine) followed by the emergence of the central sulcus (around 21 PCW), primary sulci (around 25 PCW), secondary sulci (around 33 PCW), and tertiary sulci (around 40 PCW), is in accordance with previous descriptions (Retzius, 1900; Connolly, 1950; Chi et al., 1977). The rapid increase of gyrification during the late fetal period (Figure 7) coincides with the explosive development of corticocortical fiber connections, suggesting their possible role in gyrification (Kostovic and Rakic, 1990; Van Essen, 1997; Kostović and Jovanov-Milošević, 2006; Huang et al., 2009; Takahashi et al., 2012; Mitter et al., 2015).
CONCLUSION
This study demonstrates that quantitative volumetric, surface area, and thickness data obtained by MRI-histological analysis of transient cellular compartments in the human fetal cerebrum can serve as indicators of the spatio-temporal intensity of major prenatal neurogenic events. The volume of proliferative compartments decreases dramatically after 25 PCW, while the extracellular-matrix-rich, synapse-containing subplate compartment reaches its maximum volume and thickness around 30 PCW before decreasing again. We relate this phenomenon to the pattern of growth of thalamocortical and corticocortical pathways. Moreover, during mid-gestation, the subplate zone occupied nearly half of the total hemispheric volume, indicating the relevance of the subplate compartment during human brain development. Quantitative data on the cortical plate show no significant age-related change in mean cortical thickness, whereas surface area, volume, and level of gyrification show exponential growth during the last trimester of gestation. However, as we did observe spatio-temporal areal differences in cortical thickness (vertex-wise analysis), we interpret this pattern of cortical plate differentiation as consistent with coincident differentiation of neurons, growth of dendrites, transformation of embryogenic columns, ingrowth of axons, and synaptogenesis with subsequent development of cortical convolutions. These data will improve our ability to identify transient fetal compartments in neuroimaging data of the prenatal human brain.
LIMITATIONS
There are several limitations to our study: Firstly, the major encountered limitation is our small sample size. Secondly, as fetal brains were extracted from the skull we could not always prevent shape distortions or tissue damage that could affect some of our measurements (gyrification index and cortical plate thickness).
Thirdly, as some of the brains were available as a result of sudden infant death syndrome or especially, respiratory disease, it is possible that some damage may have occurred in these brains, potentially altering their structure.
In order to account for the known tissue shrinkage of 2.7-3.5% that is attributed to the formalin fixation (Boonstra et al., 1983;Schned et al., 1996), we reported relative volumes of transient fetal compartments. Nevertheless, we cannot rule out the minor effects of fixation on obtained absolute measures (such as the changes in cortical thickness).
We have resampled the MR images to isotropic voxel sizes of 0.15 mm (age ≤13 PCW) or 0.25 mm (age ≥15 PCW) because we needed to scale down from adult-size to fetal-size brains while retaining their structures (gyri and sulci). Although we corrected non-homogeneities with a small spline distance of 5 mm using the N3 method (Sled et al., 1998), we could not fully correct the non-uniformities in the images; consequently, the initial tissue classification (relying on discrete classification using an artificial neural network and partial volume estimations) was not optimal. This led to partial volume effects that, to an extent, influenced our measures.
AUTHOR CONTRIBUTIONS
LV Designed the study, conducted analysis, wrote the paper, and interpreted the results. CL Developed algorithm for fetal MRI image processing. MR and MP contributed to the fetal brain collection and acquisition. JG and SK contributed to data analysis and interpretation. JR and EF contributed to data analysis. MR contributed to data analysis and interpretation. PH contributed to interpretation of results. AE contributed to image processing design and interpretation of results. IK Designed the study, wrote the paper, and interpreted the results.
|
v3-fos-license
|
2019-02-05T18:02:22.466Z
|
2016-06-01T00:00:00.000
|
131988019
|
{
"extfieldsofstudy": [
"Geology"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://tos.org/oceanography/assets/docs/29-2_tandon.pdf",
"pdf_hash": "af6a062a737d1b3baf0c27db4592e27ee3fe6a14",
"pdf_src": "ScienceParseMerged",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46710",
"s2fieldsofstudy": [
"Environmental Science"
],
"sha1": "af6a062a737d1b3baf0c27db4592e27ee3fe6a14",
"year": 2016
}
|
pes2o/s2orc
|
Technological advancements in observing the upper ocean in the Bay of Bengal : Education and capacity building
Because the monsoon strongly affects India, there is a clear need for indigenous expertise in advancing the science that underlies monsoon prediction. The safety of marine transport in the tropics relies on accurate atmospheric and ocean environment predictions on weekly and longer time scales in the Indian Ocean. This need to better forecast the monsoon motivates the United States to advance basic research and support training of early career US scientists in tropical oceanography. Earlier Indian field campaigns and modeling studies indicated that an improved understanding of the interactions between the upper ocean and the atmosphere in the Bay of Bengal at finer spatial and temporal scales could lead to improved intraseasonal monsoon forecasts. The joint US Air-Sea Interactions Regional Initiative (ASIRI) and the Indian Ocean Mixing and Monsoon (OMM) program studied these interactions, resulting in scientific advances described by articles in this special issue of Oceanography. In addition to these scientific advances, and while also developing long-lasting collaborations and building indigenous Indian capability, a key component of these programs is training early career scientists from India and the United States. Training has been focusing on finescale and mixing studies of the upper ocean, air-sea interactions, and marine mammal research. Advanced methods in instrumentation, autonomous robotic platforms, experimental design, data analysis, and modeling have been emphasized. Students and scientists from India and the United States at all levels have been participating in joint cruises on Indian and US research vessels and in training participants in modern tools and methods at summer schools, at focused research workshops, and during research visits. Such activities are building new indigenous capability in India, training a new cadre of US scientists well versed in monsoon air-sea interaction, and forging strong links between Indian and US oceanographic institutions.
INDIAN OCEANOGRAPHIC CAPABILITY
In India, teaching and research in oceanography began at Andhra University, Waltair, shortly after independence in 1947, followed by the establishment of other university departments. In recent decades, India has made impressive strides in all areas of oceanography. The National Institute of Oceanography (NIO), Goa, was established in 1973; the National Institute of Ocean Technology (NIOT), Chennai, and the Indian National Centre for Ocean Information Science (INCOIS), Hyderabad, were set up in 1992 and 1999, respectively, by the Department of Ocean Development (now the Ministry of Earth Sciences, or MoES). These institutes have contributed to the development of basic scientific knowledge of the Indian Ocean (e.g., Jayaraman, 2007), as well as to applied ocean science and technology that have direct social impact. For example, NIOT operates a regional oceanographic buoy network consisting of moored buoys at a dozen locations in the northern Indian Ocean (Venkatesan et al., 2013). INCOIS is a major partner in the international collaboration in maintaining the Research Moored Array for African-Asian-Australian Monsoon Analysis and Prediction (RAMA; McPhaden et al., 2009) and the Indian Ocean Argo float network (Riser et al., 2016). INCOIS also partners with the Indian Institute of Tropical Meteorology (IITM), Pune, and the India Meteorology Department (IMD), Delhi, in forecasting severe weather and monsoon variability using coupled models. NIO leads national efforts in coastal oceanography (Chaitanya et al., 2014; Mukherjee et al., 2014), equatorial current measurements (Sengupta et al., 2004; Murty et al., 2006), long-term expendable bathythermograph (XBT) lines, and continuing ship-based sections using traditional conductivity-temperature-depth (CTD) casts at O(50-100) km spacing (Murty et al., 1992; Shetye et al., 1996; Gangopadhyay et al., 2013).
THE NEED FOR ADVANCED CAPABILITIES
The response of the upper ocean to surface fluxes, as well as the fluxes themselves, are shaped by the unique thermodynamic structure of the Bay of Bengal, with a shallow fresh layer above a deep subsurface warm layer (Sengupta et al., 2004; Mahadevan et al., 2016, in this issue). Earlier ocean measurements (Harenduprakash and Mitra, 1988; Sanilkumar et al., 1994; Bhat et al., 2001; Rao and Sikka, 2005) and models typically resolved the ocean at horizontal scales of roughly 10 km or larger, and vertical scales of 10 m or larger. Recent assimilation of data from moorings, CTDs, and Argo floats into numerical ocean models (see Chowdary et al., 2016, in this issue) points to an urgent need for measurements and an understanding at higher vertical and horizontal resolutions, as well as a focus on the near-surface ocean. Recent studies using both forced ocean models and coupled models suggest that ocean physics on subgrid scales is critically important for realistic simulations and forecasts. In particular, operational models have serious deficiencies in simulating the shallow, fresh layer in the Bay of Bengal: the model ocean mixed layer is too deep (Ravichandran et al., 2013; Fousiya et al., 2015).
NIOT, Chennai; Indian Institute of Technology (IIT), Madras, Chennai; Space Application Centre, Ahmedabad (ISRO/SAC); and Tata Institute for Fundamental Research (TIFR), Hyderabad. Though all attendees had master's degrees or more advanced degrees in physics, biology, or mathematics, their backgrounds and exposure to oceanography were quite varied depending on the missions of their home institutions. The content of the lectures thus progressed from very introductory to the latest observational and modeling studies in physical oceanography.
The workshop featured about 20 lectures of 1.5 hours each by four US scientists. The first week featured an introduction to ocean turbulence and ocean observations by Karan Venagayamoorthy (Colorado State University) and Lou St. Laurent (Woods Hole Oceanographic Institution). The second week featured lectures on upper-ocean processes and their observations by author D'Asaro (University of Washington) and modeling by author Tandon (UMass Dartmouth). In addition, G.S. Bhat (IISc Bangalore) presented a lecture on surface fluxes, and R. Venkatesan (NIOT Chennai) discussed observations in the Indian Ocean. Mornings were devoted to lectures and lecture-specific discussion, while afternoons and evenings featured demonstrations and team meetings to make progress on projects. Five oceanographically relevant demonstrations were conducted in the afternoons using a compact, portable "weather in a tank" setup provided by author Tandon (courtesy of John Marshall of the Massachusetts Institute of Technology; http://paoc.mit.edu/labguide/projects.html), which reinforced the concepts learned in morning lectures.
Workshop attendees were divided into four teams that each included participants from different institutions.Each team, named after a river terminating in the Bay of Bengal, was assigned to work on a project that involved using an upperocean data set, a flux data set, or modeling.The projects focused on the following topics: one-dimensional estuarine modeling (guide: K. Venagayamoorthy), North Atlantic Tracer Experiment upperocean dissipation data (guides: Lou St. Laurent and Eric D' Asaro), Tropical Rainfall Measuring Mission data analysis (guide: V. Venugopal from IISc), and Exploring Arabian Sea data set using one-dimensional modeling (guide: Amit Tandon).The students worked on their projects into the evenings and over the weekends, with several groups reporting that they worked on their projects overnight as the presentation day grew closer.The groups reported on their projects on the last day of the workshop, with all students participating in the presentations.
Anonymous written feedback was collected after the workshop.The attendees were asked which components of the workshop appealed to them and which aspects they would like to see retained or changed in the future.They were asked about the balance between the time spent on content of the workshop, including demonstrations and the time needed for the projects.The anonymous survey elicited unanimously positive responses on the demonstrations and the teaching.
Both introductory and advanced topical lectures were highly appreciated by the students.They commented on the enthusiasm of the teachers and the active learning with demonstrations, and they appreciated the effort put in by the teachers in making difficult topics accessible.The surveys suggested that two hour-long lectures per day with the rest of the time devoted to demonstrations, discussions, and projects would work best for future workshops.Topics for future workshops suggested by attendees included glider training and training related to air-sea flux moorings, internal waves, and geophysical fluid dynamics, and there was a demand for more lectures on upperocean processes.Additionally, a book on real-time observations in the ocean is planned with US and Indian authors from more than 15 institutions.
Training on US Ships
Indian, Sri Lankan (see Box 1), and US students and junior scientists participated in all three R/V Roger Revelle cruises (Figure 2) in the Bay of Bengal in 2013, 2014, and 2015. An intensive water sampling, filtration, and analysis campaign was incorporated into all cruises over the three years. Researchers from India supported collection of water samples to measure nutrients, high-performance liquid chromatography (HPLC) pigments, particulate organic carbon (POC), dissolved organic carbon (DOC), colored dissolved organic matter (CDOM), dissolved inorganic carbon (DIC), pH, alkalinity, and phytoplankton taxa, and they provided cross calibration for flow-through nitrate, oxygen, and chlorophyll sensors. The cruises led to a joint effort to describe the large-scale biogeochemical distributions within the Bay of Bengal (Sarma et al., 2016, in this issue). There was also onboard training for marine mammal observers (see later section). Each cruise also featured evening lectures, presentations, and scientific discussions led by Indian, Sri Lankan, and US members of the science party. All early career scientists on the cruises were encouraged to begin plotting, analyzing, and discussing the available data streams.
Ongoing Training
A key aspect of modern oceanographic research is the close coupling of fieldwork, data processing, data analysis, and scientific synthesis. Training in this synergy has been mostly one-on-one. The skills addressed in this training are diverse and have been accumulated through the wisdom of community best practices. Examples include identifying the spectral noise levels of particular instruments, matching temperature and conductivity signals to minimize salinity spiking, and making subtle corrections for lateral displacements between ship-mounted ADCPs and GPS/heading sensors to account for differential ship rotation rates, especially for the nonstandard ADCP configurations (e.g., deployments in the well or in hull, or on over-boarding poles on the side of the ship) on some of the cruises. These skills include both instrument engineering and subsequent data processing, which are more effectively conveyed by working closely with individuals than through reading documentation. The process of modifying existing codes and algorithms to work for a particular situation teaches the type of flexibility and innovation that leads to robust scientific capacity.
Shipboard Training on Indian Ships
Four The work focused on the operation and use of measurement systems as well as onboard lectures, individual instruction, and educational activities built into the scientific sampling.The overall goal was to build a core group of young Indian scientists and technicians capable of using modern physical oceanographic equipment to address current oceanographic problems.Sensors, measurement methods, and scientific analysis were all emphasized, as it is their confluence that leads to the most productive and innovative experimental science.
Instrumentation training focused on combining two systems capable of measuring temperature, salinity, and velocity, the most important quantities for understanding ocean circulation.One system, the Oceanscience uCTD, profiles temperature, salinity, and pressure from a vessel underway, at 4-5 kts for Sagar Nidhi, to achieve high spatial resolution.It is deployed off the stern of the ship with a small, manually operated winch system.Profiles are made every few minutes continuously for many days using a sensor that can be easily damaged if it hits the side of the ship.Furthermore, the winch system can overheat, and it needs maintenance and repair.These difficulties provided excellent opportunities for training.During the first cruise, US technician Michael Ohmart installed the uCTD and provided instruction on its use.When it needed repairs, the Indian scientists and technicians learned to diagnose the problems with coaching by Ohmart.By the second cruise, all installation and operation, and some repair, was done by early career Indian scientists; by the third Sagar Nidhi cruise, the entire operation was conducted by Indian shipboard participants.The uCTD requires a team of at least three to operate; continuous operations thus involved up to a dozen personnel, provided excellent opportunities to include students at many skill levels, and clearly promoted team building.Nearly 4,000 uCTD profiles taken on the Indian cruises comprise a major component of the ASIRI-OMM data set.Not a single probe was damaged during the cruises.
The second system, the RD Instruments ADCP, measures velocity from a set of acoustic transducers mounted on the ship.Navigational and orientation information are necessary to convert these data to Earth coordinates.Indian oceanographers had little prior experience with shipmounted ADCPs before ASIRI-OMM (Sagar Nidhi's 75 kHz ADCP was not operational when this program began).During the first cruise, a side-polemounted system was installed, but it was not sufficiently rigid.Furthermore, it was difficult to merge navigational and orientation information into the ADCP data stream.Most of the effort during this cruise was spent diagnosing these problems and attempting solutions, a joint effort between Eric D' Asaro and S. Shivaprasad of INCOIS, the chief Indian contact for the ADCP work.As a result, on the second cruise, S. Shivaprasad installed a quality pole-mounted ADCP system and a GPS-based attitude system.By the third cruise, Sagar Nidhi's ADCP was replaced with an operating 300 kHz system, which was supplemented by a 500 kHz pole-mounted system.Of equal importance, Dipanjan Chaudhuri, an Indian Institute of Science graduate student, wrote software that duplicated the manufacturer's processing software.Thus, indigenous expertise in both the installation and processing of ADCP measurements was developed.
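To make the coordinate-conversion step concrete, the sketch below shows the core of what such processing does: rotate ship-frame velocities into Earth coordinates using heading, then remove the ship's motion using GPS-derived velocity over ground. This is a simplified illustration under assumed conventions, not the manufacturer's software or the processing code written during the cruises; real processing also handles beam geometry, pitch and roll, transducer misalignment, and the ADCP-GPS lateral offset mentioned above.

```python
# Minimal sketch of ship-mounted ADCP velocity conversion to Earth coordinates.
import numpy as np

def ship_to_earth(u_fwd, v_stbd, heading_deg):
    """Rotate forward/starboard velocity components into east/north components.
    heading_deg is the ship's heading clockwise from true north."""
    h = np.deg2rad(heading_deg)
    u_east = u_fwd * np.sin(h) + v_stbd * np.cos(h)
    v_north = u_fwd * np.cos(h) - v_stbd * np.sin(h)
    return u_east, v_north

def ocean_velocity(u_rel_east, v_rel_north, ship_u_east, ship_v_north):
    """Water velocity over ground = (water velocity relative to ship) + (ship velocity over ground)."""
    return u_rel_east + ship_u_east, v_rel_north + ship_v_north
```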
The small-scale salinity fronts and eddies of primary interest to ASIRI-OMM are spatially localized and change with time.A flexible sampling strategy that uses information merged from multiple ships, satellites, and models is far more effective here than employing a predetermined, fixed sampling strategy.In particular, maps of surface salinity from the Aquarius satellite combined with maps of ocean eddies from satellite altimetry provided by the Space Applications Centre of the Indian Space Research Organization were effective in directing shipboard sampling toward regions of low-salinity riverine water.Reliable Internet communication was essential for such a strategy.During the first cruise, a slow Internet connection was available only on the ship's bridge with the captain's permission.Improved transfer speeds and access were made available on the second and third cruises.
The practical sampling instructions were supplemented by evening lectures on Sagar Nidhi by both US and Indian scientists and students.The talks covered both background scientific topics, recent findings on the cruise, and plans for the next day's work.
The success of both of these indigenous technological improvements and the increasing scientific experience of the Indian team became clear during the third ASIRI-OMM cruise.ORV Sagar Nidhi and R/V Roger Revelle worked together (Figure 3) to sample an evolving salinity front, first on a regular grid involving both ships and then in an adaptive pattern following a drifting surface buoy tagging the front.On Sagar Nidhi, the adaptive sampling was conducted entirely by the Indian team, with continuous uCTD measurements used to create evolving maps of the density structure that were then used to direct the ship track to resample the front.A Lagrangian float was deployed from Sagar Nidhi within a cluster of surface drifters deployed from Revelle.The combined efforts of the two ships and autonomous instrumentation deployed from them enabled production of detailed maps of the temperature, salinity, and density structure of the frontal region along with air-sea fluxes and oceanic mixing rates.
Glider Training
Modern physical oceanography increasingly relies on autonomous (i.e., robotic) vehicles to sample the ocean continuously, remotely, and without requiring the presence of a research vessel.Development of an indigenous capability for using this technology (Figure 4) is an important goal of ASIRI-OMM, with a special emphasis on using the Seaglider, developed and operated by the Applied Physics Laboratory of the University of Washington (APL/UW) and commercially available from Kongsberg Marine.The Seaglider typically dives to 1,000 m while traveling horizontally about 5 km
SUMMARY AND LESSONS LEARNED
The success of these training and capacity building programs resulted from a commitment by US and Indian scientists to the effort and the understanding on both sides that progress on the scientific problems was strongly connected to the success of the educational efforts.The initial demonstration of this commitment was an important component in overcoming bureaucratic problems on both sides.The high level of education and strong existing oceanographic expertise in India was also critical in allowing the incremental training from US investigators to have a significant impact.Most important to our success was the enthusiasm and eagerness to learn shown by the early career Indian and US scientists.
The main success of these efforts has been to improve indigenous measurement capability at a technical level.Indian scientists now make measurements that were not possible before the ASIRI-OMM collaboration.Continuing efforts will focus on data analysis, publication, and most critically, the development of indigenous scientific programs using these new measurement skills to address current problems in oceanography, particularly those relevant to India.
For many young scientists from India and the United States, this collaboration provided their first exposure to tropical oceanography and monsoon research.We hope that the examples of successful capacity building initiatives described in this article will help other international oceanographic collaborations.Longlasting collaborations and friendships that transcend national boundaries have been built by ASIRI-OMM largely due to the program's capacity building and training.
This special issue of Oceanography demonstrates that scientific progress has resulted from collaborations by these international teams, and almost every article in this issue has authors from multiple countries.We hope that future efforts will incorporate the knowledge presented in all the fine-to large-scale observations and models into predictive monsoon models.
FIGURE 1 .
FIGURE 1.The upper-ocean physics workshop at the Indian Institute of Science, Bangalore, featured lectures (upper left), demonstrations (lower left), and short group projects.Participants and instructors are shown in the photo on the right.
FIGURE 2 .
FIGURE 2. (a-c) Scientists from multiple countries aboard US R/V Roger Revelle.(d) Early career scientists from Indian and US institutions on the Revelle deck with Indian ORV Sagar Nidhi in the background.(e) Marine mammal operations.
ACKNOWLEDGMENT
FIGURE B1.Training sessions for (a) radiosonde launching and (b) data processing were conducted at the Colombo Meteorological Department.(c) Elementary school children visiting R/V Revelle and (d) small boat operations/ training took place in July 2014 off Sri Lanka.
FIGURE 3 .
FIGURE 3. A snapshot of ship positions and the assets deployed from them during the ASIRI-OMM collaborative cruise, August-September 2015.The ship icons show ORV Sagar Nidhi (red), R/V Roger Revelle (blue), and the Robotic Oceanographic Surface Sampler (ROSS; small yellow symbol) during a frontal mapping experiment.The Wirewalker instruments are denoted by Ws, and SOLO floats appear as elongated float icons.Lagrangian float 75 is the yellow circle marked 75.Red arrows mark the drifters that measure near surface salinity and temperature, green arrows denote temperature drifters, and purple arrows indicate wave drifters.The faint white threads mark the tracked positions of all assets.Figure courtesy of Jared Buckley
FIGURE 4 .
FIGURE 4. US and Indian scientists on board ORV Sagar Nidhi with two modern autonomous instruments, the Indian Seaglider (left) and a Lagrangian float.
FIGURE 6 .
FIGURE 6. (left) Example of species identification homework (sketch by Ajith Kumar).(below) Participants in a bioacoustics short course taught by Kathleen Stafford (far right) and Mark Baumgartner (sixth from left) at the Indian National Centre for Biological Sciences, December 10-12, 2014.
Upper-Ocean Physics Workshop at the Indian Institute of Science, Bangalore (July 2014)
The ASIRI-Effects of Bay of Bengal Freshwater Flux on Indian Ocean Monsoon (EBOB) project had a dedicated training and capacity-building component.As part of the training, US scientists taught courses in physical oceanography at the National Aquatic Resources Research and Development Agency (NARA) in Sri Lanka, and NARA scientists visited the United States.The Partnership for Observing the Global Oceans (POGO) supported part of this activity through scholarships.Capacity building included conducting training courses in instrument deployment, maintenance, and retrieval and in data processing for a group of students from NARA, the Colombo Meteorological Department, and the University of Ruhuna (FigureB1a,b).A large number of Sri Lankan university faculty members, government scientists, undergraduates, and high school students visited R/V Roger Revelle during its port calls to Colombo (FigureB1c).Tens of Sri Lankan scientists participated in Revelle cruises, jointly deployed instruments, and collaborated on data analysis.Periodic cruises were conducted by NARA using R/V Samuddrika (FigureB1d) for coastal measurements.During these cruises, Sri Lankan scientists gained experience in deploying shallow moorings and gliders.A set of new instruments was donated through ASIRI to improve NARA's measurement and scientific capabilities, and especially to equip R/V Samuddrika with sensors that could record advanced air-sea measurements.A number of young Sri Lankan scientists are involved in research under the supervision of US scientists, and some have used their research for preparing PhD and MS theses.An undergraduate from the United States visited Sri Lanka for the summer and worked under the mentorship of senior NARA scientists, helping to process data and to set up an around-the-clock Ocean Observation and Early Response Center.
|
v3-fos-license
|
2019-10-17T09:09:54.603Z
|
2019-10-09T00:00:00.000
|
204912908
|
{
"extfieldsofstudy": [
"Geography"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "http://www.ccsenet.org/journal/index.php/gjhs/article/download/0/0/40977/42328",
"pdf_hash": "4ca82e786e612c76cc8f10ee5f6c4ab484dfcf9a",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46714",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science"
],
"sha1": "4b11d6e0b8960b96475d5c453d0b1b5585b2a1ac",
"year": 2019
}
|
pes2o/s2orc
|
A Scoping Review on Nutrition Challenges Among People Living With HIV/AIDS in Sub-Saharan Africa
The connection between under-nutrition and HIV is bidirectional. It affects the quality of life as well as the survival of affected people. Nevertheless, people living with HIV/AIDS (PLWH) face various nutritional challenges that hamper the fight against the scourge. This study therefore sought to map literature on the nutritional challenges among PLWH in sub-Saharan Africa and guide future research in nutritional management to improve health outcomes for PLWH. A systematic search was done from the following sources: PubMed, the Cochrane Database of Systematic Reviews, EBSCOhost (CINAHL and Academic Search Complete), Web of Science, and Google Scholar. In addition, information was obtained from unpublished studies, which included book chapters, reference lists, theses and conference papers. Eleven (11) studies met the inclusion criteria and were used for data extraction. The studies were based in different countries of sub-Saharan Africa: one was carried out in Senegal, two in various West African countries, one in Burkina Faso, one in Ethiopia, one across several sub-Saharan African countries, two in Zambia, one in Zimbabwe, one in Cameroon, and one in Ghana. Most of the studies established the main nutrition challenge facing PLWH to be food insecurity. Based on the findings of the study, it can be concluded that some of the main nutrition challenges include food insecurity, lack of nutritional support among PLWH, late detection of HIV, the huge cost of treating severe acute malnutrition, and lack of feeding supplementations.
Background
Over 37 million individuals were living with HIV across the globe as of 2017 (WHO, 2018). Despite a drastic decline in the number of new infections, there has been a rapid increase in the number of individuals living with HIV (Kharsany & Karim, 2016; WHO, 2018). This can be attributed to the fact that people are living longer and healthier lives because of effective antiretroviral therapy (ART). When taken as prescribed, ART significantly reduces viral load besides facilitating immune reconstitution (Cloete et al., 2010; Ensoli, Cafaro, Monini, Marcotullio, & Ensoli, 2014; Karim, 2017). Good nutritional status is also highly significant for individuals who are infected with HIV. It is worth pointing out that HIV attacks an individual's immune system (Mbirimtengerenji, 2007). During the early phases of infection, individuals demonstrate no visible signs of illness. While good nutrition is key for people living with HIV/AIDS (PLWH), challenges are still being experienced. This study therefore sought to map evidence on the nutritional challenges faced by PLWH in sub-Saharan Africa to guide future research in nutrition and HIV/AIDS. The burden of the epidemic falls mainly within poor communities, most noticeably in sub-Saharan Africa (Almeida-Brasil et al., 2018). At the household level, HIV/AIDS may have a highly disastrous impact on food security and nutrition (Ivers et al., 2009). HIV affects more individuals than it infects. Every dimension of food security, such as availability of food, access, and use, is at risk within environments in which there is a high HIV/AIDS prevalence (Badri et al., 2005; Bigna, Plottel, & Koulla-Shiro, 2016). Therefore, finding lasting solutions to the different nutritional challenges among PLWH is highly beneficial in ensuring that the adverse impacts associated with the HIV/AIDS epidemic are avoided.
Lack of proper nutrition leaves PLWH unable to maintain a healthy and high-quality life. Given that infection with HIV damages a person's immune system to a great extent, it results in other kinds of infections, such as diarrhea and fever (Alebel et al., 2018; Berhe, Tegabu, & Alemayehu, 2013), which have the ripple effect of lowering food intake and interfering with the ability of the body to absorb food. The affected individual is therefore weakened, becomes malnourished, and loses weight (Alebel et al., 2018; Duggal, Chugh, & Duggal, 2012; Elfstrand & Florén, 2010). Nutritional support and care are integral components of any action taken. While there are a number of benefits linked to proper nutrition for PLWH, in sub-Saharan Africa a number of challenges are faced, which result in malnutrition among PLWH (Duggal et al., 2012; Loos et al., 2017). At the time of the review, there was no existing published synthesis on the nutrition challenges among PLWH in sub-Saharan Africa. Our research question was: what evidence is available on nutritional challenges among PLWH in sub-Saharan Africa?
Methods
Arksey and O'Malley's 2005 scoping methodological framework (Arksey & O'Malley, 2005) guided this review. A three-step process was followed. The protocol for the scoping review is published elsewhere (Dzinamarira & Mashora, 2019). First was article search using keywords as detailed in Table 1. Thereafter all eligible references were imported to EndNote X9 software. Duplicates were removed. This was followed by a title screen. Next, we performed abstract screen for eligibility. Articles that met the criteria detailed in Table 2 underwent full text review for eligibility. Extraction of data took place after this, and analysis of the results was done through the use of qualitative thematic analysis. We employed NVivo version 12 to analyze and code the data. A scoping review protocol was developed a priori (Dzinamarira & Mashora, 2019) but was not registered on PROSPERO as PROSPERO currently does not accept scoping review protocols. The protocol was guided by Preferred Reporting Items for systematic review and meta-analysis protocols (PRISMA-P) 2015 guidelines.
Table 1 (excerpt). Search strategy: keywords "sub AND Saharan AND Africa AND nutrition OR nutritional challenges AND people living OR HIV/AIDS", searched 2/14/2019 in EBSCOhost (CINAHL and Academic Search Complete), 236 articles; the same keywords searched 2/14/2019 in Web of Science, 1,320 articles. Total articles from primary search: 2,557.
Study Design
The study adopted a qualitative scoping review design. A scoping review maps literature on a given topic or research area (Pham et al., 2014). Scoping reviews offer the chance to identify key concepts, research gaps, and the kinds and sources of evidence, with the aim of informing practice, policymaking, and research (Pham et al., 2014).
Literature Search
A systematic search was done from the following sources: PubMed, the Cochrane Database of Systematic Reviews, EBSCOhost (CINAHL and Academic Search Complete), Web of Science, and Google Scholar. In addition, information was obtained both from unpublished studies, which included book chapters, reference lists, theses and conference papers. We considered articles published from January 2001 to January 2019.
Study Selection
A thorough review was done to ensure that included studies met all the aspects in the inclusion criteria. The main keywords, which were used for the search were: nutrition challenges, People living with HIV and Sub-Saharan Africa. Detailed information on searches is available on Table 1.
Data Extraction
Extraction of data took place through the use of a standardized data extraction form. The data that were extracted include: the author name, the publication year of the article, the geographical area of the studies, and research design. Characteristics of the included studies are detailed in Table 3.
Quality Assessment
The 2018 version of the Mixed Methods Appraisal Tool (MMAT) was used to evaluate the risk of bias for the studies included in the review (Hong et al., 2019). The studies that met the inclusion criteria were assessed on a number of areas, which include: clarity of the question and research objectives; capacity of the data collected to meet the research questions; data collection from sources that are highly suitable; and rigor and appropriateness of the tool used to analyze the data. Assessment was also done based on accuracy of the sampling technique used; representativeness of the population; rate of response; and the research conclusions. An overall percentage quality score was calculated for each included study. Scores of ≤50% were regarded as low quality, 51-75% as average quality, and 76-100% as high quality.
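As a small illustration of the scoring rule described above, the following minimal Python sketch (the function name and example scores are hypothetical, not part of the MMAT itself) maps an overall percentage score to the three quality bands.

```python
def mmat_quality_band(score_percent: float) -> str:
    """Map an overall MMAT percentage score to a quality band."""
    if score_percent <= 50:
        return "low quality"
    elif score_percent <= 75:
        return "average quality"
    else:
        return "high quality"

# Hypothetical overall scores for a set of included studies
for score in [40, 60, 80, 100]:
    print(score, "->", mmat_quality_band(score))
```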
Collating, Summarizing and Reporting the Results
The research used thematic analysis for reporting the research findings from the existing literature. The researcher coded the evidence reported in the included articles. NVivo version 12 was employed for data management. Structuring of the literature was done based on the themes which were derived from the study outcomes.
Screening Results
The initial search yielded 2,557 articles. Of the total, 1,437 articles remained after duplicates were removed. A further 1,171 were excluded after screening titles, and 232 articles were excluded after screening abstracts. This left 23 articles for full text review (Supplementary File 1). Of these, only 11 articles met all the inclusion criteria. A detailed PRISMA flow chart is available in Figure 1.
Level of Bias for Included Studies
All the included studies passed through the methodological quality assessment, which was done through the use of the Mixed Methods Appraisal tool (MMAT)-Version 2018. All studies had a score of between 40% and 100%.
Characteristics of Included Studies
Eleven (11) studies met the inclusion criteria, and were used for data extraction. The studies were based in different countries, which form part of the Sub Saharan Africa. One of the studies was carried out in Senegal, two studies were carried out in various West African countries, one study was carried out in Burkina Faso; one study was carried out in Ethiopia and one of the studies was carried out in different countries forming part of the Sub Saharan Africa. Two of the studies, which were included, were carried out in Zambia, one in Zimbabwe, one in Cameroon, and one in Ghana. All the studies, which were included in the review, were published between the year 2001 and 2018.
Study Findings
All the studies strived to provide information concerning some of the main nutrition challenges among PLWH in sub-Saharan Africa. For the purposes of the review, the term nutrition refers to the sum of all the processes involved in the intake, assimilation, and use of proper quantities of nutrients for maintaining health, productivity, and well-being.
Here we present our findings in three main themes that emerged from qualitative content analysis of the included studies' findings.
Food Insecurity
Severe food insecurity is one of the leading nutrition challenges among PLWH (Benzekri et al., 2015). In the study by Benzekri et al. (2015), the majority of research participants who were HIV positive reduced their meal size due to food insecurity. The authors linked severe food insecurity to lower dietary diversity, less education, and lower daily expenditure on household food, and concluded that the vast majority of HIV-infected adults in Ziguinchor and Dakar suffer from severe food insecurity. Similarly, Poda et al. (2017) noted that food insecurity is the major challenge among PLWH in sub-Saharan Africa, and that it has resulted in inadequate diet. Another included study also cited food insecurity as one of the main nutrition challenges among children living with HIV; its findings showed that severely malnourished children were mainly admitted during periods of food insecurity or during the post-weaning period. At the same time, that study noted a further challenge: the WHO guidelines for the management of children hospitalized with severe malnutrition offer minimal specific assistance for treating malnourished children with HIV. In Zimbabwe, one included study indicated that the main challenges faced include food insecurity, disguised as economic factors and inadequate rainfall, which has posed huge challenges to the ability of HIV/AIDS patients to maintain a healthy diet. A sub-theme that emerged from the findings of that study was reliance on processed foods, which are less healthy, rather than on indigenous foods. Earlier work by Benzekri et al. (2015) in Senegal yielded similar findings (Benzekri et al., 2015).
Lack of Nutritional Support
Another important theme that emerged from the included studies was that lack of nutritional support is an important challenge among PLWH in sub-Saharan Africa. The findings pointed out that, among the malnourished children, over half did not receive any kind of nutritional support.
Limited Resources and Lack of Feeding Supplementation
In Ethiopia, Gebremichael et al. (2018) reported limited resources as a main challenge (Gebremichael, Hadush, Kebede, & Zegeye, 2018). Limited resources cause HIV/AIDS patients to lack access to adequate nutritious foods (Gebremichael et al., 2018), which poses a huge challenge to the success of ART. Another included study went further to add late detection of HIV/AIDS and lack of feeding supplementation as key nutritional challenges among PLWH in Zambia. Late detection of HIV and malnutrition, as well as concomitant inadequate nutrition, were also reported as main nutrition challenges among PLWH (Amadi et al., 2001).
Evidence of proxy indicators of nutrition challenges was presented in a sub-theme: inaccessibility of appropriate and timely medical services.
Inaccessibility of Appropriate and Timely Medical Services
The findings of one included study indicated that one of the main contributors to the excess mortality from malnutrition among HIV-positive children is inaccessibility of appropriate and timely medical services, and it noted the numerous barriers to accessing medical care. The findings also indicated that the main financial barrier to treatment for both HIV and malnutrition is the huge cost of medical treatment in nations in which public healthcare is not available. The authors also noted that co-infection is another significant contributor to severe acute malnutrition among HIV-positive children, and that efforts to minimize the number of infections, such as gastroenteritis (e.g., through water sanitation), are highly significant. In Ghana, Asafo-Agyei et al. (2013) noted a high HIV seroprevalence among children with severe acute malnutrition and a significantly poorer outcome in mortality as well as weight gain (Asafo-Agyei, Antwi, & Nguah, 2013).
Discussion
This scoping review has mapped available literature on the nutrition challenges facing PLWH in sub-Saharan Africa. The findings of the included studies indicate that the notable nutritional challenges facing PLWH in sub-Saharan Africa include food insecurity, the huge cost of treating malnutrition, lack of nutritional support, and lack of feeding supplements.
Nutrition is essential to HIV morbidity and mortality. Improved treatment modalities play a highly significant role in increasing the life expectancy of HIV-infected individuals. Over 1 million adults within the United States live with HIV (Thuppal, Jun, Cowan, & Bailey, 2017). A study was carried out in Kathmandu Valley, Nepal with the aim of estimating the prevalence of under-nutrition among PLWH in Nepal, pointing out the main risk factors, and evaluating the correlations between PLWH's quality of life and nutritional status (Thapa, Amatya, Pahari, Bam, & Newman, 2015). Based on the findings of that research, some of the main nutritional challenges faced include illiteracy and residence in care homes.
The usage of dietary supplements is very common among PLWH. The main challenges, which are always faced, include vulnerability of individuals to medical misinformation as well as to unfounded health claims (Evans et al., 2013;Kalichman et al., 2012;Mothi, Karpagam, Swamy, Mamatha, & Sarvode, 2011). The findings of our scoping review generally indicate that some of the nutritional challenges are common across borders. As a result, various measures can be put in place to ensure that the challenges are addressed effectively and efficiently.
Consistent with our findings, Weiser et al. noted that food insecurity is a major risk factor for mortality among antiretroviral therapy-treated individuals in British Columbia, mainly among people who are underweight (Weiser et al., 2009; Weiser et al., 2011). Innovative approaches for addressing food insecurity ought to be included in HIV treatment programs. HIV-positive injection drug users (IDU) reporting food insecurity are almost twice as likely to die, in comparison to food-secure IDU (Anema, Vogenthaler, Frongillo, Kadiyala, & Weiser, 2009).
Implications for Practice
Nutritional challenges have become a major concern, and measures need to be put in place to ensure that the right strategies are adopted to deal with the nutritional challenges among PLWH. The findings of this study indicate that the main nutrition challenges include food insecurity, lack of nutritional support among PLWH, and lack of feeding supplementations. Practitioners need to ensure that adequate health education on nutrition is provided to PLWH.
Implications for Research
Progress has been made in HIV epidemic control in sub-Saharan Africa, with some countries reported to be nearing epidemic control for HIV. This has brought a shift in focus to ensuring quality of life for PLWH. This study guides future research in nutritional management to improve health outcomes for PLWH. Future studies should develop strategies to ensure that the challenges are addressed effectively and efficiently. Future research should also delve into the main measures that can be put in place to ensure that PLWH maintain adequate nutrition.
Strength and Limitations
This review included studies which were carried out in different countries in sub-Saharan Africa. This gives a general picture concerning some of the main nutrition challenges among PLWH in sub-Saharan Africa. The scoping review adopted rigorous and transparent methods. In order to make sure that there was a broad literature search, the search strategy included several electronic bibliographic databases. The articles were thoroughly reviewed to ensure that they met the inclusion criteria. The review might not have identified all the studies in the published and grey literature although attempts were made to be as comprehensive as possible. Additionally, the search was carried out in English terms only. Characterization and interpretation of the included studies were also subject to reviewer bias.
Conclusions
The main aim of the study was to explore the nutrition challenges among PLWH in sub-Saharan Africa. Based on the findings of the study, it can be concluded that some of the main nutrition challenges include food insecurity, lack of nutritional support among PLWH, and lack of feeding supplementations.
Funding N/A.
|
v3-fos-license
|
2021-11-01T13:27:34.746Z
|
2021-11-01T00:00:00.000
|
240290685
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fnagi.2021.753351/pdf",
"pdf_hash": "9a3ad7bce83997d87626f3368a0d582b780cfeb3",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46717",
"s2fieldsofstudy": [
"Biology"
],
"sha1": "9a3ad7bce83997d87626f3368a0d582b780cfeb3",
"year": 2021
}
|
pes2o/s2orc
|
Vitamin A Deficiency Exacerbates Gut Microbiota Dysbiosis and Cognitive Deficits in Amyloid Precursor Protein/Presenilin 1 Transgenic Mice
Vitamin A deficiency (VAD) plays an essential role in the pathogenesis of Alzheimer’s disease (AD). However, the specific mechanism by which VAD aggravates cognitive impairment is still unknown. At the intersection of microbiology and neuroscience, the gut-brain axis is undoubtedly contributing to the formation and function of neurological systems, but most of the previous studies have ignored the influence of gut microbiota on the cognitive function in VAD. Therefore, we assessed the effect of VAD on AD pathology and the decline of cognitive function in AD model mice and determined the role played by the intestinal microbiota in the process. Twenty 8-week-old male C57BL/6J amyloid precursor protein/presenilin 1 (APP/PS1) transgenic mice were randomly assigned to either a vitamin A normal (VAN) or VAD diet for 45 weeks. Our results show that VAD aggravated the behavioral learning and memory deficits, reduced the retinol concentration in the liver and the serum, decreased the transcription of vitamin A (VA)-related receptors and VA-related enzymes in the cortex, increased amyloid-β peptides (Aβ40 and Aβ42) in the brain and gut, upregulated the translation of beta-site APP-cleaving enzyme 1 (BACE1) and phosphorylated Tau in the cortex, and downregulated the expression of brain-derived neurotrophic factor (BDNF) and γ-aminobutyric acid (GABA) receptors in the cortex. In addition, VAD altered the composition and functionality of the fecal microbiota as exemplified by a decreased abundance of Lactobacillus and significantly different α- and β-diversity. Of note, the functional metagenomic prediction (PICRUSt analysis) indicated that the GABAergic synapse and retinol metabolism pathways decreased remarkably after VAD intervention, which was in line with the decreased expression of GABA receptors and the decreased liver and serum retinol. In summary, the present study provided valuable facts that VAD exacerbated the morphological, histopathological, molecular biological, microbiological, and behavioral impairment in the APP/PS1 transgenic mice, and the intestinal microbiota may play a key mediator role in this mechanism.
INTRODUCTION
Alzheimer's disease (AD) is a devastating neurological disease characterized by a loss of cognitive function and a gradual decline in daily life activities, accompanied by behavioral changes and various neuropsychiatric symptoms (Joe and Ringman, 2019). It is predicted that there will be 131.5 million patients suffering from AD in the world by 2050 (Hsiao et al., 2018). Although the pathological process of AD is extremely complex and has not been fully elucidated so far, it is known that the typical pathological features of AD are the formation and deposition of extracellular β-amyloid (Aβ) and intracellular neurofibrillary tangles formed by the excessive phosphorylation of Tau protein in neurons, causing a series of reactions, such as the release of inflammatory factors, energy metabolism disorders, and oxidative stress in neurons, and eventually, leading to the degeneration and loss of neurons in cerebral cortex and hippocampus (Reitz et al., 2011). Therefore, preventing the formation of these pathological phenomena will be the key to solve this problem.
As an essential micronutrient, vitamin A (VA) and its derivatives are closely related to the development of the central nervous system (CNS) (Malaspina and Michael-Titus, 2008) and are essential for normal learning and memory functions (Sherwin et al., 2012). VA deficiency (VAD) is one of the most significant micronutrient deficiencies that pose a severe threat to the health of patients with AD in many countries (Lopes et al., 2014). It has been found that VA and β-carotene levels in patients with AD are significantly lower than those in normal controls, which may be partly attributed to the alterations in dietary behavior (Lopes et al., 2014).
It is of great significance to assess the effect of VAD on the pathogenesis of AD. As an active metabolite of VA, retinoic acid (RA) binds to the retinoic acid receptors (RARs) on the nuclear membrane and stimulates the transcription of target genes, participating in the regulation of organ formation and development. Nuclear retinoid X receptors (RXRs) are RA receptors activated by the 9-cis RA and potentially other endogenous retinoids in the CNS (Krezel et al., 1999). The RAR-RXR heterodimer could combine with the RA response element (RARE) in the promoter region of the target genes, thereby regulating the AD-related gene expression (Maden, 2007). In addition, retinaldehyde dehydrogenase 1 (RALDH1) has two main functions: one is to increase the synthesis of RA when retinol is less available; the other is to participate in the synthesis of neurotransmitters, such as γ-aminobutyric acid (GABA). Furthermore, RAR dysfunction can lead to Aβ deposition, impairment of the long-term synaptic plasticity, and memory in the brain of the patients with AD, which can be rescued by the VA supplementation (Misner et al., 2001;Nomoto et al., 2012). It was previously observed that VA can modulate the expression of β-amyloid precursor protein (APP) and the α-secretase to reduce the formation of the oligomerization of Aβ40/Aβ42 in vitro (Shudo et al., 2009). Moreover, Wang et al. (2015) demonstrated that VA could reduce the expression of beta-site APP-cleaving enzyme 1 (BACE1), which is a crucial enzyme involved in catalyzing the formation of Aβ polypeptide and cleavage of APP (Leng and Edison, 2020).
The gut microbiota is considered as an invisible organ, which plays a mediating role in the bidirectional signal transduction between the gut-brain axis (Petra et al., 2015). The gut-brain axis conducts bidirectional communication between the gut and the brain through the immune system, enteric nervous system, microbial metabolites, and vagus nerve, which ultimately contributes to the formation and function of neurological systems (Doifode et al., 2021). Many specific bacterial genera in the gut, such as Lactobacillus, regulate the expression of specific receptors in the brain via the vagus nerve (Bravo et al., 2011). Additionally, the intestinal microbiota is known to secrete immunogenic mixtures into the surrounding environment, such as amyloids, lipopolysaccharides, and other microbial exudates, which may regulate the signaling pathways and produce proinflammatory cytokines in the AD pathogenesis . Previous research has indicated that the intestinal microbiota composition can be regulated by diet and specific nutrients, resulting in the production or aggregation of amyloid protein in the brain (Scott et al., 2013). Therefore, the gut-brain axis may play an important role in the process of VAD aggravating the cognitive function of AD.
Emerging evidence shows that VAD in the different life stages could impair gut microbiota homeostasis, leading to the imbalance of Firmicutes and Bacteroidetes in rat colonic mucosal microbiota (Chen et al., 2020). Amit-Romach et al. (2009) illustrated that VAD could change the composition of gut microbiota and damage the integrity of the gastrointestinal mucosal barrier by reducing the abundance of Lactobacillus and the total bacterial count in the gastrointestinal tract. In addition, the alterations in the gut microbial community can reduce the level of brain-derived neurotrophic factor (BDNF), which is considered a genuine molecular mediator of functional and morphological synaptic plasticity in the brain (Bercik et al., 2011). Thus, alterations in the function and composition of the intestinal microbiota can help to recognize the new mechanisms that account for the impact of VAD on the pathogenesis of AD.
To date, the specific mechanism of how VA affects the cognitive function of patients with AD has not been clarified. This research aimed to assess the effect of VAD on the cognitive function and pathological mechanism of AD and determine the intestinal microbiota changes driven by VAD in the amyloid precursor protein/presenilin 1 (APP/PS1) transgenic mice. Our study implies that VAD aggravates the decline of learning and memory function of AD, and intestinal microbiota may play an important role in this process.
Animals and Diet
In this study, 20 specific pathogen-free (SPF) male C57BL/6J APP/PS1 transgenic mice aged 8 weeks, weighing 20 ± 2 g, were purchased from Beijing Huafukang Biotechnology Co., Ltd. (Beijing, China), and were randomly separated into two parallel groups (n = 10 per group). They were raised in standard individual ventilated cages with a 12-h dark/light cycle in a room with controlled temperature (23-25 °C) and humidity (45-55%). The VAD feed (VA < 120 IU/kg) and VA-normal feed (VAN; 15,000 IU/kg VA) were purchased from Keao Xieli Feed Co., Ltd. (Beijing, China). Both the feeds and drinking water were available to the mice ad libitum. Considering that the mouse species has a high VA storage capacity and needs a long time (up to 40 weeks) (Etchamendy et al., 2003) to induce VAD, we conducted a 45-week dietary intervention in the present experiment. After 45 weeks of dietary intervention, we assessed their cognitive deficits with the Morris water maze (MWM) test and the step-down passive avoidance (SDPA) test.
Reversed-Phase High-Performance Liquid Chromatography
Since the liver is the most important organ for storing VA and the serum retinol is a common indicator of VAD (WHO, 2009), we used reversed-phase high-performance liquid chromatography (RP-HPLC) to determine the concentration of retinol in the liver and the serum. The serum and liver samples were pretreated with acetonitrile to precipitate the protein. The resulting supernatant was added to 1,000 µl of hexane, allowing the retinol to be extracted into hexane. The extract was evaporated to dryness under nitrogen, reconstituted in methanol, and then separated with methanol in an isocratic separation on an HPLC system (Agilent Technologies Inc., CA, United States) with a Pursuit XRs 100A C18 (4.6 mm × 250 mm) column. We set the flow rate to 0.8 ml/min and the detection wavelength to 325 nm to achieve a better detection effect. The peaks were identified by comparing the retention times with those of the standard samples.
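Quantification against external standards is typically done with a calibration curve of peak area versus standard concentration; the paper does not detail this step, so the following Python sketch (with invented peak areas and concentrations) is only a generic illustration of that approach.

```python
import numpy as np

# Hypothetical calibration data: retinol standards (µmol/L) vs. HPLC peak area at 325 nm
std_conc = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
std_area = np.array([1520.0, 3010.0, 6080.0, 12100.0, 24300.0])

# Fit a linear calibration curve: area = slope * concentration + intercept
slope, intercept = np.polyfit(std_conc, std_area, 1)

def retinol_concentration(peak_area: float) -> float:
    """Back-calculate retinol concentration from a sample peak area."""
    return (peak_area - intercept) / slope

# Example: a sample peak area of 4500 corresponds to roughly 1.5 µmol/L with these made-up standards
print(round(retinol_concentration(4500.0), 2))
```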
Morris Water Maze Test
The spatial learning and memory abilities of mice were assessed by the MWM test, which includes orientation navigation and spatial probe tests. MWM was carried out in a stainless-steel circular pool with a diameter of 1.2 m and a height of 0.5 m. A round table with a diameter of 9 cm was placed in the first quadrant of the pool and placed 1 cm below the surface of the water. We added titanium dioxide to the pool water and mixed it evenly to make it milky white so that the animal cannot recognize the position of the platform in the pool.
The orientation navigation was carried out for 5 consecutive days. The mice were placed into the pool water facing the wall of the pool at different quadrant positions, and the time (the escape latency) and swimming trajectory of the mice from entering the water to finding the round platform within 90 s were recorded with an image automatic acquisition and software analysis system (Xin Ruan Information Technology Co., Ltd., Shanghai, China). Then, the spatial probe test was carried out on the sixth day. After the platform was removed, the mice were allowed to enter the pool in the quadrant opposite to the platform and search for the platform based on memory. The time spent in the target quadrant, the number of times across the platform, and the swimming trajectory were recorded.
Step-Down Passive Avoidance Test
The state-dependent learning and memory were evaluated by SDPA, which includes training sessions and testing sessions. The SDPA test was performed in a small chamber containing an insulated wooden platform at the corner and a floor covered with electrode strips. During the training session, the mice were put on a wooden platform and received electric shocks as soon as they stepped on the electrode strips on the floor. The number of times the mouse stepped down from the wooden platform (number of errors) was recorded. After 24 h, the testing session was performed. The electrode strips on the floor were no longer energized. The mice were placed on the wooden platform again, and the time of jumping from the wooden platform to the floor (step-down latency) was recorded.
Hematoxylin-Eosin Staining
The mouse brain tissue samples were stored in 10% formalin solution, dehydrated according to the conventional methods, and then embedded in paraffin. After the sections were deparaffinized with xylene and different ethanol concentrations, they were stained with hematoxylin for 15 min. Afterward, it was soaked in acidified hydrochloric acid ethanol for 5 s and stained with eosin for 4 min. Finally, each section was examined with a microscope slide scanner (3DHISTECH Co., Ltd., Budapest, Hungary).
Immunohistochemistry
The mouse brain sections were placed in a small box filled with ethylenediaminetetraacetic acid (EDTA) antigen retrieval buffer (pH 9.0) and boiled in a microwave oven for 5 min; they were then placed in 3% hydrogen peroxide solution and incubated for 20 min to inactivate the endogenous peroxidase activity. Next, the slides were placed in PBS (pH 7.4) and washed three times with shaking on a decolorizing shaker, 5 min each time. They were then incubated overnight with the primary antibodies: anti-Aβ40-42 (1:500, Abcam, Cambridge, United Kingdom) and anti-BDNF (1:500, Abcam, Cambridge, United Kingdom). The tissue was covered with the horseradish peroxidase (HRP)-conjugated secondary antibody (1:2,000, Abcam, Cambridge, United Kingdom) and incubated for 60 min at room temperature in a dark chamber. Finally, the sections were reacted with 3,3′-diaminobenzidine (DAB) solution for 10 min. Similar to the Hematoxylin-Eosin (H&E) sections, immunohistochemistry (IHC) sections were visualized using the microscope slide scanner (3DHISTECH Co., Ltd., Budapest, Hungary). Analytical software (ImageJ) was used to quantify the integral optical density (IOD) of Aβ and BDNF in each image.
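As a rough illustration of what an integral optical density (IOD) measurement of DAB staining involves (the study itself used ImageJ), the scikit-image-based Python sketch below separates the DAB signal by color deconvolution and sums the optical density over pixels above a threshold; the file name and threshold value are hypothetical assumptions, not values from the paper.

```python
import numpy as np
from skimage import io, color

def integrated_optical_density(image_path: str, dab_threshold: float = 0.05) -> float:
    """Approximate IOD of DAB staining in an RGB IHC image."""
    rgb = io.imread(image_path)[:, :, :3] / 255.0   # normalize 8-bit RGB to [0, 1]
    hed = color.rgb2hed(rgb)                         # hematoxylin, eosin, DAB channels
    dab = hed[:, :, 2]                               # DAB (brown) channel
    positive = dab > dab_threshold                   # pixels considered positively stained
    return float(dab[positive].sum())                # sum of optical density over positive pixels

# Example (hypothetical file name):
# print(integrated_optical_density("cortex_section_01.tif"))
```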
Real-Time Quantitative PCR
Total RNA of the hippocampus and cortex of the APP/PS1 mice was extracted with the TRIzol reagent (Thermo Fisher Scientific Co., Ltd., MA, United States). The bacterial genomic DNA was extracted from the feces of the mice with the TIANamp Stool DNA Kit (Tiangen Biotech Co., Ltd, Beijing, China). To ensure the purity of RNA, we used a Genomic DNA pollution scavenger (Applygen Biotech Co., Ltd, Beijing, China) to remove the genomic DNA from the RNA extract. RNA and DNA quality and yield were determined by the ratio of OD260/OD280 with an ultraviolet spectrophotometer (NanoDrop ND-1000, Thermo Fisher Scientific Co., Ltd., MA, United States). Then, 1 µg of the total RNA from each sample was reverse-transcribed using a RevertAid First Strand cDNA Synthesis kit (Thermo Fisher Scientific Co., Ltd., MA, United States) as prescribed by the supplier. Real-time quantitative PCR (RT-qPCR) was conducted with a Real-time PCR Detection System (Bio-Rad Laboratories Co., Ltd., MA, United States) using the KAPA SYBR FAST qPCR Master Mix (2X) Kit (KAPA Biosystems, Co., Ltd., MA, United States) based on the instructions from the manufacturer. The housekeeping gene β-actin served as the reference for standardization in the hippocampus and cortex, while the universal 16S rRNA gene was used as an internal reference in the fecal samples. The target genes and DNA levels relative to the internal reference were quantitatively determined by the 2^-ΔΔCt method. The sequences of the primers, such as the genus-specific primer for Lactobacillus and Clostridia_UCG-014 used for RT-qPCR, are listed in Table 1.
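For readers unfamiliar with the 2^-ΔΔCt calculation used above, the following minimal Python sketch illustrates it; the Ct values are invented for illustration and do not come from this study.

```python
def fold_change_ddct(ct_target_sample, ct_ref_sample, ct_target_control, ct_ref_control):
    """Relative expression by the 2^-ΔΔCt method.

    ΔCt = Ct(target) - Ct(reference); ΔΔCt = ΔCt(sample) - ΔCt(control group).
    """
    d_ct_sample = ct_target_sample - ct_ref_sample
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_sample - d_ct_control
    return 2.0 ** (-dd_ct)

# Hypothetical Ct values: a target gene vs. beta-actin in a VAD mouse and the VAN group mean
print(round(fold_change_ddct(24.1, 18.0, 25.0, 18.2), 2))  # a value > 1 indicates upregulation vs. control
```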
Luminex xMAP R Technology
The concentrations of Aβ40 and Aβ42 in the brain and gut of the mice were measured using the Luminex xMAP technology and a specific multiplex plate, the MILLIPLEX MAP Mouse Amyloid Beta Magnetic Bead Panel kit (Merck Millipore, Co., Ltd., Darmstadt, Germany), following the instructions from the manufacturer. The Luminex xMAP technology covalently cross-links the antibody molecules against different test substances to specifically coded microspheres and then uses flow cytometry to detect the corresponding items of each coded microsphere. The multiplex plate was measured using a Luminex 200 analyzer (Luminex, Co., Ltd., TX, United States). The concentrations of Aβ40 and Aβ42 were analyzed using the ELISAcalc software with a five-parameter logistic curve-fitting method.
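The five-parameter logistic (5PL) standard-curve fitting mentioned above can be illustrated with the hedged SciPy sketch below; the standard concentrations and intensities are generated synthetically from an assumed parameter set rather than taken from the assay, and the authors' actual software (ELISAcalc) is not reproduced here.

```python
import numpy as np
from scipy.optimize import curve_fit, brentq

def five_pl(x, a, b, c, d, g):
    """Five-parameter logistic curve: d + (a - d) / (1 + (x / c)**b)**g."""
    return d + (a - d) / (1.0 + (x / c) ** b) ** g

# Synthetic standard curve: concentrations (pg/ml) and MFI values generated
# from an assumed parameter set, purely for illustration.
true_params = [30.0, 1.3, 300.0, 5000.0, 1.0]
conc = np.array([7.8, 15.6, 31.2, 62.5, 125.0, 250.0, 500.0, 1000.0])
mfi = five_pl(conc, *true_params)

# Fit the 5PL model to the synthetic standards
params, _ = curve_fit(five_pl, conc, mfi, p0=[20.0, 1.0, 200.0, 4000.0, 1.0], maxfev=20000)

# Back-calculate the concentration of an "unknown" sample from its MFI
unknown_mfi = 300.0
estimated = brentq(lambda x: five_pl(x, *params) - unknown_mfi, 1.0, 2000.0)
print(f"Estimated concentration: {estimated:.1f} pg/ml")  # about 33 pg/ml for these synthetic parameters
```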
Bacterial 16S rRNA Gene Sequencing
As mentioned above, the fecal bacterial genomic DNA was extracted using the TIANamp Stool DNA Kit (Tiangen Biotech Co., Ltd., Beijing, China) according to the instructions from the manufacturer. The library of each sample was generated by PCR amplification using the universal specific primers 338F (5′-ACTCCTACGGGAGGCAGCAG-3′) and 806R (5′-GGACTACHVGGGTWTCTAAT-3′) targeting the variable V3-V4 regions of 16S rRNA (Caporaso et al., 2011). The expected amplicon size is about 468 bp, and AMPure XP beads (Beckman Coulter, Inc., CA, United States) were then used to purify these amplicons to remove the unbound primers or primer dimers. The DNA concentration of the amplified sequence library was determined using the Qubit quantification system (Thermo Fisher Scientific Co., Ltd., MA, United States). Then, the paired-end sequencing was conducted on the purified libraries using the Illumina MiSeq PE 300 platform (Illumina, Inc., CA, United States).
Principal coordinates analysis (PCoA) and permutational multivariate analysis of variance (PERMANOVA) were performed to quantify the differences in the β-diversity analysis.
The linear discriminant analysis (LDA) effect size (LEfSe) analysis was performed to evaluate the differentially abundant taxa and biological relevance across the groups with a score cutoff of 2. Functional potential profiling of the microbial communities was predicted from the Kyoto Encyclopedia of Gene and Genomes (KEGG) database using the phylogenetic investigation of communities by reconstruction of unobserved states (PICRUSt) (Kanehisa et al., 2012). Spearman rank correlation was performed to examine the associations of the liver retinol and the serum retinol with the abundance of the genus by using R (version 4.0.0) (R Core Team, Vienna, Austria).
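The Spearman rank correlations between retinol levels and genus abundances were computed in R; as a rough cross-language illustration (with invented retinol and abundance values, not the study's data), the equivalent calculation in Python would look like the sketch below.

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical per-mouse measurements (n = 10): serum retinol (µmol/L)
# and relative abundance of Lactobacillus (fraction of reads)
serum_retinol = np.array([0.45, 0.52, 0.60, 0.66, 0.71, 0.90, 1.05, 1.20, 1.32, 1.40])
lactobacillus = np.array([0.02, 0.03, 0.05, 0.04, 0.08, 0.10, 0.15, 0.18, 0.22, 0.25])

rho, p_value = spearmanr(serum_retinol, lactobacillus)
print(f"Spearman rho = {rho:.2f}, P = {p_value:.3g}")
```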
Statistical Analysis
Excluding the microbiota bioinformatic analysis mentioned above, the data from other analyses were represented as mean ± SD. A two-tailed, unpaired Student's t-test was used to compare the difference between the two groups by using the SPSS 23.0 software (IBM, NY, United States). The differences were considered statistically significant at P < 0.05, and presented by the asterisks as follows: *P < 0.05, **P < 0.01, and ***P < 0.001.
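The group comparison itself is a standard two-tailed, unpaired Student's t-test, which the authors ran in SPSS; a minimal Python sketch of the equivalent test (with made-up escape-latency values, not the study's measurements) is shown below.

```python
import numpy as np
from scipy import stats

# Hypothetical day-5 escape latencies (seconds) for the two diet groups (n = 10 each)
van = np.array([22.1, 25.4, 19.8, 30.2, 27.5, 24.0, 21.3, 26.8, 23.9, 28.1])
vad = np.array([35.6, 42.3, 38.9, 31.2, 45.0, 40.7, 37.4, 33.8, 44.1, 39.5])

t_stat, p_value = stats.ttest_ind(vad, van)  # unpaired, two-tailed Student's t-test (equal variances)
print(f"t = {t_stat:.2f}, P = {p_value:.4f}")
if p_value < 0.05:
    print("Difference between the VAD and VAN groups is statistically significant at P < 0.05")
```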
Vitamin A Deficiency Reduced Liver and Serum Retinol Levels in the Amyloid Precursor Protein/Presenilin 1 Transgenic Mice
Compared with the VAN-diet-fed mice, there was no significant decrease in the bodyweight of the VAD-diet-fed mice after 45 weeks of VA deprivation (n.s.; Figure 1A). However, as shown in Figure 1B, the liver-to-body weight ratio was markedly decreased in the VAD group compared with the VAN group (P < 0.05). To confirm whether the established mice model was VAD or marginal VAD, we detected the serum retinol ( Figure 1C) and found that the serum concentration of VAD-diet-fed mice matched the recommended standards (WHO, 2009) for VAD (≤0.7 µmol/L). Considering that the liver is the primary organ for VA storage in the body, we measured the retinol level in the liver by RP-HPLC and found that it was significantly decreased in the VAD group compared with the VAN group (P < 0.01; Figure 1D). The representative HPLC traces of retinol in the liver and serum are shown in Supplementary Figures 1, 2.
Vitamin A Deficiency Reduced the mRNA Expression of Vitamin A-Related Receptors and Vitamin A-Related Enzymes in the Cortex
Compared with the VAN group, the transcription of the RARγ, RALDH1, RXRα, RXRβ, and RXRγ genes in the cortex was markedly decreased in the VAD group (P < 0.05; Figure 2). Furthermore, VAD decreased the expression of CYP26B1, a catabolizing enzyme that degrades RA (Woloszynowska-Fraser et al., 2020), in the cortex of APP/PS1 mice (P < 0.05; Figure 2). Besides, the transcription of the RARα, RARβ, RALDH2, and RALDH3 genes was reduced in the VAD group compared with the VAN group, but the differences were not statistically significant (n.s.; Figure 2).
FIGURE 1 (caption fragment). The liver retinol level of the APP/PS1 transgenic mice after 45 weeks on the VAD diet. VAD, vitamin A deficiency diet; VAN, vitamin A normal diet. n = 10 per group. *P < 0.05, **P < 0.01, ***P < 0.001; n.s., non-significant.
Vitamin A Deficiency Exacerbated β-Amyloid Deposition, Tau Phosphorylation, and Pathological Degeneration
From the results of H&E staining, it can be seen that there may be some histopathological changes in the cortex and hippocampus of VAD-diet-fed mice, such as sparse and disordered neuron arrangement ( Figure 3A).
To assess the effect of VA deprivation on the Aβ depositions, IHC staining of the brain tissue with the specific Aβ antibody was carried out, and it showed a remarkable increment of Aβ plaque burden in the brain of the VAD mice as compared with the VAN group ( Figure 3B). Moreover, our IHC relative quantitative results advocated that the IOD value of Aβ was greatly increased in the VAD mice compared with that in the VAN mice (P < 0.05; Figure 3C).
Consistent with the IHC staining, the Luminex results showed that Aβ40 and Aβ42 in the cortex were remarkably increased in the VAD group compared with the VAN group (P < 0.05; Figure 3D). Intriguingly, we found that the Aβ burden of the colon and small intestine increased after 45-week VAD-diet consumption (P < 0.05; Figure 3D).
As shown in Figures 3E,F, the expression of the BACE1 and p-Tau in the cortex of mice in the VAD group were notably higher than those in the VAN group (P < 0.05). Additionally, the ratio of p-Tau to Tau in the VAD group was higher than that in the VAN group (P < 0.05).
The mRNA expression of BACE1 in both the cortex and hippocampus of the VAD group was higher than that of the VAN group, but the difference was statistically significant only in the hippocampus ( Figure 3G). Moreover, VAD decreased the expression of ADAM10, an enzyme that decreases the formation of Aβ from APP (Shudo et al., 2009), in the cortex and hippocampus (P < 0.05; Figure 3G).
Given the findings described above, VAD intensified Aβ generation, Tau phosphorylation, and pathological degeneration in the AD model mice.
Vitamin A Deficiency Aggravated the Learning and Memory Deficits in Amyloid Precursor Protein/Presenilin 1 Transgenic Mice
To examine the effect of VAD on spatial learning and memory ability, the MWM test was performed. During the orientation navigation, the escape latency to the target platform during the 5 consecutive days is illustrated in Figure 4A. On the third and the fifth day of the test, the escape latency of the VAD group was prolonged as compared with the VAN group (P < 0.05). Moreover, the VAD mice traveled longer distances than the VAN mice to reach the platform (P < 0.01; Figure 4B,C). In the spatial probe test, compared with the VAN mice, the VAD mice traveled more randomly in the tank without knowing the target location, which demonstrates poor performance in the spatial learning and memory tasks ( Figure 4D). Furthermore, the VAD mice spent less time on the target quadrants and crossed the target platform less frequently than the VAN group, but only the difference in the time of target quadrants was statistically significant (Figures 4E,F).
We conducted the SDPA tests to observe the state-dependent learning and memory, 1 day after completing the MWM tests. As illustrated in Figure 4G, the average number of errors in the VAD mice was remarkably higher than in the VAN mice (P < 0.05). Moreover, compared with the VAN group, the latency to step-down was notably shortened in the VAD group (P < 0.05; Figure 4H).
FIGURE 2 | Impact of consumption of the VAD-diet for 45 weeks on the mRNA expression of the vitamin A (VA)-related receptors and VA-related enzymes in the cortex of the APP/PS1 transgenic mice. The expression of retinoic acid receptors (RAR-α, RAR-β, and RAR-γ), retinoid X receptors (RXR-α, RXR-β, and RXR-γ), retinaldehyde dehydrogenases (RALDH1, RALDH2, and RALDH3), and CYP26B1 was determined by real-time quantitative PCR (RT-qPCR). VAD, vitamin A deficiency diet; VAN, vitamin A normal diet. n = 3 per group. *P < 0.05, **P < 0.01; n.s., non-significant.
Both the MWM and SDPA test indicated that VAD could aggravate the learning and memory impairment, thus exacerbating the cognitive deficits in the APP/PS1 transgenic mice.
Vitamin A Deficiency Altered the Structure and Function of the Gut Microbiota in Amyloid Precursor Protein/Presenilin 1 Transgenic Mice
To evaluate the effect of VAD on the intestinal microbiota, sequencing of the V3-V4 regions of the bacterial 16S rRNA was performed on the fecal samples. At the phylum level, the Firmicutes (F)/Bacteroidetes (B) ratios of the VAD group and the VAN group were 1.95 and 3.61, respectively, as displayed in Figure 5A. Community abundance at the class, order, and family levels is shown in Supplementary Figure 3. Of note, the VAD mice exhibited an enrichment of some potential pro-AD microorganisms, such as Clostridia_UCG-014, and a reduction in the abundance of other potential anti-AD microorganisms, such as Lactobacillus, compared with the VAN mice (Figures 5B,D,E; P < 0.05). The data of the Mann-Whitney U test and Metastats analysis for all genera are shown in Supplementary Tables 1, 2. The Venn diagram at the genus level (Figure 5C) revealed that 120 genera were shared by the VAD and the VAN group, but mice on the VAD-diet had more genera than those on the VAN-diet. After VAD intervention, three genera (Formosa, Rhodobacteraceae, and Alloprevotella) were missing, while 19 new genera (Sphingomonas, Peptoniphilus, Rodentibacter, etc.) appeared (Supplementary Table 3). Interestingly, we found that most of the 19 genera unique to the VAD diet are Gram-positive microbes, while the three genera unique to the VAN diet are Gram-negative microbes. As a result, the α-diversity indices (Shannon, Chao1, ACE, and Faith's PD) in the VAD group were remarkably higher than those in the VAN group (P < 0.05; Figures 6A-D). This means that, compared with the VAN-diet, the consumption of the VAD-diet for 45 weeks increased the community evenness and richness in the fecal microbiota of the APP/PS1 mice. The PCoA plot and the hierarchical clustering tree manifested that most of the samples could be separated according to their diets (Figures 6E,F). The dissimilarity of the microbiota between the two groups was further confirmed by the statistical analysis of the differences with the PERMANOVA analysis (P = 0.002).
FIGURE 3 (caption fragment). The RT-qPCR result showing the mRNA level of the BACE1 and ADAM10 genes in the cortex and hippocampus of the APP/PS1 mice. VAD, vitamin A deficiency diet; VAN, vitamin A normal diet. n = 3 per group. *P < 0.05, **P < 0.01, ***P < 0.001; n.s., non-significant.
The LEfSe algorithm makes a supervised comparison of the microbiota to elucidate variant taxa at different taxonomic levels between the two groups. The LEfSe analysis demonstrated that, compared with the VAN group, the intestinal microbiota of the VAD group was different in 12 bacterial genera, seven bacterial families, and five bacterial orders (Figures 7A,B). Particularly, the potentially beneficial bacteria Lactobacillus in the VAD group was significantly lower than that in the VAN mice, while the potentially harmful bacteria Clostridia_UCG-014 in the VAD group was greatly higher than that in the VAN mice, which was consistent with the Mann-Whitney U test and Metastats analysis in Figures 5D,E. Moreover, the reduction in Lactobacillus and the increment in Clostridia_UCG-014 were further verified by RT-qPCR using their respective taxa-specific primers (P < 0.05; Figure 7C).
We used the PICRUSt analysis to predict the functional changes in the metagenome of the present experiment. In the KEGG pathway prediction results, quite a few metabolic pathways were altered in the VAD mice. Interestingly, the VAD mice significantly enriched the pathways related to aging, amino acid metabolism, biosynthesis of other secondary metabolites, digestive system, environmental adaptation, and immune system at KEGG level 2, while lowering those involved in the GABAergic synapse, glutamatergic synapse, retinol metabolism, and lipoic acid metabolism at KEGG level 3 (P < 0.05; Figures 8A,B and Supplementary Table 4). To identify whether the VAD diet-driven intestinal microbiota changes are related to the VA concentration or not, we performed correlation analyses between these parameters. Spearman's rank correlation indicated that both the liver retinol and serum retinol were positively correlated with the abundance of Lactobacillus (r = 0.51, P < 0.05; r = 0.70, P < 0.001; Figure 8C) and negatively correlated with the abundance of Clostridia_UCG-014 (r = −0.66, P < 0.01; r = −0.49, P < 0.05; Figure 8C and Supplementary Table 5).
Vitamin A Deficiency Reduced the Expression of Brain-Derived Neurotrophic Factor and γ-Aminobutyric Acid Receptors in the Brain of the Amyloid Precursor Protein/Presenilin 1 Mice
To assess the effect of long-term VAD on the brain BDNF expression and secretion, we measured the BDNF in the cortex and hippocampus with IHC, western blot (WB), and RT-qPCR, respectively. As shown in Figure 9, compared with the VAN mice, the BDNF gene and the protein expression were markedly downregulated in the VAD-diet APP/PS1 transgenic mice (P < 0.05).
To verify the results of functional metagenomic prediction about the GABAergic synapse, Western blotting and RT-qPCR were employed to determine the expression of the GABA Aα2 and GABA B1b in the cortex and hippocampus of the APP/PS1 mice. As shown in Figures 9C,D, compared with the VAN group, the protein expression of GABA Aα2 and GABA B1b were downregulated in the cortex of the VAD-diet-fed mice, but only the GABA Aα2 was significantly changed. Moreover, the RT-qPCR results demonstrated that the transcription of GABA Aα2 in the cortex and hippocampus of APP/PS1 mice in the VAD group was notably lower than those in the VAN group (P < 0.05; Figure 9E), which was consistent with the WB results.
DISCUSSION
Vitamin A deficiency in older age is considered a modifiable risk factor for AD and other cognitive disorders (Woloszynowska-Fraser et al., 2020). However, the prevalence of VAD in developing countries, particularly among older persons with AD, demonstrates the importance of ensuring adequate VA intake for seniors. Diet-derived VA, which is mainly stored in the liver, is released in a homeostatically controlled way to provide a constant source of RA to cells of the body, including the brain cells (Pardridge et al., 1985). Retinol from the liver can also pass through the blood-brain barrier (BBB) by binding to the retinol-binding protein 1 on the choroid plexus and vascular walls of the brain (Woloszynowska-Fraser et al., 2020). After entering the BBB, retinol is sequentially transformed to retinal and RA by retinol dehydrogenase and retinaldehyde dehydrogenase (RALDH), respectively. VAD in rodents impairs both long-term potentiation and long-term depression in the brain, and the degree of their disruption correlates with the level of VA deprivation (Misner et al., 2001). VA deficiency may further aggravate the decline in RA signaling (Zeng et al., 2017a) because such conditions lead to a decrease in the expression of RARs and RXRs. In addition, studies in RARβ/RXRγ knockout mice have shown that lack of these receptors leads to decreased long-term declarative (conscious) memory and cognitive flexibility (Mingaud et al., 2008). This evidence is in line with our results that long-term VA deprivation resulted in a significantly decreased serum and liver retinol level and reduced the mRNA expression of the VA-related receptors (RARγ, RXRα, RXRβ, and RXRγ) and VA-related enzymes (RALDH1 and CYP26B1) in the cortex of the APP/PS1 mice.
FIGURE 7 | Linear discriminant analysis (LDA) effect size (LEfSe) analysis of the 16S rRNA gene sequencing data and the RT-qPCR verified abundance of Lactobacillus and Clostridia_UCG-014. (A) Histogram of the LDA scores computed for differentially abundant bacterial taxa between the VAD and VAN groups. Only taxa with an LDA score (log10) > 2 and P < 0.05 are listed. (B) Cladogram showing the impact of the diet on the taxonomic distribution of the bacteria. A total of 23 differentially abundant bacterial taxa were detected. Of those, seven bacterial taxa were significantly overrepresented in the VAN group (blue) and 15 bacterial taxa were overrepresented in the VAD group (red). (C) The RT-qPCR validation of the changes in the relative abundance of Lactobacillus and Clostridia_UCG-014 by using their specific primers. VAD, vitamin A deficiency diet; VAN, vitamin A normal diet. n = 10 per group. *P < 0.05 and ***P < 0.001.
Accumulating studies illustrate that Aβ deposition and Tau hyperphosphorylation are tightly related to the cognitive deficits and participate in the pathological mechanism of AD (Ramos-Rodriguez et al., 2013). Through IHC staining and Luminex immunoassay in the AD model mice, our research shows that the Aβ plaque load significantly increased after VA deprivation. Interestingly, we found that the Aβ levels were increased in the small intestine and colon of the mice after VA deprivation. In addition, previous studies have illustrated that enteric Aβ may translocate to the brain via axonal transportation in the vagal nerves (Sun et al., 2020). Although VAD did not affect the total Tau levels, it significantly increased the phosphorylated Tau protein, suggesting that VA deprivation facilitated Tau hyperphosphorylation in the cortex of mice. Additionally, the increased expression of BACE1 indicated that VAD might lead to Aβ plaque accumulation by dysregulating BACE1 expression, which is consistent with previous studies (Wang et al., 2015; Zeng et al., 2017b). In addition to the increase in Aβ, p-Tau, and BACE1, the decreased mRNA expression of ADAM10 suggested that VA deprivation may facilitate AD pathogenesis. These pathological manifestations are consistent with the behavioral results of the MWM test and SDPA test in our study, which are shown as aggravated learning and memory deficits in VAD mice.
FIGURE 8 (caption fragment). Significantly different functions of the gut microbiota at KEGG level 3 between the VAN and VAD group as predicted by the PICRUSt analysis (partial). (C) Correlation of the liver retinol and the serum retinol with the relative abundance of the top 50 bacterial genera in all mice. The red color represents a positive correlation. Blue color represents a negative correlation. VAD, vitamin A deficiency diet; VAN, vitamin A normal diet. n = 10 per group. *P < 0.05, **P < 0.01, and ***P < 0.001.
Interestingly, our results demonstrate that VA deprivation decreased the mRNA expression of RALDH1 and CYP26B1, suggesting that VA deprivation may lead to a decrease in the synthesis and decomposition of RA in the brain. However, the effect of VAD on RA production and metabolism needs further study to verify.
FIGURE 9 (caption fragment). The BDNF, GABA Aα2, and GABA B1b were quantitated according to the band density and normalized against the levels of β-actin. (E) RT-qPCR analysis of the expression of BDNF, GABA Aα2, and GABA B1b genes in the cortex and hippocampus of the APP/PS1 mice. VAD, vitamin A deficiency diet; VAN, vitamin A normal diet. n = 3 per group. *P < 0.05, **P < 0.01, ***P < 0.001; n.s., non-significant.
Considering that there are considerable amounts of undefined soluble amyloid protein secreted by symbionts, the intestinal microbiome may play an essential role in the pathogenesis of neurodegenerative diseases characterized by amyloidogenic features, such as AD. The contribution of the gut microbiota to the formation and dissemination of amyloid in the elderly has become more critical because their gastrointestinal epithelial cells and the BBB have become more permeable to small molecules. Moreover, Shen et al. (2017) reported that the cognitive deficits of APP/PS1 mice are related to specific gut microbiome states. Although individuals have different gut microbiota, it mainly contains members of four phyla (Firmicutes, Bacteroidetes, Actinobacteria, and Proteobacteria), among which Firmicutes and Bacteroidetes account for the largest proportion (Gerard, 2016). As a simple indicator of intestinal microbiota status, the Firmicutes/Bacteroidetes (F/B) ratio increases with age before adulthood (10.9) and then decreases with age (0.6) (Doifode et al., 2021). Vogt et al. (2017) demonstrated that the F/B ratio decreased in patients with AD. Our results also demonstrated that the F/B ratio decreased in the VAD-diet mice, implying that intestinal homeostasis was disrupted. The Venn diagram manifested that VAD led to more new genera in the intestinal microbiota, but most were not probiotics. The α-diversity is primarily associated with two factors: one is richness, the number of species; the other is evenness, the distribution of individuals across the community (Koh, 2018). Interestingly, the α-diversity increased in the VAD mice, which indicates either that there were more genera in the intestines of the VAD mice or that the abundances of genera were more evenly distributed in the VAD mice. This does not, however, mean that the VAD mice had better intestinal microbial homeostasis, as it may be caused by the increase or appearance of harmful bacteria, such as the 19 unique bacterial genera in the VAD mice (Supplementary Table 3).
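To make the richness/evenness distinction above concrete, here is a minimal, hypothetical Python sketch of how two of the α-diversity indices reported in this study (Shannon and Chao1) are computed from a single sample's genus-level counts; the count vector is invented for illustration, and real pipelines compute these indices on the full OTU/ASV table.

```python
import numpy as np

def shannon_index(counts):
    """Shannon diversity: -sum(p_i * ln(p_i)) over observed taxa."""
    counts = np.asarray(counts, dtype=float)
    counts = counts[counts > 0]
    p = counts / counts.sum()
    return float(-(p * np.log(p)).sum())

def chao1_index(counts):
    """Chao1 richness estimator: S_obs + F1*(F1 - 1) / (2*(F2 + 1))."""
    counts = np.asarray(counts)
    s_obs = int((counts > 0).sum())
    f1 = int((counts == 1).sum())   # singletons
    f2 = int((counts == 2).sum())   # doubletons
    return s_obs + f1 * (f1 - 1) / (2.0 * (f2 + 1))

# Hypothetical genus-level read counts for one fecal sample
sample = [120, 85, 40, 22, 10, 5, 2, 1, 1, 0]
print(round(shannon_index(sample), 3), round(chao1_index(sample), 2))
```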
The β-diversity uses the evolutionary relationship and abundance information of the sequences of each sample to estimate the relative distance between the samples to reflect significant differences in the microbial communities between the groups (Soininen et al., 2007). The PCoA plot indicated that the composition, species, and differentiation of gut microbiota altered significantly after long-term VAD.
The Mann-Whitney U test, LEfSe analysis, and RT-PCR validation showed that the genus Lactobacillus decreased significantly after long-term VAD (P < 0.05). In addition, Spearman's rank correlation analysis showed that the liver and serum retinol levels were positively associated with the abundance of Lactobacillus (r = 0.51, P < 0.05; r = 0.70, P < 0.001). Lactobacillus can enhance the intestinal mucosal barrier function by increasing the permeability resistance of HT-29 cells and Caco-2 cells, along with an increase in the phosphorylation level of tight junction proteins (Resta-Lenert and Barrett, 2003). It has been demonstrated that Lactobacillus can induce the expression of mucin in colonic epithelial cells, which forms a protective barrier between the body and the external environment (Caballero-Franco et al., 2007). Moreover, previous studies have demonstrated that Lactobacillus could stimulate the gut-brain axis and upregulate the expression of BDNF (Ranuh et al., 2019). The study by Buchman et al. (2016) highlights brain BDNF as a potentially substantial contributor to slowing cognitive deterioration in the elderly, particularly in the context of advancing AD neuropathology. Accumulating studies have shown that BDNF levels decrease in both the brain and serum of patients with AD (Carlino et al., 2013). In addition, Beeri and Sonnen (2016) argued that brain BDNF expression could be regarded as a biomarker for cognitive improvement against AD pathological progression. In neurons, BDNF mainly binds to the tyrosine kinase receptor B (TrkB) to activate intracellular signaling pathways, such as the PI3K-Akt, Ras/Raf-MEK-ERK, and PLC-PKC signaling pathways, thereby improving the viability and regeneration of neurons (Hang et al., 2021) and increasing synaptic plasticity and the learning and memory function of the brain (Zhang et al., 2012). On the contrary, BDNF-devoid neurons often manifest neurofibrillary tangles, a characteristic of AD, which was absent in densely BDNF-labeled neurons (Murer et al., 1999). Furthermore, Nigam et al. (2017) demonstrated that BDNF reduces Aβ production by enhancing α-secretase processing of APP. Therefore, BDNF may be an important mediator of the aggravated cognitive deficits after long-term VAD.
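A minimal sketch of the Spearman's rank correlation used here, run on hypothetical retinol and abundance values rather than the study's measurements, is:

```python
# Illustrative sketch (hypothetical data): Spearman's rank correlation between
# serum retinol level and Lactobacillus relative abundance.
from scipy.stats import spearmanr

serum_retinol = [0.8, 1.1, 0.6, 1.4, 0.9, 1.2, 0.7, 1.3]            # assumed values
lactobacillus_abund = [0.12, 0.22, 0.08, 0.30, 0.15, 0.25, 0.10, 0.27]

rho, p_value = spearmanr(serum_retinol, lactobacillus_abund)
print(f"Spearman rho = {rho:.2f}, P = {p_value:.3f}")
```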
Meanwhile, we observed that the abundance of Clostridia_UCG-014 increased significantly in the VAD mice. Similar to the results for Lactobacillus, this difference was also verified by the Mann-Whitney U-test, Metastats analysis, LEfSe analysis, Spearman's rank correlation analysis, and RT-qPCR. Clostridia_UCG-014 is a group of obligately anaerobic bacteria that is common in soil and the mammalian intestine (Suen et al., 2007). Previous studies have shown that an increased abundance of Clostridia_UCG-014 is related to elevated fasting blood glucose and colitis. Clostridia-related diseases mainly occur when the spores enter the body, colonize the host, germinate, and generate toxins (Mallozzi et al., 2010). Thus, the increase in Clostridia_UCG-014 after long-term VAD may cause more serious damage to the health of patients with AD.
Functional interpretation of the fecal metagenomes by PICRUSt revealed that pathways such as amino acid metabolism, GABAergic synapse, glutamatergic synapse, retinol metabolism, and lipoic acid metabolism displayed remarkable alterations after long-term VAD. The decrease in the retinol metabolism pathway is consistent with the decrease of the retinol level in the liver and the serum and the decrease of VA-related receptors in the brain. An AD microbiome cohort (Liu et al., 2019) revealed that decreased amino acid metabolism was found in the patients with cognitive impairment. In addition, the function and structure of glutamatergic synapses are affected by Aβ in the early stage of AD (Marcello et al., 2012). Holmquist et al. (2007) found that lipoic acid could be regarded as a neuroprotective and anti-inflammatory treatment for patients with AD. Besides, the GABAergic synapses play a neuroprotective role in the process of Aβ-induced neurotoxicity by releasing GABA (Cisternas et al., 2020). More interestingly, Bravo et al. (2011) highlighted that Lactobacillus could upregulate the GABA receptors in the hippocampus and cortex through the vagus nerve, which is consistent with our RT-qPCR and WB results for the GABA Aα2 and GABA B1b receptors. Moreover, BDNF can promote the synaptic formation and maturation of the GABAergic synapses (Huang et al., 1999), and the excitatory action of GABA has been shown to activate BDNF expression (Fukuchi et al., 2014). Therefore, we hypothesize that the intestinal microbiota, especially Lactobacillus, may play an important role in the reduction of BDNF and GABA receptors in the brain after long-term VAD in AD individuals, thereby increasing the production of Aβ and Tau hyperphosphorylation, and ultimately affecting the development of cognitive dysfunction. Although this hypothesis can be proposed from the results of our study, we do not know to what extent the AD-related pathological and behavioral impairment can be attributed to the reduction of BDNF and GABA receptors caused by VAD-induced microbiota disruption. The present study is only a preliminary observation, but it provides a new perspective on the gut-brain axis, namely that VAD exacerbates gut microbiota dysbiosis and cognitive deficits, and we will conduct further experiments to verify this hypothesis in the future.
CONCLUSION
In summary, the findings of the present study provide valuable evidence that long-term VAD exacerbates gut microbiota dysbiosis and cognitive deficits, highlighting the importance of monitoring VA levels during senescence and necessitating timely supplementation of VA in persons at higher risk of developing AD. Additionally, we posit that VAD may reduce the expression of GABA receptors and downregulate BDNF in the brain by disturbing the intestinal microbiota (especially Lactobacillus), thus leading to histological and cognitive impairment in AD. Further studies are warranted to verify this hypothesis and elucidate the mechanisms by which VAD exerts its effect on microecology.
DATA AVAILABILITY STATEMENT
The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found in the article/ Supplementary Material.
ETHICS STATEMENT
The animal study was reviewed and approved by the Ethics Committee of Capital Medical University (AEEI-2018-176).
AUTHOR CONTRIBUTIONS
P-GL and CY had primary responsibility for the final content and designed the research. B-WC and K-WZ conducted the experiment. B-WC analyzed the data and wrote the manuscript.
S-JC revised the manuscript. All authors contributed to the article and approved the submitted version.
FUNDING
This study was supported by grants from the National Natural Science Foundation of China (Nos. 81573128 and 81703216).
|
v3-fos-license
|
2024-03-08T06:44:32.292Z
|
2024-03-07T00:00:00.000
|
268264161
|
{
"extfieldsofstudy": [
"Physics"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://academic.oup.com/mnras/advance-article-pdf/doi/10.1093/mnras/stae689/56905043/stae689.pdf",
"pdf_hash": "8d9931ce82f592fa8795a5d28e1c7964149dd45e",
"pdf_src": "ArXiv",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46720",
"s2fieldsofstudy": [
"Physics"
],
"sha1": "8d9931ce82f592fa8795a5d28e1c7964149dd45e",
"year": 2024
}
|
pes2o/s2orc
|
The distance to CRL 618 through its radio expansion parallax
CRL 618 is a post-AGB star that has started to ionize its ejecta. Its central HII region has been observed over the last 40 years and has steadily increased in flux density at radio wavelengths. In this paper, we present data that we obtained with the Very Large Array in its highest frequency band (43 GHz) in 2011 and compare these with archival data in the same frequency band from 1998. By applying the so-called expansion-parallax method, we are able to estimate an expansion rate of 4.0$\pm$0.4 mas yr$^{-1}$ along the major axis of the nebula and derive a distance of 1.1$\pm$0.2 kpc. Within errors, this distance estimation is in good agreement with the value of ~900 pc derived from the expansion of the optical lobes.
INTRODUCTION
The fate of a star with a main sequence mass in the range from 0.8 to 8 M ⊙ is well established: it goes through the Asymptotic Giant Branch (AGB) phase, then into the Planetary Nebula (PN) phase and eventually it ends its evolution as a white dwarf (Kwok et al. 1978).However, the details of each evolutionary stage, such as the formation and the early evolution of PNe are not clear.For example, it is still uncertain which mechanism/s operate to produce the often complex morphology observed in a fraction of PNe and post-AGB stars, while the circumstellar envelopes (CSEs) around their progenitors (AGB stars) show regular and symmetric structures.
It is commonly accepted that the circumstellar shells of PNe are the product of the interaction between the residual, slowly expanding (∼20 km s −1 ) AGB envelope and a subsequent, rapid (100-1000 km s −1 ) shaping agent.High-speed collimated outflows, or jets, that operate during the early post-AGB evolutionary phase, or even towards the end of the AGB, have been proposed as the primary agent in shaping CSEs, the origin of such winds being linked to binarity (Soker 1998;De Marco & Izzard 2017).
To investigate this matter, many authors have tried to identify very young PNe or proto-Planetary Nebulae (PPNe), but this has turned out to be quite difficult, due to the short duration (≈1000 yr) of the transition from the AGB to the PN stage. One of the few objects that can be used for such studies is CRL 618, a PPN that started its post-AGB stage about 200 years ago and is rapidly evolving towards the PN stage (Westbrook et al. 1975). The rapid evolution of this object is explained by the nature of its central object, which has been found to be an active symbiotic, whose companion star displays a WC8-type spectrum (Balick et al. 2014). This source provides us with a unique opportunity to study the physical processes taking place immediately before the birth of a planetary nebula. Multi-wavelength, multi-epoch observations (Sánchez Contreras et al. 2002; Sánchez Contreras & Sahai 2004; Sánchez Contreras et al. 2004; Pardo et al. 2004; Soria-Ruiz et al. 2013) have provided us with a complex picture of the source, consisting of: i) a large (∼20′′) molecular envelope made of the remnant AGB wind expanding at low velocity (∼17 km s⁻¹); ii) multiple optical lobes, where shocked gas expands at high velocity (∼200 km s⁻¹); iii) a fast (up to ∼340 km s⁻¹) bipolar molecular outflow along the polar axis; iv) a dense, compact (∼1.5′′) molecular core surrounding the star and slowly (≤12 km s⁻¹) expanding; v) a compact (about 0.2 × 0.4′′) HII region close to the central object, indicating the onset of ionisation in the envelope.
When the central star of a PPN becomes hot enough to ionize its CSE (T eff ≥ 20 000-30 000 K), radio continuum emission from the ionized gas can be detected in the centimetric range, and this has been used to search for hot post-AGB stars where the ionisation has recently started (Umana et al. 2004; Cerrigone et al. 2011, 2017). CRL 618 was first detected at radio wavelengths by Wynn-Williams (1977) and was soon found to display increasing flux density over time, which was interpreted as due to the expansion of the ionized region (Kwok & Feldman 1981). Since then, several works have addressed the variability of the radio flux from this object, which displays a steady increase in the optically thick centimetric range and a more erratic behavior in the millimeter range, where the emission is optically thin (Martin-Pintado et al. 1988; Sánchez Contreras & Sahai 2004; Planck Collaboration et al. 2015; Sánchez Contreras et al. 2017). While the increase in flux density at frequencies where the emission is optically thick is interpreted as the expansion of the emitting surface, the erratic variability at frequencies of optically thin emission has been seen as an indication of on-going activity from a stellar post-AGB wind (Sánchez Contreras et al. 2017).
CRL 618 is also the only proto-PN studied in millimeter radio recombination lines (RRLs), which has allowed estimating a remarkable mass-loss rate of ∼8.4×10 −6 M ⊙ yr −1 for its post-AGB wind (Martin-Pintado et al. 1988;Sánchez Contreras et al. 2017).The existence of this still active ionized wind from the central star has also been invoked by Tafoya et al. (2013), who analyzed radio data of CRL 618 spanning years 1982-2007 and concluded that the ionisation of the circumstellar material started around 1971.Tafoya et al. (2013) also confirmed that the nebula is ionisation-bounded in the direction of its minor axis, indicating a much larger density in the equatorial direction than along the polar axis.
The distance to CRL 618 is still somewhat uncertain.Schmidt & Cohen (1981) estimated a distance of 1.8 kpc, assuming a main sequence luminosity of 2 × 10 4 L ⊙ for the B0 central star.Goodrich (1991) argued instead for a smaller luminosity of 10 4 L ⊙ , hence a distance of 0.9 kpc, although based on the radial velocity of the star and the Galactic rotation curve, they also mentioned a possible value of 3.1 kpc.Knapp et al. (1993) estimated a bolometric flux of 2 × 10 −7 erg cm −2 s −1 and quoted a distance of 1.1 kpc, for an assumed luminosity of 10 4 L ⊙ .Finally, Sánchez Contreras & Sahai (2004) measured proper motions in the high-velocity bipolar clumps that support the value of 0.9 kpc indicated by Goodrich (1991).
In the context of estimating the distance to a star, geometrical methods are the most reliable ones, when possible.One of these is the so-called expansion parallax, which allows deriving the distance by measuring the angular expansion of a structure such as a ring or shell, if its linear expansion velocity is known.Since the development of a PN implies the existence and expansion of an ionized shell, this method can be applied to this kind of sources even in their early phases, if their angular expansion and linear velocity can be measured.A version of this method was developed by Masson (1986) based on the comparison of radio interferometric visibility data sets and was later applied to several PNe (Guzmán-Ramírez et al. 2011, and references therein).
OBSERVATIONS AND DATA REDUCTION
We observed CRL 618 with the NSF Karl G. Jansky Very Large Array (VLA) telescope operated by the National Radio Astronomy Observatory (NRAO), while the array was in configuration A on 2011 June 12, at a frequency of ∼43 GHz (7 mm).The data were acquired within project 11A-171 (PI: G. Umana).The flux calibrator was 3C 138 and the complex-gain calibrator was JVAS J0414+3418.Two spectral windows were set up, spanning a total of 64 MHz in full polarization.The duration of the observations was 2 hours.
The data were reduced in CASA 5.6.1-8 (McMullin et al. 2007) with the VLA pipeline delivered with the software package, after correcting the intents of the astrophysical sources in it (intents are used by the pipeline to identify calibrators, in our case pointing, bandpass, flux, and complex-gain calibrators, and science targets in an observation): 3C 138 turned out not to have a sufficient signal-to-noise ratio per channel to be used as bandpass calibrator, hence it was used only as flux calibrator, while J0414+3418 was used as both bandpass and complex-gain calibrator. After a first run of the pipeline, the data were inspected and some necessary manual flags were identified. The whole calibration was then repeated, re-starting from the raw data and including the manual flags. Given the small bandwidth of the data set, no correction was performed for the spectral slope introduced by using the complex-gain calibrator to calibrate the bandpass.
In this paper, we also make use of the data taken with the VLA on 1998-05-02 within project AW485 (PI: J. Wrobel).The data were acquired in continuum mode and span 100 MHz in frequency, centered around 43.3 GHz in full polarization.The array was in A configuration, but only a subarray of 12 antennas was used at 43 GHz, while a simultaneous subarray was used to observe in different frequency bands.The duration of the observation was about 8 hours.
This second data set was calibrated by explicitly setting the flux scale of the phase calibrator JVAS 0443+3441 to the same values reported by Tafoya et al. (2013), since the flux calibrator in the original data is now known to be variable, hence not reliable. It should be noted that the critical parameter for the analysis carried out in this work is not the absolute flux calibration in each data set, but the size of the emitting area, which is identified by a signal-to-noise ratio larger than 5 and therefore does not depend on the absolute calibration. Opacity and gain-curve corrections were applied to the data, then a phase-only calibration table was generated, which was applied to generate the final amplitude and phase calibration. The reduction of the AW485 data was performed with the same CASA version used for the 11A-171 data set.
Both the 1998 and the 2011 data sets were self-calibrated only in phase, after the initial phase-reference calibration.This led to final flux densities of 0.98 ± 0.05 Jy beam −1 and 1.20 ± 0.06 Jy beam −1 in 1998 and 2011, respectively.The rms noise in each of the maps was instead of 0.4 mJy beam −1 in 11A-171 and 0.8 mJy beam −1 in AW485.
In Figure 1, we display the final (u, v) coverage for CRL 618 in the two data sets at 43 GHz. It is evident that the sampling is less dense in AW485 due to the smaller number of antennas, but at the same time, the longer integration time allows for a more uniform distribution over the plane. AW485 achieves about the same baseline lengths in both u and v, which is expected to turn into a rounder synthesized beam. The 11A-171 data set achieves about the same baseline lengths in v but shorter in u. Although the sampling is clearly different in the two data sets, these differences do not seem to be so substantial as to bias the final imaging products and their analysis, especially for values of |u| and |v| smaller than 3.5 Mλ, which corresponds to a beam size of ∼0.06′′. The differences can then be well mitigated through data weighting and beam convolution in the imaging step. In Figure 2, we display the dirty beams of the two data sets.
The 80th percentiles for the shortest and longest baselines in the two final data sets are respectively 3744 m and 15407 m in 11A-171 and 5377 m and 20408 m in AW485.The sampling in AW485 is therefore shifted towards longer baselines in comparison to 11A-171, due to the longer integration and different antenna distribution.To overcome the different angular resolution, we convolved the final products to a common beam with size matching both data sets.
NEBULAR EXPANSION
We performed our analysis of the expansion of the circumstellar ionized nebula in CRL 618 with the AW485 and 11A-171 data sets at 43 GHz following the method described by Masson (1986), to obtain a difference image from two radio interferometric data sets.
As explained above, the data sets were initially calibrated independently, then a round of self-calibration in amplitude and phase was executed on 11A-171, assuming as a model the image from AW485.As required by the radio parallax method (Masson 1986), this was done to align the two images on the same amplitude and phase reference, removing any offsets.
After this step, the AW485 data were subtracted in the visibility plane from 11A-171, using the CASA task uvsub.At this point, imaging of the difference data set was performed with tclean in CASA 5.6.1-8.In Figure 3, we display the images of the source at the two epochs and the difference obtained.The width of the convolution beam was set to 0.06 ′′ for both data sets.
The difference image (the bottom plane in Figure 3) displays a clear ellipse of emission.Along the direction of the major axis of the ellipse, two drops in emission at the eastern-most and westernmost locations can be seen (indicated by two arrows in the figure).This is compatible with the known difference in density between the material along the polar axis, which has been mostly swept away by the fast wind, and the material in the perpendicular direction, which is still an almost unaltered residual from the AGB phase (Sánchez Contreras et al. 2002).The inner region has negative emission (i.e., the older image has larger flux density in that area), as expected due to the expansion.
Expansion-parallax distance
After detecting the expansion in the difference image, we estimated the distance to our target as described by Guzmán-Ramírez et al. (2011).
We started from the image of CRL 618 obtained with the AW485 data set, then created a set of expanded images, each of them with an expansion factor of 1 + ε, where ε ranged from 0.1 to 0.3 in steps of 0.003. From every expanded image we then subtracted the AW485 one, resulting in a grid of model difference images. Every model difference image thus obtained was compared to the real difference image pixel by pixel, calculating the value

χ² = Σ_{ij} (m_ij − d_ij)² / σ²,    (1)

where m_ij is the value of the pixel with coordinates i and j in the model difference, d_ij that in the real difference, and σ is the standard deviation of the m_ij − d_ij values over the whole map.
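As an illustration of this grid search (not the original reduction, which was performed on the actual VLA maps), the following Python sketch applies the same logic to synthetic images; the per-pixel noise σ is kept fixed here, which is a simplifying assumption, and scipy.ndimage.map_coordinates stands in for the image expansion.

```python
# Minimal sketch of the expansion-parallax chi-square fit on synthetic images:
# each trial expands the "1998" image by 1 + eps about the centre, subtracts the
# original, and compares the model difference with the observed difference image.
import numpy as np
from scipy.ndimage import map_coordinates

def expand(image, eps):
    """Resample `image` so that structures appear magnified by a factor 1 + eps."""
    ny, nx = image.shape
    y, x = np.mgrid[0:ny, 0:nx].astype(float)
    cy, cx = (ny - 1) / 2.0, (nx - 1) / 2.0
    # Sampling the original at shrunken coordinates expands the structure
    coords = [(y - cy) / (1.0 + eps) + cy, (x - cx) / (1.0 + eps) + cx]
    return map_coordinates(image, coords, order=1, mode="constant", cval=0.0)

def chi2(model_diff, real_diff, sigma):
    return float(np.sum((model_diff - real_diff) ** 2) / sigma**2)

# Synthetic stand-ins for the 1998 image and the observed 2011-1998 difference
yy, xx = np.mgrid[0:128, 0:128]
img_1998 = np.exp(-((yy - 64.0) ** 2 + (xx - 64.0) ** 2) / (2 * 10.0**2))
rng = np.random.default_rng(0)
noise_sigma = 0.01                         # assumed fixed noise level
real_diff = expand(img_1998, 0.16) - img_1998 + rng.normal(0, noise_sigma, img_1998.shape)

eps_grid = np.arange(0.10, 0.30, 0.003)
chi2_values = [chi2(expand(img_1998, e) - img_1998, real_diff, noise_sigma)
               for e in eps_grid]
best_eps = eps_grid[int(np.argmin(chi2_values))]
print(f"best-fitting expansion factor: 1 + {best_eps:.3f}")   # close to 0.160
```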
For a direct comparison, we display in Figure 4 the contours of the real difference image in black and those of the best-model difference image in red (contour levels of −5, 5, 7, 9, 11, 13, and 15 times the rms of about 0.8 mJy beam⁻¹ and 1 mJy beam⁻¹, respectively; negative levels are displayed as dashed lines and positive ones as solid lines).
The values of χ² obtained are plotted in Figure 5 as a function of ε. The minimum was found by fitting a cubic curve in the range where 0.12 ≤ ε ≤ 0.2 (Figure 6) and then calculating its analytical minimum. χ² was then found to be minimum at ε = 0.160 ± 0.003, where the error is the statistical error on the fit to the curve of χ² values.
The distance to the object can be calculated with the following formula (Guzmán-Ramírez et al. 2011):

D [pc] = 211 v_exp [km s⁻¹] / θ̇ [mas yr⁻¹],    (2)

where v_exp is the expansion velocity and θ̇ is the angular expansion rate in milliarcsec (mas) per year, which is equivalent to

θ̇ = ε θ / 13.1205 yr,    (3)

where ε is the expansion factor that we have determined, θ the angular radius at the time of the AW485 observation in mas, and 13.1205 yr is the time between the AW485 and 11A-171 observations.
To obtain an estimation of the size of the nebula in 1998, we first rotated the AW485 image by 2.5 degrees in the direction North to East, to align its main axis with the E-W direction; then slices of the nebula were taken in the N-S and E-W directions separately, measuring the length of every slice (number of pixels above 5σ multiplied by the pixel size) and summing the flux density values along each of them. Finally, a flux-weighted mean size was calculated as

⟨L⟩ = Σ_i F_i L_i / Σ_i F_i,

where L_i is the length of every slice and F_i the total flux from it. This returned a size of 0.34′′ × 0.65′′, with a relative error of about 10%, considering the error of 5% on the flux density measurements. We can then use the lengths of the semi-axes to derive the angular expansion along the polar direction and orthogonally to it. This translates according to Eq. 3 into θ̇ = 2.1 ± 0.2 mas yr⁻¹ along the minor axis and 4.0 ± 0.4 mas yr⁻¹ along the major axis, between years 1998 and 2011. These compare well with the values of 2.3 ± 0.6 mas yr⁻¹ and 4.7 ± 1.1 mas yr⁻¹ found by Tafoya et al. (2013) and averaged over time until 2007.
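A minimal sketch of this flux-weighted size measurement, assuming a 2D image array and hypothetical rms and pixel-scale inputs, could look as follows:

```python
# Illustrative sketch (assumed inputs): flux-weighted mean size from image slices,
# <L> = sum(F_i * L_i) / sum(F_i), where L_i is the length of slice i above 5*rms
# and F_i its summed flux density.
import numpy as np

def flux_weighted_size(image, rms, pixel_scale_mas, axis=0):
    """Mean size (mas) along `axis`, weighting each slice by its total flux."""
    mask = image > 5.0 * rms
    lengths = mask.sum(axis=axis) * pixel_scale_mas      # L_i for each slice
    fluxes = np.where(mask, image, 0.0).sum(axis=axis)   # F_i for each slice
    keep = fluxes > 0
    return float(np.sum(fluxes[keep] * lengths[keep]) / np.sum(fluxes[keep]))

# Example call with hypothetical values:
# size_ew_mas = flux_weighted_size(img, rms=0.8e-3, pixel_scale_mas=10.0, axis=0)
```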
In the derivation of the distance from Eq. 2, a critical value is the velocity. It has been shown that expansion-parallax distances need to be corrected if the velocity is obtained from optical spectral lines, due to the different velocities of the material traced by the radio continuum and by the lines of ionized elements (Schönberner et al. 2018). To avoid this complication, we make use of the velocity derived from radio data by Martin-Pintado et al. (1988) and Sánchez Contreras et al. (2017), who estimate that the HII region expands at ∼20 km s⁻¹ by modelling its radio recombination lines (namely, H30α, H35α, and H41α), thus tracing the overall motion of the ionized gas, without abundance or line-excitation biases. As the expansion occurs mainly along the major axis, we associate this velocity (with a 10% error) with the expansion rate in this direction and thus obtain a distance of 1.1 ± 0.2 kpc. Though slightly larger, this value compares well with that of ∼900 pc derived from the expansion in the lobes, seen in the optical images of CRL 618 (Sánchez Contreras & Sahai 2004).
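As a quick numerical check (using the standard proper-motion-to-distance relation, which is assumed here to be the form adopted by Guzmán-Ramírez et al. 2011), the quoted velocity and expansion rate indeed give a distance of about 1.1 kpc:

```python
# Numerical check of the expansion-parallax distance with the values quoted above.
# D [pc] = v_exp [km/s] / (4.74 * mu [arcsec/yr]) is the standard relation assumed here.
v_exp_kms = 20.0          # expansion velocity from the RRL modelling
mu_mas_per_yr = 4.0       # angular expansion rate along the major axis
distance_pc = v_exp_kms / (4.74 * mu_mas_per_yr / 1000.0)
print(f"distance ~ {distance_pc:.0f} pc")   # ~1055 pc, i.e. about 1.1 kpc
```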
CONCLUSIONS
We have analyzed data of CRL 618 obtained in 1998 and 2011 at 43 GHz (7 mm) with the VLA.The 2011 data are presented here for the first time.A size increase of the nebula is immediately evident by eye in the images from the different epochs.The expansion was estimated by imaging the data after subtraction in the visibility plane and comparing this to a grid of model difference images obtained expanding the image from 1998.This assumes that the expansion is self-similar, which is not strictly true for CRL 618, because its major and minor axes have been found to expand at different rates.
Nevertheless, our analysis indicates that at first approximation this can be neglected and the method still returns reliable results. The expansion rates that we find are in fact compatible within errors with previous independent estimations: we find 2.1 ± 0.2 mas yr⁻¹ along the minor axis and 4.0 ± 0.4 mas yr⁻¹ along the major axis, leading to a distance to CRL 618 of 1.1 ± 0.2 kpc. While previous estimations of the distance to CRL 618 were derived from an assumed intrinsic luminosity of the source, this is its first direct measurement. The present value matches within errors with that of 0.9 kpc, which has been widely adopted in the last twenty years, and coincides with that given by Knapp et al. (1993) for a luminosity of 10⁴ L⊙.
Figure 1 .
Figure 1. (u, v) coverage of data sets 11A-171 and AW485 after calibration and flagging.
Figure 3 .
Figure 3. Images of CRL 618 in Q band from projects AW485 (top) and 11A-171 (middle); contours start at 5×rms and increase by steps of 5×rms (−5×rms is also plotted as a dashed line); the convolution beam of 0.06 ′′ is plotted in the bottom left; the rms noise is 0.8 and 0.4 mJy beam −1 for the AW485 and 11A-171 data, respectively.The bottom image displays the difference of the two data sets, linearly stretched between −5×rms and its maximum, with two arrows pointing at drops in flux density along the shell.
Figure 5 .
Figure 5. Distribution of χ² (as defined in Eq. 1) as a function of ε (the expansion factor minus one).
Figure 6 .
Figure 6. Cubic curve fitted to the χ² values in the range 0.2 ≤ ε ≤ 0.225.
|
v3-fos-license
|
2018-12-12T19:54:03.055Z
|
2018-12-01T00:00:00.000
|
54486658
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/2072-6643/10/12/1875/pdf",
"pdf_hash": "04fc4620f636aea687861b2d970bf5d5cfffcf43",
"pdf_src": "ScienceParseMerged",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46722",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "04fc4620f636aea687861b2d970bf5d5cfffcf43",
"year": 2018
}
|
pes2o/s2orc
|
Influence of Parental Healthy-Eating Attitudes and Nutritional Knowledge on Nutritional Adequacy and Diet Quality among Preschoolers: The SENDO Project
Parental nutrition knowledge and attitudes play a fundamental role in their children’s food knowledge. However, little is known about their influence on their children’s diet quality and micronutrient intake. Thus, we aimed to assess the association of parental nutrition knowledge and healthy-eating attitudes with their children’s adherence to the Mediterranean dietary pattern and micronutrient adequacy. Parental healthy-eating attitudes and knowledge of the quality of their child’s diet as well as anthropometric, lifestyle, and nutrient intake characteristics were recorded with a basal questionnaire that included a 140-item-food frequency-questionnaire. A total of 287 pre-school children were included in the analyses. Intake adequacy was defined using the Estimated Average Requirements (EAR) cut-off point method. We developed a parental nutrition knowledge and healthy-eating attitudes scores and evaluated whether they were independently associated with 1) children’s inadequate intake (probability of failing to meet ≥3 EAR) of micronutrients, using logistic regression analyses, and 2) children’s diet quality (adherence to the Mediterranean Diet according to a Mediterranean Diet Quality Index for children and adolescents, the KIDMED index), using multiple linear regression models. A higher score in the parental healthy-eating attitudes score was associated with lower risk of failing to meet ≥3 EAR compared with the reference category (odds ratio (OR): 0.3; 95% confidence interval (CI) 0.12–0.95; p for trend: 0.037) and a higher adherence to the Mediterranean diet in the most adjusted model (β coefficient: 0.34; 95% CI 0.01–0.67; p for trend: 0.045). Our results suggest a positive association of parental healthy-eating attitudes with nutritional adequacy and diet quality in a sample of Spanish preschoolers. Public health strategies should focus on encouraging parental healthy-eating attitudes rather than simply educating parents on what to feed their children, recognizing the important influence of parental behavior on children’s practices.
Introduction
Healthy eating during childhood may be one of the most determinant factors in human health and it is a well-known growth and developmental booster [1].Eating behaviors are shaped by intrinsic (genetic, age, and sex) and environmental factors, such as family, friends, or neighborhood [2].Parents are important agents in the promotion of health, behavior, and education of their children; they create food environments and play a key role in structuring their children's first experiences with food and eating through their own beliefs, food practices, perspectives, eating attitudes, knowledge, and understanding of the benefits of food and nutrients on health [3].Particularly, parental nutrition knowledge and attitudes have been described as important factors for children's healthy food knowledge [3].However, little is known about children's diet quality and micronutrient adequate intake in relation to their parents' or caregivers' attitudes.
Knowledge is a complex scheme of beliefs, information, and skills gained through experience and education [3].In terms of nutrition and eating, knowledge can be described as the familiarization of the benefits of food and nutrients on health and the ability to remember and recall specific terminology and information on the subject.
Eating attitudes are emotional, motivational, perceptive, and cognitive beliefs that influence the behavior or practice of an individual whether or not they have knowledge [4].Attitudes are measured to identify individual positive or negative disposition regarding a health problem, dietary practices, nutritional recommendations, dietary guidelines, or dietary preferences.
Dietary intake and diet quality are difficult to measure.Despite some limitations, Food Frequency Questionnaires (FFQs) are considered the most efficient and feasible method to assess usual dietary intake, whereas diet quality indices can be defined either a priori or a posteriori using the information gathered by the FFQs.Diet quality indices evaluate the overall diet quality, with a higher score usually meaning higher quality diet or higher adherence to a particular dietary pattern [5].Thus, diet quality indices allow individuals to be categorized according to the quality of their diet.
Useful instruments are available to assess the relationship of nutrition knowledge and attitudes with dietary intake [3].Previous studies found a significant association between nutritional knowledge and self-diet quality.In an adult context, positive food-related attitudes have been associated with better diets measured by the Healthy Eating Index and by a higher consumption of vegetables and fruits [6].However, evidence of the specific association of parental nutrition knowledge, and especially eating attitudes, with their children's nutritional adequacy and diet quality is scarce.
The aim of this study was to assess the association of certain parental nutrition knowledge and healthy-eating attitudes with their children's nutritional adequacy and diet quality (adherence to the Mediterranean dietary pattern) in a sample of Spanish preschoolers.Findings of this investigation will help improve our comprehension of the elements that may influence children's dietary patterns and provide valuable understanding about whether nutritional knowledge and healthy-eating attitudes are equally important.
Study Aim, Design, and Setting
The Seguimiento del Niño para un Desarrollo Óptimo (SENDO) project is a prospective, dynamic, and multipurpose pediatric cohort designed to evaluate the effect of diet and lifestyle on the health of children and adolescents. The SENDO project started in 2015. Inclusion criteria for the SENDO project were children aged 4-7 years at recruitment who were living in Navarra (Spain). There were no exclusion criteria. The study was conducted according to the guidelines laid down in the Declaration of Helsinki and all procedures involving human subjects were approved by the ethical committee of clinical research of the Government of Navarra (Pyto2016/122). Written informed consent was obtained from all parents before the initiation of the study. This cross-sectional study was performed using data collected between 2015 and 2017. For the present study, parents who had handed in the informed consent and replied to the baseline questionnaire before the end of 2017 (n = 388) were eligible (Figure 1). We excluded participants who reported total energy intake outside the defined limits (<P1 or >P99) or micronutrient intakes ≥3 standard deviations from the mean. We opted for a complete case analysis. Therefore, the final sample consisted of 287 participants with no missing information.
Exposure Assessment
The main source of retrospective information regarding children's medical records, food intake, dietary habits, and physical activity was provided by the parents through their answers to the basal questionnaire.Anthropometric measurements (i.e., weight and height) were self-reported (reported by parents) at baseline.Participants received detailed information about how to measure children's weight and height.This questionnaire also included a semi-quantitative FFQ, commonly used and strongly recommended, to evaluate with accuracy dietary patterns as well as diet quality.
The use of supplements was collected at baseline.Participants were asked whether they used nutritional supplements during the previous year (answer: yes vs. no).If so, they were asked to specify brand and dosage.
Information about physical activity was gathered using a specific questionnaire previously used in pediatric populations [7].The questionnaire consisted of 14 activities, including sports, and 9 possible answers from "never" to "more than 11 hours per week".Participants were asked how often they practiced each of the activities in the previous year.
Parental nutrition knowledge and eating attitudes were evaluated through 2 different scores. The nutrition knowledge score consisted of 10 questions evaluating whether parents knew how often their children should consume 10 different food groups (Table S1), with 9 possible answers from "Never" to "≥6 times per day". Each question was assigned 1 point if the answer was correct according to the dietary recommendations [8] and 0 points otherwise. Thus, the final score ranged from 0 to 10 points. The healthy-eating attitudes score consisted of 8 questions (Table S2) with 2 possible answers (Yes or No). Each affirmative answer was assigned 1 point and each negative answer 0 points. Thus, the eating attitudes score ranged from 0 to 8 points. The latter score was previously used in a cohort of Spanish university graduates [9].
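A minimal sketch of how the two scores could be tallied is shown below; the items, answers, and recommended frequencies are hypothetical placeholders rather than the actual content of Tables S1 and S2.

```python
# Illustrative tallying of the two parental scores (hypothetical items and answers).

def knowledge_score(answers, correct_answers):
    """1 point per food group whose recommended frequency is answered correctly (0-10)."""
    return sum(1 for item, ans in answers.items() if ans == correct_answers.get(item))

def attitudes_score(yes_no_answers):
    """1 point per affirmative healthy-eating attitude (0-8)."""
    return sum(1 for ans in yes_no_answers.values() if ans == "yes")

parent_knowledge = {"fruit": "3/day", "vegetables": "2/day", "fish": "3-4/week"}
recommended = {"fruit": "3/day", "vegetables": "2/day", "fish": "2-3/week"}
parent_attitudes = {"try_more_fruit": "yes", "reduce_fat": "no", "try_more_fish": "yes"}

print(knowledge_score(parent_knowledge, recommended))  # 2 of the 3 items shown
print(attitudes_score(parent_attitudes))               # 2 affirmative answers
```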
Outcome Assessment
Dietary intake was evaluated using the information gathered through a semi-quantitative FFQ consisting of 140 items with 9 possible answers, from "Never/almost never" to "≥6 times per day". The nutrient content of each item was calculated by multiplying the intake frequency by the edible portion and by the nutrient composition of the specified portion size, using data from updated Spanish food composition tables [10] and online databases. The total nutrient intake was obtained by summing the nutrient contribution of each item.
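The following sketch illustrates this frequency × portion × composition computation for a single nutrient; the food items, portion sizes, and composition values are invented for illustration and do not come from the Spanish food composition tables.

```python
# Sketch of turning FFQ answers into a daily nutrient intake (hypothetical values).
# intake = servings/day * edible portion (g) * nutrient per 100 g / 100

ffq = {                      # item -> servings per day (converted from the 9 categories)
    "whole milk": 2.0,
    "white fish": 0.3,
}
portion_g = {"whole milk": 200.0, "white fish": 120.0}
calcium_mg_per_100g = {"whole milk": 120.0, "white fish": 30.0}

calcium_intake = sum(
    freq * portion_g[item] * calcium_mg_per_100g[item] / 100.0
    for item, freq in ffq.items()
)
print(f"estimated calcium intake: {calcium_intake:.0f} mg/day")
```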
We assessed the intake of the subsequent 19 micronutrients with known public health importance: Ca, Fe, I, Mg, Zn, Na, K, P, and Se, and vitamins B 1 , B 2 , B 3 , B 6 , B 9 , B 12 , C, A, D, and E. In order to define intake adequacy, we used the Dietary Reference Intake (DRI), particularly, the Estimated Average Requirements (EAR).The EAR of a nutrient is defined as the amount that satisfies the needs of a 50% of a homogenous healthy group.
For estimating the prevalence of inadequate micronutrient intake, we used the EAR cut-off point method, which is based on the assessment of the proportion of individuals with nutrient intake below the EAR [11]. The outcome was defined as failing to meet the recommendations for ≥3 micronutrients, which represented 10-20% of the studied micronutrients and was considered clinically relevant.
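A short sketch of this counting rule is given below; the EAR values and the child's intakes are placeholders, not the actual DRI reference values.

```python
# Sketch of the EAR cut-off point method and the study outcome (>=3 inadequate
# micronutrients). All numbers are placeholders.
ear = {"calcium_mg": 800, "iron_mg": 4.1, "zinc_mg": 4.0, "vitD_ug": 10}

def n_inadequate(intakes, ear):
    """Number of micronutrients whose intake falls below the EAR."""
    return sum(1 for nutrient, value in intakes.items() if value < ear[nutrient])

child = {"calcium_mg": 650, "iron_mg": 5.0, "zinc_mg": 3.2, "vitD_ug": 2.5}
inadequate = n_inadequate(child, ear)
outcome = inadequate >= 3           # clinically relevant inadequacy as defined above
print(inadequate, outcome)          # 3 True
```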
The quality of the diet was defined using KIDMED, an a priori defined dietary index used to evaluate the adherence to the Mediterranean dietary pattern in children and adolescents. The KIDMED index consists of 16 items (Table S3): 12 items with a score of 0 or 1, and 4 items with a score of -1 or 0. Thus, the KIDMED index ranges from -4 to 12 points [12]. According to their score, participants were considered as having poor (≤3 points), medium (4-7 points), or high adherence (≥8 points).
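The scoring logic can be sketched as follows (item wording abbreviated and answers hypothetical; only some of the items are shown, and the real items are listed in Table S3).

```python
# Sketch of KIDMED scoring: the 12 positive items add +1 and the 4 negative items
# subtract 1, so the full index ranges from -4 to 12.
positive_items = {          # True adds +1 (six of the twelve +1 items shown)
    "fruit_daily": True, "second_fruit_daily": False, "vegetables_daily": True,
    "fish_regularly": True, "pulses_weekly": True, "cereal_breakfast": True,
}
negative_items = {          # True subtracts 1
    "fast_food_weekly": False, "skips_breakfast": False,
    "commercial_baked_goods_breakfast": True, "sweets_daily": False,
}

kidmed = sum(positive_items.values()) - sum(negative_items.values())
adherence = "high" if kidmed >= 8 else "medium" if kidmed >= 4 else "poor"
print(kidmed, adherence)    # 4 medium
```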
Statistical Analysis
We compared participant's baseline characteristics based on 3 categories of parental nutrition knowledge and the healthy-eating attitudes score.The results are presented as percentages for categorical variables and means (standard deviations) for quantitative variables.χ 2 tests (or Fisher exact test) or ANOVA were used to assess the statistical significance of the differences of proportions and means, respectively.
Logistic regression models were used to assess the relationship of parental nutrition knowledge score and healthy-eating attitudes score with the risk of inadequate nutrient intakes.Crude odds ratios (OR) and multivariate-adjusted OR of failing to comply with ≥3 DRI were calculated.The lowest category (0-4 points) was always used as the reference group in all the analyses.Analyses were adjusted for the main known confounders regarding nutritional adequacy and diet quality based on a previous literature search on the topic.
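A hedged sketch of such a model on synthetic data (assuming statsmodels is available; variable names and the confounder set are simplified) is shown below; the odds ratios are obtained by exponentiating the fitted coefficients relative to the lowest attitude category.

```python
# Sketch of estimating the OR of inadequate intake across attitude-score categories
# with logistic regression (synthetic data, simplified adjustment set).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 287
df = pd.DataFrame({
    "attitude_cat": rng.choice(["low", "medium", "high"], size=n),
    "age": rng.integers(4, 8, size=n),
    "sex": rng.choice(["boy", "girl"], size=n),
})
# Synthetic outcome: higher attitude category -> lower probability of inadequacy
p = np.where(df["attitude_cat"] == "high", 0.15,
             np.where(df["attitude_cat"] == "medium", 0.30, 0.45))
df["inadequate"] = rng.binomial(1, p)

model = smf.logit("inadequate ~ C(attitude_cat, Treatment('low')) + age + C(sex)",
                  data=df).fit(disp=0)
odds_ratios = np.exp(model.params)     # OR relative to the low-attitude reference
print(odds_ratios)
```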
In additional analyses, multiple regression models were fitted to evaluate the association of parental nutrition knowledge and healthy-eating attitudes with children's adherence to the Mediterranean dietary pattern.
The linear trend test across the 3 categories of parental nutrition knowledge and healthy-eating attitudes were also conducted.
Analyses were carried out using Stata version 12.0 (Stata Corporation, College Station, TX, USA).All p-values are two tailed.Statistical significance was determined at the conventional cut-off point of p < 0.05.
Recruitment and Baseline Characteristics
Participants' socio-demographic characteristics, and their food consumption and energy and nutrient intakes by parental nutrition knowledge and healthy-eating attitudes, are presented in Tables 1 and 2, respectively. Parents with a higher nutrition knowledge score had children with a lower body mass index (BMI), lower red meat and sugar-sweetened beverage consumption, and higher intakes of vegetables, white fish, eggs, micronutrients, and proteins. Parents with a higher healthy-eating attitudes score had children with higher intakes of fruits, vegetables, pulses, white fish, blue fish, olive oil, and vitamin D.
Influence of Parental Nutritional Knowledge and Healthy-Eating Attitudes on Food and Nutrient Intakes
Parental nutritional knowledge regarding the recommended servings per day of different food groups [8] was positively associated with adequate child mean intakes of dairy products, fruit, vegetables, grains and cereals, meat, fish, eggs, and olive oil (p < 0.05) (Figure 2a). Similarly, healthier parental eating attitudes were associated with children's lower consumption of butter and meat, as well as greater consumption of fish, vegetables, and fruit (Figure 2b).
Influence of Parental Nutritional Knowledge and Healthy-Eating Attitudes on Micronutrient Inadequacy
Table 3 shows the OR of failing to meet the EAR in three or more micronutrients associated with parental nutritional knowledge and healthy-eating attitudes. The highest parental category in the eating attitudes score showed a significant inverse association with the risk of inadequate intake, compared to the reference category, after adjusting for potential confounders (OR: 0.34; 95% CI: 0.12-0.95). A significant linear trend was also found (p < 0.05). In contrast, the parental nutritional knowledge score was not associated with the risk of inadequate intake. In further analyses, we found parental eating attitudes to be associated with children's adherence to the Mediterranean dietary pattern (Table 4). Compared with the reference category, a higher healthy-eating attitudes score was marginally associated with a slightly higher score in the KIDMED index in the crude model. This association remained marginally significant after further adjustment for potential confounders (β coefficient: 0.34; 95% CI: 0.01-0.67). No significant association between parental nutritional knowledge and children's adherence to the Mediterranean diet was found.
Discussion
To the best of our knowledge, this study is the first to evaluate the association of parental nutritional knowledge and healthy-eating attitudes with the micronutrient intake adequacy and diet quality among Spanish preschoolers.The results indicated that children whose parents reported healthier-eating attitudes were less likely to present micronutrient inadequacy and reported higher adherence to the Mediterranean diet.Parental nutritional knowledge was not associated with either nutritional adequacy or diet quality in their offspring, highlighting the importance of enhancing interventions to promote parental healthy eating attitudes and not simply addressing nutrition knowledge.
Our results do not agree with previous studies reporting that parental nutritional knowledge may affect their offspring's diet quality [13][14][15][16][17]. Several studies have used different nutrition knowledge assessment methods [18,19], such as the nutrition knowledge questionnaire developed for adults by Parmenter and Wardle [19] used in a Japanese study, or the 10-item questionnaire focusing on misconceptions about young children's diet developed in a sample of Flemish preschoolers [16].
Although considerable discussion has focused on the influence of parental nutrition knowledge, less research has focused on the effects of parenting attitudes on their offspring's food consumption, nutritional habits, and diet quality.Our findings agree with a recently published randomized controlled trial of a family-based behavioral nutrition intervention [20] in children with type 1 diabetes that concluded that parents' diet-related attitudes and beliefs were linked to their children's diet quality, remarking on the essential role of parental psychosocial factors.
Thus, our results are not surprising, as it is coherent and plausible to assume healthy-eating attitudes to be important determinants of children's eating behavior, even to a greater extent than parental nutritional knowledge.Attitudes seem to be based on family influence, experiences, knowledge, and norms imposed by the environment [21].Parents with greater nutritional knowledge may choose healthier options by interpreting nutritional information and labeling, but knowledge seems useless if it is not put into practice or it is not accompanied by healthy attitudes.
In our study, we also observed that children whose parents answered positively to the questions in the healthy-eating attitudes index consumed more fruit, vegetables, and fish and less butter and meat.Furthermore, children whose parents demonstrated a knowledge of the dietary recommended intakes of olive oil, eggs, fish, meat, grains and cereals, vegetables, fruit, and dairy products consumed appropriate amounts of them.These findings are consistent with previous studies that identified family environment, education, eating attitudes, nutritional knowledge, availability, and family norms are positively associated with children's and adolescents' consumption of fruit and vegetables [22][23][24][25] as well as fish [26], and negatively associated with fat consumption [27].
Despite these significant results, our study has limitations.First, FFQs may not be the best dietary assessment method for identifying nutrient inadequacy because they include a limited number of foods [28].FFQs, being self-administered, might lead to measurement error.Nevertheless, FFQs have been used in previous epidemiological studies to assess micronutrient inadequate intake [29] based on the idea that FFQs are the most efficient and feasible instruments to assess usual dietary intake.The FFQ used in this study was large (140 items), comprising most of the food items commonly consumed by Spanish children, and included the possibility of adding three extra food items.The potential random error due to a suboptimal classification of the participant according to their micronutrient intake would be non-differential; thus, the association would more likely be biased toward the null, making it more difficult to find statistically significant results.Second, to calculate micronutrient intakes from the information collected through the FFQs, we used Spanish food composition tables [10] and online databases, which might be slightly different to those found in other sources.Third, we did not consider micronutrients from supplements, fortified foods, or medication, which might have resulted in an underestimation of the real intakes.Fourth, our questionnaire, including the FFQ, needs to be validated in further studies.Nevertheless, the healthy-eating attitudes score was previously used in a cohort of Spanish university graduates [9].Fifth, despite the significant results, we acknowledge the change in the KIDMED index was small.Thus, further research is needed to assess the real magnitude of this association.Sixth, our sample was homogenous, with a large part of participants being Caucasian and highly-educated parents.However, previous studies have shown that, for causal inference, representativeness of the sample is not necessary, and may be detrimental when measurement tools involve some difficulty or require an important collaboration from participants, as it is the case with FFQs [30,31].Lastly, we used a cross-sectional design.Thus, further research is needed, preferably in the form of prospective studies, before causality can be inferred.
Our study has several strengths.We used EAR as the cut-off point to assess adequate nutrient intakes, which is widely recommended by the Institute of Medicine [11] and has been previously used by the European Food Safety Authority [32].Our study adds to the current knowledge because we defined the outcome as failing to meet the recommended dietary intakes of three or more micronutrients.To the best of our knowledge, no previous study has defined this outcome.However, growth retardation and several adverse outcomes throughout life might be caused by micronutrient deficiencies.We add to this topic since micronutrient intake adequacy has not been studied in detail in preschool children [33].
Conclusions
In conclusion, our findings are valuable as they highlight the importance of recognizing and better understanding the role of parents, particularly their own eating attitudes, on their children's diet quality and on their long-term health.Our results suggest that food knowledge alone may not be enough and that the implementation of parental behavioral-focused programs may be more efficient than usual nutrition education programs in order to improve children's diet quality.
Figure 2 .
Figure 2. Mean food group consumption according to the positive answer in the nutrition knowledge score (a) and healthy-eating attitudes score (b); * p < 0.05.
3.4. Influence of Parental Nutritional Knowledge and Healthy-Eating Attitudes on Mediterranean Adherence.
Table 1 .
Baseline main characteristics of the 287 participants of the SENDO project according to nutrition knowledge and healthy-eating attitudes score (Mean values and standard deviations or number of participants and percentages).
Table 2 .
Mean food and nutrients intakes among the 287 participants of the SENDO project according to the nutrition knowledge score and healthy-eating attitudes score (Mean values and standard deviations).
Table 3 .
Risk of nutrient inadequacy (failing to meet the estimated average requirements (EAR) in three or more micronutrients) associated to parental nutrition knowledge and healthy-eating attitudes score in the participants of the SENDO project.
Ref.: reference.Multivariable 1: adjusted for age and sex.Multivariable 2: additionally adjusted for BMI.Multivariable 3: additionally adjusted for energy intake, KIDMED index and parental education level.Multivariable 4 1 : additionally adjusted for physical activity (MET-hour/week) and healthy-eating attitudes.Multivariable 4 2 : additionally adjusted for physical activity (MET-hour/week) and nutrition knowledge.
Table 4 .
Multivariate regression coefficients (β coefficient and 95% confidence intervals) for the association of parental nutrition knowledge and healthy-eating attitudes score with children's score in the KIDMED test. Ref.: reference. Multivariable 1: adjusted for age and sex. Multivariable 2: additionally adjusted for BMI. Multivariable 3: additionally adjusted for energy intake, KIDMED index and parental education level. Multivariable 4 1: additionally adjusted for physical activity (MET-hour/week) and healthy-eating attitudes. Multivariable 4 2: additionally adjusted for physical activity (MET-hour/week) and nutrition knowledge.
|
v3-fos-license
|
2017-09-17T04:26:35.780Z
|
2015-05-01T00:00:00.000
|
34744265
|
{
"extfieldsofstudy": [
"Mathematics"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://projecteuclid.org/journals/electronic-journal-of-statistics/volume-9/issue-1/Adaptive-Laguerre-density-estimation-for-mixed-Poisson-models/10.1214/15-EJS1028.pdf",
"pdf_hash": "6195529d341d088bbcd02eae8903376d414c1f30",
"pdf_src": "ScienceParsePlus",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46723",
"s2fieldsofstudy": [
"Mathematics"
],
"sha1": "c957cc0097df3b5d521c2fc565a189fec872ba14",
"year": 2015
}
|
pes2o/s2orc
|
Adaptive Laguerre density estimation for mixed Poisson models
In this paper, we consider the observation of n i.i.d. mixed Poisson processes with random intensity having an unknown density f on R+. For fixed observation time T, we propose a nonparametric adaptive strategy to estimate f. We use an appropriate Laguerre basis to build adaptive projection estimators. Non-asymptotic upper bounds of the L²-integrated risk are obtained and a lower bound is provided, which proves the optimality of the estimator. For large T, the variance of the previous method increases, therefore we propose another adaptive strategy. The procedures are illustrated on simulated data.
Introduction
Consider n independent Poisson processes (N j (t), j = 1, . . ., n) with unit intensity and n i.i.d.positive random variables (C j , j = 1, . . ., n).Assume that the processes (N j (t), j = 1, . . ., n) and the sequence (C j , j = 1, . . ., n) are independent.Under these assumptions, the random time changed processes (X j (t) = N j (C j t), t ≥ 0) are i.i.d. and such that the conditional distribution of X j given C j = c is the distribution of a time-homogeneous Poisson process with intensity c.The process X j is known as a mixed Poisson process (see e.g.Grandell (1997), Mikosch (2009)).Such processes are of common use in non-life insurance mathematics as well as in numerous other areas of applications (see Fabio et al. and references therein).
In this paper, we assume that the random variables C j have an unknown density f on (0, +∞) and our concern is the nonparametric estimation of f from the observation of a nsample (X j (T ), j = 1, . . ., n) for a given value T .We investigate this subject for large n and both for fixed T and large T with two different methods.The fixed T method performs well for small T (e.g.T = 1) and deteriorates as T increases while the large T method performs better and better as T increases.Thus, the two methods are complementary.
In Section 2, we consider the case T = 1. The distribution of X_j(1) = N_j(C_j) is given by:

α_ℓ(f) = P(X_j(1) = ℓ) = ∫_0^{+∞} e^{-c} (c^ℓ/ℓ!) f(c) dc,  ℓ ≥ 0,   (1)

which can be estimated by:

α̂_ℓ = (1/n) Σ_{j=1}^{n} 1_{X_j(1) = ℓ}.   (2)

The problem of estimating f from the discrete observations (N_j(C_j), j = 1, . . ., n) is thus an inverse problem, the problem of estimating a mixing density in a Poisson mixture. Several authors have considered this topic, whether by kernel or projection methods; see Simar (1976), Karr (1984), Zhang (1995), Loh and Zhang (1996, 1997), Hengartner (1997). These authors are mainly interested in estimating f on a compact subset of (0, +∞). We discuss in more detail the links between the present results and the previous references in Subsection 2.4.
In this paper, we assume that (H) f ∈ L2((0, +∞)) and propose a solution without any constraint on the support of the unknown function. We study the L2((0, +∞))-risk and prove upper and lower bounds on an adequate function space. Our approach is a penalized projection method (see Massart (1997)) which provides a concrete, easily implementable adaptive estimator of f. It is based on the following idea. By relations (1), α_ℓ(f) is the L2 scalar product of f and the function c → e^{−c} c^ℓ/ℓ!. Choosing an orthonormal basis (ϕ_k) of L2((0, +∞)), (1) can be written as α_ℓ(f) = Σ_{k≥0} θ_k(f) Ω_k^{(ℓ)}, where θ_k(f) and Ω_k^{(ℓ)} are respectively the k-th components of f and of c → e^{−c} c^ℓ/ℓ! on the basis. The problem is to choose a basis such that the mapping (θ_k(f), k ≥ 0) → (α_ℓ(f), ℓ ≥ 0) can be simply and explicitly inverted. Then, by plugging the estimators α̂_ℓ into the inverse mapping, we get estimators of the coefficients θ_k(f) and deduce estimators of f. An appropriate choice of (ϕ_k) is thus a key tool: we consider the Laguerre bases defined by (√a L_k(at) e^{−at/2}, k ≥ 0), where the (L_k(t)) are the Laguerre polynomials. Here, the choice a = 2 is especially relevant. Indeed, with this choice, Ω_k^{(ℓ)} = 0 for all k > ℓ and the matrix Ω_ℓ = (Ω_k^{(i)})_{0≤i,k≤ℓ} is lower triangular and explicitly invertible (Propositions 2.1 and 2.2). Therefore, the inverse problem has a solution: the linear mapping on R^{ℓ+1} (4) α_ℓ = (α_k(f), k = 0, ..., ℓ)′ → θ_ℓ = (θ_k(f), k = 0, ..., ℓ)′ = Ω_ℓ^{−1} α_ℓ. Moreover, a crucial consistency property holds: the first ℓ − 1 coordinates of α_ℓ and θ_ℓ are equal to those of α_{ℓ−1} and θ_{ℓ−1}. Note that, in Comte et al. (2013), another type of inverse problem involving functions of L2((0, +∞)) has also been solved using a Laguerre basis.
So, we define a collection of estimators of f by f̂_ℓ = Σ_{k=0}^{ℓ} θ̂_k ϕ_k, where the (θ̂_k) are defined using (2) and (4). We study their L2-risk (Proposition 2.3). For this, we introduce appropriate regularity subspaces of L2((0, +∞)), the Sobolev-Laguerre spaces with index s > 0. These spaces are defined in Shen (2000) and Bongioanni and Torrea (2009). We make precise (see Section 7) the rate of decay of the coefficients of a function f developed in a Laguerre basis when f belongs to a Sobolev-Laguerre space with index s. This allows us to evaluate the order of the bias term ‖f − f_ℓ‖², where ‖.‖ denotes the L2((0, +∞))-norm. Using these regularity spaces, we discuss the possible rates of convergence of the L2-risk of f̂_ℓ. Functions belonging to a Sobolev-Laguerre ball with index s yield rates of order O((log n)^{−s}). This rate is optimal, as we prove a lower-bound result. Afterwards, we propose a data-driven choice ℓ̂ of the dimension ℓ and study the L2-risk of the resulting adaptive estimator (Theorem 2.2). We interpret the results in the case where the observation is (N_j(C_j T), j = 1, ..., n). This amounts to a change of scale which multiplies the variance term of the risk by a factor T and implies a deterioration of the estimator as T increases.
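The construction just described can be illustrated with a short numerical sketch. The Python code below is not the authors' implementation; it assumes only what is stated above: the Laguerre basis with a = 2, ϕ_k(x) = √2 L_k(2x) e^{−x}, the empirical frequencies α̂_ℓ of the counts N_j(C_j) as in (2), and the inverse mapping θ̂_ℓ = Ω_ℓ^{−1} α̂_ℓ of (4). Since the closed triangular form of Ω_ℓ (Propositions 2.1-2.2) is not legible in this extraction, the entries are computed by plain numerical quadrature; the function names are illustrative.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import eval_laguerre, factorial

def phi(k, x):
    # Laguerre basis of L2((0, +inf)) with a = 2: phi_k(x) = sqrt(2) * L_k(2x) * exp(-x).
    return np.sqrt(2.0) * eval_laguerre(k, 2.0 * x) * np.exp(-x)

def omega_matrix(ell):
    # Omega[l, k] = < e^{-c} c^l / l! , phi_k > on (0, +inf).  Propositions 2.1-2.2 give a
    # closed lower-triangular form and its inverse; quadrature is used here only to keep
    # the sketch self-contained.
    Om = np.zeros((ell + 1, ell + 1))
    for l in range(ell + 1):
        for k in range(ell + 1):
            integrand = lambda c, l=l, k=k: np.exp(-c) * c**l / factorial(l) * phi(k, c)
            Om[l, k], _ = quad(integrand, 0.0, np.inf)
    return Om

def laguerre_projection_estimator(counts, ell, grid):
    # counts: observed N_j(C_j), j = 1..n (the T = 1 case); grid: evaluation points for f_hat_ell.
    counts = np.asarray(counts)
    alpha_hat = np.array([np.mean(counts == l) for l in range(ell + 1)])  # empirical version of (2)
    theta_hat = np.linalg.solve(omega_matrix(ell), alpha_hat)             # inverse mapping (4)
    grid = np.asarray(grid, dtype=float)
    return theta_hat, sum(theta_hat[k] * phi(k, grid) for k in range(ell + 1))
```

For moderate ℓ this already displays the behaviour discussed below: the variance of θ̂ grows very fast with ℓ, which is why the dimension has to be selected by a penalized criterion.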
Section 3 is devoted to the estimation of f for large T. Our method relies on the property that, for each j, Ĉ_{j,T} = N_j(C_j T)/T is a consistent estimator of the random variable C_j as T tends to infinity. Then, we use the i.i.d. sample (Ĉ_{j,T})_{1≤j≤n} to build estimators of f. We propose projection estimators on the Laguerre basis (3) using other estimators of the coefficients θ_k(f), together with an adaptive choice of the space dimension (Proposition 3.1, Theorem 3.1). The criterion for the model selection is non-standard: it involves a penalization which is the sum of two terms, one depending on (n, ℓ) and the other on (T, ℓ).
Section 4 gives numerical simulation results and some concluding remarks are stated in Section 5. Proofs are gathered in Section 6. In Section 7, regularity spaces associated with Laguerre bases are discussed, and a useful inequality is recalled in Section 8.
2. Estimation of the mixing density for T = 1.
Proposition 2.1. The coefficients Ω_k^{(ℓ)} defined by (7) are given explicitly. Define the vectors accordingly. The matrix Ω_ℓ is therefore invertible and its inverse is explicitly computed in the following proposition.
Proposition 2.3. The estimator f̂_ℓ of f defined by (2), (8), (9) and (10) satisfies the stated bound. Proposition 2.3 states a squared-bias/variance decomposition, and we now need to specify the bias order on adequate functional spaces, in order to evaluate optimal rates.
2.2. Rates and rate optimality. As is always the case in nonparametric estimation, we must link the bias term ‖f − f_ℓ‖² with regularity properties of the function f. In our context, these should be expressed in relation to the rate of decay of the coefficients (θ_k(f))_{k≥0}. The Laguerre-Sobolev spaces described in Section 7 provide an adequate solution.
For any h ∈ W^s_2((0, +∞), K), we have ‖h − h_ℓ‖² = Σ_{k=ℓ+1}^{∞} θ_k²(h) ≤ K/ℓ^s, where h_ℓ is the orthogonal projection of h on S_ℓ. Proposition 2.4. Let ℓ_ε be defined, for 0 < ε < 1, as in the statement; then the two bounds given there hold. Note that ℓ_ε does not depend on s and is thus adaptive. With ℓ*, the bias and variance terms have the same order (log(n))^{−s}, which is better; in addition, the constant is improved. Nevertheless, this choice depends on s.
Proof. For f ∈ W^s_2((0, +∞), K), the risk bound in Proposition 2.3 applies. The variance term has exponential order 2^{4ℓ} with respect to ℓ. Thus, we cannot make the classical bias-variance compromise. First, we can choose ℓ such that the bias term dominates: this is obtained by choosing ℓ = ℓ_ε. Second, a more precise tuning of both terms is obtained with ℓ = ℓ*. In both cases, the rate is of order O([log(n)]^{−s}).
We now prove that, for densities lying in Laguerre-Sobolev balls W^s_2((0, +∞), K), the rate (log n)^{−s} is optimal. Theorem 2.1. Assume that s is a positive integer and let K ≥ 1. There exists a constant c > 0 such that the stated liminf lower bound holds, where the infimum is taken over all estimators of f based on (N_j(C_j))_{1≤j≤n}.
The proof uses several lemmas established in Zhang (1995) and Loh and Zhang (1996).
Model selection.
Model selection is justified because the bias may have a much smaller order. For instance, it can be null if f admits a finite development in the Laguerre basis. Exponential distributions also provide examples of smaller bias. Indeed, consider f an exponential density E(θ); the corresponding bias can then be computed explicitly for a suitable choice of ℓ.
The rate depends on θ and can be O(n^{−β}) for any β < 1. For instance, if θ = 5/3 the rate is O(n^{−1/2}); for θ = 1/2, the rate is O(n^{−0.44}) (see Section 4); and it tends to O(n^{−1}) (the parametric rate) when θ tends to 1, which is coherent with the fact that the bias is null for θ = 1. This kind of result can be generalized to the case of a distribution f defined as a mixture of exponential distributions and to Gamma distributions Γ(p, θ), with p an integer. More precisely, if f_p is the density of Γ(p, θ), the corresponding bias term can be computed explicitly. Note that the bias is null for θ = 1 and ℓ > p − 1, which is expected since f_p ∈ S_{p−1}. Moreover, the bias order depends on θ, which can be seen in the simulations. Now we have to define an automatic selection rule for the adequate dimension ℓ. We make the selection among a collection M_n (in practice, ℓ ∈ {0, 1, ..., 2[log(n)] − 1}, see Section 4), where [x] denotes the integer part of the real number x. For κ a numerical constant, we define (13) ℓ̂ = argmin_{ℓ∈M_n} { −‖f̂_ℓ‖² + pen(ℓ) }, with pen(ℓ) = κ ℓ 2^{4ℓ}/n.
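A minimal sketch of the data-driven rule (13), assuming the penalty form κ ℓ 2^{4ℓ}/n reconstructed above and the practical model collection {0, ..., 2⌊log n⌋ − 1} reported in Section 4; `theta_by_ell` is a hypothetical container holding, for each ℓ, the coefficient vector θ̂ computed as in the previous sketch.

```python
import numpy as np

def select_dimension(theta_by_ell, n, kappa=0.001):
    # Data-driven choice of ell, as in (13): minimise -||f_hat_ell||^2 + pen(ell) with
    # pen(ell) = kappa * ell * 2**(4*ell) / n, over ell in {0, ..., 2*floor(log n) - 1}.
    # Since the basis is orthonormal, ||f_hat_ell||^2 = sum_k theta_hat_k**2.
    best_ell, best_crit = 0, np.inf
    for ell in range(0, max(1, 2 * int(np.log(n)))):
        theta = np.asarray(theta_by_ell[ell])
        crit = -np.sum(theta**2) + kappa * ell * 2.0**(4 * ell) / n
        if crit < best_crit:
            best_ell, best_crit = ell, crit
    return best_ell
```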
We can prove the following result. Theorem 2.2. Consider the estimator f̂_ℓ̂ defined by (10) and (13). For any κ ≥ 8, we obtain a risk bound whose right-hand side involves an infimum over ℓ ∈ M_n; this infimum shows that the estimator is indeed adaptive. Note that the penalty is, up to a constant, equal to the variance multiplied by ℓ. This implies a possible negligible loss in the rate of the adaptive estimator w.r.t. the expected optimal rate.
Remark. Let us now assume that the observation is (N_j(C_j T), j = 1, ..., n). The previous method applies directly to estimate the density f_T of C_j T, i.e. f_T(t) = (1/T) f(t/T). We can then deduce the results for f(c) = T f_T(Tc). The function f is developed on the correspondingly rescaled basis, and the analogous relation holds between its coefficients and the orthogonal projection of f on the space S_ℓ^{(T)} spanned by that basis; here f̂_{T,ℓ} denotes the estimator built for f_T using (N_j(C_j T), j = 1, ..., n). Moreover, with ℓ̂ defined in (13), there exists κ > 0 such that the corresponding risk bound holds. The variance term in the L2-risk is multiplied by a factor T, which explains why the method may become worse when T increases. Actually, this was clear on simulated data.
2.4. Related works. In Simar (1976), it is proved that the cumulative distribution function F(x) of C_j can be consistently estimated using (α_ℓ). The method is theoretical and concrete implementation is not easy. Noting that α_0(f) is simply the Laplace transform of f, Karr (1984) studies the properties of α̂_0 to estimate α_0(f) in the more general context of mixed Poisson point processes.
For comparison purposes, we detail some of the results of Zhang (1995), Hengartner (1997) and Loh and Zhang (1996, 1997) in the case of Poisson mixtures. In the case where f has compact support [0, θ*], Zhang (1995) gives a kernel estimator of f(a) and studies the pointwise quadratic risk on Hölder classes with index r (i.e. functions f admitting ⌊r⌋ derivatives such that f^{(⌊r⌋)} is (r − ⌊r⌋)-Hölder). The estimator has an MSE of order [log(n)/log log(n)]^{−2r}, which does not correspond to his lower bound, which is [log(n)]^{−2r}. In the case of non-compact support for f, the kernel estimator MSE has order (log(n))^{−r/2}, with no associated lower bound. Loh and Zhang (1996) generalize the results of Zhang (1995) by studying a weighted L^p-risk.
Hengartner (1997) considers the case where f has a compact support. He builds projection estimators using orthogonal polynomials on the support. The upper bound on the MISE has order [log(n)/log log(n)]^{−2r} on the same class as above and on Sobolev classes with index r. On the latter classes, he proves a lower bound of order [log(n)/log log(n)]^{−2r}.
Loh and Zhang (1997), in the case of non-compact support for f, use Laguerre polynomials and build projection estimators. Thus, the function is estimated by a polynomial; they study a weighted L2-risk. The upper bound is O([log(m)]^{−m/2}) on the class of functions such that Σ_{j≥m} j^m τ_j²(f) < M, where τ_j(f) is the coefficient of f in the development with respect to the Laguerre polynomials. Their lower bound is O([log(n)]^{−m}), which does not correspond to the upper bound.
In all cases, the number of coefficients in the projection estimators does not depend on the regularity space. In this sense, the above methods are adaptive.
Let us now clarify our contribution. First, we use an L2((0, +∞))-basis and a usual MISE, which is better fitted to the problem. Second, we clarify the functional spaces associated with the context of Laguerre bases on (0, +∞) and provide explicit links between regularity and the coefficients of a development in these spaces. Upper and lower bounds match globally and without weights.
Here, the proof of our lower bound is inspired by Loh and Zhang's constructions. Therefore, our results synthesize and improve all these previous works.
Lastly, when the function under estimation has stronger regularity properties than those considered in the lower bounds, we show that the rate can be improved (polynomial instead of logarithmic). This justifies the proposal of an adaptive procedure, see Theorem 2.2, which is moreover non-asymptotic.
3. Estimation for large T
Conditionally on C_j = c, we know that Ĉ_{j,T} converges almost surely to c as T tends to infinity. Consequently, Ĉ_{j,T} converges almost surely to C_j. We now use the i.i.d. sample (Ĉ_{j,T})_{1≤j≤n} to build projection estimators of f, where the coefficients θ_k(f) are now estimated as follows.
(14) Note that S_ℓ has the norm-connection property, as can be seen from Lemma 6.1. We obtain the following risk bound.
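Equation (14) is not legible in this extraction. A natural reading, consistent with the surrounding text, is that θ_k(f) = ∫ ϕ_k f is estimated by the empirical mean of ϕ_k over the plug-in values Ĉ_{j,T} = N_j(C_j T)/T; the sketch below makes that assumption explicit (the basis function `phi` is the one defined in the earlier sketch).

```python
import numpy as np

def f_hat_large_T(N_T, T, ell, grid, phi):
    # N_T: observed counts N_j(C_j * T), j = 1..n; T: observation time; phi: Laguerre basis function.
    # Assumed reading of (14):
    #   theta_hat_k = (1/n) * sum_j phi_k(C_hat_{j,T}),  with  C_hat_{j,T} = N_j(C_j T) / T.
    C_hat = np.asarray(N_T, dtype=float) / T
    grid = np.asarray(grid, dtype=float)
    theta_hat = np.array([np.mean(phi(k, C_hat)) for k in range(ell + 1)])
    return sum(theta_hat[k] * phi(k, grid) for k in range(ell + 1))
```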
Proposition 3.1. Recall that f_ℓ is the orthogonal projection of f on S_ℓ = span(ϕ_0, ..., ϕ_ℓ). Then the stated risk bound holds. The bound contains the usual decomposition into a squared-bias term ‖f − f_ℓ‖² and a variance term. The latter is the sum of two components: the first one, 2(ℓ + 1)/n, is classical and no longer exponential in ℓ; the second one is due to the approximation of the C_j's by the Ĉ_{j,T}'s and gets small when T increases. To define a penalization procedure, we must estimate s². Let (16) and (17) be as defined there. The following holds.
Consider the estimator defined by (14) and (17). Then there exist numerical constants κ_1, κ_2 such that the stated risk bound holds, where C is a numerical constant and C′ a positive constant.
Thus, the estimator f (T ) l is adaptive and its risk automatically reaches the order of the biasvariance compromise.
Numerical simulations
In this paragraph, we illustrate on simulated data the two adaptive projection methods using the Laguerre basis: method 1 corresponds to Section 2 when T = 1; method 2 corresponds to Section 3 for large T. We consider different distributions for the C_j's: (1) a Gamma density Γ(p, θ); a mixed Gamma density 0.3 Γ(3, 0.25) + 0.7 Γ(10, 0.6); and a Weibull density f_{(p,θ)}(x) = θ p^{−θ} x^{θ−1} e^{−(x/p)^θ} 1_{x>0} for p = 3 and θ = 2. Note that, for θ = 1, density (1) has only three nonzero coefficients θ_0, θ_1, θ_2 in its exact development in the Laguerre basis. For density (3), we know that the rate of the L2 risk depends on the value of θ (n^{−0.44} for θ = 1/2, see Section 2). In Figures 1-5, we illustrate the first method for T = 1 and n = 10000, n = 100000, and the second method for sample sizes n = 1000, T = 10, and n = 4000, T = 40, for the five densities defined above. We plot 25 consecutive estimates on the same picture, together with the unknown density to recover, to show variability bands and to illustrate the stability of the procedures.
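For readers wishing to reproduce experiments of this kind, the data-generating mechanism of the Introduction can be simulated directly: draw C_j from the chosen mixing density and then draw the count as a Poisson variable with mean C_j T. A minimal sketch with Gamma mixing is given below; whether θ denotes a scale or a rate parameter in the Γ(p, θ) notation is not specified here, so the scale convention is assumed and the parameter values are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_mixed_poisson(n, T, p=3, theta=1.0):
    # Mixed Poisson data: C_j ~ Gamma(shape=p, scale=theta), then N_j(C_j T) | C_j ~ Poisson(C_j T).
    C = rng.gamma(shape=p, scale=theta, size=n)
    counts = rng.poisson(C * T)
    return counts, C

counts_m1, _ = simulate_mixed_poisson(n=10000, T=1)   # setting used for method 1
counts_m2, _ = simulate_mixed_poisson(n=1000, T=10)   # setting used for method 2
```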
• Comments on method 1. The method is easy to implement. As is standard for penalized methods, the theoretical constant is too large and, in practice, it is calibrated by preliminary simulations. We have selected the constant κ = 0.001 in the penalty. This prevents a possible explosion of the variance, which has exponential order. The adaptive estimator performs reasonably well for large values of n (n ≥ 10000) but is very sensitive to the parameter values for Gamma or exponential distributions, as expected. The mixture density and the Pareto and Weibull densities, which do not admit finite developments in the basis, are correctly estimated. Increasing n improves the estimation significantly. We choose to select ℓ in {0, 1, ..., 2⌊log(n)⌋ − 1}.
• Comments on method 2. The method is also easy to implement. We have selected the constants κ_1 = 1.5 and κ_2 = 10^{−5}. The very small value of κ_2 simply kills the effect of the second term in the penalty, in order to allow not-too-large values of T. This second method gives better results than the first one as soon as T ≥ 10 (even T ≥ 5 provides good estimators). The number of observations need not be very large. We kept the same set of possible values for ℓ in the selection algorithm; here again, the selected values ℓ̂ are in {0, 1, ..., 4}.
Concluding remarks
In this paper, we study the nonparametric density estimation of a positive random variable C from the observation of (N_j(C_j T), j = 1, ..., n), where the (N_j) are i.i.d. Poisson processes with unit intensity, the (C_j) are i.i.d. random variables distributed as C, and (N_j) and (C_j) are independent. Under the assumption that the unknown density f of the unobserved variables (C_j) is in L2((0, +∞)), and for a fixed value T, we express the nonparametric problem as an inverse problem, which can be solved by using a Laguerre basis of L2((0, +∞)). Explicit estimators of the coefficients of f on the basis are proposed and used to define a collection of projection estimators. The space dimension is then selected by a data-driven criterion. For functions belonging to the Sobolev-Laguerre spaces described in Section 2, f is estimated at a rate O((log(n))^{−s}). So, an interesting question is whether there exist functions other than those of these spaces that are estimated at the same rate. This problem amounts to finding maximal functional classes for which a given rate of convergence of the estimators can be achieved.
For large T, estimators Ĉ_{j,T} of the C_j's are used to build adaptive projection estimators in the Laguerre basis. In this approach, a moment condition on C_j is required.
The numerical simulation results show that the Laguerre basis is indeed appropriate for obtaining estimators with no boundary effects at 0.
Possible developments of this work are the following. We may use specific kernel estimators on R+, as in Comte and Genon-Catalot (2012), to compare them with projection Laguerre estimators. As in Fabio et al. (2012), we may enrich the data by considering several observation times. Another relevant extension is to study mixed compound Poisson processes, e.g. using the approach of Comte et al. (2014), or more general mixed Lévy processes.
6. Proofs
6.1. Proof of Proposition 2.1. Finally, (18) holds, where we know that Ω_k^{(ℓ)} is a polynomial of degree k which is equal to 0 for ℓ = 0, 1, ..., k − 1. Hence, we have the stated expression, and hence the result.
6.2. Proof of Proposition 2.2. Denote by R_ℓ[X] the space of polynomials with real coefficients and degree less than or equal to ℓ. The transpose of the matrix √2 Ω_ℓ represents the linear application of R_ℓ[X], P(X) → P((1 − X)/2), in the canonical basis (1, X, ..., X^ℓ). The inverse linear mapping is Q(X) → Q(1 − 2X). Hence the result. Next, we write the variance term using cov(α̂_i, α̂_j) = (α_i δ_i^j − α_i α_j)/n, where δ_i^j is the Kronecker symbol. Thus, for M symmetric and nonnegative, the variance can be expressed through D_α = diag(α_0, ..., α_ℓ). Note that Tr(^tΩ_ℓ^{−1} Ω_ℓ^{−1}) is known as the squared Frobenius norm of the matrix Ω_ℓ^{−1}. It follows from Proposition 2.2 that (21) holds, and noting this, we get (22). As a consequence, we obtain the risk decomposition announced in Proposition 2.3.
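The mechanism behind this proof can be checked numerically: the two substitution maps P(X) → P((1 − X)/2) and Q(X) → Q(1 − 2X) are mutual inverses on R_ℓ[X], which is exactly what makes √2 Ω_ℓ (and hence Ω_ℓ) invertible with an explicit inverse. The sympy sketch below verifies this for a small ℓ; it is a sanity check, not part of the original paper.

```python
import sympy as sp

X = sp.symbols('X')

def substitution_matrix(expr, ell):
    # Matrix of the linear map P(X) -> P(expr) on R_ell[X] in the canonical basis
    # (1, X, ..., X^ell): column k holds the coefficients of expr**k in powers of X.
    M = sp.zeros(ell + 1, ell + 1)
    for k in range(ell + 1):
        poly = sp.Poly(sp.expand(expr**k), X)
        for i in range(ell + 1):
            M[i, k] = poly.coeff_monomial(X**i)
    return M

ell = 5
A = substitution_matrix((1 - X) / 2, ell)   # map represented by the transpose of sqrt(2) * Omega_ell
B = substitution_matrix(1 - 2 * X, ell)     # the claimed inverse map
assert A * B == sp.eye(ell + 1)             # the two substitutions are mutually inverse
```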
Now, we define
Then, w 1n is chosen such that f 1n = 1 which yields: Finally, δ 0 is chosen by Proof.We first study f 0n .By construction, On the space of polynomials of degree 2s + 1 on [c 0 , c 1 ], is a norm and all norms are equivalent.Therefore, there exists C such that By Lemma 3 of Loh and Zhang (1996), |γ We deduce We have and . Therefore, provided that ε is small enough, the first term of f 0n (χ 0 + χ 2 ) is lower bounded as c 0 > 0 and the second term is O(u (s/2)+(1/4) /n) = o(1).Thus, we can choose ε small enough to have f 0n (( We have We check that f 1n ≥ 0 in the same way as for f 0n . • Step 2. For j = 0, 1, f jn ∈ W s 2 ((0, +∞), K) for all K ≥ 1.
Proof. This part is specific to our context as we do not have the same function spaces as Loh and Zhang (1996, 1997).
• Step 4.There exists C > 0 such that Proof.This part is also specific to our study: we only use two functions instead of three and our bound is global and not local.We have where w 1n /w 0n = o(1/n) and | cos | ≤ 1.Therefore, it is enough to bound from below: And as we only look at First T 1 = O( √ u) because, by Lemma 3 of Loh and Zhang (1996, p.573), Next, we write and as above by the choice of δ 0 .Therefore T ′ 2 = O( √ u/n 2 ).Consequently It follows that This concludes step 3 as u = δ 0 log(n).
The result of Proposition 6.1 inserted in Inequality (24), shows that for κ ≥ 8, we obtain 4p(ℓ, l) ≤ pen( l) + pen(ℓ) and Proof of Proposition 6.1.We apply the Talagrand Inequality recalled in Lemma 8.1 of Section 8. First note that Let us define M 2 = Tr( t M M ) and ρ 2 (M ) the largest eigenvalue of t M M .We consider the centered empirical process given by where Recall that ℓ * = ℓ ∨ ℓ ′ and define the unit ball for the maximization by B ℓ * = {t ∈ S ℓ * , |t| = 1}.
Next since β i,L has only one nonzero coordinate, equal to 1, we have to bound ψ t ( x) = t, Ω −1 L x for x = e j vector of the canonical basis of R L+1 , with j ≤ ℓ * and t ∈ B ℓ * .For such vectors x, We take ǫ 2 = δℓ * and for δ to be chosen afterwards, we get /C 3 ) and ℓ * ≥ 1 gives the result.
The proof of Proposition 6.2 is given in Section 6.8.Now, the definitions of p 1 , p 2 and pen(.)imply that 8p 1 (ℓ, ℓ ′ ) + 8p 2 (ℓ, ℓ ′ ) ≤ pen(ℓ) + pen(ℓ ′ ) for κ1 ≥ 32 and κ2 ≥ 64, ∀ℓ, ℓ ′ ∈ M n,T .Therefore, we obtain by Lemma 6.1.Moreover, using (15), on B ℓ∨ℓ ′ , t ∞ ≤ 2(ℓ ∨ ℓ ′ + 1) := M .Next, to find v, we split in two parts: sup where We write that where we apply the Taylor Formula and ξ T ∈ (C 1 , C 1,T ).Using Lemma 6.1 again, we get To conclude we use that E . Now the Talagrand Inequality implies that there exist constants A i , i = 1, 2, 3 such that √ n so that as which is the announced bound.Now we study R(t).Let D = sup t∈B l∨ℓ |R(t)| 2 − p 2 (ℓ, l) By Inequality (28), the first rhs term is zero.To deal with the second term, let Using the definition of M n,T , we get since ( 1 2 s 2 − s 2 ) + 1 Ω = 0.By the Markov inequality, we have ) and we use the Rosenthal Inequality (see Hall and Heyde (1980, p.23)) to get ) where m 4 is the fourth centered moment of X j = 3 C 2 j,T − 2 C j,T /T and m 2 2 the variance of X j .We write After some elementary computations using the centered moments of a Poisson distribution, we obtain that, if E(C When ρ ≡ 1, we denote this space as usual by L For any orthonormal basis We are especially interested in the weight functions (30) ρ(x) = x α e −x = w α (x), α ≥ 0 and the associated orthonormal bases of L 2 (R + , w α ), namely the Laguerre polynomials.Consider the second order differential equation: The solution is g(x) = L α k (x) the Laguerre polynomial with index α and order k.The function L α k is a polynomial of degree k, and the sequence (L α k ) is orthogonal with respect to the weight function w α .The orthogonality relations are equivalent to: We have The following holds, for all integer k and α ≥ 0 : , the sequence (φ α k ), k ≥ 0) constitutes an orthonormal basis of the space L 2 ((0, +∞), w α ).In particular, φ 0 k (x) = L 0 k (x) = L k (x), k ≥ 0 constitute an orthonormal basis of L 2 ((0, +∞), w), with w(x) = w 0 (x) = e −x .Noting that x α+1 e −x ′ = x α e −x (α + 1 − x), we obtain, using (31) and ( 33 For these formulas, see Abramowitz and Stegun (1964).
We can now prove the following result.
Figure 2. Estimation of the mixed Gamma density with method 1 (top left n = 10000 and top right n = 100000, for T = 1) and method 2 (bottom left, n = 1000, T = 10 and bottom right n = 4000, T = 40): true - thick (blue) line and 25 estimates (dashed (red) lines). The selected ℓ is 3 except for the bottom right plot, where it is 4.
Figure 4. Estimation of the Pareto density with projection method 1 (top left n = 10000 and top right n = 100000, for T = 1) and method 2 (bottom left, n = 1000, T = 10 and bottom right n = 4000, T = 40): true - thick (blue) line and 25 estimates (dashed (red) lines). Most of the time ℓ̂ = 2 for the top pictures and 0 for the bottom ones.
Proof of Proposition 6.2.First we study νn (t) and apply the Talagrand Inequality.To do this, we evaluate the bounds H 2 , M, v as defined in Lemma 8.1.Clearly 8 j ) < +∞, then there exist constants c 1 , c 2 such that m44 ≤ c 1 and m 2 2 ≤ c 2 .Laguerre polynomials and associated regularity spaces: General properties.For ρ : R +
|
v3-fos-license
|
2020-07-09T09:13:26.840Z
|
2020-07-05T00:00:00.000
|
221088180
|
{
"extfieldsofstudy": [
"Psychology"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://ijels.com/upload_document/issue_files/36IJELS-106202034-Theme.pdf",
"pdf_hash": "aa00215aed62564a2b65a4b53309df50897173de",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46725",
"s2fieldsofstudy": [
"Linguistics",
"Education"
],
"sha1": "f3a7a9fe23802f0b3b9ce4a754075a025901cd2b",
"year": 2020
}
|
pes2o/s2orc
|
Theme and Thematic Progression in Narrative Texts of Indonesian EFL Learners
This study aims to analyze and describe the Thematic progression patterns and types of Theme in narrative texts written by Indonesian EFL learners. This study employs a descriptive-qualitative research design. The data were obtained from a collection of students' texts. This study uses the theory of Thematic progression of Bloor and Bloor (2004) and the theory of types of Theme developed by Halliday (1994), Gerot & Wignell (1994), and Halliday et al. (2004). The findings show that the dominant use of the Constant pattern in narrative texts indicates that the students are able to create a focus on specific participants, as this is one of the linguistic features of narrative texts. The use of the Linear pattern indicates that the students have achieved the ability to create cohesion in their texts by introducing new information, taking a Rheme as the Theme of the upcoming clause. In terms of types of Theme, the high number of Topical Themes in the students' texts may indicate that the students are able to lead the reader to focus on the participants (characters) of the story. The use of Interpersonal Themes, which are dominantly used by middle and low achievers, indicates that high achievers tend to be lexically complex (written language), while middle and low achievers tend to be grammatically complex (spoken language). The dominant use of Textual Themes indicates that the students are able to create complex clauses and to connect clauses, which helps them to create a cohesive and coherent text. Keywords— systemic functional linguistics, textual metafunction, theme, thematic progression, narrative
I. INTRODUCTION
The study of Theme and Thematic progression in students' texts is important for evaluating coherence in writing. Coherence is a kind of relationship between ideas in a text which helps the text hang together. It also deals with ideas that are arranged clearly and logically. Halliday et al. (2004) stated that a text has global coherence if it hangs together as a whole, which is referred to as "discourse flow". Therefore, if a text is coherent, the reader can easily understand its main points. According to Halliday & Hasan (2014), a text is coherent if two conditions are fulfilled: first, the text must be consistent with the context in which it is created; second, the text must be connected by cohesive devices. In other words, coherence can be built if the text is relevant to its context and if there is a connection between the ideas in the text.
In creating a coherent text, some common problems in writing may arise. When students are asked to write a good composition, their writing may be good in grammar but poor in text structure and disorganized in logic, which makes the whole text lack unity. As grammar (as well as other systems in language) contributes to the coherent flow of information in a text (Jones & Lock, 2010), teachers may be led to pay more attention to grammatical mistakes. This might make students think that correct grammar is the essential factor in writing, so that they pay too much attention to grammar rather than to the order of organization and fail to create more coherent writing. Some studies have been conducted to examine coherence in students' writing by analyzing the use of Theme and Rheme. Studies of Theme and Thematic progression in learner English focus on how the appropriate use of Theme and Thematic progression can improve coherence in learners' output, mainly in English writing, by analyzing learners' problems in the use of Theme and Thematic progression. This is in line with Halliday and Matthiessen (2004), who state that one way to evaluate students' writing skills in creating a text is to analyze Thematic progression based on the Theme system of Systemic Functional Linguistics. Some studies state that one of the major problems in students' writing lies in the logical order of the text content and the coherent layout of textual structure. This is supported by Wei (2010), who argued that students' English writing lacked coherence due to inappropriate thematic choice and thematic progression. Furthermore, Zhang (2004) found that Themes in students' writing were confusing because they were not connected to preceding Themes and, as a result, did not help develop the writing.
This study aims to analyze and describe the realization of Thematic progression and the use of Theme types in students' narrative texts. The findings of this study can therefore be useful for examining problems in students' texts and can also be applied to improve students' writing skills.
Systemic Functional Linguistic
Bloor et al. (2004) state that, for Systemic Functional Linguistics (SFL), language is a 'system of meanings': it is both semantic (concerned with meaning) and functional (concerned with how language is used). That is to say that when people use language, they construct meaning. Thompson (2013) also states that SFL is essentially equated with function, and that describing language from this view appears to be a much less workable task than describing its structures. According to Halliday, SFL is a theory that examines language as a system in terms of functions. This theory introduces three metafunctions of meaning: the experiential or ideational, the interpersonal, and the textual.
Textual Metafunction
The textual metafunction concerns Theme and Rheme. The Theme highlights a certain piece of information within a clause as being more important than others and provides the "point of departure" for the message (Halliday, 1994). There are two types of Ideational Theme: the unmarked Theme and the marked Theme. The unmarked Theme in the English clause usually starts with the subject. The marked Theme is a non-typical Theme realized by something other than the subject, such as a complement, adjunct, or predicator. The following is an example: My sister talked to me yesterday.
Theme: "My sister" (unmarked, Ideational Theme); Rheme: "talked to me yesterday".
Last month my sister talked to me.
Theme: "Last month" (marked, Ideational Theme); Rheme: "my sister talked to me".
The Interpersonal Theme relates to the relationship either between the speaker and the addressee or between the speaker and the message, as also discussed in Gerot & Wignell (1994). The Textual Theme helps to structure the text and develops connections to other clauses (Halliday, 1994); it is realized by conjunctive adjuncts (e.g. and, however), conjunctions (e.g. before, after, how, which) and continuatives. The Textual Theme is categorized into continuative, conjunction, and conjunctive adjunct.
Thematic Progression
Thematic progression is the way the Themes are linked together to form a text. According to Eggins (2004), Thematic progression is the flow of information between sequential Themes and Rhemes. Analyzing the flow of information is considered important because an analysis of how these Themes progress and collaborate with Rhemes is essential for seeing the organization of a text. In this regard, Bloor et al. (2004) have proposed several Thematic patterns that are commonly found in texts: the Constant theme pattern, the Linear theme pattern, the Split Rheme pattern and Derived themes.
Constant theme Pattern
The Constant theme pattern shows that the topic of the first clause is introduced in the first theme, and then becomes the second, the third and the fourth theme of each clause as represented below.
Linear theme pattern
The Linear theme pattern shows that the Rheme of each clause becomes the Theme of the next clause, as represented below.
Clause 1. Theme A + Rheme B
Clause 2. Theme B + Rheme C
Clause 3. Theme C + Rheme D
The museum is located in the center of town near the square. This square is a common destination for tourist buses. The buses, all belonging to the Island Tour Bus. (Sujatna, 2013)
Split Rheme pattern
This type occurs when the rheme of a clause has two or more components, each rheme is taken as the theme of the next clause as represented below.
Derived Theme
The Derived theme occurs when the Theme of a clause is not stated explicitly in the Theme-Rheme of the previous clauses by its form, but relates in meaning to the Theme or Rheme of the previous clause, as represented below.
The rat-like rodents include hamsters, lemmings, voles and gerbils, as well as rats and mice. The black rat is found in buildings, sewers and rubbish yards, but has been largely replaced by the bigger, more aggressive, brown rat. Voles are mouse-like rodents that live in the grassland of Europe and Asia; water voles, or water rats, build complex tunnels along river banks. The house mouse often lives inside buildings and is a serious pest because it eats stored food. The field mouse, on the other hand, very rarely comes near human dwellings. (Bloor et al., 2004)
III. METHODOLOGY
This study employs a descriptive-qualitative research method, since its primary purpose is to analyze and describe the types of Theme and Thematic progression in students' narrative texts, for which a descriptive-qualitative method is considered appropriate (Creswell, 2009). The descriptive method is a research method that attempts to describe and interpret objects as they are. Moreover, Keizer (2015) argues that all linguistic research is first and foremost based on observation and description.
The data for the study were taken from the sixth grade of an international elementary school in Bandung. The data consist of nine students' narrative texts, which were purposively selected based on the teacher's suggestion; these texts represent the low, middle, and high achievement categories based on the school's writing rubric score.
The steps in analyzing the data were as follows: first, the identification of Thematic progression was carried out; next, the identification of types of Theme was conducted. The texts were also divided into the stages of narrative by breaking them down into numbered clauses within the orientation, complication, and resolution. The types of Theme were then identified based on the theories of Halliday (1994), Gerot & Wignell (1994), and Halliday et al. (2004) in terms of Topical, Interpersonal, or Textual Theme. The identification of types of Theme describes how the students organize ideas textually.
The table above shows that only two types of Thematic progression (Constant and Linear) are used in the students' narrative texts. Based on the table, the Constant pattern is the most frequently used Thematic progression. It occurs 80 times, equal to 63% of the total. The high number of Constant patterns in the students' narrative texts indicates that the students tend to present continuing information by keeping the focus on the Theme of the preceding clause. This trend shows how elementary students are able to maintain the focus of the story by repeating the Theme in the next clause. In terms of text category, the Constant theme pattern occurs 30 times in the high achievers' texts, 25 times in the middle achievers' texts and 25 times in the low achievers' texts.
Besides the Constant theme, the other Thematic progression occurring in the students' narrative writing is the Linear pattern. The Linear pattern is mostly found in the low achievers' texts (18 times), followed by the high achievers' (17 times) and the middle achievers' (11 times). The high frequency of the Linear pattern implicitly indicates that the students have achieved the ability to create cohesion in their texts by introducing new information, taking a Rheme as the Theme of the upcoming clause.
IV. RESULT AND DISCUSSION
There are nine texts which were purposively analyzed in this study. These texts were categorized into three levels of achievement: low, middle, and high; each category consists of three texts. This section answers part of the research questions about the types of Thematic progression and the types of Theme applied in the students' narrative texts. The findings concerning the types of Thematic progression are presented in Table 1. The absence of the Split Rheme pattern and the Derived theme pattern in the students' narrative texts shows that elementary school students rarely compose complex texts; however, they are able to create well-organized and understandable texts through the Constant and Linear patterns. The dominant use of the Constant pattern in narrative texts also means that the students are able to create a focus on specific participants, as this is one of the linguistic features of narrative texts.
This section answers the part of the research question about the types of Theme realized in the students' narrative texts, which are presented in Table 2.
The three types of Theme (Topical, Interpersonal, and Textual) are used in the students' narrative texts. Based on Table 2, the Topical Theme is the most frequently used. It occurs 412 times, or 69% of the total. The Topical Theme is divided into two parts: the marked Themes, which occur 30 times (5%), and the unmarked Themes, which occur 392 times (64%). The high number of Topical Themes in the students' texts may indicate that the students are able to lead the reader to focus on the participants (characters) of the story. As one of the linguistic features of narrative texts is a focus on specific and usually individualized participants, this may be the cause of the dominant number of Topical Theme occurrences in the students' narrative texts. Meanwhile, the Interpersonal Theme is the least frequent type of Theme found in the students' texts. The number of Interpersonal Themes across the students' texts is 30, equal to 5%. This Theme is mostly found in spoken text, that is, in the dialogue of the story, as the Interpersonal Theme commonly occurs in conversation (Eggins, 2004). The Interpersonal Theme occurs most often in the middle achievers' texts (14 times), followed by the low achievers' (13 times) and the high achievers' (3 times). These numbers show that high achievers tend to use dialogue less than middle and low achievers; they also indicate that high achievers tend to be lexically complex (written language), while middle and low achievers tend to be grammatically complex (spoken language).
On the other hand, the Textual Theme occurs 161 times, equal to 26% of the total. The Textual Theme occurs most often in the middle achievers' texts (90 times, or 35%), followed by the high achievers' (37 times, or 23%) and the low achievers' (34 times, or 17%). The dominant use of the Textual Theme indicates that the students are able to create complex clauses and to connect clauses, which helps them to create a cohesive and coherent text. To conclude, the Topical Themes are the most frequent in the students' narrative texts at all levels (69%), followed by the Textual Themes (26%), while the Interpersonal Themes are the least used in the narrative texts (5%). This result supports the research conducted by Safitri (2013) on the Theme system in narrative texts, whose findings showed that three types of Theme were used by the students (Topical, Textual and Interpersonal) and that the Topical Theme was the most frequently used.
V. CONCLUSION
The findings showed that Thematic progression and types of Theme support the linguistic features of narrative texts. In terms of Thematic progression, the dominant use of the Constant pattern in narrative texts indicates that the students are able to create a focus on specific participants, as this is one of the linguistic features of narrative texts. The use of Linear patterns indicates that the students have achieved the ability to create cohesion in their texts by introducing new information, taking a Rheme as the Theme of the upcoming clause. In terms of types of Theme, the high number of Topical Themes in the students' texts may indicate that the students are able to lead the reader to focus on the participants (characters) of the story. The use of Interpersonal Themes, which are dominantly used by middle and low achievers, indicates that high achievers tend to be lexically complex (written language), while middle and low achievers tend to be grammatically complex (spoken language). The dominant use of Textual Themes indicates that the students are able to create complex clauses and to connect clauses, which helps them to create a cohesive and coherent text. Further research on Theme and Thematic progression for elementary school students can be applied to other genres such as recount, procedural, or expository texts.
|
v3-fos-license
|
2024-05-30T06:17:49.851Z
|
2024-05-29T00:00:00.000
|
270094189
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s00384-024-04656-1.pdf",
"pdf_hash": "4a2be3f005d7120016d6ec8faf35f95280a59bf3",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46727",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "5ee2cad185ce203e2df3a3505bdf6c7aca895802",
"year": 2024
}
|
pes2o/s2orc
|
Assessing circulating tumour DNA (ctDNA) as a prognostic biomarker in locally advanced rectal cancer: a systematic review and meta-analysis
Introduction Circulating tumour DNA (ctDNA) has emerged as a promising biomarker in various cancer types, including locally advanced rectal cancer (LARC), offering potential insights into disease progression, treatment response and recurrence. This review aims to comprehensively evaluate the utility of ctDNA as a prognostic biomarker in LARC. Methods PubMed, EMBASE and Web of Science were searched as part of our review. Studies investigating the utility of ctDNA in locally advanced rectal cancer (LARC) were assessed for eligibility. Quality assessment of included studies was performed using the Newcastle Ottawa Scale (NOS) risk of bias tool. Outcomes extracted included basic participant characteristics, ctDNA details and survival data. A meta-analysis was performed on eligible studies to determine pooled recurrence-free survival (RFS). Results Twenty-two studies involving 1676 participants were included in our analysis. Methodological quality categorised by the Newcastle Ottawa Scale was generally satisfactory across included studies. ctDNA detected at various time intervals was generally associated with poor outcomes across included studies. Meta-analysis demonstrated a pooled hazard ratio of 8.87 (95% CI 4.91–16.03) and 15.15 (95% CI 8.21–27.95), indicating an increased risk of recurrence with ctDNA positivity in the post-neoadjuvant and post-operative periods respectively. Conclusion Our systematic review provides evidence supporting the prognostic utility of ctDNA in patients with LARC, particularly in identifying patients at higher risk of disease recurrence in the post-neoadjuvant and post-operative periods. Supplementary Information The online version contains supplementary material available at 10.1007/s00384-024-04656-1.
Introduction
Locally advanced rectal cancer (LARC) can cause significant challenges in terms of management [1]. Increasingly, patients are having total neoadjuvant treatment (TNT) [2], and the need for better indicators of complete clinical response, especially in borderline cases, is vital [3]. Circulating tumour DNA (ctDNA) has emerged as a promising biomarker in various cancer types, including LARC, offering potential insights into disease progression, treatment response and recurrence [4][5][6][7][8].
ctDNA refers to fragmented DNA shed by tumour cells into the bloodstream [9]. These fragments carry genetic alterations characteristic of the originating tumour, providing a non-invasive means of interrogating tumour biology [10]. The detection and analysis of ctDNA have garnered significant interest in cancer research due to their potential applications in diagnosis, prognostication and treatment monitoring [11]. ctDNA can be isolated from peripheral blood samples and analysed using various techniques, including next-generation sequencing (NGS), digital PCR (dPCR) and targeted amplicon-based assays [12].
NGS is a highly sensitive technique that allows for the comprehensive profiling of ctDNA, enabling the detection of a wide range of genetic alterations, including single nucleotide variants (SNVs), insertions and deletions (indels), copy number variations (CNVs) and structural rearrangements [13]. Conversely, dPCR quantifies the absolute number of target DNA molecules, allowing for precise measurement of ctDNA levels and increased cost-effectiveness compared to NGS [14]. Alternative approaches include real-time PCR (qPCR), BEAMing and fragment analysis [15][16][17]. By profiling the genomic landscape of tumours through ctDNA analysis, clinicians can gain insights into tumour heterogeneity, clonal evolution and potential therapeutic targets, thereby facilitating personalised treatment approaches [18].
While several clinicopathological factors (MRI and endoscopic response) are currently used to stratify response to treatment in patients with LARC, they have limitations [19]. In addition, traditional prognostic factors such as tumour stage and histological grade provide only circumstantial information about potential disease behaviour and treatment response. There is a need to identify novel biomarkers that can complement existing prognostic tools and enhance risk stratification in LARC [20]. This systematic review aims to comprehensively evaluate the utility of ctDNA as a prognostic biomarker in LARC, exploring its potential to address the unmet clinical needs in this challenging disease context.
Study design and reporting guidelines
This is a systematic review and meta-analysis of retrospective and prospective cohort studies conducted in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) reporting guidelines [21]. Local institutional ethical approval was not required. All authors declare no conflicts of interest. This research received no external funding.
Search strategy
The following databases were searched as part of our systematic review process in April 2024: MEDLINE, PubMed, Embase and Web of Science. The following search strategy was used: (rectal OR rectum OR colorect*) AND (circulating tumour DNA OR ctDNA OR ct-DNA OR circulating free DNA OR cfDNA OR cf-DNA). The search was completed on 5th April 2024. The grey literature was also searched as part of our study to identify any further ongoing works, including theses and conference abstracts.
Eligibility criteria
Original studies investigating an association between ctDNA and treatment or oncological outcomes in patients with locally advanced rectal cancer were eligible for inclusion.Case reports, conference abstracts and review articles were excluded.
Data extraction and quality assessment
A database was established utilising the citation management software EndNote X9™. Independent reviews of search outputs were conducted by two researchers (NOS and HCT). Initially, duplicate entries were eradicated, followed by a screening of study titles to gauge potential relevance. Subsequently, the abstracts of selected studies underwent assessment for eligibility based on predetermined inclusion/exclusion criteria. Excluded studies were categorised by reason within the database. Full texts of eligible abstracts were then scrutinised using identical criteria.
For efficient data extraction and storage, the Cochrane Collaboration's screening and data extraction tool, Covidence, was employed [22]. Data collection was undertaken independently by two reviewers (NOS and HCT), encompassing study details, design, population, intervention, comparison groups and outcomes. Discrepancies between reviewers were resolved through open discussion, with final arbitration by the senior author (MK).
Potential biases in non-RCT studies were evaluated using the Newcastle-Ottawa Scale (NOS) risk of bias tool, with results tabulated accordingly [23]. This tool assesses studies across various categories, assigning stars to denote quality: 7 stars indicating "very good", 5-6 stars "good", 3-4 stars "satisfactory" and 0-2 stars "unsatisfactory". Two reviewers (NOS and HCT) independently conducted critical appraisals, with a third reviewer (MK) arbitrating in cases of discordance.
Statistical analysis
Statistical analysis was conducted using RevMan statistical software (Ver. 5, Copenhagen, Denmark). Generic inverse variance data were presented as hazard ratios (HR) alongside 95% confidence intervals (95% CI). Outcome measures, including mean with standard deviation and median with interquartile range, were documented. Only studies that provided hazard ratios with either confidence intervals or p-values were eligible for inclusion in the meta-analysis. When necessary, outcome variables (mean and SD) were estimated from the median and range using the formula described by Hozo et al. [24]. Heterogeneity was evaluated using the I-squared statistic, with values exceeding 50% indicating significant heterogeneity. Statistical significance was defined as a p-value of less than 0.05. A random effects model was employed throughout.
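For readers who wish to reproduce the pooled estimates, the generic inverse-variance, random-effects computation described above can be sketched as follows. This is not the RevMan implementation; it is a standard DerSimonian-Laird calculation which assumes each study contributes a hazard ratio and a 95% confidence interval, from which the standard error of the log hazard ratio is back-calculated.

```python
import numpy as np

def pooled_hr_random_effects(hrs, ci_lows, ci_highs):
    # DerSimonian-Laird random-effects pooling of hazard ratios reported with 95% CIs.
    y = np.log(np.asarray(hrs, dtype=float))                        # log hazard ratios
    se = (np.log(ci_highs) - np.log(ci_lows)) / (2 * 1.96)          # SE back-calculated from the CI width
    w = 1.0 / se**2                                                 # fixed-effect (inverse-variance) weights
    y_fe = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fe) ** 2)                                 # Cochran's Q
    df = len(y) - 1
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                                   # between-study variance
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0             # I-squared (%)
    w_re = 1.0 / (se**2 + tau2)                                     # random-effects weights
    y_re = np.sum(w_re * y) / np.sum(w_re)
    se_re = np.sqrt(1.0 / np.sum(w_re))
    ci = (np.exp(y_re - 1.96 * se_re), np.exp(y_re + 1.96 * se_re))
    return np.exp(y_re), ci, i2
```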
Systematic review registration
Our systematic review was registered on PROSPERO in April 2024 (ID: 533712).
Search results
The previously outlined literature search yielded a total of 2123 results (Supplementary material S1). After eliminating 281 duplicates, 1842 studies underwent screening. Following the initial screening, 73 full abstracts were meticulously reviewed for eligibility, resulting in 42 being selected for full-text scrutiny. Among these 42 full texts, 22 studies met the eligibility criteria and were consequently included in our analysis. Notably, eight of the included studies provided adequate statistical data for incorporation into our quantitative analysis [30, 32-34, 36, 42, 44, 45]. A detailed depiction of the literature screening process can be found in Supplementary material S1.
Methodological characteristics and quality of studies
Among the 22 studies included, seventeen were conducted prospectively, four retrospectively, and one did not specify the study design. Regarding the study settings, ten were conducted in a single institute, nine in a multi-institutional setting, and three did not specify. Concerning the assessment of the risk of bias, eleven studies were rated as "very good", ten as "good", one as "satisfactory" and none as "unsatisfactory" according to the Newcastle Ottawa Scale classification. Supplementary material S2 provides a summary of our risk of bias assessment. The methodological characteristics of the included studies are outlined in Supplementary material S3.
Participant characteristics
The total number of participants across the included studies was 1676. All patients had a diagnosis of locally advanced rectal cancer. Baseline participant characteristics are outlined below in Table 1.
ctDNA details and study findings
The ctDNA assay type, sequencing method and collection time points varied across the included studies. Fourteen studies measured a mutation-specific panel, six measured cfDNA concentration and the remaining two measured promoter genes or multiple assays respectively. In regard to the sequencing method, next-generation sequencing (NGS) was the most commonly utilised, accounting for ten studies, followed by polymerase chain reaction (PCR) (n = 9) and direct fluorescent antibody (DFA) (n = 2). One study did not report the sequencing method.
In terms of approach, eleven studies used an agnostic, ten used targeted and one study used both approaches.Agnostic approaches do not rely on prior knowledge of specific mutations but instead analyse the entire ctDNA for any alterations that may be present [47].These approaches are valuable in settings where comprehensive genomic profiling is necessary, particularly for identifying novel or unexpected mutations.They are beneficial in research and clinical scenarios where a wide array of genetic alterations needs to be detected to inform treatment decisions or understand tumour heterogeneity [48].Conversely, targeted approaches focus on detecting specific, known mutations that are of clinical interest [49].They often use panels designed to target commonly mutated genes in particular cancers.Targeted approaches are highly relevant in clinical settings where specific genetic alterations are known to influence prognosis or guide targeted therapies [50].They are typically more cost-effective and faster than agnostic approaches, making them suitable for routine clinical use, particularly when monitoring known mutations for treatment response or minimal residual disease.Both approaches have their place in the evaluation of ctDNA [47].Agnostic methods provide a broad view of the genetic landscape, which can uncover new targets for therapy, while targeted methods allow for focused, efficient and cost-effective monitoring of known genetic alterations.Tables 2 and 3 below outline ctDNA panel details, collection timepoints and main study findings.
Meta-analysis
Eight of the included studies provided sufficient statistical data to be included in our quantitative analysis [30, 32-34, 36, 42, 44, 45]. We investigated the association between recurrence-free survival and the presence of ctDNA at several timepoints in a meta-analysis, the results of which are demonstrated in Fig. 1 below. The pooled hazard ratio for ctDNA presence after completion of neoadjuvant therapy, compared to patients without detectable ctDNA, was 8.87 (95% CI 4.91-16.03, p < 0.0001). Similarly, the pooled hazard ratio for ctDNA presence post-operatively compared to those without detectable ctDNA was 15.15 (95% CI 8.21-27.95, p < 0.0001). These results indicate an increased risk of recurrence in patients with LARC with detectable ctDNA in either the post-neoadjuvant therapy or post-operative period.
Ongoing research
Several trials are ongoing to further determine the prognostic capabilities of ctDNA in patients with LARC. The DYNAMIC-RECTAL trial (ACTRN12617001560381) is a multi-centre randomised controlled phase 2 trial aiming to investigate the prognostic benefit of post-operative ctDNA detection in guiding the need for adjuvant treatment in patients with LARC. Eligible patients underwent neoadjuvant chemoradiation and total mesorectal excision, and the primary endpoint was adjuvant chemotherapy use. Preliminary results suggest that fewer patients in the ctDNA-guided arm required adjuvant therapy, with a lower risk of recurrence in patients with undetectable post-operative ctDNA [51].
The SYNCOPE study (NCT04842006) is a randomised controlled treatment trial aiming to reduce both overtreatment and metastatic disease progression in patients with rectal cancer.Participants with high-risk features will be randomised into two groups: early systemic chemotherapy followed by ctDNA and organoid-guided adjuvant therapy, or conventional treatment.Primary outcomes include RFS and ctDNA positivity rates in post-operative patients within the conventional treatment arm not exposed to chemotherapy.
The REVEAL study (NCT05674422) is a prospective multi-institutional study evaluating response to total neoadjuvant therapy (TNT) with liquid biopsy in eligible patients with LARC.The study aims to investigate the role of ctDNA in the prediction of relapse in this cohort of patients followed by a watch and wait programme or TME depending on the response to initial treatment.
Finally, CINTS-R (NCT05601505) is a multi-institutional randomised controlled trial aiming to evaluate outcomes of ctDNA-guided neoadjuvant treatment in patients with LARC.Treatment groups will receive either NCRT followed by immunotherapy, NCRT alone or TNT depending on mutation status following NGS of tumour tissue and variant allele frequency (VAF) of ctDNA.Control groups will receive standard NCRT only.
Discussion
Our systematic review of the currently available literature demonstrates the significant potential of ctDNA as a prognostic biomarker in patients with LARC. In particular, ctDNA detection post-neoadjuvant therapy or post-operatively was associated with an increased risk of recurrence, suggesting its utility in predicting disease progression and informing treatment decisions. Meta-analysis of the available data further supported these findings, indicating a significantly higher hazard ratio for recurrence in patients with detectable ctDNA at these critical time points. These results align with and extend the existing literature on ctDNA in various cancer types, highlighting its potential as a non-invasive biomarker for monitoring disease burden and treatment response in a multitude of malignancies [4][5][6][7][8]. Despite its promise, ctDNA analysis is not without limitations [52]. Technical challenges, such as low concentrations of ctDNA in circulation and the need for sensitive detection methods, can impede accurate assessment. Similarly, tumour heterogeneity and clonal evolution may further complicate interpretation, potentially leading to false-negative or false-positive results [53]. Moreover, the lack of standardised protocols for ctDNA analysis and variability in assay performance across studies underscore the need for methodological standardisation and validation [54].
While our systematic review provides valuable insights into the role of ctDNA in LARC, several limitations warrant consideration. Firstly, the studies included in our review demonstrated heterogeneity in methodology, patient populations and outcome measures, which may have introduced bias and confounded interpretation. Additionally, the reliance on published data may have led to publication bias, with studies reporting significant findings more likely to be included. Despite these limitations, the findings of our review have significant implications for future research and clinical practice. The consistent association between ctDNA detection and adverse outcomes in LARC suggests its potential as a prognostic biomarker for risk stratification and treatment decision-making. Integrating ctDNA analysis into routine clinical practice could facilitate personalised treatment strategies, including the identification of high-risk patients who may benefit from intensified therapy or closer surveillance [55]. Furthermore, ongoing research efforts aimed at refining ctDNA-based assays and elucidating the underlying biological mechanisms driving ctDNA release and clearance are crucial for maximising its clinical utility [56].
Future research in this field should prioritise several key areas to address current knowledge gaps and optimise the implementation of ctDNA analysis in clinical practice.Firstly, large-scale prospective studies with standardised methodologies are needed to validate the prognostic utility of ctDNA across diverse patient populations and treatment settings [57].Additionally, efforts to standardise ctDNA assays, including sample collection, processing and analysis protocols, are essential to ensure the reproducibility and comparability of results.Finally, collaborative efforts to establish international consortia and biobanks for ctDNA research could facilitate data sharing and accelerate progress towards clinical implementation [58].
Conclusion
Our systematic review provides evidence supporting the prognostic utility of ctDNA in patients with LARC, particularly in identifying patients at higher risk of disease recurrence in the post-neoadjuvant and post-operative periods.
Table 1. Baseline participant characteristics.
Table 3. Study findings. Abbreviations: cCR, clinical complete response; CRT, chemoradiotherapy; cfDNA, circulating free DNA; TNT, total neoadjuvant therapy; OS, overall survival; HR, hazard ratio; CI, confidence interval; AJCC, American Joint Committee on Cancer; MFS, metastasis-free survival; a/w, associated with; DFS, disease-free survival; RFS, recurrence-free survival; TRG, tumour regression grade; NACRT, neoadjuvant chemoradiotherapy; mrTRG, magnetic-resonance TRG.
PHB2 Alleviates Neurotoxicity of Prion Peptide PrP106–126 via PINK1/Parkin-Dependent Mitophagy
Prion diseases are a group of neurodegenerative diseases characterized by mitochondrial dysfunction and neuronal death. Mitophagy is a selective form of macroautophagy that clears injured mitochondria. Prohibitin 2 (PHB2) has been identified as a novel inner membrane mitophagy receptor that mediates mitophagy. However, the role of PHB2 in prion diseases remains unclear. In this study, we isolated primary cortical neurons from rats and used the neurotoxic prion peptide PrP106–126 as a cell model for prion diseases. We examined the role of PHB2 in PrP106–126-induced mitophagy using Western blotting and immunofluorescence microscopy and assessed the function of PHB2 in PrP106–126-induced neuronal death using the cell viability assay and the TUNEL assay. The results showed that PrP106–126 induced mitochondrial morphological abnormalities and mitophagy in primary cortical neurons. PHB2 was found to be indispensable for PrP106–126-induced mitophagy and was involved in the accumulation of PINK1 and recruitment of Parkin to mitochondria in primary neurons. Additionally, PHB2 depletion exacerbated neuronal cell death induced by PrP106–126, whereas the overexpression of PHB2 alleviated PrP106–126 neuronal toxicity. Taken together, this study demonstrated that PHB2 is indispensable for PINK1/Parkin-mediated mitophagy in PrP106–126-treated neurons and protects neurons against the neurotoxicity of the prion peptide.
Introduction
Prion diseases are infectious neurodegenerative disorders characterized by vacuolar degeneration of the central nervous system (CNS).They affect various animal species and have limited zoonotic potential [1][2][3].The transmissible nature of prion diseases is attributed to the template-directed misfolding of the normal cellular prion protein (PrP C ) by the disease-associated conformation (PrP Sc ) [4][5][6].This misfolding event leads to increased protease resistance and β-sheet content in the protein, resulting in the deposition of PrP Sc in the CNS [7,8].The neurotoxic PrP fragment 106-126 (PrP 106-126 ) shares many physiological properties and pathogenic characteristics with PrP Sc .Consequently, PrP 106-126 is commonly used to investigate the structural and physicochemical features underlying PrP neurotoxicity [9][10][11].
The mitochondrion plays a critical role in the life of eukaryotic cells and is involved in regulating cell death, innate immune responses, and cell differentiation [12,13].In prion diseases, mitochondrial dysfunction is frequently observed in brain tissues [9,14,15].Damaged mitochondria are impaired in ATP production and release higher levels of reactive oxygen species (ROS) [16,17], which are detrimental to cells.Consequently, the autophagic system targets impaired mitochondria for degradation through a catabolic process known as mitophagy, which contributes to maintaining mitochondrial quality control in various cell types [18][19][20].
Prohibitin 2 (PHB2) is a highly conserved inner mitochondrial membrane (IMM) protein that is critical in regulating mitochondrial assembly and function [30].Previous studies have shown that PHB2 acts as a key mitophagy receptor in mammalian cells, facilitating Parkin-mediated mitophagy by stabilizing PINK1 and enhancing the recruitment of Parkin to mitochondria [31,32].Furthermore, PHB2 mediates mitophagy by interacting with LC3, a protein associated with the autophagosomal membrane, and binding to damaged mitochondria through its LC3-interaction region domain [30].However, the specific impact of PHB2 on mitophagy in the context of prion diseases remains unclear.
In this study, we aimed to examine the status of mitophagy and the involvement of PHB2 in mitophagy and neuronal death in primary cortical neurons treated with PrP 106-126 .Our findings demonstrated that PrP 106-126 induced PINK1/Parkin-dependent mitophagy in primary neurons.We also determined that PHB2 is essential for PINK1/Parkin-mediated mitophagy and plays a protective role against the neurotoxic effects of the prion peptide on neurons.
PrP 106-126 Caused Morphological Abnormalities of Mitochondria in Primary Neurons
Neuronal mitochondria play a critical role in maintaining cellular homeostasis, modulating reactive species, providing energy in the form of ATP through oxidative phosphorylation, and regulating various forms of programmed cell death, among numerous other functions [33][34][35]. Previous studies have described mitochondrial dysfunction and morphological changes in the brains affected by prion diseases [9,14]. In our study, primary neurons were treated with 100 µM PrP 106-126 for 24 h and 48 h, respectively, and the morphological changes of the neurons were then observed. As shown in Figure 1A, PrP 106-126 -treated neurons shrank in size, and some of them were lost; the axons were broken, fragmented, or even disappeared. Subsequently, we investigated the ultrastructural changes of mitochondria; primary cortical neurons were treated with 100 µM PrP 106-126 for 24 h and then fixed and subjected to TEM observations. In the TEM images of the control group (Figure 1B, top panel), mitochondria with clear cristae and matrix were evident. In contrast, in neurons exposed to PrP 106-126 stimulation (Figure 1B, bottom panel), many mitochondria became swollen and rounder, the matrix became shallower, and the cristae fractured or even disappeared. These findings suggest that PrP 106-126 induces severe morphological abnormalities in mitochondria. Aiken and colleagues suggested that the scrapie agent, or prion, was present in brain mitochondria from prion-infected hamsters [36]. To investigate whether PrP 106-126 was also present in the mitochondria of primary neurons, we transfected mito-DesRed into primary neurons for 24 h, followed by the treatment with FITC-PrP 106-126 for an additional 24 h. The colocalization between mitochondria and FITC-PrP 106-126 was then examined. The results demonstrated that mitochondria predominantly colocalized with FITC-PrP 106-126 in the neuronal cell body (Figure 1C). These findings indicate that PrP 106-126 is present in mitochondria and may induce mitochondrial morphological abnormalities in primary neurons.
PrP 106-126 Induced Mitophagy in Primary Neurons
Mitophagy is an evolutionarily conserved process that involves the clearance of damaged mitochondria through the autophagy-lysosome pathway, playing an essential role in maintaining the health of the mitochondrial network [37].In this study, we examined the occurrence of mitophagy in primary neurons exposed to PrP 106-126 .Primary cortical neurons were treated with different concentrations of PrP 106-126 for 24 h, and the protein levels of TOMM20 (translocase of the outer mitochondrial membrane 20), a marker for the mitochondrial outer membrane, and microtubule-associated protein 1 light chain 3B-II (LC3B-II), an autophagosomal marker, were examined via Western blotting.As shown in Figure 2A, the protein levels of TOMM20 reduced, while LC3B-II levels increased after 50 µM PrP 106-126 treatment.We further treated primary neurons with 100 µM PrP 106-126 at different times, as shown in Figure 2B.The reduced protein levels of TOMM20 and the increased levels of LC3B-II were detected after 12 h treatment (Figure 2B).Additionally, using TEM, we observed larger double-layer membrane structures containing damaged and broken mitochondria in PrP 106-126 -treated neurons (Figure 2C), indicating the induction of mitophagy.To further confirm the induction of mitophagy, we expressed COX8-EGFP-mCherry, a tandem fluorescent-tagged mitochondrial targeting sequence of the inner membrane protein COX8, in primary neurons.This allowed us to monitor the delivery of mitochondria to lysosomes based on the different pH stability of EGFP and mCherry fluorescent proteins.As depicted in Figure 2D, PBS-treated neurons displayed yellow staining of mitochondria with a merge of green (EGFP) and red (mCherry) signals at 24 and 36 h.In contrast, distinct red puncta were detected in PrP 106-126 -treated neurons at 24 and 36 h, indicating an increased delivery of mitochondria into lysosomes.Collectively, these findings suggest that mitophagy is induced in primary neurons in response to the PrP 106-126 treatment.
PrP 106-126 Triggers PINK1/Parkin-Dependent Mitophagy
In healthy cells, PINK1 is continually degraded by mitochondrial proteases.However, upon mitochondrial damage, PINK1 proteolysis is inhibited, leading to its accumulation in the mitochondria.Subsequently, this accumulation promotes the recruitment of cytosolic E3 ubiquitin protein ligase Parkin to the mitochondrial outer membrane, facilitating the process of mitophagy [26,38].PINK1/Parkin-dependent mitophagy is an extensively studied form of mitophagy and has been implicated in the pathogenesis of neurodegenerative diseases [39].To investigate the involvement of PINK1 and Parkin in PrP 106-126 -induced mitophagy, primary neurons were exposed to PrP 106-126 for 6 to 48 h, and the protein levels of PINK1 and Parkin were examined via Western blotting analysis.As depicted in Figure 3A, the PrP 106-126 treatment increased the protein level of PINK1, followed by a slight increase in the Parkin protein level.
Moreover, cytoplasmic and mitochondrial fractions were extracted from PrP 106-126 -treated neurons and control neurons, respectively, and subjected to Western blotting. The results displayed elevations in the protein levels of PINK1, Parkin, and LC3B-II in the mitochondria of PrP 106-126 -treated neurons compared to the non-treated group (Figure 3B). Additionally, mito-DesRed, a eukaryotic expression plasmid that is used as a mitochondrial marker, was transfected into primary neurons for 24 h, followed by the treatment with PrP 106-126 . Confocal microscopy was employed to examine the colocalization between mitochondria and Parkin. As shown in Figure 3C, the PrP 106-126 treatment increased the merging of green (Parkin) and red (mitochondria) fluorescence, indicating the recruitment of Parkin to mitochondria. In summary, these findings suggest that the PrP 106-126 treatment increases the protein level of PINK1 and stabilizes PINK1 in the mitochondrial fraction, promoting the recruitment of Parkin to mitochondria and mediating mitophagy in primary neurons.
PHB2 Is Involved in PrP 106-126 -Induced Mitophagy in Primary Neurons
Prohibitin 2 (PHB2), an essential IMM protein, serves as a receptor in mitophagy and is responsible for mitochondrial quality control [31]. Our findings demonstrate that the PrP 106-126 treatment increases the protein level of PHB2 (Figure 4A). To investigate the involvement of PHB2 in PrP 106-126 -induced mitophagy, different shRNAs were transfected into primary neurons to knock down endogenous PHB2, and the efficiency of knockdown was confirmed by Western blotting (Figure 4B). After treating the neurons with PrP 106-126 for 24 h, the protein levels of LC3B-II and TOMM20 were examined. As shown in Figure 4B, compared to the control group transfected with shRNA-NC, the PHB2-knockdown groups exhibited a reduction in LC3B-II levels, while the protein level of TOMM20 increased. Additionally, we transfected primary neurons with different concentrations of the recombinant vector flag-PHB2 for 24 h, using the empty vector flag-PCMV as a control. Subsequently, the neurons were treated with PrP 106-126 for an additional 24 h. As shown in Figure 4C, the PHB2-overexpression group exhibited an increase in LC3B-II and a decrease in TOMM20 protein levels, indicating that PHB2 may participate in PrP 106-126 -induced mitophagy. To further verify the influence of PHB2 on mitophagy, we transfected COX8-EGFP-mCherry into PHB2-knockdown neurons and PHB2-overexpression neurons, respectively. Subsequently, the neurons were exposed to PrP 106-126 stimulation. The results demonstrated that, compared to the peptide-only treatment group, the knockdown of PHB2 increased the yellow staining of mitochondria, indicating the merge of green (EGFP) and red (mCherry) signals. Conversely, the overexpression of PHB2 exhibited more prominent red puncta (Figure 4D), suggesting that PHB2 influences the delivery of mitochondria into lysosomes. Collectively, these results suggest that PHB2 is indispensable for PrP 106-126 -induced mitophagy in primary neurons.
To assess the colocalization of Parkin with mitochondria, we treated PHB2-knockdown and PHB2-overexpression neurons with PrP 106-126 for 24 h and performed a confocal microscopy assay. The results showed that PHB2-knockdown neurons displayed fewer yellow stains with the merge of green (Parkin) and red (mitochondria) signals; in contrast, PHB2-overexpression neurons exhibited more yellow fluorescence with the merge of Parkin and mitochondria signals (Figure 5B), which indicates that PHB2 influences the colocalization of Parkin with mitochondria. Therefore, these findings suggest that PHB2 is essential for the accumulation of PINK1 and the subsequent recruitment of Parkin to mitochondria.
PHB2 Modulates PrP 106-126 -Induced Neuronal Death
Neuronal death is a prominent characteristic of prion diseases, as both PrP Sc and PrP 106-126 have been shown to induce apoptosis in neuronal cells [40]. Initially, we treated primary neurons with 100 µM PrP 106-126 for 24 and 36 h, respectively. Cell viability was assessed using the CCK-8 assay, and the results demonstrated that the PrP 106-126 treatment reduced cell viability compared to PBS-treated neurons at both time points (Figure 6A). To investigate the role of PHB2 in neuronal survival, we examined the effects of PHB2 knockdown on cell viability in neurons exposed to PrP 106-126 . The results revealed that PHB2 knockdown led to a significant decrease in cell viability compared to the sh-NC control group under PrP 106-126 stimulation (Figure 6B). Furthermore, we transfected primary neurons with flag-PHB2 and subsequently treated them with PrP 106-126 . As depicted in Figure 6C, the cell viability of flag-PHB2-transfected neurons was markedly higher than that of the flag-PCMV control group, indicating that PHB2 offers some level of protection to neurons against PrP 106-126 toxicity.
To validate the impact of PHB2 on PrP 106-126 -induced neuronal damage, we conducted a TUNEL assay to measure cell apoptosis. The results demonstrated that the treatment with PrP 106-126 increased the fluorescence intensity (red) in the TUNEL staining at 24 and 36 h. Moreover, the knockdown of PHB2 resulted in a further increase in the number of TUNEL-positive neurons, whereas the overexpression of PHB2 reduced the number of TUNEL-positive neurons (Figure 6D). These findings collectively indicate that PHB2 modulates PrP 106-126 -induced neuronal death: the depletion of PHB2 exacerbates the neuronal toxicity of PrP 106-126 , while the overexpression of PHB2 alleviates it.
Discussion
Mitochondria play a crucial role in regulating cell death, a significant characteristic of neurodegeneration [12,41,42]. In our study, we observed that PrP 106-126 induced morphological abnormalities in mitochondria, such as mitochondrial swelling and vacuolation, in primary neurons. Similar mitochondrial abnormalities have been reported in the brains of various neurodegenerative diseases, including prion diseases [14,32,43]. Impaired mitochondrial activity is typically observed at advanced disease stages in both human patients and animal models of neurodegenerative diseases [44]. It has been documented that mitochondrial dysfunction contributes to the formation of Aβ plaques and neurofibrillary tangles, which are defining features of Alzheimer's disease, and this, in turn, exacerbates mitochondrial defects [45,46]. Li and colleagues reported extensive mitochondrial fragmentation, the collapse of mitochondrial membrane potential (MMP), ATP loss, and cell death in PrP 106-126 -treated N2a cells in vitro and in the hamster prion model in vivo [9].
Additionally, we discovered the presence of PrP 106-126 in mitochondria. Manczak et al. reported that mitochondria serve as direct sites of Aβ accumulation in neurons affected by Alzheimer's disease, leading to the generation of free radicals and impairment of mitochondrial metabolism during disease development and progression [47]. In a mouse model of Parkinson's disease, mutant α-synuclein has been detected within mitochondria in specific brain regions, suggesting that it may directly damage mitochondria [48]. Therefore, we hypothesize that the presence of prion peptides in mitochondria could be one of the reasons leading to mitochondrial damage.
Autophagy is a highly conserved process essential for cellular homeostasis and survival [49].Moreover, selective autophagy forms specifically target damaged organelles, such as mitophagy, which clears dysfunctional mitochondria [50].Proficient mitophagy responses are crucial for maintaining optimal mitochondrial numbers, preserving energy metabolism, and protecting cells, including neurons, from the harmful effects of damaged mitochondria [19,37,51].In this study, we found that the PrP 106-126 treatment induced mitophagy in primary neurons, which aligns with previous studies indicating enhanced mitophagy in prion-infected cultured cells and prion-infected experimental mice [14].Mitochondrial dysfunction plays a critical role in developing numerous neurodegenerative diseases, and cells have evolved the capacity to limit impairment by activating mitophagy [52].Multiple lines of evidence suggest that mitophagy mediates neuroprotective effects in certain forms of neurodegenerative diseases and acute brain damage [46,51,53].Therefore, the activation of mitophagy in PrP 106-126 -treated neurons may contribute to eliminating damaged mitochondria caused by the prion peptide, to some extent, preserving mitochondrial homeostasis and promoting neuronal survival.However, whether the accumulation of PrP 106-126 in mitochondria directly activates mitophagy or PrP 106-126 exerts its toxic effects via other pathways leading to mitophagy remains to be further studied.Furthermore, we observed an increase in the protein levels of PINK1 and Parkin in PrP 106-126 -treated neurons.PINK1 plays a role in mitochondrial maintenance and functions upstream of Parkin [27,54].It is selectively stabilized by mitochondrial dysfunction, leading to the recruitment of Parkin to the mitochondrial outer membrane [24,55].This recruitment triggers the ubiquitination of several mitochondrial outer membrane proteins, such as TOMM20, which, in turn, bind specific autophagy receptors such as SQSTM1/p62 [24,56].Subsequently, LC3B-coated phagophores encapsulate the damaged mitochondria and facilitate their delivery to the lysosome for degradation [22,26,57].In our study, we observed higher protein levels of PINK1, Parkin, and LC3B-II in the mitochondria of PrP 106-126treated neurons compared to untreated neurons.Additionally, the PrP 106-126 treatment increased the colocalization between Parkin and mitochondria.These findings suggest that the PrP 106-126 treatment stabilized PINK1 on mitochondria, leading to the recruitment of Parkin to impaired mitochondria.This recruitment process likely involves the involvement of LC3B-II in mediating mitophagy.However, the specific mechanism needs more research.PHB2 is a highly conserved mitochondrial inner membrane protein that forms the mitochondrial prohibitin complex along with PHB/PHB1 [58][59][60].It has been reported that PHB2 binds to the LC3 through an LC3-interaction region (LIR) domain upon mitochondrial depolarization and proteasome-dependent outer membrane rupture.This binding is necessary for the clearance of paternal mitochondria [30,31,38].Therefore, we explored the role of PHB2 in PrP 106-126 -induced mitophagy.We observed an increase in the expression of PHB2, and PHB2 knockdown inhibited mitophagy; conversely, the overexpression of PHB2 increased mitophagy.These findings suggest that PHB2 is essential for PrP 106-126induced mitophagy.
It has been reported that PHB2 is required for classic Parkin-induced mitophagy in mammalian cells [38,61,62]. In our study, we found that PHB2 depletion blocked the mitochondrial accumulation of PINK1 and inhibited the recruitment of Parkin to mitochondria. On the other hand, PHB2 overexpression directly increased PINK1 accumulation and the subsequent recruitment of Parkin to mitochondria in primary neurons exposed to PrP 106-126 stimulation. These data demonstrate that PHB2 is essential for PINK1/Parkin-dependent mitophagy in PrP 106-126 -treated neurons. Several studies have reported that PHB2 may be a novel target for diseases associated with mitophagy. Yan and colleagues found that the small-molecule compound FL3 could inhibit the function of PHB2, thereby significantly blocking mitophagy and exerting antitumor effects [38]. Furthermore, PHB2 is also required for cholestasis-induced mitophagy through its interaction with LC3 at the injured mitochondria [30].
Studies have reported that mitochondrial dysfunction and bioenergy deficiency in many neurodegenerative diseases can be alleviated by stimulating PINK1/Parkin-mediated mitophagy [14,27,63].Considering our observation of PHB2's involvement in PINK1/ Parkin-dependent mitophagy in PrP 106-126 -treated neurons, we investigated whether PHB2 influences the neurotoxicity of PrP 106-126 .We analyzed the effects of PHB2 on neuronal death and apoptosis induced by PrP 106-126 stimulation.The results showed that PHB2 knockdown exacerbated PrP 106-126 -induced neuronal death and apoptosis.Conversely, PHB2 overexpression inhibited PrP 106-126 -induced neuronal damage.These findings indicate that PHB2 provides protection for neurons against PrP 106-126 toxicity.Furthermore, considering the previous finding that PHB2 is required for PINK1/Parkin-mediated mitophagy, it is plausible that the neuroprotection provided by PHB2 against PrP 106-126 toxicity involves, to some extent, PINK1/Parkin-mediated mitophagy.Several studies have demonstrated the protective role of PHB2-mediated mitophagy in various diseases.One study reported that the depletion of PHB2 decreased the interaction between PHB2 and LC3, resulting in reduced mitophagy and exacerbated loss of dopaminergic neurons in a Parkinson's disease mouse model [64].Lai and co-workers reported that Rutin, a natural botanical ingredient, attenuated oxidative damage, through PHB2-mediated mitophagy, in MPP + -treated SH-SY5Y cells [65].Artemisinin, known for its powerful antioxidative stress effect, alleviated oxidative damage caused by cerebral ischemia/reperfusion by regulating PHB2-mediated autophagy in the human neuroblastoma SH-SY5Y cell line [66].
In conclusion, our findings demonstrate that PrP 106-126 induces PINK1/Parkin-mediated mitophagy. PHB2, a mitophagy receptor, protects neuronal cells against the toxicity of the prion fragment, and this effect is likely mediated through PINK1/Parkin-dependent mitophagy (Figure 7). While further research is needed to confirm and explore the full role of PHB2 in prion diseases, our studies provide hope for targeting PHB2 as a therapeutic approach for neurodegenerative diseases associated with prions.
Cell Culture
Cerebral cortex neuronal cultures were prepared from postnatal 1-day-old Sprague-Dawley rats, following the previously described procedure [67,68]. Briefly, after sterilization, the brain was dissected, and then the cerebral cortices were collected and digested with ice-cold HBSS containing 2 mg/mL papain (Solarbio Life Sciences, Beijing, China) and 50 µg/mL DNase (Sigma-Aldrich, St. Louis, MO, USA) for 30 min at 37 °C. The digested tissues were gently triturated into single cells. After repeated sedimentation and washing, the cells were separated by centrifugation at 800 rpm for 5 min. Then the isolated cells were seeded in 50 µg/mL poly-D-lysine (Solarbio)-coated plates (Corning, Corning, NY, USA) at a final density of 7 × 10⁵ cells/well in a 12-well plate or 5 × 10⁵ cells/well in a 24-well plate. The cells were cultured in DMEM/F12 (Gibco, Grand Island, NY, USA), supplemented with 10% fetal bovine serum (Nulen Biotech, Shanghai, China) and 2% B27 (Invitrogen, Carlsbad, CA, USA). After 48 h, 10 µM cytarabine (Sigma-Aldrich) was added to suppress the growth of glial cells. Experimental treatments were initiated after 6 days of culture.
Prion Protein Peptide and Peptide Treatment
The PrP 106-126 peptide (sequence: KTNMKHMAGAAAAGAVVGGLG) and fluorescein isothiocyanate-labeled PrP 106-126 (FITC-PrP 106-126 ) were synthesized by Sangon BioTech (Shanghai, China). The purity of the prion peptides was >98%, as indicated by data from the synthesizer. The peptides were dissolved in 0.1 M PBS to a concentration of 1 mM and allowed to aggregate at 37 °C for 24 h [68][69][70]. The neurons were washed with 0.1 M PBS, and then the cells were treated with PrP 106-126 or FITC-PrP 106-126 in a culture medium for indicated times.
Mitochondrial Isolation
Mitochondria were isolated using the mitochondrial isolation assay (C3601, Beyotime, Shanghai, China). Briefly, 5 × 10⁷ cortical neurons or treated neurons were washed and resuspended in pre-cooled 0.1 M PBS. The cells were then homogenized with a mitochondrial separation reagent supplemented with a protease inhibitor solution (Beyotime) and then centrifuged at 600× g for 10 min at 4 °C. The supernatants were transferred to a clean 1.5 mL tube and centrifuged again at 11,000× g for 10 min at 4 °C. The supernatant containing cytoplasmic fractions and the precipitate containing mitochondrial fractions were carefully collected, respectively. The collected cytoplasmic fractions were then centrifuged at 12,000× g for 10 min at 4 °C, and the supernatant was collected.
Immunofluorescence Microscopy
Primary cortical neurons were transfected with mito-DesRed and treated after 24 h.The fluorescence images of mitochondria were acquired using a confocal microscope (Carl Zeiss, Oberkochen, Germany).
Primary cortical neurons, PHB2-knockdown neurons, or PHB2-overexpression neurons were transfected with the COX8-EGFP-mCherry plasmid for 24 h, respectively.Subsequently, the neurons were exposed to either PBS or PrP 106-126 .Fluorescence images were visualized using a confocal microscope (Carl Zeiss).
Primary cortical neurons, PHB2-knockdown neurons, or PHB2-overexpression neurons were transfected with mito-DsRed, respectively. The neurons were then fixed, permeabilized, and blocked. Following this, the neurons were incubated with a primary antibody overnight at 4 °C, followed by fluorescently labeled secondary antibodies for 1 h in the dark at 37 °C. Finally, DAPI dihydrochloride was used for nucleus staining, and the fluorescence images were visualized using a confocal microscope (Carl Zeiss).
Western Blotting Analysis
Cell lysates were prepared, and Western blotting was performed as previously described [69].Briefly, equal amounts of protein were separated using SDS-PAGE, transferred onto nitrocellulose membranes (Millipore, Billerica, MA, USA), and then blocked.The membranes were incubated with different primary antibodies, followed by the incubation with the HRP-conjugated secondary antibody.Protein bands were visualized using a FluorChem M Imaging System (ProteinSimple, San Jose, CA, USA).
Transmission Electron Microscopy (TEM)
TEM (Transmission Electron Microscopy) was performed following the previously described method [67,68]. Treated neurons were fixed in ice-cold 5% glutaraldehyde in 0.1 M sodium cacodylate buffer (pH 7.4) at 4 °C. After thoroughly rinsing with sodium cacodylate buffer, the cell pellets were further fixed on ice with 1% OsO4 in 0.1 M sodium cacodylate buffer. Subsequently, dehydration was carried out using a series of ethanol and acetone. The cell pellets were then embedded in resin and polymerized at 60 °C. Ultrathin sections were mounted onto copper grids and stained with 4% uranyl acetate and lead citrate. Imaging was performed using a transmission electron microscope (Hitachi, Tokyo, Japan).
Cell Viability Assay
Cell viability was assessed using a Cell Counting Kit-8 (FC101, TransGen Biotech, Beijing, China). Following the treatment, cells were incubated in a medium containing 10% CCK solution for 2 h. The absorbance at 450 nm was measured using a microplate reader, with a background control used as a blank. The cell survival ratio was calculated as the percentage of the untreated control.
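As a concrete illustration of that calculation, the snippet below converts raw CCK-8 absorbance readings into percent viability relative to the untreated control. The exact blank-subtraction scheme and the numbers shown are assumptions for illustration, not values taken from this study.

def percent_viability(a450_treated, a450_control, a450_blank):
    """Cell survival as a percentage of the untreated control.

    Blank-corrected absorbance at 450 nm is assumed to be proportional to
    the number of viable cells (standard CCK-8 interpretation).
    """
    corrected_treated = a450_treated - a450_blank
    corrected_control = a450_control - a450_blank
    return 100.0 * corrected_treated / corrected_control

# Illustrative absorbance values only (not data from this study).
print(round(percent_viability(0.62, 1.10, 0.08), 1))  # -> 52.9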
Terminal Deoxynucleotidyl Transferase dUTP Nick End Labeling (TUNEL) Assay
TUNEL analysis was conducted to assess cellular apoptosis using an in situ cell death detection kit, TMR red (12156792910, Roche, Basel, Switzerland), following the manufacturer's instructions.Primary cortical neurons were cultured on coverslips in a poly-D-lysine-coated 12-well plate at a density of 5 × 10 5 cells/well.Cells were counterstained with propidium iodide (PI) for nuclei visualization.The slides were examined using an upright fluorescence microscope (Nikon, Tokyo, Japan).
Statistical Analysis
The data were presented as means ± standard deviation (SD) from three independent experiments. Parametric data were analyzed by one-way analysis of variance (ANOVA) with Tukey's post hoc multiple comparisons using SPSS software (Version 21.0; SPSS Inc., Chicago, IL, USA). A p-value of <0.05 was considered statistically significant.
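For readers who prefer a scripted equivalent of that workflow, the sketch below runs a one-way ANOVA followed by Tukey's HSD in Python (scipy/statsmodels) rather than SPSS. The group values are placeholders for illustration, not measurements from this study.

import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Placeholder measurements from three independent experiments per group
# (illustrative values only, not data from this study).
control = np.array([100.0, 98.5, 101.2])
prp     = np.array([62.3, 58.9, 60.4])    # peptide-treated group
prp_oe  = np.array([81.7, 79.2, 83.5])    # peptide + overexpression group

# One-way ANOVA across the three groups.
f_stat, p_value = stats.f_oneway(control, prp, prp_oe)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Tukey's HSD post hoc comparisons if the ANOVA is significant at alpha = 0.05.
if p_value < 0.05:
    values = np.concatenate([control, prp, prp_oe])
    groups = (["control"] * len(control) + ["PrP"] * len(prp)
              + ["PrP+PHB2"] * len(prp_oe))
    print(pairwise_tukeyhsd(values, groups, alpha=0.05))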
Figure 2. PrP 106-126 induced mitophagy in primary neurons. (A) Cortical neurons were treated with different concentrations of PrP 106-126 for 24 h, and then, the protein levels of TOMM20 and LC3B-II were analyzed using Western blotting. (B) Cortical neurons were treated with PrP 106-126 for the indicated periods, and the protein levels of TOMM20 and LC3B-II were analyzed by Western blotting. (C) Representative TEM images of autophagosome containing mitochondria (red arrows) in PrP 106-126 -treated cortical neurons. (D) Mitophagy in cortical neurons transfected with COX8-EGFP-mCherry. After transfection, the cells were subjected to the PrP 106-126 treatment for 24 h. The red puncta represent mitochondria in lysosomes with acidic pH.
Figure 3. PrP 106-126 triggers PINK1/Parkin-dependent mitophagy. (A) Cortical neurons were treated with PrP 106-126 for the indicated periods, and then, the protein levels of PINK1 and Parkin were analyzed by Western blotting. (B) Proteins of cytoplasm and mitochondria in PrP 106-126 -treated neurons and control neurons were extracted, respectively, and the protein levels of PINK1, Parkin, and LC3B-II were analyzed using Western blotting; HSP60 (the mitochondrial matrix protein) served as loading control. (C) The colocalization between mitochondria (labeled with mito-DesRed, red) and Parkin (green) in cortical neurons was observed using immunofluorescence imaging (nuclei, blue).
Figure 4. PHB2 is involved in PrP 106-126 -induced mitophagy in primary neurons. (A) Cortical neurons were treated with PrP 106-126 for the indicated periods, and then, the protein levels of PHB2 were analyzed using Western blotting. (B) Cortical neurons were transfected with shRNA-PHB2 (sh#1, sh#2) plasmids or the shRNA-NC control plasmid before the incubation with PrP 106-126 for 24 h. PHB2, TOMM20, and LC3B-II were analyzed using Western blotting. (C) Cortical neurons were transfected with different concentrations of the flag-PHB2 vector or flag-PCMV control vector for 24 h and then incubated with PrP 106-126 for another 24 h. PHB2, TOMM20, and LC3B-II proteins were analyzed using Western blotting, respectively. (D) Representative images of PrP 106-126 -induced mitophagosome (red) formation in PHB2-knockdown neurons or PHB2-overexpression neurons. Cortical cells were firstly transfected with shRNA-PHB2 (sh#1) and flag-PHB2, respectively, and 24 h later, these neurons were transfected with COX8-EGFP-mCherry. After 24 h, the neurons were subjected to the PrP 106-126 treatment for another 24 h. Finally, the neurons were fixed for confocal microscopy.
Figure 7. Schematic representation of PHB2-mediated mitophagy and neuronal death inhibition under PrP 106-126 stimulation. PrP 106-126 accumulates in mitochondria and leads to mitochondrial damage, which stabilizes PINK1 and recruits Parkin to mitochondria to mediate mitophagy. The IMM component PHB2 acts as a mitophagy receptor and plays a vital role in PINK1/Parkin-dependent mitophagy.
Impact of Gait and Diameter during Circular Exercise on Front Hoof Area, Vertical Force, and Pressure in Mature Horses
Simple Summary: Circular exercise is used frequently to exercise, train, and evaluate horses both under saddle and with lunging. However, little is known of the impacts this type of repetitive exercise has on the front limbs of horses. Nine mature horses wore Tekscan™ Hoof Sensors on their front hooves to determine if changing the circle size and gait at which the horse is traveling impacts the area, vertical force, or pressure output. Sensor data were collected while horses travelled in a straight line at the walk and trot and in small and large counterclockwise circles at the walk, trot, and canter. Gait was found to be a driving factor for differences in outputs, with mean area, mean vertical force, and mean pressure being greater at the walk in a straight line, and the area being greater at the canter when circling. When traveling in a counterclockwise circle, the mean area of the outside front leg was highest at the canter. This study shows gait is an important factor when evaluating exercise in a circle or straight line. Horse owners may choose to perform circular exercise at slower gaits or minimize unnecessary circular exercise to decrease differences between limbs and potentially reduce injury.
Abstract: Circular exercise can be used at varying gaits and diameters to exercise horses, with repeated use anecdotally relating to increased lameness. This work sought to characterize mean area, mean vertical force, and mean pressure of the front hooves while exercising in a straight line at the walk and trot, and small (10-m diameter) and large circles (15-m diameter) at the walk, trot, and canter. Nine mature horses wore Tekscan™ Hoof Sensors on their forelimbs adhered with a glue-on shoe. Statistical analysis was performed in SAS 9.4 with fixed effects of leg, gait, and exercise type (PROC GLIMMIX) and p < 0.05 as significant. For all exercise types, the walk had greater mean pressure than the trot (p < 0.01). At the walk, the straight line had greater mean area loaded than the large circle (p = 0.01), and both circle sizes had lower mean vertical force than the straight line (p = 0.003). During circular exercise at the canter, the outside front limb had greater mean area loaded than at the walk and trot (p = 0.001). This study found that gait is an important factor when evaluating circular exercise and should be considered when exercising horses to prevent injury.
Introduction
The use of circular exercise is frequent in equine training, both under-saddle and in-hand via lunging, and anecdotally has the potential to contribute to lameness. During early training, horses are often exercised in a circular manner through lunging or in a round pen. Some riding disciplines, such as dressage, reining, and barrel racing, use circular exercise during training and competition throughout a horse's career. Often, the circles performed in these disciplines are on a small radius with high speed and are utilized frequently within a training session. Thoroughbred racehorses also experience circular forces as they lean into a bend at high speeds [1,2]. Lunging with and without lunging aids, and the use of mechanical horse walkers is found in many rehabilitation protocols [3,4]. When surveyed, 50% of Thoroughbred trainers in Victoria, Australia indicated the use of a mechanical walker as an alternative exercise method to track work [5]. In lameness evaluations, a higher proportion of lameness can be found while utilizing lunging on hard or soft surfaces compared to straight lines [6]. While circular exercise is commonly used, many in the industry are unaware of the potential negative impacts it can have on joint health. While being exercised in a circle, horses will lean into the continuous turns up to 20 degrees to maintain balance. As speed and curve increase, the lean angle will also increase [1,[7][8][9][10]. Greater tilt on a flat curved surface has also been found compared to a banked curve surface [10]. Due to the possibility of a reduced loaded area, uneven vertical forces may be placed on joints and bones of the limbs during circular exercise.
Quadrupeds, such as horses, may be at an advantage during curve running, as they can redistribute weight to multiple stance legs within a stride [1]. It has been found that, while cantering in a 10-m circle, horses will have greater peak ground reaction force on the outside forelimb compared to the inside forelimb. This difference in vertical forces between limbs was not evident at slower speeds and may be due to the presence of a lead and non-lead forelimb and hindlimb during the canter. In both humans and horses, the outside limb while sprinting through a curve is known to generate more vertical and lateral force than the inside limb [1]. When speed is held constant between a straight line and a curve, stride duration is seen to increase when horses travel around a curve compared to a straight line. With training, this increase in stride duration around a curve is seen to decrease, potentially through familiarity via training or neuromuscular adaptation [2].
There are few equine studies evaluating circular exercise; however, human exercise studies evaluating running on a curve are abundant. It has been found that in humans, when running around an unbanked curve, the inside leg has a lower peak ground reaction force than the outside leg. Peak ground reaction forces of both legs and speed are also lower when sprinting around a curve compared to in a straight line [11]. Not only does the presence of a curve impact runners, so does the sharpness of a curve. Running on a sharply curved track (5-m radius) led to greater torsion on the inside tibia compared to running on a gently curved track (15-m) or a straight line [12]. Both the speed and the radius of a circle will impact the gait asymmetry of horses circling at the trot [9]. A retrospective study of risk factors for jockeys in Japanese Thoroughbred racing found smaller tracks to have a greater risk of injury [13].
In racing Quarter Horses, who race on straight portions of tracks, the right forelimb is most commonly involved in catastrophic musculoskeletal injuries (CMIs), with the left forelimb following (57% and 24% of CMIs, respectively). This may be due to the preference of the right lead in racing Quarter Horses [14]. However, the presence of motor laterality within the equine population is questionable and may not exist outside of horses in race training, as it was not found in Quarter Horses trained for cutting [15]. Thoroughbreds typically race counterclockwise on an oval track in North America, with the left front leg as the leading leg while traveling around a curve in the track. A study of Midwestern U.S. racetracks found that the left forelimb was the most common site for injuries (56% of CMIs) in Thoroughbreds, while the right forelimb was the most common (60% of CMIs) in Quarter Horses [16]. As a result of greater impulse on the left forelimb while galloping around the turn of a race track, Thoroughbred racehorses are at greater risk for injury to the left forelimb while traveling around a turn [17].
It has been hypothesized that horses handle traveling in a circle similar to the adjustments they make in response to lameness on a straight line; horses counteract uncomfortable limb loading with asymmetrical movement, which redistributes limb loading [18].
Horses will lean into the circle in which they are traveling, with the lean increasing with speed and smaller radii [9]. The level of training can also impact lean angle, as a horse acclimated to tracking in a circle can travel more upright than a horse that is not acclimated to tracking in a circle or bend. This has especially been noted in dressage, where older, trained horses are able to travel while engaging their neck, back, and hindlimb musculature, where a younger horse may not be able to do so through a circle. Body lean may also be increased when a lame forelimb is on the inside of the circle [8,10,19]. When circling at the trot, a two-beat diagonal gait, horses have decreased loading when the inside front leg and outside hind leg are in a stance, compared to a push-off pattern for the outer limbs [20,21].
Osteoarthritis (OA) and joint injuries have been reported as a leading cause of lost training days and horse wastage. There may be a connection between OA and circular exercise, but interactions between circular exercise and joint damage have not been explored. With up to 60% of lameness being related to OA, this gap in research greatly affects the equine community [14,22]. Within the United States, Quarter Horses and Thoroughbreds have been identified as making up half the population affected by OA [23]. Osteoarthritis can occur secondary to excessive loads on normal cartilage or normal loads on abnormal cartilage [24,25], with both mechanisms possibly exacerbated by circular exercise. It has been noted that horses are able to perform adaptations to limb position while exercising on a circle via abduction (pushing the limbs away from the midline of the body) and adduction (bringing the limbs towards the midline), but these adaptations performed over long periods of time and at faster gaits may lead to greater risk of injury to the distal limb [10,25].
Utilizing the Tekscan TM Hoof System (Tekscan, Inc., Boston, MA, USA), the aim of this study was to categorize the outputs (i.e., area, vertical force and pressure) for the front hooves of horses during counterclockwise circular exercise, and to demonstrate that these outputs vary depending on circle diameter and gait. It was hypothesized that faster gaits and a decreased circle diameter would lead to greater disparity in the mean area, vertical force, and pressure of the front limbs, with the outer limbs having a smaller mean loaded area with greater mean vertical force and pressure.
Materials and Methods
This research was approved by the Michigan State University (MSU) Institutional Animal Care and Use Committee (PROTO201800148).
Horses
A total of nine mature horses participated in this study (14 ± 2 years). Arabian horses were obtained from the MSU Horse Teaching and Research Center (n = 4: two mares and two geldings) and stock horses were obtained from a local training operation (n = 5: two mares and three geldings). One week prior to beginning exercise, horses were evaluated by two board-certified veterinarians (large animal surgery; one also boarded in equine sports medicine and rehabilitation) using a Lameness Locator® and a subjective lameness evaluation, and were determined to be sound (American Association of Equine Practitioners lameness grade of <2 on each front leg). Horses were transported to the MSU Pavilion South Arena for one day for exercise analysis. During the day, while not being exercised, horses were given ad libitum access to water and hay and kept in individual stalls.
Hoof Preparation and Sensors
Hoof and sensor preparations were performed in a method previously utilized when using the Tekscan Hoof System TM with a glue-on shoe [26]. Horses were trimmed by a certified farrier (Certified Journey Farrier, Advanced Skill Farrier, Associate of the Worshipful Company of Farriers) within a week before exercise. Horses were trimmed for medial-lateral balance according to the long axis of the limb observed with the hoof picked up. Excess hoof was removed as needed to achieve a flat plane to place sensors, and shoe placement was guided by the center of rotation (COR) [27]. Horse hooves were measured for width and length to determine an accurate glue-on shoe (SoundHorse Technology, Unionville, PA, USA) size for each horse (Table 1). An exact fit for each front hoof and shoe was desired so that the loaded sensor area represented the loaded area of the hoof. Tekscan hoof sensors were trimmed to the size of the front hooves of each of the horses after initial hoof trimming. The trimmed sensors were sealed in two layers of liquid rubber (FlexSeal, Weston, FL, USA) to protect the sensors from moisture and sand exposure. The liquid rubber sealing was allowed to dry for 24-48 h before sensors were placed on horses. The day before horses exercised, horses were weighed for calibration. Two scales of equal height were used. Horses stood with their hind legs on one scale and their front legs on another and were encouraged to stand square with their weight equally distributed. The weight of the front half of the horse was recorded on one scale for sensor calibration (Tru-Test Multipurpose MP600 Load Bars). This weight was divided in half, to represent the left limb and the right limb (Table 1).
On the day of exercise, the sealed and trimmed sensors were attached to the front hooves of each horse with a glue-on shoe and animal-safe epoxy. Horses were shod with a ratio of 60% in front of COR and 40% behind [27]. After the initial mixing, the two-part epoxy used to adhere the shoe to the hoof wall was dried for approximately 30 min to be cured for exercise. Before beginning exercise, the sensors were calibrated with the previously recorded weight of the forelimbs. Each horse walked the length of the indoor arena to pre-load the sensors. Afterwards, they were brought to a flat spot in the middle of the arena and encouraged to stand squarely with their weight evenly distributed. Using F-Scan research software (Tekscan TM ), the previously determined weight of the front limbs (Table 1) was inputted with the step calibration function, and a calibration file was saved for the left and right forelimbs for each horse.
Exercise
Previous research with the Tekscan Hoof™ system has found that when used with a glue-on shoe on the front hooves, the sensors are reliable within a session of exercise [26]. Bearing this in mind, the current study was designed so that each horse completed all of its exercise within one session and each set of sensors was used only once. Straight-line exercise was performed first for each horse so that, if a sensor was damaged during circular exercise, all horses would still have straight-line exercise recordings. The space in which straight-line exercise was recorded was 25 m in length. Multiple recordings have previously been suggested for exercise protocols utilizing sensors such as these to obtain a mean of the desired outputs [28][29][30][31]. Each horse was recorded three times at both the walk and trot while traveling in a straight line the length of the indoor arena. The canter was not recorded in a straight line, as the gait cannot be safely and consistently attained when horses are led in-hand. Each recording included at least 10 steps of the horse performing the specified gait consistently with no break. All straight-line exercise was performed by the same handler.
After the straight-line exercise, each horse was led to the portable round pen in the middle of the indoor arena. Order of size for the circular exercise (small first or large first) was randomly assigned for each horse. The size of the small circle was 10 m in diameter, while the size of the large circle was 15 m. Circle diameter was adjusted by adding or removing portable round pen panels with the perimeter for each circle size marked in the sand of the arena so that both circles were set up the same each time. For each horse, three recordings of at least 10 strides at the walk, trot, and canter were taken for both the large circle and the small circle. All circular exercise was performed in a counterclockwise direction, with the left forelimb on the inside and the right forelimb on the outside. Only one direction was evaluated so that one limb could be consistently denoted as the outside limb and the other as the inside limb. Counterclockwise was chosen as this is the direction of travel for racing horses and is typically the first direction of travel for horses competing in judged classes on the rail. Horses were encouraged to maintain gait speed with a human handler either verbally or visually encouraging them to gain speed or slow down. The speed of the walk, trot, or canter was not controlled between animals, as each individual animal has a speed for each gait at which they are able to move comfortably and maintain their gait consistently through recordings. Other gait analysis studies have preferred to allow animals to travel at their natural speed within a gait during testing [6,20,22,29]. If there were errors such as an incorrect lead or break of gait, the recording in process was stopped and re-recorded. Following exercise, the glue-on shoes and sensors were removed from each horse.
Data Analysis
Sensor data were recorded at a sampling rate of 112 frames/second for all conditions. Recorded data were then analyzed with Tekscan F-Scan Software (version 6.85). F-Scan Software collected vertical force and area outputs and calculated pressure from the corresponding vertical force and area for each frame. The first and last steps were removed from each recording dataset to ensure that no transitional steps between gaits were included. Each recording dataset still had at least 10 steps after the first and last steps were removed. If individual sensor cells (sensels) registered load during a suspension phase for a hoof, these erroneous sensels were manually removed. The mean vertical force, area, and pressure for each step were exported from F-Scan Software as an ASCII file.
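As an illustration only, the following minimal Python sketch reproduces the per-step summary described above (pressure derived from the corresponding force and loaded area, first and last steps dropped, per-step means computed). The column names and file layout are assumptions for illustration and do not represent the actual F-Scan ASCII export format.

import pandas as pd

def summarize_steps(frames: pd.DataFrame) -> pd.DataFrame:
    # Assumed input: one row per frame with columns 'step' (step index),
    # 'force_N' (vertical force), and 'area_cm2' (loaded area).
    frames = frames[frames["area_cm2"] > 0].copy()  # ignore unloaded (suspension) frames
    frames["pressure_N_cm2"] = frames["force_N"] / frames["area_cm2"]

    # Drop the first and last steps to exclude transitional strides.
    steps = sorted(frames["step"].unique())
    kept = frames[frames["step"].isin(steps[1:-1])]

    # Mean vertical force, area, and pressure for each remaining step.
    return kept.groupby("step")[["force_N", "area_cm2", "pressure_N_cm2"]].mean()

if __name__ == "__main__":
    df = pd.read_csv("recording_export.csv")  # hypothetical per-frame export
    print(summarize_steps(df))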
Statistical Analysis
Data were exported from the F-Scan Software (version 6.85) and imported into SAS (version 9.4) for statistical analysis and were evaluated for normality via residual plots. To evaluate the impacts that gait and circle size have on kinematic outputs, two datasets were created. One dataset removed the gait "canter" so that outputs at the walk and trot could be compared across the straight line and both circle sizes. A second dataset removed the exercise type "straight" so that the two circle sizes could be compared at the walk, trot, and canter. These two separate datasets were necessary, as a canter on a straight line could not be safely included in the data collection for this study. The main effects of gait, exercise type, and leg were evaluated in PROC GLIMMIX with Tukey adjustment. Interactions of "gait and exercise type", "gait and leg", "exercise type and leg", and "gait, exercise type, and leg" were also evaluated. Horse, and all interactions including horse, were included as random effects. Significance was set at p < 0.05. Means are reported as means ± standard error of the mean (SEM).
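To make the model structure concrete, the following is a minimal Python analogue of the mixed model described above, fit with statsmodels rather than SAS. It is a simplified sketch: the column names are hypothetical, and the random-effects structure is reduced to a random intercept for horse rather than the full set of horse interactions fitted in PROC GLIMMIX.

import pandas as pd
import statsmodels.formula.api as smf

# Assumed long-format table with one row per step and columns
# 'gait', 'exercise', 'leg', 'horse', and the response 'area'.
df = pd.read_csv("step_means.csv")  # hypothetical summary file

model = smf.mixedlm(
    "area ~ C(gait) * C(exercise) * C(leg)",  # main effects and all interactions
    data=df,
    groups=df["horse"],                       # horse as a random (intercept) effect
)
result = model.fit()
print(result.summary())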
Loaded Hoof Area
Gait (p = 0.03) and exercise type (p = 0.03) were significant main effects for area (Table 1), but leg was not (p = 0.44). The walk had a greater mean area than the trot by 12%. Gait and exercise type constituted a significant interaction (p = 0.005), but "gait and leg" as well as "exercise type and leg" were non-significant interactions (p = 0.10 and 0.71, respectively). "Gait, exercise type, and leg" was not a significant interaction (p = 0.48). At the walk and trot, there were no significant between-leg (left vs. right) differences in the mean area (p = 0.33 and p = 0.58, respectively). At the walk, the area was different between exercise types (p = 0.01, Table 2), but not at the trot. While trotting in a straight line, the mean area was lower than at the walk (Table 2). Within the small and large circles, gait was not different in terms of the area (p = 0.09 and p = 0.19, respectively).
Vertical Force
Gait (p = 0.007) and exercise type (p = 0.007) were both significant main effects for vertical force (Table 3), but leg was not (p = 0.71). The walk resulted in a greater mean vertical force than the trot by 14%. No two-way interactions were significant. The three-way interaction of "gait, exercise type, and leg" was significant (p = 0.02). At the walk and trot, there were no significant between-leg differences in the mean vertical force (p = 0.75 and 0.66, respectively). At both the walk and trot, the exercise type resulted in different mean vertical force outputs, with straight-line exercise leading to a higher mean vertical force at both gaits, although only significantly higher than both circle sizes at the walk (Table 3). During straight-line exercise and small-circle exercise, the walk had a greater mean vertical force (Table 3).
Front Hoof Pressure
Gait was a significant main effect (p = 0.0007, Tables 4 and 5) for pressure, but exercise type and leg were non-significant effects (p = 0.16 and 0.14, respectively). The walk had a greater mean pressure than the trot by 23%. No two-way interactions were significant, but the three-way interaction of "gait, exercise type, and leg" was significant (p = 0.0008). For the right and left legs, it was found that the mean walk pressure was greater than the mean trot pressure when all exercise conditions were averaged (Table 4). At the walk and trot, the exercise type did not lead to different mean pressures, but the walk had a greater mean pressure for all three exercise types compared to the trot (Table 5).
Loaded Hoof Area
Gait was a significant main effect (p = 0.03, Table 6) for area. The canter had a 21% greater mean loaded area than the walk and a 29% greater mean loaded area than the trot. Exercise type and leg were non-significant effects (p = 0.78 and p = 0.33, respectively). The interactions of "gait and exercise type" (p = 0.28) as well as "exercise type and leg" (p = 0.72) were non-significant; however, the interaction of "gait and leg" was significant (p = 0.02). "Gait, exercise type, and leg" was a significant interaction (p = 0.0035). For the right front leg (outside leg), the canter had a greater mean loaded area than other gaits ( Table 6), but this was not found in the left leg. For exercise in a large circle, the mean area loaded was different between gaits, with the canter being greater than the trot (p = 0.01, Table 6). While exercising in a small circle, the mean area was not different between gaits (Table 6).
Vertical Force
Gait, exercise type, and leg were non-significant main effects (p = 0.33, 0.88, and 0.87, respectively) for the vertical force. "Gait and exercise type" as well as "exercise type and leg" were non-significant interactions (p = 0.17 and 0.71, respectively), but "gait and leg" was a significant interaction (p = 0.004, Table 7). "Gait, exercise type, and leg" was a significant interaction (p < 0.0001). At the walk (p = 0.83), trot (p = 0.72), and canter (p = 0.31), the right and left legs did not have different mean vertical forces between legs. The right leg did have a lower mean vertical force at the trot than the canter (Table 7). During the large-and small-circle exercise, the mean vertical force did not differ by gait. At the walk, trot, and canter, the circle size did not have a different mean vertical force (Table 7).
Front Hoof Pressure
Gait was a significant main effect (p = 0.001, Table 8) for pressure, while exercise type and leg were not (p = 0.59 and 0.11, respectively). The walk had 22% and 28% greater mean pressures than the trot and canter, respectively. No two-way interactions were significant. The three-way interaction of "gait, exercise type, and leg" was significant (p = 0.005). For both the right and left limbs, the walk was found to have a greater mean pressure than other gaits (Table 8). In both the large and small circles, the walk was found to have a larger mean pressure than other gaits. Within each gait, there were no differences between the large and small circle sizes (Table 8).
Discussion
The objectives of this study were to determine how changes in gait and circle diameter influence area, vertical force, and pressure of the front hooves. We hypothesized that a decrease in circle diameter and an increase in speed would lead to greater differences between inside and outside limb outputs. The results determined that changes to gait more frequently lead to differences in the mean vertical force, area, and pressure outputs than changes to the circle diameter size. Most of the differences noted in this study were driven by gait, with gait being a significant effect for all evaluated outputs except for vertical force in the dataset including canter.
When evaluating gait differences, the walk typically had greater mean area and vertical force in this study, but when canter was included, the canter had the greatest mean area loaded. The walk having the greatest pressure is driven by the inclusion of the area and vertical force in the calculation of pressure. Most studies utilizing the Tekscan Hoof System for gait analysis have done so at the walk [28][29][30]32] or trot [31,33,34]. The Tekscan sensors may not be able to record data as accurately when speed increases for gaits such as the canter or even a faster trot. It is also worth exploring that adaptation to circular exercise has been previously noted as gait-specific [10]. The differences in gait may be due to increased speed or different loading patterns, such as the presence of a lead while cantering, as horses are known to protract the lead limb of a canter by flexing the elbow, carpal, hip, and tarsal joints [35]. One study found that as speed within the walk or trot increases while exercising on a treadmill, vertical impulse to the forelimbs and hindlimbs decreases [36]. While the current study did not compare speed within gaits, we did find that as gait increased from walk to trot, and therefore speed increased, the mean area, vertical force, and pressure decreased for the forelimbs. Another study found peak stress of the metacarpus and radius to be lower at a slow trot than the walk and canter and attributed the lower values of the slow trot to the symmetrical, diagonal movement of the gait [37].
Due to its two-beat diagonal footfall, the trot is considered a symmetrical gait and is the preferred gait for a lameness evaluation. The lower outputs seen at the trot in this study may be due to the fact that horses are able to utilize both forelimbs and hindlimbs within a trot stride in a more-even manner than the walk and canter [37][38][39]. The trot and canter also have moments of suspension, where the walk does not. Given that the results in this study are reported as means of the area, vertical force, and pressure, the lack of suspension in the walk could contribute to longer data collection for the right and left forelimbs at the walk than the trot and canter. Using a pressure plate, the stance phase of the walk has been found to be longer than that of a trot when tracking over both a hard surface and a soft surface [40]. Horses may also use other parts of their body, such as the musculature of the hindquarters, more so in the trot and canter than the walk, potentially leading to a decrease in the forelimb outputs [10,11,41,42]. One study found that activity of the hindlimb biceps femoris is minimal during the walk, but highly active according to electromyography at the trot and canter [43]. Another study found that at the walk and canter, horses exercising on flat and banked curves have a shorter stride length of the inside leg compared to the outside leg [10]. As horses increase in gait speed from a walk, to a trot, to a canter, it has been found that trunk muscle engagement increases as well [44].
At the trot, the mean hoof area loaded was similar regardless of exercise type, once again suggesting the trot to be the more stable gait [38,39]. Vertical force was greatest on a straight line for both the walk and trot, while pressure was not found to be different between exercise types at the walk or trot. Similar results have been found in humans, where peak vertical ground reaction forces are greater in a straight line than while running around a curve [11]. Considering that the Tekscan™ sensors measure vertical forces normal (perpendicular) to the sensor, it is conceivable that shear forces were higher during circular exercise. During straight-line exercise, when a horse is tracking upright, the resultant force would be measured vertically. However, when horses are tracking in a circle, as was performed in this study, lateral forces are also considered when calculating the resultant force [45]. As the sensors were worn on the front hooves and measured the force of the area that came into contact with the arena surface, only the vertical forces were included in this study. While the walk had greater vertical force in this study, other forces, such as lateral force, may be greater in the trot and canter, especially during circular exercise [45,46].
When the canter was retained in the dataset, at all gaits, the large circle did not have a different mean area, vertical force, or pressure than the small circle. When exercising in the large circle, the canter did have a greater mean area than the trot. When exercising in both small and large circles, the walk had greater pressure than both the trot and canter. With the use of a pressure plate, another study found the vertical impulse of the walk to be almost twice that of the trot on both hard and soft surfaces [40]. Comparisons between pressure plates and sensors such as the Tekscan™ system should be made cautiously, as these two technologies have not been found to reliably produce the same outputs [34].
When the canter was removed from the dataset, the mean area and mean vertical force were not different between the right and left legs at the walk or trot. When the canter was maintained in the dataset, the mean loaded area of the right (outside) leg was greatest at the canter, and the mean vertical force for the right leg was greater at the canter than the trot. In this study, minimal differences were seen between limbs, but it was notable that the outside limb loaded area was greater at the canter. At racetracks with the smallest radii (>50-114 m), the outside front limb was found to have the highest number of fatal limb fractures [45]. One study evaluating body lean angle at the trot and canter lunged horses with a bitted bridle on a 10 m diameter circle while they wore an inertial measurement unit on the sacrum [47]. The lean angle was reported to be greater at the canter (19°) than the trot (12°) when tracking both left and right. A greater lean-in angle at greater speeds could be a cause of greater push-off with the outside leg [1,10], and therefore a greater mean area loaded in the outside limb at the canter, as was seen in this current study. Our findings are supported by another study, which found that the third metacarpal of the outside limb endures greater strain than the inside limb when Thoroughbred horses are running around a turn [48]. While galloping around a turn, the stance phase for the inside front limb is greater, while the stance phase for the outside front limb is shorter, with larger centripetal, propulsive, and vertical forces [45]. The presence of greater peak vertical ground reaction forces in the outside leg compared to the inside leg on a curve has also been noted in humans [11]. This study did not evaluate horses tracking at a gallop, which many studies referenced in this study have evaluated. Instead, this study allowed for an exploration of gaits that are easily attainable and frequently used across the industry, including to exercise racing horses when they are not galloping. It is reasonable to expect when working a horse in a round pen or lunging a horse, especially in initial saddling and riding, that increased speed is needed to reach the optimal training state for the horse. However, given these results, the frequency of circular exercise via lunging or a round pen as a replacement for pasture turn-out or ridden exercise should be evaluated.
Circular exercise is frequently used to exercise and train animals, especially through lunging. A review of risk factors for lameness in dressage horses found lunging to be protective against lameness, while the use of walkers increased the risk of lameness [49]. Mechanical walkers are often used during recovery from lameness, so it may be difficult to separate horses that are being placed on a walker for recovery or for exercise. When on a mechanical walker, animals may be unsupervised, and are not controlled by a handler that would encourage them to travel upright and at consistent speeds. However, when lunging is utilized in dressage, often the use of a surcingle and bridle could encourage the horse to track in an upright and balanced manner, very similar to the way that a horse is "on the bit" while under saddle in dressage. In disciplines outside of dressage, lunging is typically performed with only the lunge line attached to a halter. This gives the handler less control of the horse, often resulting in lunging sessions where the horse is leaning into the circle and does not consistently engage the hindquarters and topline musculature to travel in an upright manner, making lunging in this manner less likely to be a protective factor against lameness. Circular exercise is also used under saddle for both training and competition in events such as dressage and reining. The presence of a rider is known to alter how horses utilize their back musculature at various gaits [44,50]. Further exploration into circular exercise with a rider present is needed to determine if differences between front limb outputs at the walk, trot, and canter are mitigated or exacerbated.
The current study evaluated straight line exercise versus circular exercise of a horse in a round-pen that was not attached to a lunge line. Further studies of similar design are needed to evaluate the impact of a lunge-line simply attached to a halter on forelimb disparity while an animal is in motion. When different head and neck positions were evaluated on a straight line on a treadmill, it was found that a high head position impacted limb functionality compared to an unrestrained horse [51]. Head and neck position has also been found to alter the center of motion of a horse while lunging [19]. The current study found frequent differences in gait, but limited differences in circle size for a horse moving freely in a round-pen. Differences in forelimb outputs between small and large circles may be detectable when a horse is exercised on a lunge line, as to make the circle smaller, greater tension could be applied to the lunge line attached to the horse's halter, potentially encouraging the horse to lean in and push off more with the outside leg.
A limitation in this study is that recordings of the canter on a straight line were not attainable, and thus two sets of data were evaluated to best compare gait and exercise types. Future studies could use a long aisle-way with an appropriate surface to have horses travel in a straight line without the need of a handler. This may help to better answer the question of whether circular exercise at a canter produces differences in the outside limb because of the lead or simply because of the increase in speed. Our current study only evaluated the front limbs, which permitted us to determine differences between the inside and outside limbs. It is recognized that the hindlimbs are important in the adaptation to motion, such as turning [41,45,46], and future studies to determine the impact of gait and circle diameter on hindlimb outputs should be explored. The Tekscan™ system used in the manner of this study measured vertical force and not shear force, which is certainly important for turning, especially for the hind limbs [45]. It should be noted that many technologies are utilized for gait analysis, such as vertical force plates, inertial measurement units, and sensors such as the Tekscan™ Hoof System. Between studies, the technology used for analysis, as well as attachment methods and locations, is not standardized. While our study protocol and recorded metrics have been shown to be reliable [26], comparisons between studies should be made recognizing that there is currently no standardized protocol for quantitative gait analysis of horses in motion.
Conclusions
While circular exercise is used frequently in the training, exercising, and competing of horses, little is known of its potential connection to joint and bone injury. This study explored the impact of gait as well as circle size on the mean area, vertical force, and pressure of the front hooves. It was found that gait (walk, trot, canter) drives changes to outputs more than exercise type (straight, circular). The trot frequently had lower mean outputs than other gaits, suggesting that it is a more dynamically stable gait that could potentially allow horses to adapt to circular exercise more easily than other gaits. Handlers looking to utilize circular exercise while maintaining the longevity of equine careers may consider doing so at slower gaits, as differences in outside limb output were noted at the canter, or may consider minimizing the use of circular exercise. Future studies will help to determine if a round pen allows the horse to adapt to changes in gait and diameter better than when exercised on a lunge line or under saddle. Lateral forces may be greater during circular exercise and should be evaluated and compared with the findings of vertical forces provided in this research.
|
v3-fos-license
|
2017-04-14T20:10:12.194Z
|
2007-02-01T00:00:00.000
|
9334265
|
{
"extfieldsofstudy": [
"Biology",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GREEN",
"oa_url": "https://aacr.figshare.com/articles/journal_contribution/Supplementary_Table_1_from_Impaired_Dihydrotestosterone_Catabolism_in_Human_Prostate_Cancer_Critical_Role_of_AKR1C2_as_a_Pre-Receptor_Regulator_of_Androgen_Receptor_Signaling/22367028/1/files/39812079.pdf",
"pdf_hash": "420bf868aa7d237916350a4b0b664897e21cc80b",
"pdf_src": "Grobid",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46731",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"sha1": "420bf868aa7d237916350a4b0b664897e21cc80b",
"year": 2007
}
|
pes2o/s2orc
|
Impaired Dihydrotestosterone Catabolism in Human Prostate Cancer: Critical Role of AKR1C2 as a Pre-Receptor Regulator of Androgen Receptor Signaling
We previously reported the selective loss of AKR1C2 and AKR1C1 in prostate cancers compared with their expression in paired benign tissues. We now report that dihydrotestosterone (DHT) levels are significantly greater in prostate cancer tumors compared with their paired benign tissues. Decreased catabolism seems to account for the increased DHT levels, as expression of AKR1C2 and SRD5A2 was reduced in these tumors compared with their paired benign tissues. After 4 h of incubation with benign tissue samples, 3H-DHT was predominately catabolized to the 5α-androstane-3α,17β-diol metabolite. Reduced capacity to metabolize DHT was observed in tumor samples from four of five freshly isolated pairs of tissue samples, which paralleled loss of AKR1C2 and AKR1C1 expression. LAPC-4 cells transiently transfected with AKR1C1 and AKR1C2, but not AKR1C3, were able to significantly inhibit dose-dependent, DHT-stimulated proliferation, which was associated with a significant reduction in the concentration of DHT remaining in the media. R1881-stimulated proliferation was equivalent in all transfected cells, showing that metabolism of DHT was responsible for the inhibition of proliferation. PC-3 cells overexpressing AKR1C2 and, to a lesser extent, AKR1C1 were able to significantly inhibit DHT-dependent androgen receptor reporter activity, and this inhibition was abrogated by increasing DHT levels. We speculate that selective loss of AKR1C2 in prostate cancer promotes clonal expansion of tumor cells by enhancing androgen-dependent cellular proliferation through reduced DHT metabolism.
Introduction
Prostate cancer is the second leading cause of cancer-related deaths among men and has a major effect on the health of older Americans (1). Androgens are essential for prostate cancer development, and their elimination and blockade have remained the cornerstone of medical management since Huggins first showed symptomatic response and regression of prostate cancer after androgen elimination (2,3). Epidemiologic studies have also implicated life-long exposure to androgens as a contributing factor for prostate cancer development due to increased cellular proliferation (4,5). Dihydrotestosterone (DHT) is the key ligand for the androgen receptor (AR) in the prostate and is locally synthesized predominately from circulating testosterone by 5α-steroid reductase type II (SRD5A2; ref. 6). The clinical importance of this pathway was amply shown in a successful chemopreventive trial in which blocking the prostatic conversion of circulating testosterone to DHT by finasteride, an SRD5A2 inhibitor, led to a significant 24.8% reduction in the incidence of prostate cancer in those treated with the inhibitor compared with the control group (7). Thus, in situ hormone synthesis plays an essential role in the intracellular availability of DHT to interact with AR.
Numerous studies have focused on the in situ synthesis of prostatic DHT and the molecular mechanism of AR activation and function (8,9). However, little emphasis has been placed on the importance of DHT catabolism in the prostate and what role it might play in regulating the intracellular pool of this critical androgen. In the prostate, DHT is predominately metabolized to the weak androgen 5α-androstane-3α,17β-diol (3α-diol) by 3α-hydroxysteroid dehydrogenase (3α-HSD) type III, encoded by AKR1C2 (10,11). We originally identified this protein by its high affinity for bile salts as the human bile acid binder, whereas others purified it as a 3α-HSD or a dihydrodiol dehydrogenase (12)(13)(14). Subsequent studies revealed that AKR1C2 is one of four highly related AKR1C subfamily members that have unique substrate specificities and tissue distributions, despite sharing >84% sequence identity (10,15). In the human prostate, three of these family members are expressed (11), and they have the following enzymatic activities pertinent to androgen metabolism: AKR1C2, 3α-HSD; AKR1C1, 3β-HSD with minor 3α-HSD (97% identity; ref. 16); and AKR1C3, 17β-HSD type V (84% identity). DHT is predominately metabolized to 3α-diol by AKR1C2 and 3β-diol by AKR1C1, whereas AKR1C3 can convert androstenedione to testosterone but has little 3α-HSD activity for DHT (10,17).
We previously observed the selective reduction of AKR1C2 and AKR1C1, but not AKR1C3, gene expression in tumor samples compared with their paired benign tissue (11). Our findings have now been extended to establish that loss of these genes is associated with an approximately 2-fold reduced ability to catabolize 3H-DHT to 3α-diol, its major metabolite, in freshly isolated tumors compared with their paired benign tissues. Furthermore, using prostate cancer cell lines, we have found that increased AKR1C2 or AKR1C1, but not AKR1C3, could inhibit DHT-stimulated growth and AR signaling, which was associated with reduced DHT levels in the media. In contrast, no inhibition of proliferation or AR reporter activity was found when using the nonmetabolizable androgen R1881, confirming that catabolism can modify DHT-stimulated proliferation and AR function. Thus, androgen catabolism, in addition to synthesis, can indirectly regulate the activity of AR and thereby provide new therapeutic targets for the treatment of prostate cancer.
Materials and Methods
Prostate tissue processing and hormone measurements.After Institutional Review Board approval, 13 freshly isolated and intact human prostatectomy samples (primary cancers) were immediately cut into 5-or 6-mm slices, and visibly apparent tumors in the posterior zone of the prostate were removed with 6-mm punch biopsy or a 5-mm punch biopsy if tumor was limited.Benign-appearing tissue was isolated from anterior or central zone with a 6-mm punch biopsy.Tumors were staged by standard histology, and the purity of tumor and benign-appearing tissue samples was confirmed by reviewing tissue surrounding punch biopsy sites yielding highly enriched tumor and benign tissue samples minimally contaminated with tumor.Qualified freshly isolated samples were used for RNA isolation, DHT measurement, or incubations with 3 H-DHT for monitoring of DHT metabolism.
DHT measurement. Prostate tissue samples were homogenized in 0.1 mol/L phosphate buffer (pH 7.4; assay buffer); internal standard (3H-DHT) was added to follow procedural losses; and the homogenate was centrifuged. Steroids in the supernatant were then extracted with ethyl acetate/hexane (3:2, v/v). After evaporating the organic solvents, the residue was reconstituted in isooctane and applied to a Celite partition chromatography column using ethylene glycol as the stationary phase and toluene in isooctane as the mobile phase. The solvents in the eluted fractions [10% (v/v) toluene in isooctane] containing DHT were evaporated; the residue was reconstituted in assay buffer; and DHT was quantified by a sensitive and specific RIA (18). This method was also used in the cellular proliferation studies.
DHT metabolism. Fresh pairs of prostate tumor and benign tissue samples were minced in 2 mL of RPMI 1640 supplemented with 100 μmol/L NADP(H). A total of 0.08 μCi (124 Ci/mmol; final concentration, 320 pmol/L) of 3H-DHT (Perkin-Elmer Life Science, Boston, MA) was added to 2 mL of the incubation media. Tissue and media were extracted following 4 h of incubation in a shaking water bath maintained at 37°C. DHT metabolism was terminated by adding the extraction solvent mixture ethyl acetate/hexane (3:2, v/v), and the residue was reconstituted in methanol. 3H-DHT and its metabolites were well resolved by the reverse-phase high-performance liquid chromatography (HPLC) method of O'Donnell et al. (19) employing a Waters Spherisorb ODS-2 column (5 μm, 250 mm × 4 mm). A Shimadzu LC-10AT HPLC system was used in series with an IN/US β-RAM 2B detector to monitor and quantify the radioactivity by continuously mixing 1 mL/min of the effluent with 3 mL/min of ScintiVerse E scintillation cocktail (Fisher, Tustin, CA).
Cell culture.The PC-3 cell line was purchased from the American Type Culture Collection (Rockville, MD), and the LAPC-4 cell line was kindly provided by Dr. Sawyers (Department of Medicine and Howard Hughes Medical Institute, University of California at Los Angeles, Los Angeles, CA).Both cell lines were cultured in phenol red-free RPMI 1640 with 10% fetal bovine serum (FBS; Invitrogen, Carlsbad, CA).Permanently transfected PC-3 cell lines expressing AKR1C1, AKR1C2, or AKR1C3 were developed as described (11) and used to assess the effects of AKR1C family members on the DHT-and AR-dependent activation of a mouse mammary tumor virus (MMTV) promoter.For these studies, 4% charcoal/dextran-treated FBS (HyClone, Logan, UT) was used with DHT or R1881 treatments.
Western blot.AKR1C expression was monitored by Western blot with a1850 antiserum that recognizes each AKR1C family member as previously described (11,20).
Quantitative real-time PCR. Relative expression of AKR1Cs, SRD5A1, and SRD5A2 was determined using gene-specific real-time PCR as described (11,20), in which their expression in paired tissues was compared with RNase P and reported as mRNA fold change by dividing the expression level in the tumor by that of its paired normal tissue. Primer/probe sequences for real-time PCR are listed in Table S1. Total RNA was isolated using the RNeasy Midi kit (Qiagen, Valencia, CA) as described (11,20). cDNA libraries for TaqMan quantitative real-time PCR were made using the Omniscript kit (Qiagen) primed with random hexamers (Applied Biosystems, Foster City, CA). Statistical analysis was used to compare gene expression profiles, DHT levels, and DHT metabolism as previously described (11,20,21).
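For clarity, the fold-change comparison described above can be sketched as follows. This assumes a 2^-ΔCt-style relative quantification against the RNase P reference, which is one common way to implement such a comparison; the Ct values shown are hypothetical.

def relative_expression(ct_target: float, ct_rnase_p: float) -> float:
    # Expression of the target gene relative to RNase P (2 ** -dCt).
    return 2.0 ** -(ct_target - ct_rnase_p)

def fold_change(tumor_rel: float, benign_rel: float) -> float:
    # mRNA fold change: tumor expression divided by paired benign expression.
    return tumor_rel / benign_rel

tumor = relative_expression(ct_target=27.9, ct_rnase_p=24.1)
benign = relative_expression(ct_target=25.6, ct_rnase_p=24.0)
print(f"AKR1C2 fold change (tumor/benign): {fold_change(tumor, benign):.2f}")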
Cell proliferation assay. Nine micrograms of pSVL-AKR1C1, pCED4-AKR1C2, or pcDNA3.1(+)AKR1C3, encoding the corresponding AKR1C family members, in combination with 3 μg of CMV-eGFP-c2 plasmid (kindly provided by Dr. Debbie Johnson, Keck School of Medicine, University of Southern California) were transiently transfected into LAPC-4 cells at 80% to 90% confluence plated in 10-cm dishes as previously described (20). Following transfection, batches of cells were allowed to recover for 24 h and then grown in RPMI 1640 supplemented with 4% charcoal/dextran-treated FBS and 1 nmol/L DHT. Following recovery, a portion of the cells was applied onto a Becton Dickinson FACSCalibur (BD Biosciences, Rockville, MD) to assess transfection efficiency. Pools of transfected cells were then seeded onto six-well plates and maintained in RPMI 1640 supplemented with 4% charcoal/dextran-treated FBS; 3 mL of cell culture medium were changed daily and supplemented with 1 or 10 nmol/L DHT or R1881, and cells were counted for three consecutive days as previously described (20). Real-time PCR was used to document gene expression levels of AKR1C family members 2 days after the transfection.
Effect of AKR1Cs on DHT-dependent AR transactivation. PC-3 cell lines expressing different amounts of AKR1Cs, along with a stable, vehicle vector-transfected PC-3 cell line used as a control, were transiently transfected with 2 μg pMMTV-Luc (22; Medicine, University of Southern California) together with 5 ng Renilla luciferase plasmid pRL-SV40 (Promega, Madison, WI) as described previously (20). Transiently transfected cells were then allowed to recover for 36 h in RPMI 1640 with 4% charcoal/dextran-treated FBS. Cells were then washed three times with PBS and treated for 16 h with DHT or R1881 in 4% charcoal/dextran-treated FBS. Luciferase activity was measured using the Luminoskan Ascent machine (Labsystems, Franklin, MA) with the Dual Luciferase Activity kit (Promega). The ratio of firefly to Renilla luciferase activity was normalized to total protein to compare promoter activity from different cell lines. Luciferase activity of control PC-3 cells transfected with vehicle vector was arbitrarily defined as 100 ± SD units. Experiments were repeated three times in triplicate.
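As an illustration of the normalization just described (firefly/Renilla ratio scaled by total protein, with the vehicle vector control set to 100), the following is a minimal sketch; all numeric values and variable names are hypothetical, not measured data.

def normalized_activity(firefly, renilla, total_protein_mg):
    # Firefly/Renilla ratio per mg of total protein.
    return (firefly / renilla) / total_protein_mg

# Express each cell line relative to the vehicle vector control (defined as 100).
control = normalized_activity(firefly=5.2e5, renilla=3.1e4, total_protein_mg=0.42)
akr1c2 = normalized_activity(firefly=1.4e5, renilla=2.9e4, total_protein_mg=0.40)

relative_akr1c2 = 100.0 * akr1c2 / control
print(f"AKR1C2 line reporter activity: {relative_akr1c2:.1f} (vector control = 100)")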
Results
DHT levels in paired prostate cancer tumor and benign tissue samples. Little is known about the cellular localization of AKR1C1 and AKR1C2 proteins in the prostate, as their high amino acid sequence homology (97% identity) prevents differentiating them from each other and from AKR1C3, which shares 86% sequence identity. We have previously developed an antiserum that cross-reacts with AKR1C1 and AKR1C3 but not AKR1C2 (20). In contrast, AKR1C3-specific polyclonal antisera and a monoclonal antibody are available, as its sequence significantly diverges from the other family members at its COOH-terminal end. In normal prostate tissue, Fung et al. immunolocalized AKR1C3 using a monoclonal antibody on endothelial, perineural, smooth muscle, and stromal cells, with minimal staining in the epithelium (23,24). In contrast, El-Alfry et al. localized AKR1C3 predominately on the basal epithelium and luminal cells using a polyclonal AKR1C3-specific antiserum, which was confirmed by in situ hybridization (25,26). Another report using the same antiserum also showed a similar immunohistochemical staining pattern (27). To identify which populations of prostatic cells express AKR1C1 and AKR1C2, transcriptome analysis of four highly enriched populations of luminal, basal, stromal, and endothelial cells from the prostate was queried using the SCGAP Urologic Epithelial Stem Cells Project web site (28). Relative expression profiles, from most abundant to least abundant, revealed the following patterns: AKR1C1, stromal, luminal > endothelial >> basal; AKR1C2, stromal, luminal > endothelial > basal; and AKR1C3, basal, endothelial > stromal > luminal cells. These studies indicate that AKR1C1 and AKR1C2 transcripts are equivalently expressed in the stromal and acinar components.
Highly enriched tumors with corresponding paired benign tissue samples were harvested and confirmed by histologic evaluation, as illustrated in Fig. 1. Table 1 lists the pathologic diagnosis, DHT levels, and the relative changes in gene expression responsible for either DHT catabolism (AKR1C2 and AKR1C1) or DHT synthesis (SRD5A1 and SRD5A2) in these paired prostate samples, compared with RNase P, whose expression is unchanged in prostate cancer (11). Relative enrichment of stromal or epithelial content for each sample was broadly assessed by comparing the relative expression profiles of the human keratin 8 gene (KRT8), a recognized marker for the epithelial compartment, and the human myosin heavy polypeptide 11, smooth muscle (SM1), a stromal-specific marker (29,30). In these paired samples, relative expression of stromal or epithelial markers was closely matched. Approximately three quarters of the patients had a relative loss of AKR1C2 and/or AKR1C1 in their tumor samples. As AKR1C1 and AKR1C2 are equivalently expressed in both stroma and epithelium, relative enrichment of either component in tumor samples could not account for the observed loss of AKR1C1 or AKR1C2 gene expression. These findings are in close agreement with our prior findings (11). Relative reduction of SRD5A2 expression (>4-fold), but not SRD5A1, in prostate tumors was found in 5 of 13 paired samples, consistent with previous studies (29). The average DHT levels detected in our paired prostate tissues were comparable with previously reported values. When changes in DHT levels were compared within individual pairs, DHT levels were on average 42% higher in prostate tumors than in their paired benign tissue samples (P < 0.05). As SRD5A2 expression was decreased, we reasoned that increased DHT synthesis was unlikely to account for the greater DHT levels in these samples.
Reduced DHT metabolism in prostate tumor tissues compared with paired benign tissues. We developed in vitro incubations of freshly harvested prostatic tissues to assess if loss of AKR1Cs was associated with altered metabolism of exogenously supplied 3H-DHT by using radio-reverse-phase HPLC (11,19). To assess the importance of AKR1Cs in the metabolism of DHT in benign tissue, the non-substrate-competitive inhibitors indomethacin (100 μmol/L) or tolmetin (200 μmol/L) were added to the media. We (31) and Steckelbroeck et al. (16) have previously used these agents to inhibit the enzymatic activity of AKR1Cs within cells to determine their relative contribution to either androgen metabolism or the intracellular distribution and transport of bile salts by primary hepatocytes. In two independent benign tissue samples, indomethacin inhibited the loss of 3H-DHT by >3-fold in each case (data not shown). In one sample, the AKR1C inhibitor tolmetin was also as effective at inhibiting the loss of 3H-DHT, thereby confirming that the AKR1Cs are required for the rapid catabolism of 3H-DHT.
DHT metabolism in five pairs of prostate tissue samples was then quantified using this radio-HPLC method, and the results were compared with their relative AKR1C1 and AKR1C2 gene expression. The 3H-DHT data were normalized to tissue weight to allow cross-comparisons of changes in the relative radioactivities of 3H-DHT and its metabolites with those of the endogenous DHT levels. Within individual pairs, results from tumors were compared with their corresponding benign tissues, which were assigned a value of 100. As shown in Fig. 2A, greater 3H-DHT retention was found in four of the five prostate tumors compared with their paired benign tissues, and as a group, significantly greater 3H-DHT retention was found in these tumors compared with their paired benign tissues (P < 0.05). As the average relative retention of 3H-DHT was 2-fold greater in tumors compared with their corresponding benign tissues (P < 0.05), isotope dilution within their respective endogenous intracellular pools of DHT (Table 1) could not account for this significantly greater retention of 3H-DHT. Moreover, as shown in Fig. 2B, the radioactivity associated with the 3α-diol metabolite in the tumors consistently accounted for a lower fraction (54 ± 15%) of the combined radioactivity present in the elution of DHT and its metabolites compared with benign tissues (P < 0.01). As shown in Fig. 2C, production of the 3β-diol metabolite was also reduced in tumors, and the 3β-diol metabolite constituted only approximately 20% of all the DHT metabolites present in the incubations from benign tissues. In Fig. 2D, relative changes in AKR1C2 and AKR1C1 expression in tumors compared with paired benign tissues paralleled their reduction in DHT metabolism. No correlation was found between DHT metabolism and the expression profiles of SRD5A1 and SRD5A2 (data not shown). Three other pairs of samples processed using TLC also showed a similar retention of 3H-DHT in tumor samples relative to their paired benign tissues (data not shown). Taken together, these results indicate that metabolism of DHT to 3α-diol was significantly impaired in tumors compared with their paired benign tissues.
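The per-gram normalization and the paired tumor-to-benign comparison described above amount to simple ratios; a minimal sketch follows, with the counts and tissue weights purely illustrative rather than measured values.

def per_gram(counts: float, tissue_weight_g: float) -> float:
    # Radioactivity expressed per gram of tissue.
    return counts / tissue_weight_g

def relative_to_benign(tumor_counts, tumor_g, benign_counts, benign_g) -> float:
    # Tumor value reported relative to its paired benign sample (benign = 100).
    return 100.0 * per_gram(tumor_counts, tumor_g) / per_gram(benign_counts, benign_g)

def dht_fraction(dht_counts: float, metabolite_counts: float) -> float:
    # Fraction of the combined recovered radioactivity still present as 3H-DHT.
    return dht_counts / (dht_counts + metabolite_counts)

print(relative_to_benign(tumor_counts=8.1e4, tumor_g=0.21,
                         benign_counts=4.3e4, benign_g=0.22))
print(dht_fraction(dht_counts=8.1e4, metabolite_counts=3.9e4))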
Figure 2. DHT metabolism in short-term incubations is impaired in prostate cancer. A, relative amounts of 3H-DHT retained in five individual pairs of benign and prostate cancer samples after 4 h of incubation. 3H-DHT radioactivity quantified by radio-HPLC was initially expressed per gram of tissue and then normalized for each pair as the ratio of the value in the tumor divided by that of the paired benign tissue sample, which was designated as 100. A significantly higher ratio of 2.0 ± 0.7 (mean ± SD; P < 0.05) of relative 3H-DHT levels was retained in the combined prostate tumors compared with their paired benign tissue samples. B, relative 3H-3α-diol values in tumors were compared with their corresponding paired benign tissue samples, which were normalized to 100. In all tumor samples, 3H-3α-diol values contributed a significantly (P < 0.01) lower percentage (54 ± 15%) of the combined radioactivity of 3H-DHT and its metabolites compared with their paired benign tissue samples, showing a relative reduction in DHT metabolism by tumors. C, relative 3H-3β-diol values in tumors were compared with their corresponding paired benign tissue samples, which were normalized to 100. In two of the samples, 3H-3β-diol was below the level of detection. The amount of DHT metabolized to 3H-3β-diol was significantly (P < 0.05) lower in the combined tumor samples compared with their paired benign tissue samples. D, changes in gene expression of AKR1C2 and AKR1C1 in tumors were divided by their corresponding relative expression in paired benign tissue samples and defined as mRNA fold change. A negative number represents decreased expression in the tumor compared with the paired benign tissue sample. Bars, SD.
Inhibition of DHT-dependent growth of LAPC-4 cells by expression of AKR1C1 and AKR1C2. Similar to the prostate cancer cell lines PC-3, DU-145, and LNCaP (11), LAPC-4 cells minimally express AKR1C1 and AKR1C2, in contrast to AKR1C3 (data not shown). LAPC-4 cells were chosen for these studies because they express a wild-type AR, in contrast to the AR expressed in LNCaP cells, which harbors a mutation in the ligand-binding domain that permits binding by other classes of steroids (32). LAPC-4 cells were individually transfected with AKR1C family members to determine if they can modify androgen-stimulated cellular proliferation. Transfection efficiencies for the AKR1C1, AKR1C2, and AKR1C3 expression plasmids in LAPC-4 cells were, correspondingly, 43%, 44%, and 38% for these studies. LAPC-4 cells were transiently transfected with AKR1C1, AKR1C2, or AKR1C3 expression plasmids, which resulted in a >5,000-fold increase in AKR1C gene expression, and their growth in response to 1 or 10 nmol/L DHT was compared with that of LAPC-4 cells transfected with a comparable amount of empty vector. In Fig. 3, transfection with pSVL-AKR1C1 (A) or pCED4-AKR1C2 (B) significantly reduced dose-dependent DHT-stimulated proliferation compared with vehicle vectors. However, no inhibition was observed when the nonmetabolizable androgen R1881 was used to stimulate cellular proliferation. In addition, proliferation of nontransfected LAPC-4 cells in response to R1881 treatments was comparable with that of LAPC-4 cells transfected with vehicle vector or expression plasmids, which was also observed with DHT treatments (data not shown). AKR1C2 was more effective at inhibiting DHT-stimulated proliferation than AKR1C1-transfected LAPC-4 cells. Transfection with AKR1C2 significantly suppressed 1 and 10 nmol/L DHT-stimulated cellular proliferation (P < 0.01), whereas AKR1C1-transfected cells were only able to significantly reduce 1 nmol/L DHT-dependent proliferation (P < 0.05). In contrast, as shown in Fig. 3C, cells transfected with pcDNA3.1(+)AKR1C3 showed no inhibition of DHT- or R1881-stimulated growth. As shown in Fig. 3D, DHT levels in LAPC-4 cells transfected with AKR1C1 and AKR1C2 were significantly reduced to, respectively, approximately 80% and 30% of that remaining in the media of LAPC-4 cells transfected with AKR1C3 or vector controls. As R1881-stimulated proliferation was not reduced by AKR1C1 and AKR1C2, our findings confirm that inhibition of DHT-dependent proliferation was not due to metabolism of another substrate or some other unknown function of AKR1C1 or AKR1C2, such as inhibition of the cell cycle. The significant reduction in DHT levels after 4 h implicates the reduction in DHT levels as the major cause of the reduced proliferation of all LAPC-4 cells present in the wells transfected with AKR1C1 or AKR1C2 plasmids.
Inhibition of DHT-dependent AR activation by AKR1C2. To assess if AKR1C2 could directly block DHT-dependent activation of an AR-driven reporter, PC-3 cells stably expressing AKR1C1, AKR1C2, or AKR1C3 were transfected with an MMTV promoter that harbors a hormone-responsive element along with an AR expression plasmid (33). No luciferase activity was detectable in the absence of AR or DHT (data not shown). In Fig. 4A, AKR1C2 significantly reduced AR- and DHT-dependent MMTV reporter activity (P < 0.01) to a greater extent than AKR1C1, although more AKR1C1 was expressed than AKR1C2. Note that no change in luciferase activity was found in cells overexpressing AKR1C3, confirming that DHT is a poor substrate for this family member.
As shown in Fig. 4B, increasing AKR1C2 significantly reduced AR-dependent MMTV reporter activity in a dose-response fashion at a single DHT concentration (P < 0.01) but had no effect on R1881-dependent activation. Conversely, increasing concentrations of DHT were able to overcome the inhibition of reporter activation in PC-3 cells permanently expressing the lowest level of AKR1C2 (Fig. 4C). Taken together, these data show that AKR1C2 can influence DHT-dependent gene expression and signaling by promoting DHT metabolism.
Figure 3. Proliferative responses of LAPC-4 cells to DHT or a nonmetabolizable androgen (R1881) were determined in cells transiently transfected with AKR1C1, AKR1C2, AKR1C3, or empty vector plasmid. A common legend for (A), (B), and (C) is shown above, depicting the concentration of DHT or R1881 with or without the AKR1C expression plasmids used for each panel. A, 1 or 10 nmol/L DHT or R1881 significantly increased cellular proliferation. Proliferation of nontransfected LAPC-4 cells in response to R1881 was comparable with that of LAPC-4 cells transfected with vehicle vector or AKR1C expression plasmids. Transient transfection of the AKR1C1 plasmid was only able to significantly suppress 1 nmol/L DHT-stimulated cellular proliferation (P < 0.05) compared with corresponding vehicle vector control cells. AKR1C1 was unable to significantly suppress proliferation with either concentration of R1881 or with 10 nmol/L DHT. B, transient transfection of AKR1C2 significantly inhibited 1 or 10 nmol/L DHT-dependent cellular proliferation (P < 0.01) compared with vehicle vector control cells. Comparable proliferation rates were also observed in cells transfected with AKR1C2 and treated with either 1 or 10 nmol/L R1881. The ability of AKR1C2 to inhibit proliferation at both 1 and 10 nmol/L DHT, compared with AKR1C1, suggests greater catalytic activity than AKR1C1 for DHT. C, transfection with AKR1C3 was unable to inhibit either 1 or 10 nmol/L DHT-dependent proliferation, confirming that DHT is a poor substrate for AKR1C3. No inhibition of R1881-dependent proliferation was observed. D, AKR1C1- and AKR1C2-transfected cells showed, respectively, approximately 80% and 30% of the relative content of DHT remaining in the media after 4 h of incubation compared with vehicle vector- or AKR1C3-transfected cells.
Discussion
Prostate cancer and tissue DHT. In prostate cancer cell lines, androgens activate survival pathways that block cell death mediated by either tumor necrosis factor-α or Fas activation (34), prevent etoposide-induced cell death (35), inhibit phosphatase and tensin homologue deleted on chromosome 10-induced apoptosis (36), and abrogate the toxicity of FKHR (37). Androgen ablation, by both decreased testosterone synthesis and blockade of AR, effectively reduces AR signaling with both shrinkage of tumors and symptomatic relief. Ultimately, a so-called androgen-independent state evolves in which prostate cancer becomes progressively less responsive to androgen ablation (38). Numerous pathways participate in this process, including AR activation in the absence of ligand and nongenomic effects of AR (38). Recent studies have implicated increased AR expression as a cause for androgen independence with increased sensitivity to low levels of androgens (9). In cell lines, expression of AR by itself is sufficient to induce androgen-independent features, and selective deletion of AR can reverse the tumorigenicity of androgen-independent cells (9,39).
Role of AKR1C2 as a pre-receptor regulator of AR signaling. Steady-state levels of intracellular DHT are maintained through a balance between local synthetic and catabolic rates. In LAPC-4 cells, metabolism of DHT by AKR1C1 and AKR1C2 was sufficient to inhibit DHT-stimulated proliferation, although a majority of cells were not transfected. However, the significant reduction in DHT levels in the media for all cells exposed to the AKR1C1 or AKR1C2 expression plasmid is likely to have also reduced the DHT-dependent proliferation of the nontransfected cells. In addition, no inhibition of proliferation was observed with R1881 treatment. Data from the promoter activity assay revealed that increasing AKR1C2 expression reduced AR/DHT-driven MMTV promoter activity in the PC-3 cell line. In our paired tissue samples, reduced metabolism of DHT was found to correspond with the loss of AKR1C2 and AKR1C1 expression. As SRD5A2 expression was not increased in these prostate cancer samples, the loss of DHT metabolism associated with reduced AKR1C2 expression is probably the major mechanism for the observed increase in DHT levels in the tumor as a result of enhanced retention. Of note, Bauman et al. recently reported comparable levels of AKR1C2 transcripts in epithelial cell lines established from normal and prostate cancer tissue samples (44). Potential explanations for these discrepancies include comparison of whole-tissue expression levels with epithelial cell lines established from tissue samples and variation in the gene expression profiles of these established cell lines due to culture conditions or lack of contact with other cell types.
Figure 4. DHT- and AR-dependent reporter activity was determined with PC-3 cell lines stably expressing AKR1C1, AKR1C2, or AKR1C3 and compared with cells stably transfected with vehicle vector. A, expression of AKR1C2 significantly reduced AR-dependent MMTV reporter activity by DHT (P < 0.01). AKR1C1 was also able to reduce AR-dependent MMTV reporter activity (P < 0.05) but not to the same extent as AKR1C2. AKR1C3 was unable to reduce DHT-dependent activation of the MMTV promoter. B, increased expression of AKR1C2 in PC-3 cells significantly inhibited AR-dependent MMTV reporter activity with 100 pmol/L DHT (P < 0.01) compared with the vehicle vector control. No inhibition of reporter activity was observed when R1881 was used. C, in PC-3 cells expressing the least amount of AKR1C2, increasing concentrations of DHT were able to significantly overcome the inhibition of AR-dependent reporter activation.
Figure 5. Proposed model for increased AR signaling due to loss of AKR1C2 and AKR1C1 in prostate cancer. Circulating testosterone (T) diffuses into prostatic cells and is reduced by SRD5A2 to form DHT, the major ligand for the AR. AKR1C2 is responsible for the majority of DHT catabolism and predominately reduces DHT to the weak androgen 3α-diol. AKR1C1 also catalyzes the stereospecific reduction of DHT to the weak androgen 3β-diol. 3β-diol is a ligand for ER-β, which promotes an antiproliferative response in the prostate. In paired prostate cancer samples, decreased expression of AKR1C2, AKR1C1, and SRD5A2 was observed together with increased DHT levels. Our model predicts that reduced AKR1C2 expression in prostate cancer leads to reduced DHT catabolism, resulting in greater retention of DHT and thereby enhancing AR-dependent DHT signaling in prostate cancer.
Currently, androgen ablation is the major treatment for hormone-sensitive prostate cancer and is achieved by inhibition of testicular testosterone synthesis with gonadotropin-releasing hormone receptor antagonists in combination with an AR ligand antagonist (8). However, androgens synthesized from adrenal precursors are only reduced by 50% with these treatments, and Mizokami et al. have reported elevated androst-5-ene-3β,17β-diol levels after androgen ablation (45). This and other adrenal-derived androgens may be important contributors to AR-dependent gene expression (46). Indeed, Nishiyama et al. reported that prostatic DHT levels remained at 25% of pretreatment levels in patients undergoing androgen deprivation therapy, in sharp contrast to serum levels of testosterone, which were decreased by 93% (47). For these reasons, induction of AKR1C2 expression to enhance in situ DHT catabolism in the prostate could be developed as a potential adjuvant to androgen ablation for hormone-sensitive prostate cancer.
We speculate that the selective loss of AKR1C2 in prostate cancer promotes clonal expansion of tumor cells by enhancement of androgen-dependent cellular proliferation. Isaacs et al. have emphasized the acquired remodeling of terminal epithelium from being normally dependent on paracrine stimulation for growth to being independent of these factors after malignant transformation (48,49). From these studies, we conclude that increased DHT as a result of reduced DHT catabolism in prostate cancer would lead to cellular proliferation in an AR-dependent fashion. Our cellular proliferation experiments showed that increased AKR1C2 expression can reduce DHT-stimulated cell growth, thereby confirming that increased metabolism of DHT can block the activation of AR.
In summary, catabolism of androgens by the AKR1Cs can function as an effective pre-receptor regulator of AR-dependent gene expression by modulating the essential ligand for AR. Figure 5 illustrates our model for how reduced metabolism of DHT by AKR1C2, and possibly AKR1C1, in prostate cancer could promote AR signaling by increasing intracellular DHT levels. According to Steckelbroeck et al., AKR1C2 favors the reduction of DHT to the 3α-diol over the 3β-diol metabolite by a ratio of 20:1, whereas AKR1C1's ratio of metabolites is 1:4. In addition, the apparent kcat of AKR1C2 for 3α-diol production is 2.81, compared with AKR1C1's kcat of 0.61 for 3β-diol production, which agrees with our findings in Figs. 3 and 4 (16). Thus, AKR1C2 is responsible for reducing DHT to 3α-diol because of its catalytic efficiency and is the major enzyme responsible for DHT catabolism. Although formation of 3β-diol from DHT was relatively minor compared with 3α-diol, 3β-diol is a recognized ligand for estrogen receptor β (ER-β), which promotes antiproliferative pathways (50). Loss of AKR1C1, resulting in decreased production of a potent ER-β ligand, would also provide a selective growth advantage for these tumor cells. As SRD5A2 expression is decreased in prostate cancer, we predict that a significant impairment in the catabolic pathway is responsible for the increased DHT levels that we observed in tumor samples. We speculate that the selective loss of AKR1C2 and AKR1C1 in prostate cancer promotes clonal expansion of tumor cells by enhancement of androgen-dependent cellular proliferation.
Figure 1. Representative histology of paired tumor and benign-appearing prostatic tissue samples. Prostatic tissue slices were processed, stained with H&E, and reviewed to assess tumor stage and homogeneity of resected tissue samples. Histology of two representative pairs of tumor (T) and benign-appearing (N) tissue samples is shown.
Figure 3. Transient expression of AKR1C2 or AKR1C1 reduces DHT-stimulated proliferation of LAPC-4 cells. Proliferative responses of LAPC-4 cells to DHT or a nonmetabolizable androgen (R1881) were determined in cells transiently transfected with AKR1C1, AKR1C2, AKR1C3, or empty vector plasmid. A common legend for (A), (B), and (C) is shown above, depicting the concentration of DHT or R1881 with or without the AKR1C expression plasmids used for that panel. A, 1 or 10 nmol/L DHT or R1881 significantly increased cellular proliferation. Proliferation of nontransfected LAPC-4 cells by R1881 was comparable with that of LAPC-4 cells transfected with vehicle vector or AKR1C expression plasmids. Transient transfection of AKR1C1 plasmid was only able to significantly suppress 1 nmol/L DHT-stimulated cellular proliferation (P < 0.05) compared with corresponding vehicle vector control cells. AKR1C1 was unable to significantly suppress proliferation at either concentration of R1881 or at 10 nmol/L DHT. B, transient transfection of AKR1C2 was able to significantly inhibit 1 or 10 nmol/L DHT-dependent cellular proliferation (P < 0.01) compared with vehicle vector control cells. Comparable proliferation rates were also observed in cells transfected with AKR1C2 and treated with either 1 or 10 nmol/L R1881. The ability of AKR1C2 to inhibit proliferation at both 1 and 10 nmol/L DHT, compared with AKR1C1, suggests greater catalytic activity than AKR1C1 for DHT. C, transfection with AKR1C3 was unable to inhibit either 1 or 10 nmol/L DHT-dependent proliferation, confirming that DHT is a poor substrate for AKR1C3. No inhibition of R1881-dependent proliferation was observed. D, after 4 h of incubation, AKR1C1- and AKR1C2-transfected cells showed ~80% and 30%, respectively, of the relative content of DHT remaining in the media of vehicle vector- or AKR1C3-transfected cells.
Figure 4. Effects of AKR1Cs on DHT-dependent activation of the AR-dependent MMTV promoter. DHT- and AR-dependent reporter activity was determined in PC-3 cell lines stably expressing AKR1C1, AKR1C2, or AKR1C3 and compared with cells stably transfected with vehicle vector. A, expression of AKR1C2 was able to significantly reduce AR-dependent MMTV reporter activity induced by DHT (P < 0.01). AKR1C1 was also able to reduce AR-dependent MMTV reporter activity (P < 0.05), but not to the same extent as AKR1C2. AKR1C3 was unable to reduce DHT-dependent activation of the MMTV promoter. B, increased expression of AKR1C2 in PC-3 cells was able to significantly inhibit AR-dependent MMTV reporter activity with 100 pmol/L DHT (P < 0.01) compared with vehicle vector control. No inhibition of reporter activity was observed when R1881 was used. C, in PC-3 cells expressing the least amount of AKR1C2, increasing concentrations of DHT were able to significantly overcome the inhibition of AR-dependent reporter activation.
Table 1. DHT levels compared with relative expression fold changes of AKR1Cs, SRD5A1, SRD5A2, KRT8, and SM1 in paired human prostate tumor tissues versus prostate benign tissues. NOTE: DHT levels in paired human prostate tumor tissues versus prostate benign tissues were measured and compared with relative gene expression fold changes of AKR1Cs, SRD5A1, SRD5A2, KRT8, and SM1. Positive values of gene expression fold changes refer to increased expression in tumors, whereas negative values represent decreased expression in tumors compared with paired benign tissues. P is the outcome of a paired t test with two tails. Abbreviations: NA, not available in both normal and cancer tissues; ND, not detectable in both normal and cancer tissues; PA, prostatic adenocarcinoma; BMPA, bilateral moderate prostatic adenocarcinoma; BPDPA, bilateral poorly differentiated prostatic adenocarcinoma; BM-PDPA, bilateral multifocal to poorly differentiated prostatic adenocarcinoma; PT, prostate tumor; PN, prostate normal tissue; W, weight.
|
v3-fos-license
|
2021-09-24T05:19:52.906Z
|
2021-09-15T00:00:00.000
|
237606084
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://doi.org/10.1155/2021/5525319",
"pdf_hash": "6dd44365c79bdd1138f8a4df79ba7328d41022eb",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46732",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "6dd44365c79bdd1138f8a4df79ba7328d41022eb",
"year": 2021
}
|
pes2o/s2orc
|
Synthetic Mesh Reconstruction of Chronic, Native Quadriceps Tendon Disruptions following Failed Primary Repair
Case: Two patients presented with chronic knee extensor mechanism disruption after failed primary repairs. Both patients had minimal ambulatory knee function prior to surgical intervention and were treated with a synthetic mesh reconstruction of their extensor mechanism. Our technique has been modified from previously described techniques used in revision knee arthroplasty. At the one-year follow-up, both patients had improvement in their active range of motion and had returned to their previous activity.
Conclusion: Synthetic mesh reconstruction of chronic extensor mechanism disruption is a viable technique that can be utilized as salvage for the persistently dysfunctional native knee.
Introduction
Quadriceps tendon (QT) ruptures occur most frequently in middle-aged males [1] and typically can be successfully treated with primary surgical repair [2,3]. While worse surgical outcomes are associated with delays to primary repair, the overall rate of repair failure or rerupture of acute injuries remains low (approximately 2%) [2,4,5]. In the acute setting, QT injuries are typically repaired with direct tissue apposition, transosseous tunnels, or suture anchors, depending on whether the injury occurs midsubstance or at the osseotendinous interface [2,4,5]. Treatment of chronic ruptures or reruptures of prior repairs represents a greater surgical challenge, with no clear gold standard for reconstruction. Described options for surgical reinforcement include the use of allograft [6] and autograft tissue [7][8][9].
We present two cases of chronic, reruptured QT injuries in native knees treated with synthetic mesh reconstruction. QT reconstruction using this technique, typically reserved for post total knee arthroplasty (TKA) knees, resulted in favorable outcomes in both patients at the final follow-up.
Statement of Informed Consent
Both patients signed informed consent permitting us to report on their deidentified cases.
Case Presentation and Surgical Technique
2.1.1. Case 1. An 82-year-old male with baseline function of daily jogging and a past medical history of chronic kidney disease presented with right knee pain and dysfunction. He had failed two attempts at primary quad tendon repair, first with suture anchors 4 months prior to presentation and subsequently with transosseous tunnels one month later. At presentation, he had a palpable defect just proximal to the superior patellar pole and was unable to actively straight leg raise (MRI and X-ray shown in Figure 1).
His passive ROM was 0-120°, and active ROM was 70-120° (i.e., a 70° extensor lag). His Knee Society Score (KSS) was 35. The patient was able to ambulate with his knee locked in extension with a compensatory circumduction gait.
At the 12-month follow-up after mesh reconstruction (described below), he was ambulating without assistive devices and had an active knee range of motion (ROM) of 5-120°. He had resumed light running activities and achieved a KSS of 73.
Case 2.
A 58-year-old male with a remote history significant for a left, traumatic above-knee amputation presented with right knee pain and dysfunction 1 year following primary QT repair complicated by a fall and rerupture postoperatively. He previously ambulated unassisted with a prosthesis. The patient was wheelchair-bound and had passive ROM of 0-120° and active ROM between 75 and 120°. His KSS was 31. His preoperative X-ray was significant for patellar baja and no fracture (Figure 2).
At the 12-month follow-up after extensor reconstruction (see below), he had returned to unassisted ambulation with his prosthesis. His passive ROM was preserved, and his active ROM had improved to 10-120 degrees and achieved a KSS of 71.
Surgical Technique.
The patient is positioned supine on a regular surgical bed, and a tourniquet is used. A midline incision that incorporates or excises the prior surgical scar is made, extending from the tibial tubercle to the distal quadriceps. Atrophic tendon ends are debrided to healthy tissue, with large gaps expected (10-15 cm in our cases). The mesh (Covidien macroporous polypropylene mesh, 45 × 30 cm) is tubularized as previously described [10], measuring 2 cm × 30 cm. We employed two distal fixation techniques. In case 1, a subperiosteal tunnel was created over the anterior surface of the patella (Figure 3). Distally, the paratenon overlying the patellar tendon (PT) was incised and reflected. The mesh is then passed subperiosteally over the anterior patella and incorporated onto the PT with Krakow suture fixation (Figures 4(a) and 4(b)). The paratenon layer is then repaired over the mesh, similar to prior reports [11]. In case 2, the mesh captures the patella distally using a transverse tunnel through the PT, 1 cm distal to the inferior pole of the patella (Figure 5). The mesh is passed through the tunnel (Figure 6) in a loop fashion and sutured to each side of the quadriceps tendon proximal to the patella, using a Krakow suture technique (Figure 7). These techniques differ from previously described techniques in knee arthroplasty that relied on intraosseous graft fixation in the tibia for distal fixation [10].
Proximally, an intrasubstance, longitudinal tunnel is made in the remnant quadriceps tendon stump, and the mesh is secured (after reapposition and tensioning) to the tendon with a running Krakow suture technique (Figure 8). For all cases, the QT-mesh unit is tensioned tightly with the knee in full extension.
Deep wound closure should ensure complete coverage of the synthetic mesh when possible. Proximally, this includes mobilization of the vastus medialis and lateralis myofascial units for coverage, as previously described [10]. Distally, the closure includes closure of the paratenon (case 1) or retinaculum (case 2).
Postoperative Protocol.
Postoperatively, patients are weightbearing as tolerated in a removable hinged knee brace locked in extension for three months, after which flexion limits are increased (via the brace) by 30° every 2 weeks. Upon achieving 90° of flexion, ROM is progressed to tolerance. It is important to avoid active or passive knee flexion for a prolonged period after surgery.
Discussion
QT ruptures exceed patella fractures and PT ruptures in their incident disruption of the knee extensor mechanism [12]. When not treated acutely, or after failure of primary repair, QT tears become increasingly difficult to treat. In this series, we highlight two successful reconstructions in QT-deficient native knees utilizing a synthetic mesh augmentation for reconstruction. While many methods have been described to reconstruct irreparable QTs, our small series demonstrates a reliable alternative to soft tissue reconstructions. Classically, techniques such as the Codivilla or Scuderi advancement are effective for reconstructing QT gaps smaller than those we encountered (10-15 cm) [13]. Despite the array of autograft, allograft, and synthetic surgical augmentations, outcomes remain suboptimal [14][15][16][17][18][19]. Unlike customizable synthetic grafts, auto- and allografts have unique risks that include graft-host mismatch [20], reliance on graft tissue quality, donor site morbidity (in the case of autograft), risk for delayed creep failure, allograft availability, and disease transmission. However, when possible, utilization of autograft represents the most cost-effective source for extensor mechanism reconstruction tissue.
Given concerns over the risk-success profile of soft tissue graft reconstruction, we adapted a TKA reconstruction technique [10], applying a synthetic mesh augmentation for the reconstruction of chronic extensor mechanism disruptions with good outcomes [21][22][23][24]. This augmentation technique was previously modified to augment acute, native QT repairs with good success [25]. It has also been described to augment an allograft chronic QT reconstruction [13] but has never been described in isolation. Monofilament mesh is well studied in general surgical hernia repairs [26] and functions by inciting a robust inflammatory fibrotic reaction that promotes host/graft integration [27,28]. A TKA retrieval study demonstrated similar histological findings [29]. The use of mesh in this technique is technically uncomplicated and affordable, and polypropylene mesh has favorable biomechanical properties [13,29]. As such, there is growing interest in its use in the traditionally tenuous reconstruction of relatively devitalized post-TKA extensor mechanism ruptures [10,21,30,31].
We selected this method for these two patients given their unique circumstances: chronicity and kidney disease in a high-functioning patient (case 1), and high-demand knee reliance in a contralateral amputee (case 2). Thus, both patients demanded a reliable method for recalcitrant chronic QT ruptures. As such, reconstruction in this setting likely represents an approximate worst-case scenario [22], and success would seem relatively promising for adaptation to similarly exacting pathology [24,32].
The success of this technique relies heavily on distal mesh fixation with described techniques including screw and cement fixation in the tibial plateau [10,21,24] and suture fixation into the PT [11,23]. While most prior literature describes fixation in postarthroplasty knees, our technique demonstrates practicality in the native knee with chronic QT disruption. While the proximal fixation has limited flexibility, we demonstrate two successful distal fixation methods, including a self-retaining sling and direct tendon onlay through a subperiosteal tunnel. We believe that avoiding any knee flexion for a prolonged time period after surgery is critically important. This can be achieved with a cylinder cast [22] or with a knee immobilizer in a reliable patient.
Conclusion
Chronic QT tears that have failed primary repair have notoriously poor results. Multiple options exist, but synthetic mesh has emerged as an option for reconstruction in the arthroplasty patient. This report demonstrates its viability in the native knee and offers a technical description of two distal fixation methods. Longitudinal investigations should quantify and compare the efficacy of this novel technique; however, we hope it provides a usable alternative to graft reconstruction for chronic tendon injuries given our success here and in prior descriptions after TKA.
Conflicts of Interest
The authors declare that there are no relevant conflicts of interest pertaining to this work.
|
v3-fos-license
|
2023-05-31T15:08:04.915Z
|
2023-05-29T00:00:00.000
|
258980199
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://doi.org/10.3390/cancers15112962",
"pdf_hash": "8f4b1f605cbbe238aba4bd95c75161f36efe56cf",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46733",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "0099d7b0a8dbdc74e8a9fdd78f88a5d2be541cfe",
"year": 2023
}
|
pes2o/s2orc
|
Treating Primary Node-Positive Prostate Cancer: A Scoping Review of Available Treatment Options
Simple Summary: The best way to treat patients with prostate cancer who have positive lymph nodes is not clear. However, recent studies have suggested that intensifying treatment may help these patients and potentially cure their cancer. This review summarises the existing research that supports various treatment options investigated for these patients. For patients with positive lymph nodes that can be seen on pre-treatment imaging (clinically node-positive), the best treatment option is a combination of hormonal therapy and radiotherapy. Although intensifying the treatment seems promising, more research studies are needed to confirm its effectiveness. For patients with positive lymph nodes confirmed through pathology tests (pathologically node-positive), the treatment options depend on evaluating the risks based on factors such as the Gleason score, tumour stage, the presence of positive lymph nodes, and surgical margins. These patients should be closely monitored, and it is recommended to consider additional treatment with hormonal agents and/or radiotherapy.
Abstract: There is currently no consensus on the optimal treatment for patients with a primary diagnosis of clinically and pathologically node-positive (cN1M0 and pN1M0) hormone-sensitive prostate cancer (PCa). The treatment paradigm has shifted as research has shown that these patients could benefit from intensified treatment and are potentially curable. This scoping review provides an overview of available treatments for men with primary-diagnosed cN1M0 and pN1M0 PCa. A search was conducted on Medline for studies published between 2002 and 2022 that reported on treatment and outcomes among patients with cN1M0 and pN1M0 PCa. In total, twenty-seven eligible articles were included in this analysis: six randomised controlled trials, one systematic review, and twenty retrospective/observational studies. For cN1M0 PCa patients, the best-established treatment option is a combination of androgen deprivation therapy (ADT) and external beam radiotherapy (EBRT) applied to both the prostate and lymph nodes. Based on the most recent studies, treatment intensification can be beneficial, but more randomised studies are needed. For pN1M0 PCa patients, adjuvant or early salvage treatments based on risk stratification determined by factors such as Gleason score, tumour stage, number of positive lymph nodes, and surgical margins appear to be the best-established treatment options. These treatments include close monitoring and adjuvant treatment with ADT and/or EBRT.
Introduction
Prostate cancer (PCa) is the second most commonly diagnosed cancer in men worldwide [1]. The treatment of patients with PCa depends on the stage and grade of the disease and often involves a multidisciplinary approach. The presence of nodal metastasis (N+) is an unfavourable prognostic factor that correlates with PCa recurrence, distant metastases, and survival [2]. Accounting for approximately 13% of newly diagnosed PCa cases, N+ PCa contributes significantly to the total number of deaths from PCa [1].
Historically, patients diagnosed with primary lymph-node-positive PCa received androgen deprivation therapy (ADT) alone as their primary treatment. Patients with nodal PCa metastases were not considered for further curative treatment due to the assumption that a cure was not feasible. This was based on the belief that patients with lymph-node-positive disease suffer from systemic disease. However, these assumptions are being questioned today. Several studies have shown that a radical prostatectomy (RP) combined with extended pelvic lymph node dissection (LND) without adjuvant therapy can cure patients with one or two positive lymph nodes (pN1M0) (Cancer-Specific Survival (CSS) of >95% after 5 years) [3,4]. The same finding has been demonstrated for external beam radiotherapy (EBRT) applied to the prostate and regional lymph nodes in combination with concurrent hormone therapy [5,6]. Moreover, new molecular imaging techniques targeting prostate-specific membrane antigen (PSMA) detect smaller nodal metastases while reliably excluding distant metastases with a high positive predictive value [7]. As a result, more patients are diagnosed with node-positive PCa without distant metastases (cN1M0). These findings have increased confidence that N+ PCa patients may be curable and eligible for loco-regional therapy, which could be combined with systemic treatment [8]. These intensifications of treatment may be beneficial, but they can also increase the occurrence of adverse events and potentially impair quality of life. Due to the limited availability of randomised data, current guidelines are hesitant to recommend the implementation of these intensified treatments [2,9,10] (Table 1). Consequently, the management of this patient population varies widely in clinical practice, and the optimal treatment remains a topic of discussion.
Table 1. Current guidelines on management of N1M0 PCa.
Guideline cN1M0 pN1M0
EAU [2]: (1) offer local treatment (either RP or EBRT) plus long-term ADT; (2) offer EBRT for prostate + pelvis in combination with long-term ADT and 2 years of abiraterone.
Abbreviations: cN1M0 patients = patients with clinically node-positive disease determined via any imaging modality; pN1M0 patients = patients with pathologically node-positive disease in a postoperative setting, either after initial treatment with radical prostatectomy (RP) and lymph node dissection (LND) or after staging LND; PCa = prostate cancer; EBRT = external beam radiotherapy; ADT = androgen deprivation therapy; PSA = prostate-specific antigen.
This scoping review aims to provide an overview of the current evidence on the available treatments for men with primary diagnosed clinically and/or pathologically node-positive PCa. Our goal is to guide clinical care and further scientific research.
Materials and Methods
This scoping review was performed in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guideline on scoping reviews [11]. Medline was searched for relevant clinical studies published in the English language from January 2002 to December 2022 using the following terms: "Primary Prostate Cancer AND Nodal Metastasis", "Primary Prostate Cancer AND Nodal Metastasis AND randomised controlled trials (RCT)", and "Primary Prostate Cancer AND Nodal Metastasis AND Systematic Reviews/Meta-Analyses". Clinicaltrials.gov was searched for ongoing clinical trials ( Figure 1).
Primary-diagnosed node-positive PCa patients fall into two categories: patients with clinically node-positive disease (cN1M0) and patients with pathologically node-positive disease (pN1M0). For this scoping review, we focused on cN1M0 patients who were staged as node-positive on any imaging modality (i.e., computed tomography (CT), magnetic resonance imaging (MRI), bone scan, or prostate-specific membrane antigen positron emission tomography (PSMA-PET)/CT) without prior treatment, and on pN1M0 patients who were staged as node-positive in a postoperative setting either after RP and LND or after staging LND.
The Population, Intervention, Comparator, Outcome, and Study (PICOS) design model was used as a guide for eligibility and was constructed twice due to the two patient categories. Both constructed models are shown in Tables 2 and 3. Reports were considered relevant to this scoping review if they involved patients with either cN1M0 disease ( Table 2) or pN1M0 disease (Table 3). Furthermore, reports were considered if they compared two different types of treatment to determine oncologic outcomes in terms of survival for either of the patient categories.
We selected studies according to the following criteria: (1) treatment and outcomes of cN1M0/pN1M0 PCa patients; (2) provision of original data, including RCTs and retrospective studies (no editorial notes); (3) reporting of consecutive cases (no case reports); and (4) provision of systematic reviews and meta-analyses.
The screening consisted of scanning titles and abstracts with regard to their relevance for inclusion. Afterwards, the remaining full-text original articles were retrieved. Abstracts and original articles were reviewed by one reviewer for eligibility. Table 4 lists the included studies evaluating the different treatments for cN1M0 disease, which have been divided into subcategories: (1) treatment with ADT alone, (2) treatment with only local therapy, (3) treatment with ADT combined with any form of local therapy, and (4) treatment with ADT and other forms of systemic treatment.
Treatment with ADT
ADT has long been considered the standard of care and is still recommended in guidelines for cN1M0 disease [2,9,10] (Table 1). The timing of starting ADT was investigated by Schröder et al. in the EORTC 30846 study [12]. Patients with lymph-node-positive PCa confirmed after staging LND, without local treatment, were randomised to immediate ADT or ADT provided at the time of clinical progression, in which the delayed group received ADT for 2.7 years and the immediate group received ADT for 3.2 years. After a 13-year follow-up, the median overall survival (OS) was 7.6 years for the immediate ADT arm and 6.1 years for the delayed ADT arm. The intention-to-treat analysis did not show a statistically significant difference in survival between an immediate or delayed start of ADT (hazard ratio (HR) = 1.22; 95% confidence interval (CI): 0.92-1.62).
Abbreviations: cN1M0 patients = patients with clinically node-positive disease on any imaging modality; ADT = androgen deprivation therapy; EBRT = external beam radiotherapy; RP = radical prostatectomy; LT = local therapy (i.e., EBRT or RP); RCT = randomised controlled trial; SEER = Surveillance, Epidemiology, and End Results; NCDB = National Cancer Database; OS = overall survival; CI = confidence interval; HR = hazard ratio; GS = Gleason score; yr = year; CSS = cancer-specific survival; PCSS = prostate-cancer-specific survival; FFS = failure-free survival; OM-free = overall mortality free; PSA = prostate-specific antigen; PCSM = prostate-cancer-specific mortality; ACM = all-cause mortality; DE = docetaxel and estramustine.
Treatment with Local Therapy
Two studies have been conducted on treatment with local therapy (LT) (i.e., EBRT applied to both prostate and lymph nodes or RP with LND), with or without the addition of ADT. The phase III study of RTOG 85-31 randomised patients undergoing treatment with EBRT either with the addition of ADT or without [13]. After ten years, the study group with combination treatment demonstrated an extended absolute survival compared to the local therapy group (p = 0.002). Tward et al. used the SEER database to retrospectively analyse patients treated without EBRT, with EBRT, or with EBRT plus brachytherapy [14]. Patients treated with EBRT had an increased 10-year cancer-specific survival (CSS) rate compared to patients not treated with EBRT (HR 0.66, 95% CI 0.54-0.82, p < 0.01).
Treatment with ADT ± Any Form of Local Therapy
Since the RTOG 85-31 study showed positive results in terms of absolute survival in favour of the combined treatment ADT + EBRT (49% ADT + EBRT vs. 39% EBRT alone, p = 0.002) [13], multiple trials have investigated the role of ADT in combination with definitive EBRT in node-positive PCa. Another four studies on definitive EBRT for node-positive PCa consistently showed improved OS and cancer-specific survival (CSS) compared to ADT only or conservative management only (Table 4) [15][16][17][19]. Most of these articles reported that the radiation fields included both prostate and lymph nodes; however, studies employing the SEER database did not provide specific details regarding radiation fields.
ADT in combination with any form of local treatment (i.e., RP or EBRT) has also retrospectively been investigated by Seisen et al., who reported a survival advantage for treatment with ADT in combination with any form of local therapy [18]. Patients who underwent RP were statistically younger (with a mean age of 61.3) compared to patients undergoing EBRT (with a mean age of 65.8) (p ≤ 0.001). However, this study showed no statistically significant differences between RP and EBRT.
Treatment with ADT ± Additional Systemic Therapy
It is known that combining ADT with docetaxel or second-generation hormone treatment improves the outcome of metastatic PCa [21][22][23][24]. However, until recently, none of these drugs have demonstrated a clear and consistent improvement in the survival of patients with non-metastatic PCa starting palliative ADT [25]. In three trials, one of which was conducted in the STAMPEDE platform protocol, another in the NRG Oncology/RTOG 0512 trial, and the third in the GETUG-12 trial, adjuvant docetaxel added to ADT prolonged time to relapse but not metastasis-free survival or OS. A meta-analysis of these adjuvant docetaxel trials incorporating N0/N1-M0 patients concluded that there was an 8% absolute 4-year survival advantage for docetaxel compared with ADT alone in terms of failure-free survival without an OS benefit (HR 0.7, 95% CI 0.61-0.81; p < 0.0001) [20]. Collectively, these results indicate that docetaxel does not offer a benefit in terms of OS for patients with cN1M0 disease.
More recently, a meta-analysis of two STAMPEDE platform phase III trials found that the addition of abiraterone acetate and prednisolone, with or without enzalutamide, to ADT was associated with improved metastasis-free survival in patients with high-risk nonmetastatic prostate cancer [8]. Thirty-nine percent of the patients (n = 774) presented with a cN1 status determined via conventional imaging. Of these patients, around 85% received EBRT and ADT as a standard-of-care treatment. Metastasis-free survival events occurred for 180 patients in the combination groups vs. 306 in the control groups (HR 0.53; 95% CI 0.44-0.64, p < 0.0001). Death occurred in 147 patients in the combination groups vs. 236 in the control groups (HR 0.60; 95% CI = 0.48-0.73, p < 0.0001). Death due to PCa occurred in 73 patients in the combination groups vs. 142 in the control groups (HR 0.49; 95% CI = 0.37-0.65, p < 0.0001). These results indicate that the addition of abiraterone and/or enzalutamide may be a promising treatment option for cN1M0 patients, offering potential benefits for overall survival. However, since this is a post-hoc analysis and meta-analysis, it is crucial to approach these findings with caution when drawing conclusions.
Table 5 lists the included studies evaluating the different treatments for pN1M0 disease, which have been divided into subcategories: (1) ADT as an adjuvant treatment, (2) ADT with or without EBRT as an adjuvant treatment, (3) EBRT as an adjuvant treatment, and (4) chemotherapy as an adjuvant treatment.
ADT as Adjuvant Treatment
The ECOG 3886 trial is the only randomised trial that investigated the use of adjuvant treatment with ADT. This trial randomised 98 patients who were proven to have a pN1 status after RP and LND to immediate ADT or delayed ADT [26]. After a median follow-up of 11.9 years, the trial showed that immediate ADT resulted in statistically better OS.
Abbreviations: pN1M0 patients = patients with pathologically node-positive disease in a postoperative setting, either after initial treatment with radical prostatectomy (RP) and lymph node dissection (LND) or after staging LND; ADT = androgen deprivation therapy; EBRT = external beam radiotherapy; LT = local therapy (i.e., EBRT or RP); SEER = Surveillance, Epidemiology, and End Results; NCDB = National Cancer Database; OS = overall survival; CI = confidence interval; HR = hazard ratio; BCR = biochemical recurrence; CSS = cancer-specific survival; OM = overall mortality; PCSM = prostate-cancer-specific mortality; CSM = cancer-specific mortality; PCSS = prostate-cancer-specific survival; bRFS = biochemical relapse-free survival; cRFS = clinical relapse-free survival; ACM = all-cause mortality.
ADT ± EBRT as Adjuvant Treatment
The role of adjuvant EBRT in combination with ADT for pN1M0 disease has been retrospectively investigated. Seven studies found improved survival compared to ADT alone, with an HR for OS ranging from 0.46 to 0.77, while other studies used different end points, such as 10-year CSS ranging from 70% to 86% [5,6,27-29,31,32]. In contrast, one study showed no benefit (p = 0.193) [30]. These data, which are of limited quality, emphasise the heterogeneity of this patient population, with outcomes differing according to tumour grade and the number of positive lymph nodes.
Similarly, Van Hemelryk et al. performed a matched-case analysis of pN1M0 and pN0M0 patients to compare outcomes after EBRT + ADT and found promising survival rates, especially for patients with two or fewer positive lymph nodes (5-year biochemical relapse-free survival (bRFS) of 65% (pN1) vs. 79% (pN0), p = 0.08) [33]. Abdollah et al. also found that this specific risk group of patients benefitted the most from adjuvant EBRT, along with patients with a Gleason score ranging from seven to ten, a pathological tumour stage (pT) of 3b/4, positive surgical margins, or a positive pelvic lymph node count of three to four [31].
One observational study compared the different types of local therapy (i.e., RP or EBRT) in the case of pN1M0 disease after staging LND [15]. Rusthoven et al. observed no statistically significant differences in survival between RP versus EBRT and RP with or without adjuvant EBRT.
EBRT as Adjuvant Treatment
The PROPER trial prospectively compared whole-pelvis radiotherapy (WPRT) to prostate-only radiotherapy (PORT) in the case of pN1M0 disease and found non-statistically significant differences in clinical relapse-free survival (cRFS), bRFS, and OS in favour of WPRT (p = 0.31, p = 0.08, and p = 0.61, respectively) [35]. The study included a total of 64 patients and was closed early due to poor accrual.
Chemotherapy as Adjuvant Treatment
The SPCG-12 trial randomised patients after RP into groups of either treatment with docetaxel or a surveillance group [37]. There was no improvement in biochemical disease-free survival (p = 0.06). Details on metastasis-free survival (MFS) or OS were not further described.
Discussion
The optimal treatment for clinically or pathologically node-positive PCa patients remains poorly defined. As there is mounting evidence that these patients may benefit from treatment intensification, additional treatment modalities are being increasingly implemented in conjunction with the standard ADT treatment alone. However, because there is limited evidence from randomised controlled trials, many of the recommendations in international guidelines are considered weak. Hence, we conducted a scoping review to summarise the current evidence on available treatments.
First, we have analysed the available studies on men with cN1M0 PCa. The combination of ADT and EBRT to treat both the prostate and lymph nodes in these patients has been well established. Several studies have shown that combined treatment with ADT and EBRT provides a greater survival benefit compared to treatment with ADT or EBRT alone. The optimal duration of ADT has not been well defined, with data supporting 18 to 36 months, while in practice, 2 to 3 years are frequently recommended by the United States and European guidelines, respectively [2,9,38]. The treatment outcomes for men with cN1M0 PCa are comparable to those of patients with de novo metastatic hormone-sensitive PCa with low-burden disease. This assumption has been made because it is believed that most men with cN1M0 PCa have microscopic distant metastases. Based on the results of two randomised controlled trials [39,40] and one meta-analysis [41], international guidelines strongly recommend the combination of ADT plus EBRT applied to the prostate in de novo metastatic hormone-sensitive PCa with low-burden disease, according to the CHAARTED criteria [42]. Nevertheless, none of the EBRT studies included in this scoping review differentiated between WPRT and PORT. This differentiation has been studied in men with high-risk cN0 disease in the RTOG 9413, GETUG-01, and POP-RT trials. The RTOG 9413 and GETUG-01 trials did not show a statistically significant advantage of WPRT over PORT [43,44]. However, the POP-RT trial reported better outcomes in terms of biochemical failure-free survival (BFFS) and disease-free survival (DFS) for men who underwent WPRT [45]. A systematic review conducted by De Meerleer et al. on elective EBRT suggested a simultaneous boost to PSMA-PET-positive nodes; nevertheless, there are scarce randomised data supporting this suggestion [46].
One of the most recent comparisons in the STAMPEDE trial evaluated the addition of abiraterone with or without enzalutamide to ADT in men with locally advanced hormonesensitive PCa in two separate studies [8]. Data from both trials were pooled together and published. Newly diagnosed patients were randomly assigned to receive 36 months of ADT with or without an additional 24 months of abiraterone treatment. The addition of abiraterone (plus/minus enzalutamide) to ADT and EBRT significantly improved MFS and OS. However, these results should be considered with caution due to their post-hoc and meta-analytical nature. The STAMPEDE trial incorporating high-risk non-metastatic hormone-sensitive PCa patients may change clinical practice by providing evidence for the addition of two years of abiraterone therapy to ADT and EBRT for men with newly diagnosed cN1M0 PCa. Based on the clinically significant benefits in terms of MFS and OS seen in the STAMPEDE trial, the EAU guideline panel recommends the addition of two years of abiraterone therapy to ADT as the standard of care for men receiving EBRT for high-risk disease as defined by the STAMPEDE criteria (cN+ or two of the following: a Gleason grade ≥8, a PSA level ≥40 ng/Ml, and/or ≥cT3) (Table 1). Nevertheless, it is crucial to account for the significant factors when considering abiraterone in addition to ADT and EBRT for men with cN1M0 PCa. Imaging in the STAMPEDE trial consisted of MRI, CT, and bone scans, whereas PSMA-PET/CT is now being increasingly used for staging purposes. Whether abiraterone or other androgen receptor agents may benefit a patient with <5 mm pelvic nodal disease on PSMA PET/CT imaging (i.e., not measurable on a conventional scan) is uncertain and requires further investigation in future studies. Since the combination of abiraterone and enzalutamide did not improve outcomes compared to abiraterone alone and the toxicity was greater with the combination, it is not recommended in these patients. Enzalutamide has shown to be beneficial in the treatment of metastatic castration-resistant PCa (mCRPC), especially in low-volume disease [21]. However, its usefulness in cN1M0 disease has not yet been thoroughly investigated. Nonetheless, there may be potential for enzalutamide to be useful in treating cN1M0 disease considering its positive results with respect to low-volume mCRPC. Further research is needed to fully understand the potential benefits of enzalutamide in cN1M0 disease. Results are pending from ongoing trials evaluating other novel hormonal agents (NCT 04134260) and the use of other treatment modalities such as lutetium (NCT 05162573). With the emergence of advanced diagnostic techniques such as PSMA PET-CT and the findings from recent studies, it is reasonable to anticipate a shift and update in the treatment of patients with cN1M0 PCa.
Second, we have studied the available treatments for men with pN1M0 PCa. A risk-adapted strategy for selecting pN1 PCa patients for adjuvant or early salvage therapy may be the most effective approach. In a randomised controlled trial, Messing et al. showed that immediate ADT in pN1 patients improves survival [26]. However, the study included long-term survivors among the patients in the control arm, which raises the question of whether adjuvant treatment, which induces significant side effects, is necessary for all patients. Several analyses have shown that the number of lymph node metastases is correlated with survival outcomes. Patients with limited lymph node metastases tend to have better outcomes [4,5,47]. Moreover, several studies have shown that RP combined with extended pelvic lymph node dissection without adjuvant therapy can be curative for some patients with lymph-node-positive PCa [4,5,47]. De Meerleer et al. recommended adjuvant WPRT plus ADT for pN1M0 patients with two to four positive lymph nodes [46]. This recommendation is mainly based on the findings presented by Touijer et al., who conducted a study comparing three different adjuvant treatment strategies (observation, ADT, or EBRT + ADT) among men with pN1M0. The study revealed that men who received EBRT + ADT had significantly better OS compared to those who were treated with observation or ADT after RP only [5].
Collectively, these findings underscore the importance of selecting patients based on their pathological characteristics and conducting regular follow-ups with PSA testing to determine whether adjuvant treatment is required. Patients with a Gleason score ≥8, positive surgical margins, pT ≥ 3b, or ≥3 positive lymph nodes benefit the most from adjuvant treatment. Conversely, patients who have a lower risk of recurrence may benefit from active surveillance and receiving treatment only if their PSA level rises. Evidence on the timing of post-prostatectomy EBRT has not been studied for pN1M0 disease, but randomised data in the case of localised disease have failed to show a survival benefit for adjuvant EBRT compared to salvage EBRT [48,49]. This might suggest that patient selection is also key in determining the timing of adjuvant or early salvage therapy.
The majority of the studies included in this review used conventional imaging techniques, such as CT, MRI, and bone scans, for staging. However, these methods have been shown to have limited sensitivity in detecting lymph node metastases, which can potentially lead to an underestimation of the disease's extent, especially for cN1M0 disease. Recent evidence suggests that PSMA-PET/CT is more accurate than conventional imaging for (re)staging PCa [7,50,51]. Therefore, the results of studies in which patients were staged using conventional imaging cannot be compared with recent studies in which staging was performed with the use of PSMA-directed tracers. Moreover, enhanced accuracy in staging, which can be facilitated by the utilisation of PSMA-targeted tracers, will result in improved patient selection. Hence, the outcomes of research utilising PSMA-targeted tracers to determine a patient's disease stage are eagerly anticipated as they have the potential to significantly influence treatment recommendations.
There are several significant limitations to this scoping review. First, it should be noted that this review is not a systematic review and, therefore, was not registered online. Articles were not double-selected, and data were not pooled or used for a meta-analysis. However, the PRISMA guidelines for scoping reviews were followed to ensure a systematic approach to conducting this scoping review.
Second, most studies were conducted in a retrospective setting and thus were subject to all the inherent limitations that come with this approach. Additionally, national databases (Surveillance, Epidemiology, and End Results (SEER) and the National Cancer Database (NCDB)) were used by multiple retrospective studies, which not only resulted in overlapping cohorts of patients but also led to low granularity and heterogeneity in exposure ascertainment.
Third, this scoping review does not delve into the discussion of treatment toxicities. The decision was made to focus primarily on oncological outcomes due to the extensive scope of the papers being reviewed. Nonetheless, considering the potential usefulness of treatment, exploring the toxicity profiles, particularly in combination therapies, can provide valuable insights. Furthermore, an in-depth exploration of patients' comorbidities in relation to treatments could have shed light on whether certain treatments may act as protective factors or risk factors for the progression of oncological disease.
Trials including novel staging modalities, biomarkers, new antiandrogen drugs, or other treatment modalities are currently underway and are necessary for providing highquality evidence to guide treatment decisions. It is probable that these studies will provide additional insights into the treatment of patients with clinically or pathologically nodepositive PCa.
Conclusions
This study presents a scoping summary of the evidence on the treatment of clinically and pathologically node-positive PCa patients. Combined treatment with EBRT applied to the prostate and lymph nodes, along with ADT, is a well-established and effective treatment for cN1M0 disease. There is evidence that treatment intensification can be beneficial, but further randomised studies are needed to confirm this more conclusively.
In the case of pN1M0 disease, the corresponding oncological control and survival rates are encouraging, as a significant percentage of patients remain disease-free. Based on retrospective studies, adjuvant EBRT combined with ADT has been shown to improve overall survival compared with observation or ADT alone. Patient selection is crucial here: patients with a Gleason score of ≥8, positive surgical margins, pT ≥ 3b, or ≥3 positive lymph nodes benefit the most from adjuvant treatment.
|
v3-fos-license
|
2014-10-01T00:00:00.000Z
|
2007-01-01T00:00:00.000
|
15619558
|
{
"extfieldsofstudy": [
"Computer Science",
"Biology",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://bmcbioinformatics.biomedcentral.com/track/pdf/10.1186/1471-2105-8-6",
"pdf_hash": "4555640a44a9198e4803678c9eb813557ba9c581",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46734",
"s2fieldsofstudy": [
"Biology",
"Computer Science"
],
"sha1": "4555640a44a9198e4803678c9eb813557ba9c581",
"year": 2007
}
|
pes2o/s2orc
|
Phylogenetic tree information aids supervised learning for predicting protein-protein interaction based on distance matrices
Background Protein-protein interactions are critical for cellular functions. Recently developed computational approaches for predicting protein-protein interactions utilize co-evolutionary information of the interacting partners, e.g., correlations between distance matrices, where each matrix stores the pairwise distances between a protein and its orthologs from a group of reference genomes. Results We proposed a novel, simple method to account for some of the intra-matrix correlations in improving the prediction accuracy. Specifically, the phylogenetic species tree of the reference genomes is used as a guide tree for hierarchical clustering of the orthologous proteins. The distances between these clusters, derived from the original pairwise distance matrix using the Neighbor Joining algorithm, form intermediate distance matrices, which are then transformed and concatenated into a super phylogenetic vector. A support vector machine is trained and tested on pairs of proteins, represented as super phylogenetic vectors, whose interactions are known. The performance, measured as ROC score in cross validation experiments, shows significant improvement of our method (ROC score 0.8446) over that of using Pearson correlations (0.6587). Conclusion We have shown that the phylogenetic tree can be used as a guide to extract intra-matrix correlations in the distance matrices of orthologous proteins, where these correlations are represented as intermediate distance matrices of the ancestral orthologous proteins. Both the unsupervised and supervised learning paradigms benefit from the explicit inclusion of these intermediate distance matrices, and particularly so in the latter case, which offers a better balance between sensitivity and specificity in the prediction of protein-protein interactions.
Background
Protein-protein interactions play a key role in cellular functions, and thus, to complement the experimental approaches [1,2], many computational methods have recently been developed in systems biology for predicting whether two proteins interact, based on what is already known about these proteins. One type of data used for prediction is the phylogenetic profile of a protein -a string of ones and zeros encoding respectively the presence and absence of the protein in a group of genomes, conserved operons, gene fusions, etc. [3][4][5][6]. The rationale is that interacting proteins tend to co-evolve, and therefore should have similar phylogenetic profiles. Recently, to enhance the prediction accuracy, the focus has been given to using the similarity of phylogenetic trees to infer interactions between receptors and ligands [6][7][8].
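To make the phylogenetic-profile representation concrete, the toy sketch below (written for this article in Python; the profiles and the eight-genome setting are invented for illustration, not taken from the cited studies) scores two profiles by the fraction of genomes in which presence/absence agrees:

```python
import numpy as np

# Hypothetical phylogenetic profiles: 1 = ortholog present, 0 = absent,
# one entry per reference genome (eight illustrative genomes here).
profile_a = np.array([1, 1, 0, 1, 0, 1, 1, 0])
profile_b = np.array([1, 1, 0, 1, 0, 1, 0, 0])

# Simple agreement score: fraction of genomes where presence/absence matches.
similarity = np.mean(profile_a == profile_b)
print(f"profile agreement: {similarity:.2f}")  # 0.88 for these toy profiles
```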
Of particular interest is the so-called mirror tree method by Pazos and Valencia [6]. The mirror tree method predicts protein-protein interactions under the assumption that the interacting proteins show similarity in the molecular phylogenetic protein trees because of the co-evolution caused by the interaction. However, it is difficult to directly evaluate the similarity between a pair of molecular phylogenetic trees. Instead, the mirror tree method compares a pair of distance matrices by calculating the Pearson correlation coefficient for the corresponding elements in the two matrices, and uses the correlation coefficient as a measure to evaluate the extent of coevolutionary behavior between two proteins.
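The mirror tree comparison itself is simple to compute once the distance matrices are available. The following sketch (an illustration prepared for this article, not code from [6]; the small toy matrices stand in for real ortholog distance matrices) extracts the upper triangles and reports their Pearson correlation coefficient:

```python
import numpy as np
from scipy.stats import pearsonr

def upper_triangle(dist_matrix):
    """Return the strictly upper-triangular elements of a symmetric distance matrix."""
    d = np.asarray(dist_matrix)
    return d[np.triu_indices_from(d, k=1)]

def mirror_tree_correlation(dist_a, dist_b):
    """Pearson correlation between corresponding inter-ortholog distances of proteins A and B."""
    return pearsonr(upper_triangle(dist_a), upper_triangle(dist_b))[0]

# Toy 4 x 4 distance matrices standing in for the 41 x 41 matrices discussed in the text.
d_a = np.array([[0.0, 0.2, 0.5, 0.7],
                [0.2, 0.0, 0.4, 0.6],
                [0.5, 0.4, 0.0, 0.3],
                [0.7, 0.6, 0.3, 0.0]])
d_b = d_a * 1.1 + 0.05  # a strongly correlated partner

print(mirror_tree_correlation(d_a, d_b))  # close to 1.0 for this toy pair
```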
To address the issue of the high rate of false positives with the mirror tree method, recently, Sato et al. [9] suggested that the information about the phylogenetic relationships of the host genomes be excluded by a projection operation, and only the residual information in the distance matrices be used for the calculation of the correlation coefficient between proteins. As a result, significant improvement in prediction specificity was achieved, though at a cost of losing sensitivity. A similar yet more sophisticated approach is proposed in Pazos et al. [10] to correct the distance matrices based on the phylogenetic tree, which incorporates information on the overall evolutionary histories of the species (i.e., the canonical "tree of life"). In addition to adjusting the distance matrices by excluding the expected background similarity due to the underlying speciation events, this tree of life mirror tree (tol-mirrortree) method can also detect non-canonical evolutionary events, in particular horizontal gene transfers. While both Pazos et al.'s tol-mirrortree method and Sato et al.'s projection approach are concerned with - and quite successful at - removing some background from the inter-matrix correlation, like the original mirror tree method they do not directly address the intra-matrix correlations, which can be informative and critical in revealing co-evolution. For example, in some recent related studies, the columns and rows of the distance matrices are reshuffled in an attempt to discover maximal similarity between two matrices in order to predict interaction specificity when paralogs are involved [11][12][13].
In this work, we propose a novel, simple method to extract the intra-matrix correlational information with reference to the species tree of the host genomes and to represent said information in a way that is conducive to a supervised learning paradigm. We tested our method on the same dataset used in [9], which consists of interacting proteins from E. coli, where these interactions are experimentally verified. The results from a series of leave-one-out cross validation experiments showed that the prediction accuracy was greatly increased with our data representation method.
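For readers who want to reproduce the evaluation protocol, the sketch below shows one way to run leave-one-out cross validation of a support vector machine on protein-pair feature vectors and summarize performance as an ROC score. It uses scikit-learn as a convenient stand-in (not necessarily the toolkit used for the published experiments), and the feature matrix and labels are random placeholders rather than the actual super phylogenetic vectors described later:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneOut
from sklearn.metrics import roc_auc_score

def loocv_roc(pair_vectors, labels):
    """Leave-one-out cross validation of an SVM over protein-pair feature vectors.

    pair_vectors: (n_pairs, n_features) array, e.g. concatenated phylogenetic vectors.
    labels:       (n_pairs,) array of 1 (interacting) / 0 (non-interacting).
    """
    loo = LeaveOneOut()
    scores = np.zeros(len(labels))
    for train_idx, test_idx in loo.split(pair_vectors):
        clf = SVC(kernel="rbf")
        clf.fit(pair_vectors[train_idx], labels[train_idx])
        # decision_function gives a real-valued margin usable for ROC ranking.
        scores[test_idx] = clf.decision_function(pair_vectors[test_idx])
    return roc_auc_score(labels, scores)

# Example call with random placeholder data (325 pairs, 820-dimensional vectors).
rng = np.random.default_rng(0)
X = rng.normal(size=(325, 820))
y = np.zeros(325, dtype=int)
y[:13] = 1
print(loocv_roc(X, y))  # ~0.5 for random features; informative features should score higher
```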
Dataset
We selected the same data set as used in [9], so that the performance of the different methods can be compared. The 13 pairs of interacting proteins are from E. coli, and the interaction within each pair has been experimentally verified, as documented in the Database of Interacting Proteins (DIP) [14], and no interaction outside the pairing is known. So, these 26 proteins make up 26 × 25/2 = 325 distinct pairs, but only 13 of them contain truly interacting partners. For each of these 26 proteins, its putative orthologs from 41 bacterial genomes are selected from the KEGG/KO database [15], and a 41 × 41 distance matrix is constructed, giving the genetic distance between any pair of these 41 orthologs. The genetic distances were calculated using the PROTDIST module in the PHYLIP package [16] and the score table by Jones, Taylor, and Thornton [17], from a multiple alignment of these 41 orthologous proteins, constructed using MAFFT [18] software. The 13 pairs of proteins and the 41 source organisms are listed in Tables 1 and 2, respectively.
The phylogenetic tree for these 41 reference bacterial genomes was built from the 16S rRNA sequences using the neighbor-joining module in the PHYLIP package. The 16S rRNA sequences were downloaded from the KEGG/GENES database [15] and the Ribosomal Database Project-II Release 9 [19].
Phylogenetic vectors and correlations
The original mirror tree method was proposed by Pazos and Valencia [6] to infer protein-protein interaction from correlated evolution. The hypothesis is that two proteins should have a higher chance of sharing a correlated evolutionary history if they interact with each other than if they do not. As the evolutionary history of a protein can be represented as a phylogenetic tree (let's call it a protein tree to distinguish it from the species tree), it makes sense to compare the two protein trees to reveal any correlation between their evolutionary histories. Instead of comparing two trees directly, which is a highly nontrivial task in terms of both algorithmic implementation and biological interpretation, the mirror tree method uses as a surrogate the distance matrices that store the genetic distances among the orthologs of the protein in a group of genomes. It is from these distance matrices that the protein trees are typically reconstructed using well-known algorithms such as Neighbor-Joining [20]. For two proteins A and B, the mirror tree method compares their distance matrices D_A and D_B by examining how the corresponding elements are correlated. Because the distance matrices are symmetric, only the elements in the upper (or lower) triangle of the matrices are needed to calculate the correlation, which is measured as the Pearson correlation coefficient ρ defined below:

ρ_AB = (1/n) Σ_k [D_A(k) − Ave(D_A)] [D_B(k) − Ave(D_B)] / sqrt(Var(D_A) · Var(D_B))   (1)

where the sum runs over the n corresponding elements D_A(k) and D_B(k) of the upper triangles, and Ave and Var represent the average and the variance of the elements in the upper triangle of a distance matrix, respectively. To apply this method for prediction, the Pearson correlation coefficient ρ is calculated for all distinct pairs of proteins, and these pairs are then ranked by ρ. With a threshold preset on ρ, the pairs with a higher correlation coefficient are predicted to be interacting pairs.
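To make the computation concrete, here is a minimal sketch (not the authors' code) of the mirror tree score in Python with NumPy; the 4 × 4 matrices are placeholders, whereas the matrices used in the study are 41 × 41 with 820 upper-triangle elements each.

```python
import numpy as np

def upper_triangle_vector(dist_matrix):
    """Flatten the upper triangle (excluding the diagonal) of a symmetric
    distance matrix into a 1-D 'phylogenetic vector'."""
    d = np.asarray(dist_matrix, dtype=float)
    i, j = np.triu_indices(d.shape[0], k=1)
    return d[i, j]

def mirror_tree_score(dist_a, dist_b):
    """Pearson correlation (Eq. 1) between corresponding upper-triangle
    elements of two protein distance matrices; higher values are taken
    to suggest co-evolution and hence possible interaction."""
    va = upper_triangle_vector(dist_a)
    vb = upper_triangle_vector(dist_b)
    return np.corrcoef(va, vb)[0, 1]

# Toy example with 4 "genomes"; values are placeholders only.
A = np.array([[0.0, 1.0, 2.0, 3.0],
              [1.0, 0.0, 2.0, 3.0],
              [2.0, 2.0, 0.0, 1.0],
              [3.0, 3.0, 1.0, 0.0]])
B = 1.1 * A  # a perfectly correlated partner, for illustration
print(mirror_tree_score(A, B))  # -> 1.0
```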
In two recent works [9,10], the measurement of correlations is refined by excluding the information about the phylogenetic relationships, in order to overcome the problem of a high rate of false positives reportedly present in the predictions made with the mirror tree method. The high rate of false positives is believed to be caused by a high correlation between non-interacting proteins, which can be attributed to some common background shared by the two corresponding distance matrices, because they all are derived from orthologous proteins in the same set of n source organisms. That is to say, these protein trees bear some resemblance to the species tree. Sato et al therefore propose to exclude the species tree resemblance from the distance matrices before comparing them. Specifically, a distance matrix R is computed for the 16S rRNA sequences of these 41 genomes, from which the species tree can be reconstructed. For convenience, all the rows in the upper triangle of this 41 × 41 distance matrix are concatenated, producing a vector of dimension 820, which we refer to as |u_16s>. Similarly, all the distance matrices for the protein trees can be transformed into a vector form of the same dimension 820, which is termed the phylogenetic vector. Let |v_i> (i = 1 to 26) be a phylogenetic vector for one of the 26 proteins in the dataset; then the resemblance of |v_i> to |u_16s> is measured by the projection <u_16s|v_i> (i.e., the inner product between |v_i> and |u_16s>), which is then subtracted from |v_i>, giving a residue vector |ε_i> defined as follows (with |u_16s> normalized to unit length):

|ε_i> = |v_i> − <u_16s|v_i> |u_16s>   (2)

Then, the Pearson correlation coefficient ρ_ij is calculated for any pair of vectors |ε_i> and |ε_j>:

ρ_ij = (1/n) Σ_k [ε_i(k) − Ave(ε_i)] [ε_j(k) − Ave(ε_j)] / sqrt(Var(ε_i) · Var(ε_j))   (3)

where ε_i(k) stands for the k-th component of vector |ε_i>, and Ave and Var represent the average and variance of the elements of a vector. It is shown in [9] that the specificity of predictions using the subtracted vectors is significantly improved, though at a cost of losing sensitivity. In [10], phylogenetic trees (protein trees) are first reconstructed from the multiple sequence alignments of orthologous proteins using the neighbor-joining algorithm implemented in ClustalW. The protein distance matrices are then derived from these trees by summing the lengths of the branches connecting each pair of orthologous proteins, which are represented as tree leaves. New distance matrices for the proteins are obtained by subtracting from each value the distance between the corresponding species in the 16S rRNA distance matrix, termed R. If we transform the matrices into vectors in the same way as used in [9], then the element-wise subtraction of the 16S rRNA distance matrix R from a distance matrix P for a protein is equivalent to the subtraction of two corresponding vectors,

|p'> = |p> − |r>   (2')

where |p> is the phylogenetic vector derived from the upper triangle of matrix P, and |r> is the vector from the upper triangle of matrix R. The difference between Eq(2) and Eq(2') can be seen more clearly as depicted geometrically in Figure 1. It is noted that the resulting vector |ε> derived from Eq(2) is guaranteed to be orthogonal to the phylogenetic tree orientation, whereas the resulting vector from Eq(2') may still have a non-zero projection along the phylogenetic tree orientation, which can become minimal when the two vectors are properly rescaled using a "molecular clock" to about the same length [10].
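Assuming the matrices have already been flattened into phylogenetic vectors as above, the background subtraction of Eq(2) and the residual correlation of Eq(3) could be sketched as follows; normalizing |u_16s> to unit length is an assumption made here so that the residual is exactly orthogonal to the background direction.

```python
import numpy as np

def residual_vector(v, u_16s):
    """Project the species-tree (16S rRNA) component out of a phylogenetic
    vector (Eq. 2); the result is orthogonal to u_16s."""
    u = np.asarray(u_16s, dtype=float)
    u = u / np.linalg.norm(u)            # assume a unit-length background vector
    v = np.asarray(v, dtype=float)
    return v - np.dot(u, v) * u

def residual_correlation(v_i, v_j, u_16s):
    """Pearson correlation between the two residual vectors (Eq. 3)."""
    e_i = residual_vector(v_i, u_16s)
    e_j = residual_vector(v_j, u_16s)
    return np.corrcoef(e_i, e_j)[0, 1]

# Tiny illustration with made-up 5-dimensional vectors.
u = np.array([1.0, 1.0, 1.0, 1.0, 1.0])
v1 = np.array([2.0, 2.1, 1.9, 2.0, 2.2])
v2 = np.array([1.0, 1.2, 0.9, 1.1, 1.3])
print(residual_correlation(v1, v2, u))
```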
It is also worth noting that although having phylogenetic vectors totally orthogonal to the phylogenetic tree orientation may be mathematically sound and attractive, it by no means necessarily leads to better learning and classification, as many other factors may affect the similarity between a pair of phylogenetic vectors, e.g., when there are horizontal gene transfers as pointed out and dealt with in [10].
Super-phylogenetic vectors via TreeSec method
We propose a novel method to utilize the distance matrices and the species tree for predicting protein interactions. There are two major changes to the mirror tree method. First, we augment the phylogenetic vectors with extra bits that encode the topological information of the protein tree with reference to the species tree. Second, in contrast to the unsupervised learning scheme of the mirror tree method, we adopt a supervised learning paradigm, specifically support vector machines [20][21][22], to further tap into the prior knowledge about interacting and non interacting protein pairs. As all proteins are already represented as phylogenetic vectors of the same dimension, it is convenient to concatenate the two vectors for any pair of proteins and use the concatenated vector to represent the pair. All pairs thus represented are then split into two subsets - one subset is used for training and the other for testing.
The key contribution of our method comes with the data representation, in which we augment the phylogenetic vector, used in both the original and Sato et al's modified version of the mirror tree method, with some organizational information about the elements in the distance matrix; such information reflects in a somewhat explicit way how the protein tree is reconstructed from the distance matrix, with reference to the species tree. In the mirror tree method, each element in the distance matrix is treated independently and equally. However, to a very large degree, it is the intra-matrix correlations among the elements that determine the protein tree, as manifested in distance-based phylogenetic tree reconstruction algorithms such as UPGMA and Neighbor-Joining [20]. For example, as shown in Figure 2, if (i, j) corresponds to two host genomes i and j closely positioned in the tree and (i', j') to a pair of distantly related genomes i' and j', then it makes sense for the two elements to be weighted differently when contributing to the correlation ρ_AB in Eq(1). When measuring matrix similarity, not only may elements in a matrix contribute differently, but the very fact that a tree can be reconstructed out of a distance matrix imparts a clear indication of some embedded "intra-matrix" correlations among matrix elements. Therefore it is reasonable to hypothesize that the matrix elements need to be regrouped in a certain way such that the hierarchical relationships among matrix elements can be unraveled and flattened to achieve the effect of a "weighted" Pearson correlation between two matrices. This is somewhat similar to the ideas in [11][12][13] where, to predict interaction specificity among paralogous proteins, the rows and columns of the distance matrices are reshuffled in order to find maximal similarity measured as inter-matrix correlation. As the species tree bestows a hierarchy of relationships among the host genomes, weighting the matrix elements in order to reflect the intra-matrix correlations can become very complicated. Here we propose a simple, novel way to account for the intra-matrix correlations.
Specifically, we use the species tree (reconstructed by neighbor-joining from a distance matrix of the 16S rRNA sequences of the 41 host genomes) to generate a hierarchical clustering of the genomes, which correspondingly gives a hierarchical clustering of the indices of the protein distance matrices. Given a tree with the root at the top, a "section" cut across the tree will give rise to clusters of leaves, i.e., leaves within the same branch at the section will belong to the same cluster. The number of clusters is equal to the number of branches at the section, and is determined by the tree and the height where the section is cut - the higher the cut, the fewer the clusters. For example, in Figure 3, section 1 generates 4 clusters: α, β, γ and δ. Given a protein distance matrix D, for each section, an intermediate distance matrix between all pairs of clusters is derived from the original distance matrix as follows:
D(C_α, C_β) = [ Σ_{i ∈ C_α} Σ_{j ∈ C_β} D(i, j) ] / (|C_α| × |C_β|)   (4)

where |C_α| and |C_β| are the sizes of the clusters C_α and C_β, respectively. That is, the distance between two clusters C_α and C_β is equal to the average distance between pairs of orthologous proteins, one from each cluster. This definition of the distance between two clusters is the same as that used in UPGMA during tree reconstruction [20]. The intermediate distance matrix gives a "snapshot" of the evolutionary history of these orthologous proteins at the time, marked as the tree height, where the section is cut. The snapshot - or rather, the intermediate matrix derived from it - carries the information about how the hypothetical ancient ancestors at that time are related to one another in terms of evolutionary distance, from the perspective of the protein being studied. Since the matrix is symmetric, only the upper (or lower) triangle is needed, which can be transformed into a vector form and concatenated to the original phylogenetic vector in the mirror tree method. The final representation of a protein is the original phylogenetic vector concatenated with all "snapshot" vectors, which is called the super-phylogenetic vector. Figure 3 gives a schematic illustration of the procedure for generating the super-phylogenetic vectors. The number of "snapshots" is a free parameter in our method. One simple way to remove this parameter is to use the full spectrum of sections, i.e., having a section made at each branching point in the tree. The difficulty with this full-spectrum approach is that the number of sections is large, and many of the neighboring sections are very similar to one another and therefore add little useful information to the phylogenetic vectors; rather, it may inflate the dimension of the resulting super-phylogenetic vectors up to 15,000 or higher, which is beyond the capacity of the classifier used in the study. Instead, a value of 6 is used as the number of snapshots taken in the experiments, which yields a dimension of 2386 for super-phylogenetic vector pairs versus 1640 for the original phylogenetic vector pairs. One refinement made while generating the super-phylogenetic vector is to first adjust the distances, as defined in the neighbor-joining algorithm, to remove the molecular clock constraint assumed by UPGMA:

d(i, j) = D(i, j) − (r_i + r_j)   (5)

and

r_i = Σ_k D(i, k) / (|L| − 2)   (6)

where |L| is the dimension of the matrix D. Another refinement is to use the same projection procedure introduced in Sato et al's modified mirror tree method (Eq. 2), only now |v_i> and |u_16s> are substituted with the super-phylogenetic vectors. It should be noted that since the background subtraction in the tol-mirrortree method and Sato et al's method also utilizes the phylogenetic tree, combining the TreeSec procedure and background subtraction may introduce some redundancy.
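As a rough illustration of the TreeSec construction (Eqs. 4-6), the sketch below uses SciPy's average-linkage clustering as a stand-in for the neighbor-joining species tree built with PHYLIP in the study; the toy matrices, the cut heights, and the choice to apply the neighbor-joining adjustment only when building the snapshot matrices are illustrative assumptions rather than the authors' exact procedure.

```python
import numpy as np
from itertools import combinations
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def nj_adjust(D):
    """Neighbor-joining style adjustment (Eqs. 5-6) that removes the
    molecular-clock assumption: d(i, j) = D(i, j) - (r_i + r_j)."""
    D = np.asarray(D, dtype=float)
    L = D.shape[0]
    r = D.sum(axis=1) / (L - 2)
    return D - r[:, None] - r[None, :]

def snapshot_vector(D, labels):
    """Flattened cluster-to-cluster distances (Eq. 4): the average distance
    between members of each pair of clusters defined by one tree section."""
    clusters = [np.where(labels == c)[0] for c in np.unique(labels)]
    values = []
    for a, b in combinations(range(len(clusters)), 2):
        block = D[np.ix_(clusters[a], clusters[b])]
        values.append(block.mean())
    return np.array(values)

def super_phylo_vector(D_protein, species_linkage, cut_heights):
    """Original phylogenetic vector concatenated with one snapshot vector
    per section cut through the (shared) species tree."""
    i, j = np.triu_indices(D_protein.shape[0], k=1)
    pieces = [np.asarray(D_protein, dtype=float)[i, j]]   # base vector (raw distances)
    D_adj = nj_adjust(D_protein)                          # adjusted distances for snapshots
    for h in cut_heights:
        labels = fcluster(species_linkage, t=h, criterion="distance")
        pieces.append(snapshot_vector(D_adj, labels))
    return np.concatenate(pieces)

# Toy species distance matrix (e.g., from 16S rRNA) and a placeholder protein matrix.
R = np.array([[0.0, 0.1, 0.4, 0.5],
              [0.1, 0.0, 0.4, 0.5],
              [0.4, 0.4, 0.0, 0.3],
              [0.5, 0.5, 0.3, 0.0]])
species_linkage = linkage(squareform(R), method="average")
D_protein = 2.0 * R
print(super_phylo_vector(D_protein, species_linkage, cut_heights=[0.2, 0.35]))
```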
The use of the species tree instead of individual protein trees for hierarchical clustering has a twofold effect. One effect is more theoretical; it is to reveal how individual protein trees (embedded in the distance matrices) would differ from the underlying species tree of the host genomes, in the same spirit of subtracting the common background as in [9,10]. The other effect is more pragmatic; it ensures that the super-phylogenetic vectors thus obtained have the same dimension for all proteins, and therefore can be readily used as input to the support vector machine.
SVM
The classifier used here is a support vector machine. As a powerful statistical learning method, support vector machines (SVMs), originally proposed by Vapnik [21,22], have recently been applied with remarkable success in bioinformatics problems, including remote protein homology detection, microarray gene expression analysis, and protein secondary structure prediction [24].
There are a couple of reasons to use SVMs. First, the data are already in vector form, particularly suitable as inputs for SVMs. Second, SVMs have been used to predict protein-protein interaction in previous works [25,26], though different properties of the proteins are used there. We plan to have a comprehensive study of using SVMs on data from different sources and, more importantly, of how to combine them for better prediction. Third, SVMs have some inherent advantages over other classifiers, including: 1. quadratic programming to avoid local minima, 2. geometric intuition, 3. lower Vapnik-Chervonenkis dimension leading to better generalization, and 4. amenability to small training samples, all of which contribute to their popularity as classifiers adopted in many applications.
The basic idea of SVMs is simple; it is to find a hyperplane that separates two classes of objects, represented as points in a vector space, with the maximum margin. Such a hyperplane ensures good generalization, and unseen data are then classified according to their location with respect to the hyperplane. The power of SVMs comes partly from the data representation, where an entity, e.g., a pair of proteins, is represented by a set of attributes. However, how those attributes contribute to distinguishing a true positive from a true negative may be quite complex. In other words, the boundary line between the two classes, if depicted in a vector space, can be highly nonlinear. The SVM method finds a nonlinear mapping that transforms the data from the original space, called the input space, into a higher-dimensional space, called the feature space, where the data can be linearly separable.
In general, the mapping can be quite complex and the dimension can be very (even infinitely) high in order for the mapped data to be linearly separable. The trick of SVMs is the use of kernel functions, which define the dot product between two points in the feature space, the only quantity needed to solve the quadratic programming problem for finding the maximum-margin hyperplane in the feature space. The use of kernel functions avoids explicit mapping to the high-dimensional feature space; high dimensionality often poses difficult problems for learning, such as over-fitting, and is thus termed the curse of dimensionality. The polynomial kernel and the Gaussian kernel are the two most commonly used generic kernels - the linear kernel is not really useful in most cases except when the data are linearly separable. For vectors x and y, the Gaussian RBF kernel is defined as

K(x, y) = exp[-(|x - y|^2 / c)],   (7)

and the polynomial kernel is defined as

K(x, y) = (s x·y + c)^d,   (8)

where c, s and d are parameters adjustable in the software package SVMLight [27]. Both kernels were tested with the default values for c, s and d, and the Gaussian kernel yielded the best results reported in this paper. Because the polynomial kernel performs significantly worse with the default setting, we also tested changing the polynomial degree d from the default value (d = 3). We found that the performance is quite sensitive to the degree. Details are given in the next section. Besides using the separation of training and testing data as a mechanism to alert us to overfitting, another mechanism built into SVMLight for avoiding overfitting is the use of a "soft" margin, i.e., allowing misclassification of some outlier training data points, together with a cap on the number of iterations to stop the optimization process during training even if the preset error rate is not reached. We used SVMLight's default settings for our experiments.

Figure 1. Geometric interpretation of subtracting phylogenetic background. Panel A corresponds to Eq(2), where the projection of a phylogenetic vector |v> onto the background vector |u_16s> is subtracted from |v>; the resulting vector |ε> is guaranteed to be orthogonal to |u_16s>. Panel B corresponds to Eq(2'), where the background vector |r> is subtracted from a phylogenetic vector |p>; the resulting vector |p'> may still have residual components along the orientation of |r>. Panel C shows that the resulting vector |p'> may become nearly orthogonal to |r> when the length of the vector |r> is properly rescaled.

Figure 2. Illustration of how the elements of the distance matrices correspond to distances between leaves on the phylogenetic trees. In matrix A, element (i, j) corresponds to a pair of neighboring genomes, whereas element (i', j') corresponds to a pair of genomes that are distantly positioned in the protein A tree, which can be reconstructed from matrix A using standard methods such as the neighbor-joining algorithm. Likewise, elements (i, j) and (i', j') in matrix B have a similar interpretation, corresponding to the respective pairs of genomes in the protein B tree. When comparing two proteins A and B by calculating the Pearson correlation coefficient between the two corresponding matrices, the elements (i, j) and (i', j') should be weighted according to their "importance" as dictated by the positions in the trees. The two protein trees shown here have different branch lengths but the same topology; in more complicated cases the tree topologies can also differ. In this study, however, the indices of the two matrices are mapped to the same tree, the species tree. The justification and effect of using the species tree are explained in the text.
It is worth noting that, overall, our method can be viewed as a hybrid that employs in tandem both an explicit mapping, from phylogenetic vectors to super-phylogenetic vectors, and the use of a generic kernel.
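The classification step itself is standard. The study used the SVMLight package; purely as an illustration, the sketch below uses scikit-learn's SVC with a Gaussian (RBF) kernel as a stand-in, with random placeholder vectors in place of the concatenated super-phylogenetic vector pairs and a class skew only loosely modeled on the real data.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_pairs, dim = 60, 200                 # real pairs are 2386-dimensional
X = rng.normal(size=(n_pairs, dim))    # placeholder concatenated vector pairs
y = np.array([1] * 6 + [-1] * 54)      # few interacting pairs, many non-interacting

clf = SVC(kernel="rbf", gamma="scale", C=1.0)   # Gaussian (RBF) kernel
clf.fit(X, y)

# decision_function returns a real-valued score per pair; ranking unseen
# pairs by this score is what the ROC evaluation in the paper is based on.
print(clf.decision_function(X[:5]))
```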
Results and discussion
We test our TreeSec method in a series of leave-one-out cross-validation experiments on the data set described above. To prepare an experiment, one of the 13 interacting pairs is selected and reserved as the positive testing example, and 48 non interacting pairs that contain one protein from the reserved pair are used as the negative testing examples, while the remaining pairs are used for training. This is repeated for each of the 13 interacting pairs, giving 13 leave-one-out cross-validation experiments, and the average performance is reported.
For each experiment, the training examples are taken as input to train a support vector machine. The implementation of the support vector machine is adopted from the SVMLight package [27]. The two commonly used kernel functions - polynomial and RBF - were tested with the default parameter settings, and the Gaussian RBF kernel function scored the best performance, which is reported in Table 3. With the SVM trained, the 49 testing examples are then input to it for prediction. A real-valued score between -1 and +1 is assigned by the SVM to each testing example. Ideally, a positive score indicates a predicted positive, whereas a negative score indicates a predicted negative. This implies a perfect cutoff score at zero. In practice, the cutoff score may be set at a different value, other than zero. Indeed, its actual value does not matter, as long as the predicted positives (i.e., those with a score higher than the cutoff) are true positives and the predicted negatives (i.e., those with a score lower than the cutoff) are true negatives. To evaluate the performance, we use the receiver operating characteristic (ROC) score, which is the normalized area under a curve that plots the number of true positives against the number of false positives as a moving cutoff score scans from +1 to -1 [28]. The ROC score is 1 for a perfect performance, whereas a random predictor, which will uniformly mix up positives and negatives, is expected to get a ROC score of 0.5. Some ROC curves for our experiments are shown in Figure 4.
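The ROC score described above can be computed directly from the ranked scores. The following is a small, self-contained sketch of that computation (equivalent in spirit to standard AUC routines), not the evaluation code actually used in the paper.

```python
import numpy as np

def roc_score(scores, labels):
    """Normalized area under the curve of true positives vs. false positives
    as the cutoff sweeps from high to low scores (1.0 = perfect, ~0.5 = random)."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels)
    order = np.argsort(-scores)              # decreasing score
    is_pos = labels[order] == 1
    tps = np.cumsum(is_pos)                  # true positives above each cutoff
    fps = np.cumsum(~is_pos)                 # false positives above each cutoff
    tpr = np.concatenate(([0], tps)) / max(tps[-1], 1)
    fpr = np.concatenate(([0], fps)) / max(fps[-1], 1)
    # trapezoidal area under the normalized curve
    return float(np.sum(np.diff(fpr) * (tpr[1:] + tpr[:-1]) / 2.0))

# One positive ranked above three negatives gives a perfect score of 1.0.
print(roc_score([0.9, 0.1, -0.2, -0.5], [1, -1, -1, -1]))
```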
The ROC scores of the mirror tree method and our TreeSec method, with a few variations, are reported in Table 3.
Since the mirror tree method is an unsupervised learning method, to be fair, we first use TreeSec in an unsupervised learning manner and compare the two. In this case, since no training is necessary, for each leave-one-out experiment only the testing examples are ranked by their Pearson correlation coefficients as if they were scores output from a classifier. The mirror tree method using the phylogenetic vectors prepared via Sato et al's procedure receives a ROC score of 0.6587. A slightly higher ROC score (0.6731) is obtained when everything is kept the same except for substituting the phylogenetic vectors with the super-phylogenetic vectors prepared via the TreeSec method.
The advantage of the TreeSec method becomes more obvious when it is used in supervised learning. In this case, TreeSec and MirrorTree are compared for their capability of representing proteins in a way that is more conducive to classification. Once proteins are represented as super-phylogenetic vectors via TreeSec or as phylogenetic vectors via MirrorTree, they are fed into the same classifier, in this case an SVM. As we see in Table 3, while the performance of the phylogenetic vectors (MirrorTree) also improves (ROC score 0.7212 with a degree 2 polynomial kernel), the super-phylogenetic vectors prepared by TreeSec obtain a significantly better ROC score (0.8446 with a default-setting Gaussian kernel). Because of the significantly worse performance of the polynomial kernel with the default degree d = 3, we tested different values of the degree and found that the performance is significantly better for even values of d than for odd values. This phenomenon may be an indication of the parity of the hyperplane in the feature space: symmetric with respect to changing the sign of the coordinates. Overall, in Table 3 better performance is noted for "TreeSec × 10", where the "snapshots" are taken at more widely spaced intervals obtained by multiplying the tree heights by a factor of 10. Because the distances obtained from the PHYLIP software are typically small fractional numbers, dividing the distances at the "cutting" points tends to yield rounding errors, and rescaling the distances in the tree to bigger values proved helpful in avoiding such a problem. In Table 4, the effect of the re-scaling factor on the learning performance is given, showing that the Gaussian kernel is more affected by the rescaling than the polynomial kernel.
The dimension of the super-phylogenetic vectors from TreeSec is obviously higher than that of the phylogenetic vectors in MirrorTree, since the former is derived by concatenating extra bits of information to the latter. Although this may raise concerns with a judicious reader about the fairness for comparing the two approaches if they have different sizes of data, it should not be a problem in our case, because we use the same amount of input data as the Sato et al's approach -the same distance matrices with the same size for proteins and the same distance matrix for species (based on 16S rRNA sequences). The extra bits of information are not really extra; they are the result of how we unravel the information embedded in the input data. In a sense, our method is a hybrid of combining the explicit mapping (to higher dimension) and the use of kernels, which may explain why our method bodes well with the learning task. Nonetheless, care should be taken to not let the increase of dimension go unchecked, as redundancy may arise and lead to overfitting and bad generalization. That is part of the reason that only "snapshots" of evolutionary history are incorporated.
It is not surprising that the neighbor-joining distance adjustment is essential; without it, as shown in Table 3, the performance decreases significantly (0.69). To verify that the better performance indeed arises from incorporating the intra-matrix correlations, we ran the same experiments on data prepared using a random species tree; as Table 3 shows, the ROC scores are consistently low, at around 0.5, when a random tree is used.
To further examine the performance, ROC curves are shown in Figure 4 for the ROC scores reported in Table 3, both for the unsupervised learning based on the Pearson correlation coefficient and for the supervised learning with a Gaussian kernel SVM. With the Y-axis giving the true positives and the X-axis the false positives, a higher curve means that more true positives are identified at the cost of a given number of false positives. Consistently, "TreeSec × 10" corresponds to the top curve overall. Also remarkable is the steep slope of the two ROC curves corresponding to the unsupervised learning (CC:MirrorTree and CC:TreeSec) at small false positive rates (X < 0.1). This explains the high specificity of these unsupervised predictions based on the correlation coefficient, and is consistent with what is reported in [9,10]. Moving to the right (i.e., when more false positives are made), these two curves quickly lose their upward momentum (i.e., identify fewer true positives), an indication of low sensitivity. Therefore, the supervised learning using the SVMs in these experiments offers a better balance between sensitivity and specificity.
It is worth noting that the highly skewed learning problem is likely a reflection of situations in the real world, i.e., there are far more negatives than positives. In our case, given n proteins that each uniquely interact with only one other member, there are only n/2 positive pairings among the (n^2 - n)/2 possible pairings of these n proteins. That is, the interaction network, with nodes representing the proteins and edges representing the interactions, is quite sparse, but our method is still applicable when there are more edges. Because every possible pair of nodes is assigned a score in our method, predictions can be made by going down a list of all pairs ranked by their scores in decreasing order. So, regardless of the number of actual edges in the network (sparse or not), the method works, and perfectly so as long as the true interacting pairs are ranked higher than non interacting pairs in the prediction. Indeed, this scheme is also widely used in predicting protein interaction networks in general, both in an unsupervised paradigm such as the original mirror tree method and in a supervised learning paradigm.
Conclusion
To summarize, in this work we developed a novel, simple method to explore the intra-matrix correlational information embedded in the distance matrices and incorporate such information into a data representation that is conducive to supervised learning. All three methods recognize the importance of the phylogenetic tree: both Sato et al's projection method [9] and Pazos et al's tol-mirrortree [10] try to "subtract" from the similarity (measured as correlation coefficients) the effect due to speciation rather than to interaction pressures, whereas our method seeks to "unravel" the intrinsic structure of the distance matrices using the species tree as a guide and then "concatenate" these snapshots of the evolutionary history to the current view (i.e., the original ortholog distance matrices) of the proteins. That is, the main difference between subtracting and adding is that the former is more appropriate for removing background noise so as to reduce false positives, whereas the latter is more appropriate for disentangling intra-matrix correlations so as to aid a supervised learner.
As future work, we will study how the reconciliation between protein trees and species tree can be more explicitly represented and how to associate selection pressure imposed by the interaction to specific evolutionary events, e.g., horizontal gene transfers.
|
v3-fos-license
|
2022-07-25T07:03:37.290Z
|
2022-07-24T00:00:00.000
|
251018958
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": null,
"oa_url": null,
"pdf_hash": "7285be194be171969659885c842028b4c903eea7",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46735",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"sha1": "d9589862823874907bdcfa77ff516f773ee92aa3",
"year": 2022
}
|
pes2o/s2orc
|
Reemergence of pathogenic, autoantibody-producing B cell clones in myasthenia gravis following B cell depletion therapy
Myasthenia gravis (MG) is an autoantibody-mediated autoimmune disorder of the neuromuscular junction. A small subset of patients (<10%) with MG, have autoantibodies targeting muscle-specific tyrosine kinase (MuSK). MuSK MG patients respond well to CD20-mediated B cell depletion therapy (BCDT); most achieve complete stable remission. However, relapse often occurs. To further understand the immunomechanisms underlying relapse, we studied autoantibody-producing B cells over the course of BCDT. We developed a fluorescently labeled antigen to enrich for MuSK-specific B cells, which was validated with a novel Nalm6 cell line engineered to express a human MuSK-specific B cell receptor. B cells (≅ 2.6 million) from 12 different samples collected from nine MuSK MG patients were screened for MuSK specificity. We successfully isolated two MuSK-specific IgG4 subclass-expressing plasmablasts from two of these patients, who were experiencing a relapse after a BCDT-induced remission. Human recombinant MuSK mAbs were then generated to validate binding specificity and characterize their molecular properties. Both mAbs were strong MuSK binders, they recognized the Ig1-like domain of MuSK, and showed pathogenic capacity when tested in an acetylcholine receptor (AChR) clustering assay. The presence of persistent clonal relatives of these MuSK-specific B cell clones was investigated through B cell receptor repertoire tracing of 63,977 unique clones derived from longitudinal samples collected from these two patients. Clonal variants were detected at multiple timepoints spanning more than five years and reemerged after BCDT-mediated remission, predating disease relapse by several months. These findings demonstrate that a reservoir of rare pathogenic MuSK autoantibody-expressing B cell clones survive BCDT and reemerge into circulation prior to manifestation of clinical relapse. Overall, this study provides both a mechanistic understanding of MuSK MG relapse and a valuable candidate biomarker for relapse prediction. Supplementary Information The online version contains supplementary material available at 10.1186/s40478-022-01454-0.
Reemergence of pathogenic, autoantibody-producing B cell clones in myasthenia gravis
Supplement Figure 9. Distance-to-nearest plots used to identify the threshold required for assigning clonal members in the BCR sequencing data.
Supplement Table 1. Study subject clinical, laboratory, and demographic data.
Supplement Table 2
Radioimmunoassay-based testing of the 2E6 and 6C6 mAbs.
Supplement Table 3
Characteristics and analysis status of serial samples from patients MuSK MG-1 and MuSK MG-4.
Supplement Table 4
Counts of reconstructed V(D)J sequences by isotype and clones from sequencing of bulk BCR repertoires and 10x.
Supplement Figure 2
Supplement Figure 2. Flow cytometry gating strategy for isolation of MuSK-specific B cells. A representative example of the gating strategy featuring the fluorescently labeled MuSK ectodomain reagent is shown. After B cell enrichment using negative selection beads, single cells were gated using the forward (FSC) and side (SSC) scatter. Dead cells were excluded, then CD3neg CD14neg CD19+ IgDneg CD27+ IgMneg MuSK-reagent+ cells were single-cell sorted for subsequent B cell culture and expansion.

Supplement Figure 3. Cell-based assay contour plots showing dilution series of mAbs 2E6 and 6C6. Binding to MuSK was tested over a series of ten two-fold dilutions of each mAb ranging from 10-0.02 µg/ml. The x-axis represents GFP fluorescence intensity and, consequently, the fraction of HEK cells transfected with MuSK. The y-axis represents Alexa Fluor 647 fluorescence intensity, which corresponds to the secondary anti-human IgG Fc antibody binding and, consequently, primary antibody binding to MuSK. Hence, transfected cells are located in the right quadrants and cells with MuSK antibody binding in the upper quadrants. The MuSK-specific human mAb MuSK1A was used as a positive control and the AChR-specific human mAb 637 as a negative control.
Supplement Figure 4. Staining murine neuromuscular junctions with MuSK mAbs 2E6 and 6C6.
Immunofluorescent staining of mouse neuromuscular junctions (NMJ). Tibialis anterior muscles were cut longitudinally in cryosections and fixed with PFA. AChRs were stained with Alexa Fluor 648 αbungarotoxin (shown in red). The mAb 637 was used a positive control, to identify the location of the AChR. Binding of mAbs was detected with goat anti-human IgG Alexa Fluor 488 (IgG, shown in green).
Supplement Figure 5. Binding properties of unmutated common ancestors from MuSK mAbs 2E6
and 6C6. Representative cell-based assay (CBA) contour plots are shown (left) for the unmutated common ancestors (UCA) of 2E6 and 6C6. The x-axis represents GFP fluorescence intensity and, consequently, the fraction of HEK cells transfected with MuSK. The y-axis represents Alexa Fluor 647 fluorescence intensity, which corresponds to secondary anti-human IgG Fc antibody binding and, consequently, primary antibody binding to MuSK. Hence, transfected cells are located in the right quadrants and cells with MuSK antibody binding in the upper quadrants. The plots show testing with a mAb concentrations of 10 and 1.25 µg/ml. Binding to MuSK was tested over a series of ten two-fold dilutions of each mAb ranging from 10-0.02 µg/ml (right). The MuSK1A mAb was used as the positive control and AChR-specific mAb 637 as the negative control. The ∆MFI was calculated by subtracting the signal acquired by testing non-transfected cells from the signal acquired by testing transfected cells. Each data point represents the mean value from three independent experiments. Symbols represent means and error bars SDs. Values greater than the mean + 4SD of the negative control mAb at 1.25 µg/ml (indicated by the horizontal dotted line) were considered positive. Supplement Figure 9. Distance-to-nearest plots used to identify the threshold required for assigning clonal members in the BCR sequencing data. Distance-to-nearest plots used to identify a common threshold to use for hierarchical clustering-based grouping of V(D)J sequences from high throughout sequencing of BCR repertoires. Red dashed lines correspond to the threshold used for assigning clonal clusters. Grey bars represent the distribution of intra-subject distance-to-nearest distances. Table 3. Characteristics and analysis status of serial samples from patient MuSK MG-1 and MuSK MG-4. These longitudinally collected samples were used for investigating whether clones or clonal variants of mAbs 6C6 and 2E6 were present. Antibody titer was either measured by CBA in our laboratory or at the Mayo Clinic Laboratory. Samples of MuSK MG-4 measured at Mayo Clinic Laboratory are indicated by an (*); the unit is nmol/L. The cut off for negativity for samples measured at Mayo Clinic Laboratory is ≤ 0.02 nmol/L. The autoantibody titers of MuSK MG-1 measured by our CBA was performed using 10 two-fold dilutions ranging from 1:20 to 1:10240. TOC = time of collection; Pred = Prednisone
|
v3-fos-license
|
2022-07-23T06:17:25.853Z
|
2022-07-21T00:00:00.000
|
250953036
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": null,
"oa_url": null,
"pdf_hash": "0b74d00b31bbcae3ddbaac417eeb9ecc86e4cd6a",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46736",
"s2fieldsofstudy": [
"Chemistry",
"Environmental Science"
],
"sha1": "ef8ea39440977387db267d81a2344f138f2c452b",
"year": 2022
}
|
pes2o/s2orc
|
Carboxylic acids derived from triacylglycerols that contribute to the increase in acid value during the thermal oxidation of oils
Acid value (AV), is a widely used indicator of oil degradation that, by definition, measures the free fatty acids formed via the hydrolysis of triacyclglycerols. However, based on observations made in previous studies, we hypothesized that the oxidation of triacylglycerols leads to the formation of carboxylic acids with a glycerol backbone which are also calculated as AV. In this study, we aimed to identify such carboxylic acids and prove the above hypothesis. Heating a canola oil at 180 °C for 6 h without the addition of water resulted in an increase in AV from 0.054 to 0.241. However, the contribution of free fatty acids to this increase in AV was minimal; free fatty acid-derived AV before and after heating was 0.020 and 0.023, respectively. Then, via mass spectrometric analyses, we identified two 8-carboxy-octanoyl (azelaoyl) -triacylglycerols (i.e., dioleoyl-azelaoyl-glycerol and oleoyl-linoleoyl-azelaoyl-glycerol) in the heated oil. Azelaoyl-triacylglycerols-derived AV before and after heating the oil was 0.008 and 0.109, respectively, demonstrating that azelaoyl-triacylglycerols contribute to AV. Such an increase in AV by azelaoyl-triacylglycerols was also observed in an oil used to deep-fry potatoes (i.e., an oil with a relatively high water content). These results suggest that AV is also an indicator of the thermal oxidation of triacylglycerols.
A previous study reported an increase in the AV of high-oleic safflower oil after heating the oil at 180 °C and argued that this was due to the formation of free fatty acids 9 . However, as we discuss in later sections, simply heating an oil without the addition of water presumably does not provide enough water to significantly hydrolyze triacylglycerols. Furthermore, in the same study, the increase in AV caused by heating was suppressed by lowering atmospheric oxygen concentrations. Hence, another possible explanation for the increase in AV is that the triacylglycerols in the oil were oxidized, leading to the formation of carboxylic acids which were calculated as AV. However, to the best of our knowledge, no study has verified that the oxidation of triacylglycerols induces an increase in AV. Moreover, compounds other than free fatty acids that contribute to AV have not been identified. Based on the above background, we hypothesized that the oxidation of triacylglycerols leads to the formation of carboxylic acids with a glycerol backbone (Fig. 1), and these carboxylic acids can be calculated as AV. Hence, the aim of this study was to prove this hypothesis and to identify and quantitate the carboxylic acids that contribute to the heating-induced increase in AV. Consequently, we first confirmed that heating a vegetable oil (i.e., canola oil) without the addition of water results in a significant increase in AV. Then, carboxylic acids with a glycerol backbone that contributed to this increase in AV were identified by ultra-high performance liquid chromatography time-of-flight mass spectrometry (UPLC-Tof/MS). Quantification of the carboxylic acids was performed by gas chromatography-mass spectrometry (GC-MS). The identified carboxylic acids were also found in an oil used to deep-fry potatoes (i.e., an oil with a relatively high water content), suggesting their further contribution to AV. These results suggested that AV is not only an indicator of triacylglycerol hydrolysis but also an indicator of triacylglycerol oxidation. These findings should contribute to improving the quality of various oils and foods.
Results and discussion
Confirmation that heating an oil without the addition of water leads to an increase in AV. As described in the introduction, AV is an important indicator for evaluating the quality of vegetable oils. AV, by definition, measures the amount of free fatty acids formed by the hydrolysis of triacylglycerols. Nevertheless, several studies have observed an increase in AV upon simply heating vegetable oils without the addition of water [8][9][10][11][12][13] . The reasons underlying this phenomenon have not been evaluated in previous studies. In this study, we hypothesized that the oxidation of triacylglycerols leads to the formation of carboxylic acids that possess a glycerol backbone (Fig. 1), and that these carboxylic acids are calculated as AV.
Firstly, we aimed to confirm that the heating of a vegetable oil induces an increase in AV even when no water is added to the oil. Fresh canola oil (10 g) was heated in a 51 mm stainless steel dish at 180 °C for 6 h. Heating the oil caused a significant increase in AV from 0.054 to 0.241. Other parameters (color and viscosity) were also increased ( Table 1). These findings were in good agreement with the results of previous studies [8][9][10][11][12][13] .
As it was confirmed that heating an oil without the addition of water induces an increase in AV, we then evaluated the contribution of free fatty acids to this increase in AV. Free fatty acids contained in the heated oil were fluorescently labeled with the 9-anthryldiazomethane (ADAM) reagent and quantified by high-performance liquid chromatography fluorescence detection (HPLC-FLD). As a result, heating the oil for 6 h at 180 °C induced only a slight increase in the amount of free fatty acids (palmitic acid, stearic acid, oleic acid, linoleic acid, and linolenic acid) contained in the oil. This slight increase in free fatty acids may be due to the hydrolysis of triacylglycerols by the small amount of water that remains in the oil even after heating (80-300 ppm; data not shown). We then calculated how this slight increase in free fatty acids affected AV. As such, "free fatty acid-derived AV" was calculated based on the above quantification of free fatty acids. The following formula was used (where 56.11 corresponds to the molecular weight of KOH (g/mol)):

Free fatty acid-derived AV = total free fatty acid concentration (nmol/g oil) × 56.11/10^6.

Free fatty acid-derived AV before and after heating the canola oil was 0.020 and 0.023, respectively, demonstrating an increase of only 0.003. Meanwhile, as mentioned above, the actual AV (determined by titration) increased from 0.054 to 0.241, demonstrating an increase of 0.187. Hence, these results strongly suggest that carboxylic acids other than free fatty acids were formed during the heating of the oil, and these carboxylic acids were measured as AV.
Identification of carboxylic acids other than free fatty acids that contributed to AV. The above results suggested that carboxylic acids other than free fatty acids were formed during the heating of oils. We anticipated that these carboxylic acids would also react with the ADAM reagent, and their structures could be identified by analyzing the resultant ADAM derivatives. Hence, the above oil, heated for 6 h without addition of water, was derivatized with the ADAM reagent and analyzed with mass spectrometry. Collision induced dissociation of ADAM derivatives is known to yield a characteristic product ion of m/z 191, corresponding to an anthryl group [14][15][16] . Thus, we attempted to identify carboxylic acids in the ADAM-derivatized oil by searching for the precursor ions that afforded the fragment ion of m/z 191. This search was conducted using the UPLC-Tof/ MS E mode which simultaneously obtains the MS spectrum and product ion spectrum without selection of the precursor ion (i.e., data-independent MS/MS analysis; Fig. 2) [17][18][19] .
The extracted ion chromatogram of m/z 191.0861 (calcd. for C 15 H 11 , 191.0861; anthryl group) in the product ion chromatogram demonstrated two major peaks at the retention times of 12.56 min and 13.22 min (peak I and peak II; Fig. 2c Fig. 2e), respectively. These peaks were hardly detected before heating (data not shown), suggesting that these ions were the ADAM derivatives of carboxylic acids formed during the heating of the oil. In addition to the above data-independent MS/MS analysis, product ion scan with the selection of precursor ions (m/z 1001.6832 and m/z 1003.6985) was conducted (Supplementary Fig. S1 online). The anthryl group-characteristic fragment ion (m/z 191) was observed in this product ion spectrum, confirming that m/z 1001.6832 and m/z 1003.6985 were ADAM derivatives.
Under the assumption that m/z 1001.6832 and m/z 1003.6985 each possess only one anthryl group (C 15 H 11 ) in their structures, we considered that the molecular formula of each ion before ADAM derivatization was C 48 Since C 9 H 16 O 4 must possess a free carboxyl group that can react with the ADAM reagent, we assumed that it was nonanedioic (azelaic) acid, a saturated 9-carbon dicarboxylic acid. Similarly, the product ion scan of m/z 787.6088 afforded product ions corresponding to oleic acid, linoleic acid, and nonanedioic acid (Fig. 4b). From these results, we identified 8-carboxy-octanoyl (azelaoyl)-triacylglycerols, namely, dioleoyl-azelaoyl-glycerol (C 48 H 86 O 8 ) and oleoyl-linoleoyl-azelaoyl-glycerol (C 48 H 84 O 8 ), as the main carboxylic acids contained in the heated canola oil that were not free fatty acids.
Quantification of azelaoyl-triacylglycerols in the heated oil. We confirmed above that the heated canola oil contained dioleoyl-azelaoyl-glycerol and oleoyl-linoleoyl-azelaoyl-glycerol. To quantify these azelaoyl-triacylglycerols, the heated oil was subjected to methyl esterification, and dimethyl nonanedioate (azelate), the expected product, was analyzed by GC-MS. As a result, dimethyl azelate was clearly detected in the heated oil (Supplementary Fig. S2 online). Other dimethyl dicarboxylates were either detected only in trace amounts or were not detected. The concentration of dimethyl azelate increased significantly from 1.4×10^2 to 1.9×10^3 nmol/g upon heating the oil for 6 h. We then calculated how this significant increase in azelaoyl-triacylglycerols affected AV. Hence, "azelaoyl-triacylglycerols-derived AV" was calculated based on the above quantification of dimethyl azelate, using the following formula:

Azelaoyl-triacylglycerols-derived AV = dimethyl azelate concentration (nmol/g oil) × 56.11/10^6.

Azelaoyl-triacylglycerols-derived AV before and after heating the oil for 6 h was 0.008 and 0.109, respectively, demonstrating an increase of 0.101. Considering that the actual AV increased from 0.054 to 0.241 (an increase of 0.187), the increase in azelaoyl-triacylglycerols-derived AV (0.101) accounted for 54.0% of the increase in the actual AV. Meanwhile, as mentioned above, free fatty acid-derived AV increased by only 0.003 (corresponding to 1.6% of the increase in actual AV). Hence, the contribution of azelaoyl-triacylglycerols was considerably larger than that of free fatty acids. To the best of our knowledge, this is the first study to confirm that a compound other than a free fatty acid contributes to AV.
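For readers who want to reproduce the arithmetic, the conversion from a measured concentration to an AV contribution is a single multiplication. The helper below simply encodes the 56.11/10^6 factor from the formulas above (one mole of KOH per mole of free carboxyl group); the example reproduces the heated-oil value up to rounding of the reported concentration.

```python
def av_contribution(conc_nmol_per_g, mw_koh=56.11):
    """Contribution to acid value (mg KOH per g oil) of a carboxylic acid
    present at conc_nmol_per_g (nmol per g oil): AV = conc * 56.11 / 1e6."""
    return conc_nmol_per_g * mw_koh / 1e6

# Dimethyl azelate after 6 h of heating: ~1.9e3 nmol/g -> ~0.107 mg KOH/g,
# consistent with the reported azelaoyl-triacylglycerols-derived AV of 0.109.
print(av_contribution(1.9e3))
```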
Azelaoyl-triacylglycerols and their contribution to AV in an oil used to deep-fry potatoes.
The water content of oils is known to increase during the cooking of foods (e.g., deep-frying) as the water contained in foods is transferred to the oil 26 . When an oil is heated in the presence of such water, triacylglycerols hydrolyze into free fatty acids, leading to an increase in AV. Many studies in fact have demonstrated that the AV of oils increases when oils are used to cook food (e.g., deep-fry potatoes) 11,13,[27][28][29][30][31] . In addition to such an increase in free fatty acids, the results of the current study suggested that azelaoyl-triacylglycerols may also contribute to AV. Hence, we evaluated the contribution of azelaoyl-triacylglycerols to the AV of an oil used to deep-fry potatoes (i.e., an oil with a relatively high water content). As such, a deep-frying test using frozen French fries was conducted. The total heating time of the oil was 270 min and the total frying time was 126 min. During frying, the average oil temperature was about 170 °C, and the water content was about 1200-2500 ppm. The AV before and after the test, determined by titration, was 0.054 and 0.284, respectively. This increase was comparable to that observed in previous studies where oils were used to cook foods 11,13,[28][29][30] . Based on the quantification of free fatty acids and dimethyl dicarboxylates, free fatty acid-derived AV and azelaoyl-triacylglycerols-derived AV were calculated. As a result of deep-frying French fries, the free fatty acid-derived AV increased from 0.020 to 0.058, demonstrating an increase of 0.038. Meanwhile, azelaoyl-triacylglycerols-derived AV increased from 0.008 to 0.078, demonstrating an increase of 0.070. Considering that the actual AV (determined by titration) increased by 0.230 (from 0.054 to 0.284), free fatty acids and azelaoyl-triacylglycerols accounted for 16.5% and 30.4% of the increase in AV, respectively. These results indicate that the formation of azelaoyl-triacylglycerols can also occur during the cooking of foods (i.e., in oils with a relatively high water content), and both free fatty acids and azelaoyl-triacylglycerols contribute to AV. Therefore, AV is not only an indicator of free fatty acids formed by the hydrolysis of triacylglycerols (as defined in ISO 660) but also an indicator of azelaoyl-triacylglycerols formed via the oxidation of triacylglycerols. Moreover, the progress of oil deterioration may be evaluated with higher accuracy by measuring free fatty acids and azelaoyl-triacylglycerols in addition to AV.
Conclusion
This study identified azelaoyl-triacylglycerols as compounds that contribute to AV. Azelaoyl-triacylglycerols were found to contribute to the AV of an oil that was heated without addition of water and an oil that was used to cook French fries. Although AV, by definition, is an indicator of triacylglycerol hydrolysis, the results of the current study suggest that AV is also an indicator of azelaoyl-triacylglycerols formed by the oxidation of triacylglycerols. Further research on AV, based on these perspectives, should lead to a reduction in food loss by extending the life cycle of frying oils.
Materials and methods
Materials. Canola oil was manufactured by J-OIL MILLS, Inc. (Tokyo, Japan; Supplementary Table S1).
Aquamicron AX and Aquamicron CXU were purchased from Mitsubishi Chemical Corporation (Tokyo, Japan). ADAM reagent was obtained from Funakoshi Co., Ltd. (Tokyo, Japan). Oleic acid, dimethyl nonanedioate, and methyl heptadecanoate were purchased from FUJIFILM Wako Pure Chemical Corporation (Osaka, Japan). Undecanoic acid was purchased from Tokyo Chemical Industry Co., Ltd. (Tokyo, Japan). Azelaic acid was obtained from Sigma-Aldrich (St. Louis, U.S.A.). Leucine-enkephalin was obtained from Waters (Milford, MA, U.S.A.). Frozen French fries were purchased from a market in Kanagawa, Japan. Other reagents were of the highest grade available.
Analysis of canola oil heated without the addition of water. Fresh canola oil (10 g) was placed in a 51 mm stainless steel dish and heated on a digital heat block (Dry Thermo Unit DTU-2C, TAITEC, Tokyo, Japan) at 180 °C for 6 h. AV was measured according to the official method of the American Oil Chemists' Society (AOCS, Cd 3d-63) 32 . Moisture content in the oil was measured by Karl Fischer titration with Aquamicron reagents using a CA-310 Moisture meter (Mitsubishi Chemical Analytech, Tokyo, Japan). The color of the oil was measured using a Lovibond PFXi-880/L (The Tintometer Limited, Amesbury, England) according to the official method of the AOCS (Cc 13e-92) 33 . Viscosity was measured by placing an oil sample (1.2 mL) between the cone and plate of a VISCOMETER TV-25 (Toki Sangyo, Tokyo, Japan). The measurement was started at 30 °C, and data was recorded every 30 seconds until 2 min. The average data was used as the viscosity. Free fatty acids were analyzed by ADAM derivatization. Quantification was performed according to previous studies 14,34,35 and the manufacturer's instructions as follows. Oil (200 mg) and undecanoic acid (internal standard, 0.3 mg) were dissolved in 10 mL of acetone. ADAM reagent (1 mg/mL in acetone, 100 µL) was added to 50 µL of this acetone solution, and the mixture was allowed to react for 16 h at room temperature in the dark. After the reaction, the solution was diluted 10-fold with acetone. The reaction mixture (5 µL) was analyzed by HPLC-FLD using an LC-20 series HPLC system equipped with a fluorescence detector (FLD-20A, Shimadzu, Kyoto, Japan). Separation was carried out on a Lichrosorb RP-8 column (4.0 mm I.D., 250 mm, 5.0 um, Merck, Darmstadt, Germany) at 40 °C. The flow rate of the mobile phase (A, water; B, acetonitrile) was set to 1.0 mL/ min. The gradient was as follows: 60% of mobile phase B for 15 min, 60-90% of mobile phase B between 15 and 30 min. The excitation and emission wavelengths were set at 365 nm and 412 nm, respectively. A calibration curve was prepared using the area ratio between oleic acid and the internal standard 35 . The calibration curve was used to quantitate the concentration of each free fatty acid (palmitic acid, stearic acid, oleic acid, linoleic acid, and linolenic acid).
Analysis of carboxylic acids other than free fatty acids contained in the heated canola oil. Carboxylic acids produced during the heating of canola oil were identified with UPLC-Tof/MS using an ACQUITY UPLC H-class system equipped with a Zevo G2-S qTOF/MS (Waters, Milford, MA, U.S.A.). Heated canola oil was derivatized with the ADAM reagent as described above. To search for candidate compounds containing the anthryl group, the reaction mixture (1 μL) was analyzed using the UPLC-Tof/MS E mode (Condition 1 in Supplementary Table S2). The MS spectrum and product ion spectrum were obtained simultaneously without the selection of the precursor ion (i.e., data-independent MS/MS analysis) [17][18][19] . Product ion scan analysis with the selection of precursor ions was performed using Condition 2 described in Supplementary Table S2. Next, to identify the chemical structures of the candidate compounds, the heated canola oil, without ADAM derivatization, was analyzed by UPLC-Tof/MS. Heated canola oil was diluted 1000-fold with acetone, and 1 μL of the diluted sample was analyzed. The MS spectrum (Condition 3 in Supplementary Table S2) and the product ion spectrum with the selection of a precursor ion (Condition 4 in Supplementary Table S2) were obtained. UPLC separations were performed using a CORTECS C18 column (2.1 mm I.D., 100 mm, 1.6 μm, Waters, Milford, MA, U.S.A.) at 55 °C. The flow rate of the mobile phase (A, methanol/water (1:1, v/v) containing 0.1% formic acid and 10 mM ammonium acetate; B, 2-propanol containing 0.1% formic acid and 10 mM ammonium acetate) was set to 0.2 mL/min. The gradient was as follows: 40% to 100% of mobile phase B between 0 and 15 min, 100% of mobile phase B between 15 and 20 min. MS parameters were optimized using the MassLynx v4.1 software (Waters, Milford, MA, USA). Leucine-enkephalin was used as the M.W. standard in the LockSpray mode. These systems provide a resolution of > 30,000 (full width at half maximum). The mass extraction window was set to ± 5 mDa. Elemental compositions were predicted based on accurate masses using the MassLynx v4.1 software.
Quantification of azelaoyl-triacylglycerols. GC-MS (Agilent 7890A gas chromatograph coupled with an Agilent 5975C MS system, Agilent, Little Falls, DE, USA) was used to determine the total amount of azelaoyl-triacylglycerols. Azelaoyl-triacylglycerols were methyl-esterified according to the official method of the AOCS (Ce 2-66) 36 and quantified in the form of dimethyl azelate. The hexane layer containing fatty acid methyl esters was diluted 10-fold with hexane and analyzed by GC-MS. Methyl heptadecanoate was used as an internal standard. GC separation was performed using a DB-WAX GC column (0.25 mm I.D., 60 m, 0.25 μm film thickness, GL Science, Tokyo, Japan). The GC oven was programmed as follows: the initial oven temperature was 40 °C for 5 min, increased to 190 °C at 3 °C/min and held for 5 min, and then increased to 240 °C at 10 °C/min and held for 30 min. The helium flow rate was kept constant at 1.2 mL/min. The electron ionization mode and the scan monitor mode were used to analyze dimethyl dicarboxylates. The peaks were identified with reference to previous studies 25 with some modifications. Extracted ion chromatograms at m/z 152 and 143 were used to analyze dimethyl azelate and methyl heptadecanoate, respectively. A calibration curve was constructed based on peak area ratios (dimethyl azelate/internal standard) and applied to calculate the concentration of dimethyl azelate.
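The internal-standard calibration described here amounts to a linear fit of the peak-area ratio against known concentration, which is then inverted for the samples. The sketch below shows the idea; the calibration points are invented for illustration, since the actual standards and measured ratios are not given in the text.

```python
import numpy as np

# Hypothetical calibration standards: known dimethyl azelate concentrations
# (nmol/g) and measured peak-area ratios (dimethyl azelate / methyl
# heptadecanoate internal standard). Values are illustrative only.
known_conc = np.array([100.0, 500.0, 1000.0, 2000.0])
area_ratio = np.array([0.05, 0.26, 0.53, 1.04])

slope, intercept = np.polyfit(known_conc, area_ratio, deg=1)

def conc_from_ratio(ratio):
    """Invert the linear calibration to estimate the concentration (nmol/g)
    from a measured analyte/internal-standard peak-area ratio."""
    return (ratio - intercept) / slope

print(conc_from_ratio(0.98))   # estimated concentration for one sample
```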
Deep-frying test. A stainless steel pan was filled with fresh canola oil (600 g) and heated to 180 °C. Frozen French fries (100 g) were deep-fried at 180 °C for 7 min starting at 10:00 am. The fries were removed, and after an interval of 8 min, the next frozen fries were fried at 180 °C for 7 min. This process was repeated until six groups of French fries were fried. The heating of the oil was stopped at 11:30 am. The same oil was used to fry French fries in the same manner on the next day and the day after.
Data availability
The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.
|
v3-fos-license
|
2023-02-09T15:19:45.153Z
|
2022-02-28T00:00:00.000
|
256690762
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.nature.com/articles/s41467-022-28471-w.pdf",
"pdf_hash": "bfbd1c9313c144ae68820bdc4deb05fe7eb9833b",
"pdf_src": "SpringerNature",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46737",
"s2fieldsofstudy": [
"Biology"
],
"sha1": "bfbd1c9313c144ae68820bdc4deb05fe7eb9833b",
"year": 2022
}
|
pes2o/s2orc
|
Bacterial N4-methylcytosine as an epigenetic mark in eukaryotic DNA
DNA modifications are used to regulate gene expression and defend against invading genetic elements. In eukaryotes, modifications predominantly involve C5-methylcytosine (5mC) and occasionally N6-methyladenine (6mA), while bacteria frequently use N4-methylcytosine (4mC) in addition to 5mC and 6mA. Here we report that 4mC can serve as an epigenetic mark in eukaryotes. Bdelloid rotifers, tiny freshwater invertebrates with transposon-poor genomes rich in foreign genes, lack canonical eukaryotic C5-methyltransferases for 5mC addition, but encode an amino-methyltransferase, N4CMT, captured from bacteria >60 Mya. N4CMT deposits 4mC at active transposons and certain tandem repeats, and fusion to a chromodomain shapes its “histone-read-DNA-write” architecture recognizing silent chromatin marks. Furthermore, amplification of SETDB1 H3K9me3 histone methyltransferases yields variants preferentially binding 4mC-DNA, suggesting “DNA-read-histone-write” partnership to maintain chromatin-based silencing. Our results show how non-native DNA methyl groups can reshape epigenetic systems to silence transposons and demonstrate the potential of horizontal gene transfer to drive regulatory innovation in eukaryotes. Eukaryotic DNA can be methylated as 5-methylcytosine and N6-methyladenine, but whether other forms of DNA methylation occur has been controversial. Here the authors show that a bacterial DNA methyltransferase was acquired >60 Mya in bdelloid rotifers that catalyzes N4-methylcytosine addition and is involved in suppression of transposon proliferation.
Modification of nucleobases without changes in the underlying genetic code offers unmatched opportunities for "writing" extra information onto DNA, the primary carrier of hereditary material. Covalent association of modifying groups with DNA provides advantages over more easily removable carriers of epigenetic information, such as RNA or proteins, for potential transmission across cell divisions and generations. In bacteria and archaea, DNA modifications are first and foremost associated with restriction-modification (R-M) systems acting to discriminate and destroy the invading foreign DNA, although multiple "orphan" methyltransferases (MTases) may perform regulatory functions 1,2 . Eukaryotes mostly use base modifications for regulatory purposes, with the predominant form of epigenetic modification in eukaryotic genomes being C5-methylcytosine (5mC) and its derivatives 3,4 . Often called "the fifth base", 5mC plays an important role in genome defense against mobile genetic elements, and is often associated with transcriptional silencing, establishment of the closed chromatin configuration, and repressive histone modifications 5 . The 5mC mark is introduced by C5-MTases, DNMT1 and DNMT3, thought to have originated from bacterial C5-MTases in early eukaryotes via fusions with additional domains interacting with proteins and DNA 6 , while DNMT2 acts primarily on tRNA 7,8 . Recently, another modified base, N6-methyladenine (6mA), gained attention as a possible novel form of epigenetic modification in diverse eukaryotes, although its role remains controversial [9][10][11] . In 6mA, a methyl group is added to an exocyclic amino group of adenines by amino-MTases, some of which are related to RNA-modifying MTases 12,13 . However, the third type of DNA methylation naturally occurring in bacteria, N4-methylcytosine (4mC), has not been demonstrated to act as an epigenetic mark in eukaryotes, and most claims of eukaryotic 4mC lack confirmation by orthogonal methods and do not identify the enzymatic component 14 . Here, we combine multiple lines of evidence to establish that 4mC modification can be recruited as an epigenetic mark in eukaryotic genomes, and to characterize the underlying enzymatic machinery. We focus our attention on epigenetic silencing phenomena that involve DNA and histone modifications, without expanding into broader areas involving nuclear organization or post-transcriptional silencing. Our work demonstrates how a horizontally transferred gene can become part of a complex regulatory system maintained by selection over tens of millions of years of evolution.
Presence of 4mC and 6mA marks in genomic DNA. We next sought to find out whether recruitment of a horizontally transferred bacterial MTase resulted in the establishment of bacterial epigenetic marks in bdelloid genomic DNA (gDNA). A strong indication that N4CMT could interact with chromatin to add 4mC to gDNA comes from the presence of a eukaryotic chromodomain from the HP1/chromobox subfamily of methylated lysine-binding Royal family of structural folds 34 at the C-terminus of the bacterial N6_N4_MTase moiety in sequenced bdelloids (Fig. 1a, Supplementary Fig. 2).
To detect 4mC/6mA marks in bdelloid genomes, we extracted gDNA from the A. vaga laboratory reference strain (hereafter Av-ref) 17 fed with methyl-free Escherichia coli (Supplementary Table 3), and performed immuno-dot-blotting with anti-4mC and anti-6mA antibodies (Methods). We also extracted gDNA from the natural A. vaga isolate L1 (hereafter AvL1; Supplementary Movie; Fig. 1g), which was caught in the wild and identified as A. vaga through morphological criteria and mtDNA phylogeny, but represents a distinct morphospecies within the A. vaga species complex, as its gDNA is only 88% identical to Av-ref 35 . Figure 1c shows that gDNA from Av-ref and AvL1 reacts positively with both antibodies, suggesting the presence of 4mC and 6mA marks. Control DNAs isolated from the dam-/dcm-, DH5α and Top10 E. coli strains, or from E. coli M28 strain used as food (Supplementary Table 3), did not react with anti-4mC antibodies (Fig. 1c), nor did we observe cross-reactivity of the anti-4mC antibody with 5mC-containing human DNA. Also consistent with the presence of modified cytosines were the results of treatment of total A. vaga gDNA with the McrBC endonuclease, which cleaves at any methylated cytosines (5mC, 5hmC, 4mC) 36,37 (Fig. 1d; see also Fig. 3b below). Together with the absence of C5-MTases, the similarity of N4CMT to bacterial N4C-MTases (Fig. 1e), and the lack of 5mC deamination signatures in gDNA from observed/expected CpG ratios (Supplementary Fig. 3a), our data support the hypothesis that cytosines in bdelloids are modified at the N4- rather than C5-position. Still, signals in gDNA may originate from residual methylated bacterial DNA from sources other than food. Thus, we sought to examine the distribution of 4mC marks over annotated genomic features in bona fide eukaryotic contigs.
Genome-wide analysis of 4mC and 6mA by DIP-seq. We exploited immunoreactivity of bdelloid DNA with anti-4mC and anti-6mA antibodies to assess the genome-wide distribution of these methylation marks by DIP-seq (DNA immunoprecipitation followed by sequencing, also called MeDIP-seq; see Methods). After read mapping to Av-ref, MACS peak-calling tool identified 1008 and 1735 DIP-seq peaks (p-value < 1e−5) for 4mC and 6mA, respectively, which were broadly distributed throughout the assembly. To uncover biologically relevant patterns behind peak distribution, we compared average coverage densities for 4mC and 6mA near annotated genomic features, such as gene coding sequences (CDS) and transposable elements (TEs). We visualized the distribution of 4mC and 6mA sites near TEs by aligning TEs at the 5′ end (profiles) or aligning TE bodies from 5′ to 3′ end at a fixed distance (metaprofiles), and plotting the IP occupancy, which shows the relative number of DIP-seq reads against the total number of TEs for each bin size within a pre-determined upstream and downstream window. DIP-seq data for 4mC show elevated density near TE insertions in comparison with 6mA ( Fig. 2a, left and right), suggesting that TE insertions could be an important 4mC modification target. For gene annotations, IP occupancy (Fig. 2a, center) does not show an increase in density at the transcription start site (TSS) seen in TEs. After peak calling with MACS, we also compared relative numbers of peaks with intersected annotations: about one-half of 4mC peaks (468 out of 1008) and a quarter of 6mA peaks (430 out of 1735) are close to TEs, and more 6mA peaks (1261 out of 1735) than 4mC peaks (398 out of 1008) are close to gene annotations. Genometric spatial correlation analysis (Supplementary Note 1; Supplementary Table 4; Supplementary Fig. 4) further shows that DIP-seq 4mC marks are closer to TEs than would be expected from a uniform distribution.
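The profile computation described above is conceptually simple: for every annotated TE, DIP-seq coverage falling into fixed-width bins around its 5′ end is averaged over all TEs. The following minimal Python sketch illustrates that binning step; the per-base coverage array and TE coordinates are invented placeholders, not the study's actual data structures or pipeline.

```python
import numpy as np

def te_5prime_profile(coverage, te_starts, te_strands, flank=2500, bin_size=25):
    """Average DIP-seq coverage in fixed bins around TE 5' ends.

    coverage   : 1-D numpy array of per-base read depth for one contig
    te_starts  : list of 5'-end coordinates (0-based) of TE insertions
    te_strands : list of '+'/'-' strands matching te_starts
    Returns mean coverage per bin across the window [-flank, +flank].
    """
    n_bins = (2 * flank) // bin_size
    profile = np.zeros(n_bins)
    used = 0
    for start, strand in zip(te_starts, te_strands):
        lo, hi = start - flank, start + flank
        if lo < 0 or hi > len(coverage):
            continue                      # skip TEs too close to contig ends
        window = coverage[lo:hi]
        if strand == '-':
            window = window[::-1]         # orient all TEs 5'->3'
        profile += window.reshape(n_bins, bin_size).mean(axis=1)
        used += 1
    return profile / max(used, 1)

# Toy example with simulated coverage and three TE insertions.
rng = np.random.default_rng(0)
cov = rng.poisson(3, 50_000).astype(float)
prof = te_5prime_profile(cov, te_starts=[10_000, 25_000, 40_000],
                         te_strands=['+', '-', '+'])
print(prof[:5])
```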
The presence and distribution of 4mC and 6mA DNA modifications in the AvL1 strain were similarly interrogated by DIP-seq. We generated DIP-seq reads and mapped them onto AvL1 assembly (Methods). After peak calling with MACS, we identified 1473 and 1385 peaks (p-value < 1e−05) for 4mC and 6mA, respectively. To further understand methylation patterns in AvL1, we performed initial gene and TE annotations with fully automated training methods for gene prediction, using genomic and RNA-Seq data (Braker2; see Methods) (Supplementary Table 5). AvL1 repeat library was constructed de novo, manually curated, and used to annotate TEs (Methods). Initial analysis showed that 4mC-DIP-seq and 6mA-DIP-seq have similar distribution profiles in the assembly ( Supplementary Fig. 5a, b) with enrichment of both marks towards genes and transposons, part of which may be due to the undetected presence of unknown types of low copy-number TEs in gene annotations. Nevertheless, cluster analysis showed an increase in 4mC being detected in a subset of transposons (clusters 1 and 2, Supplementary Fig. 5d). After peak calling, we found 1097 4mC peaks (out of 1473) and 1042 6mA peaks (out of 1385) close to TEs, while 863 4mC and 813 6mA peaks are close to genes (excluding TEs). Genometric correlation analysis in AvL1 showed that both 4mC and 6mA modification peaks (Supplementary Note 4; Supplementary Table 4; Supplementary Fig. 4) display a small absolute positive correlation with TEs, being closer than expected to TEs than to gene models as reference features (Jaccard and permutation test). Together, DIP-seq data in both Av-ref and Av-L1 suggest preferential localization of 4mC over TEs.
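Genometric spatial correlation of this kind can be approximated with a Jaccard statistic on base-pair overlap plus a permutation test in which one interval set is randomly repositioned. The sketch below is a deliberately simplified stand-in for the published analysis (which used dedicated genometric-correlation testing); the peak set, feature set, and contig length are made up for illustration.

```python
import random

def jaccard(a, b):
    """Base-pair Jaccard index between two lists of (start, end) intervals."""
    def to_set(intervals):
        s = set()
        for start, end in intervals:
            s.update(range(start, end))
        return s
    sa, sb = to_set(a), to_set(b)
    union = sa | sb
    return len(sa & sb) / len(union) if union else 0.0

def permutation_pvalue(peaks, features, contig_len, n_perm=200, seed=1):
    """P-value that the observed peak/feature overlap exceeds random placement."""
    random.seed(seed)
    observed = jaccard(peaks, features)
    hits = 0
    for _ in range(n_perm):
        shuffled = []
        for start, end in peaks:
            length = end - start
            new_start = random.randrange(0, contig_len - length)
            shuffled.append((new_start, new_start + length))
        if jaccard(shuffled, features) >= observed:
            hits += 1
    return observed, (hits + 1) / (n_perm + 1)

peaks = [(1200, 1500), (8200, 8450), (15000, 15300)]   # toy 4mC DIP-seq peaks
tes = [(1000, 4000), (14800, 16500)]                   # toy TE annotations
print(permutation_pvalue(peaks, tes, contig_len=20000))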
Modification analysis at single-base resolution by SMRT-seq. While immuno-dot-blots and differential gDNA digestion suggested the presence of 4mC in bdelloid gDNA, one cannot fully eliminate gDNA from commensal bacteria, even using methylfree E. coli food strains and applying starvation/antibiotic treatments prior to DNA extraction (Methods). Hence, we chose not to use mass-spectrometry (MS) as a method to confirm the presence of 4mC in bdelloids, especially considering that unknown MS-peaks can comigrate with 4mC 14 . Further, the low resolution of the DIP-seq method limits the power of correlation analyses to the length of DNA fragments used for antibody binding (250-450 bp), not to mention residual IgG binding to non-modified fragments inherent to the method 38 . Thus, we chose to examine the genome-wide distribution of modified bases by single-molecule real-time (SMRT) sequencing, which provides single-nucleotide resolution and allows validation of metazoan/bacterial contigs (Methods).
SMRT-based detection exploits kinetic signatures of polymerase passage through modified vs non-modified bases and is quantified in terms of inter-pulse duration (IPD) ratios. It is best suited for the detection of 4mC and 6mA, characterized by strong kinetic signatures, which require~10-fold lower coverage than 5mC detection (Pacific Biosciences Methylome Analysis Technical Note) and is widely used in bacterial methylome analyses 32,39 . We obtained PacBio reads (15 SMRT cells, totaling 9.87 Gb) from gDNA extracted from AvL1 eggs and analyzed the kinetic profiles with SMRT ® Portal (Methods). Prior to quantification of modified bases, we bioinformatically removed residual bacterial contigs, which show high methylation density.
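The kinetic detection rests on one quantity: the inter-pulse duration (IPD) ratio, i.e., the mean polymerase pause time observed at a position divided by the pause time expected for unmodified DNA in the same sequence context. A toy calculation is shown below; real pipelines derive the expected IPD from an in-silico kinetic model or a whole-genome-amplified control, and the numbers here are purely illustrative.

```python
import numpy as np

def ipd_ratio(observed_ipds, expected_ipd):
    """Mean observed inter-pulse duration over the expected (unmodified) value.

    observed_ipds : per-pass IPD measurements (seconds) at one reference position
    expected_ipd  : model- or control-derived IPD for the same sequence context
    """
    observed_ipds = np.asarray(observed_ipds, dtype=float)
    return observed_ipds.mean() / expected_ipd

# 4mC and 6mA slow the polymerase, inflating the ratio well above ~1.
unmodified_site = ipd_ratio([0.31, 0.28, 0.35, 0.30], expected_ipd=0.30)
modified_site = ipd_ratio([0.95, 1.10, 0.88, 1.02], expected_ipd=0.30)
print(f"unmodified ~{unmodified_site:.2f}, modified ~{modified_site:.2f}")
```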
SMRT-analysis detected 4mC modifications on 21,016 cytosines (0.0643% of the total cytosines in the assembly) and 6mA modifications on 17,886 adenines (0.0236% of total adenines) using a minimum cutoff PacBio coverage defined in Fig. 2f (see Supplementary Table 6 for comparison of 10× and 20× coverage levels). As with DIP-seq, SMRT-seq shows a broad distribution of both modifications across AvL1 assembly. Comparison of DIP-seq and SMRT-seq modification patterns shows considerable overlap, with 36% of 4mC peaks and 32% of 6mA peaks overlapping with 4mC and 6mA identified by SMRT analysis, respectively, indicating that many peaks are conserved between eggs and adults. Following normalization of SMRT-seq methylation fraction values per modified base (see Methods), it is seen that 4mC and 6mA DIP-seq peak summits overlap with modified regions for PacBio 4mC and 6mA, respectively; plotting only 4mC-marked CpG sites shows a similar increase towards DIP-seq 4mC peaks ( Supplementary Fig. 6a). The peak overlap is quite substantial, given the modest proportion of modified bases in the genome, and might reflect the general lack of methylation reprogramming during development in protostomes, known at least for 5mC 40 .
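The reported peak overlap amounts to asking, for each DIP-seq peak, whether at least one SMRT-called modified base falls inside it. A hedged sketch of that intersection, with invented coordinates rather than the actual peak and site calls, follows.

```python
from bisect import bisect_left, bisect_right

def peaks_with_sites(peaks, sites):
    """Fraction of (start, end) peaks containing >= 1 modified-base coordinate."""
    sites = sorted(sites)
    hit = 0
    for start, end in peaks:
        # any site with start <= site < end?
        if bisect_left(sites, start) < bisect_right(sites, end - 1):
            hit += 1
    return hit / len(peaks) if peaks else 0.0

dip_peaks = [(100, 400), (2_000, 2_300), (9_500, 9_800)]   # toy peak intervals
smrt_4mc_sites = [150, 160, 5_000, 9_790]                  # toy modified bases
print(peaks_with_sites(dip_peaks, smrt_4mc_sites))         # 2 of 3 peaks overlap
```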
In contrast to the predominantly symmetric patterns of 5mC deposition at CpG doublets in eukaryotes, AvL1 shows mostly asymmetric patterns of methylation for both 4mC and 6mA, i.e., only one strand is usually modified (Fig. 2b displays typical examples). At 4mC sites, CpG and CpA dinucleotides are the most prevalent, making up 74% of modified doublets. For better identification of sequence preferences, we extracted different sequence windows (5, 10, and 20 bp) upstream and downstream from 4mC sites and searched for significant motif enrichment with MEME-ChIP (Methods) (Fig. 2d). For 4mC, three motifs with CG or CA dinucleotides were most significantly enriched (from p = 2.8e−593 to p = 1.4e−513). For 6mA, a similar approach yielded three significantly enriched short motifs (from p = 7.3e−656 to p = 4.3e−420); increasing the motif length yielded GA embedded in an A-rich region (p = 2.4e−1243). However, none of these matched the RRACH motif found at m 6 A sites in RNA 41 , arguing against RNA contamination. The dinucleotide GA is the most prevalent at 6mA sites, and the most common triplets AGG or GAA, when combined, compose 34% of all 6mA triplets. These findings parallel the known 6mA motif preferences in metazoans but differ from unicellular eukaryotes and early diverging fungi, in which 6mA methylation is symmetric and targets ApT dinucleotides (Supplementary Table 1).
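Motif discovery of this kind starts from nothing more than short sequence windows centred on each modified base, written out as FASTA for a tool such as MEME-ChIP. A minimal sketch, using a made-up contig dictionary rather than the actual assembly, is shown below.

```python
def windows_to_fasta(genome, sites, half_width=10, out_path="4mC_windows.fa"):
    """Write +/- half_width bp around each modified base to FASTA for motif search.

    genome : dict of contig name -> sequence string
    sites  : iterable of (contig, zero_based_position, strand)
    """
    comp = str.maketrans("ACGTacgt", "TGCAtgca")
    with open(out_path, "w") as fh:
        for i, (contig, pos, strand) in enumerate(sites):
            seq = genome[contig]
            lo, hi = pos - half_width, pos + half_width + 1
            if lo < 0 or hi > len(seq):
                continue                          # skip sites near contig edges
            window = seq[lo:hi]
            if strand == "-":
                window = window.translate(comp)[::-1]
            fh.write(f">site_{i}_{contig}_{pos}_{strand}\n{window}\n")

# Hypothetical toy genome; real input would be the AvL1 assembly and SMRT calls.
toy_genome = {"contig1": "ACGTACGTACGTCGACGTACGTACGTACGTACGT"}
windows_to_fasta(toy_genome, [("contig1", 15, "+"), ("contig1", 20, "-")],
                 half_width=5, out_path="toy_windows.fa")
```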
Methylation density in TRs deserves special mention. Figure 2g shows that the average counts of 4mC and 6mA sites in TRs are elevated in comparison with TEs and genes. According to TR annotation (Methods), only a small fraction (0.84%) of the AvL1 assembly is composed of TRs. Inspection of SMRT-seq modification data identified two repeats with a very high density of methylated sites, located mainly on contigs 1882 and 785 adjacent to large Athena retroelements 42 . Such extra-high modification density, approaching that in bacterial contigs, mostly accounts for over-representation of modified bases in TRs, leaving other TRs virtually unmethylated.
In subsequent experiments, we took advantage of the high methylation susceptibility of these repeats (see below).
In genes, the PacBio methylation tag density is much lower than that in TEs and TRs (Fig. 2g). Still, genic regions cover slightly over one-half of the AvL1 genome, attracting a sizeable proportion of 4mC and 6mA modifications (52% of 4mC and 54% of 6mA). To correlate methyl marks with gene structure, we examined 4mC and 6mA distribution using more refined features: gene bodies, promoters within 2 kb upstream of the TSS, and intergenic regions which may include TEs and TRs, with gene bodies further subdivided into CDS (exons excluding 5′ and 3′ UTRs), introns, 5′ and 3′ UTRs (Fig. 2h). Altogether, base modifications are found in all features (CDS, promoters, and intergenic regions); when the density per average feature size is compared, CDS regions carry more 4mC than introns (Fig. 2h), reminiscent of 5mC patterns in mammals 43 , but introns carry as many 6mA marks as CDS, minimizing the possibility of m 6 A carryover from RNA.
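The per-feature densities discussed here and in Fig. 2g, h boil down to counting modified sites that fall inside each annotation class and normalizing by how much of the genome that class occupies. A simplified version of that bookkeeping is sketched below; the site counts are invented, while the genome fractions follow the values quoted in the Fig. 2 legend.

```python
def normalized_density(site_counts, feature_bp, genome_bp):
    """Modified-site count per feature class, normalized by genome fraction.

    site_counts : dict class -> number of modified bases inside that class
    feature_bp  : dict class -> total bases annotated as that class
    genome_bp   : assembly size in bases
    """
    out = {}
    for cls, n in site_counts.items():
        genome_fraction = feature_bp[cls] / genome_bp
        out[cls] = (n / genome_fraction) if genome_fraction else float("nan")
    return out

counts_4mc = {"genes": 11_000, "TE": 6_500, "TR": 2_200}          # illustrative only
feature_bp = {"genes": 115_700_000, "TE": 4_560_000, "TR": 1_820_000}
print(normalized_density(counts_4mc, feature_bp, genome_bp=217_100_000))
```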
In AvL1, DIP-seq shows relative enrichment with 4mC and 6mA within TE bodies ( Supplementary Fig. 5b, d). PacBio 4mC sites display a trend for enrichment near the 5′ TE boundaries, while 6mA sites show a local depletion (Fig. 2c), which is visible even though TE promoters are located near TE 5′-ends but not necessarily at the boundary, and is not due to a local change in base or dinucleotide composition ( Supplementary Fig. 3b). Moreover, 4mC and 6mA marks are predominantly found over full-length or nearly full-length TE copies and are practically absent from shorter TE fragments spanning less than one-half of TE consensus length, suggesting that active TE copies are preferentially targeted (Fig. 2i). The lack of 4mC and 6mA marks in shorter TE copies, together with a concentration of 4mC near 5′ TE boundaries, suggest that their deposition is associated with transcriptional activity.
To visualize 4mC and 6mA densities in TRs, TEs, and genes on representative contigs, we built Circos plots (Supplementary Fig. 7a-d), in which the PacBio modification layer is plotted as modification fraction (from 0 to 1) for each modified base. In agreement with Fig. 2e, highly methylated 4mC sites dominate in most locations, while 6mA sites are distributed over a much wider methylated fraction range and across a wider feature range. Importantly, higher densities of modified bases are not necessarily correlated with areas of higher PacBio read coverage, indicating that over-representation of methyl marks over TEs and TRs is not due to excess coverage in these regions (e.g. mtDNA at 127x coverage displays very few marks) (Supplementary Fig. 7e). Supplementary Fig. 7c, d shows that long copies of Vesta and Athena retrotransposons attract methyl marks, but short copies do not. Supplementary Fig. 8 presents a more detailed view of selected contigs, including TRs, retroelements, and DNA TEs. Of note, an inspection of 36 high-density 4mC regions lacking annotations showed that one-half correspond to TEs unrecognized during annotation, independently confirming TEs as N4CMT targets (Supplementary Fig. 7f).

Fig. 2 Genome-wide distribution of 4mC and 6mA methylation in A. vaga. a Distribution of DIP-seq 4mC and 6mA sites around TEs (metaprofile), genes (TSS), and 5′-end TE profiles in Av-ref, showing IP occupancy in 25-bp bins within ±2.5 kb of each feature. In metaprofiles, the body size feature, representing genes or TEs, is automated and normalized (0-100% length). TE 5′-profile shows 4mC and 6mA sites near 5′ boundaries, aligning transposons at the 5′ end. b IPD ratios in AvL1 SMRT-seq data at four representative 4mC and 6mA modification sites. Purple and orange bars, Watson and Crick strands. c SMRT-seq 4mC and 6mA occupancy in 5 and 25-bp bin sizes within ±0.5 and ±2.5 kb of 5′ TE boundaries. d MEME-ChIP motif analysis of regions around SMRT-seq 4mC and 6mA sites. Windows of ±5, ±10, and ±20 bp were extracted and searched for significant motif enrichment. Significance was assessed by Fisher's exact test; p-value generated by MEME-ChIP is shown under each motif. e Methylation fraction distribution at modified sites detected by SMRT-seq. Most 4mC sites are fully methylated (fraction = 1); average methylation level of 6mA sites is 0.74. f PacBio read coverage distribution by base modification sites. The minimal threshold coverage limit applied for calling 4mC and 6mA methylated sites to calculate methylation fraction per site in (e) is shown by a dashed line. g Average numbers of 4mC and 6mA base modifications in protein-coding genes, TEs, and tandem repeats. Average is calculated as the total number of modified sites divided by the total number of annotations (unique IDs) in each feature and divided (normalized) by the genome fraction covered by such annotation in the genome (genes, 0.533; TE, 0.021; TR, 0.0084). h Distribution of SMRT-seq 4mC and 6mA sites within genic features (CDS, intron, 5′ UTR, 3′ UTR, 5′-promoter region) and intergenic regions by average feature size (bp). i DNA methylation density vs. TE copy integrity. Bar height indicates average number of 4mC or 6mA SMRT counts; error bars represent standard deviation for full (n = 321), medium (n = 305), and short (n = 8623) TE copies.

N4CMT acts as 4mC-methyltransferase in E. coli. The domain structure of N4CMT cannot be taken as evidence of its N4C-MTase activity, since the N6_N4_MTase domain repeatedly evolved 6mA or 4mC specificities 44 . However, N4CMT function cannot be disrupted in vivo, as the tools for genetic manipulation in bdelloids are yet to be developed. We, therefore, sought to investigate the activity of the recombinant N4CMT protein in a heterologous system. To this end, we PCR-amplified N4CMT from A. vaga cDNA to obtain intronless versions (Methods; Supplementary Table 7). Amplicons were cloned into pET29b expression vector with the N-terminal S-tag and the C-terminal 6×His-tag and expressed in E. coli. We examined two A. vaga allozymes A and B, differing by six amino acids (aa): three in the N6_N4_MTase domain and three in the chromodomain-containing C-terminus (Supplementary Table 8; Supplementary Fig. 9a). We also tested two inter-allelic recombinants swapping the rightmost substitution near the C-terminal His-tag, which may have arisen during rotifer cultivation or PCR amplification, as well as two 3′-truncated derivatives lacking the chromodomain.
To assess plasmid-borne N4CMT activity in vivo, its expression was induced by IPTG, and gDNA was extracted 4 h postinduction (Methods). Figure 3a shows the immuno-dot-blot of membrane-immobilized gDNAs probed with anti-4mC and anti-6mA antibodies, with 4mC signal observed from full-length N4CMT allozymes in the absence of signal from the untransformed host strain. As expected in the dam + background, 6mA methylation was detected in all samples, serving as an internal DNA control. Not surprisingly, removal of the chromodomain, which yields a core MTase equal in length to its bacterial counterparts, did not reduce activity and even showed an increase in signal intensity due to better solubility of the 33-vs. 45-kDa enzyme (Fig. 3a, N4CMT-ΔCbx). The N4CMT_A allozyme mostly showed weaker activity, suggesting that substitutions in the presumed TRD region of the N6_N4_MTase domain affect protein solubility or interaction with target DNA. These findings were corroborated by digestion of corresponding gDNAs with the endonuclease McrBC, which cleaves DNA at any modified cytosines. DNAs extracted from Rosetta 2(DE3) transformed with six N4CMT-expressing plasmids and the control human DNA were readily digested with McrBC, while DNA from the untransformed dcm-strain was not (Fig. 3b).
To ensure that the observed activity is directly attributable to N4CMT, we created N4CMT mutants in which the catalytic SPPY motif was replaced with APPA (Supplementary Table 8). Figure 3c shows that 4mC addition is abolished after substitution of the catalytic Ser and Tyr residues with Ala, indicating that N4CMT is responsible for adding N4-methyl groups to cytosines in dsDNA with SPPY as the catalytic motif, justifying our initial N4CMT designation. Further investigation of purified recombinant N4CMT activity on preferred substrates in vitro revealed that it acts de novo on unmethylated dsDNA, and that a conserved sequence motif in the TR mediates sequencespecific mode of substrate recognition (Supplementary Note 2; Supplementary Fig. 9).
Base modifications and histone modifications. In the context of eukaryotic chromosomal DNA environment, any intrinsic target preferences of N4CMT manifested in vitro, while apparently yielding high 4mC densities in certain TRs, would not necessarily be required for 4mC deposition in other genomic regions, which may instead be facilitated by the C-terminal chromodomain of the chromobox type (CBX) 34 . CBX is expected to recognize methylated lysines K9 and K27, the best-studied heterochromatic marks embedded in the ARKS motif at the N-terminus of histone H3, which are associated with transcriptionally silent chromatin and in non-mammalian systems frequently overlap, not because of antibody cross-reactivity but due to a similar function in TE repression [45][46][47][48] . To associate DNA methylation marks with specific histone modifications, we performed chromatin immunoprecipitation followed by deep sequencing (ChIP-seq) on A. vaga chromatin with anti-H3K9me3 and anti-H3K27me3 antibodies (Methods). For contrasting comparisons with active chromatin, we used an anti-H3K4me3 antibody, which recognizes the modification associated with active TSS 45,49 . After validating antibodies by immuno-dot-blotting (Methods), we profiled the distribution of three H3K modifications in Av-ref and AvL1 strains by ChIP-seq. We found that H3K9me3, a mark for constitutive heterochromatin, often co-localizes with H3K27me3, known to characterize facultative heterochromatin, but not with H3K4me3 which marks active genes (Supplementary Table 10). As expected, host genes display significant H3K4me3 enrichment, which typically covers 1-2 kb around the TSS and shows a characteristic bimodal peak in both strains (Fig. 4a, c top). In contrast, H3K9me3 and H3K27me3 enrichment is observed mostly over TEs and covers the entire TE body, often extending upstream and downstream from a TE insertion, which may be indicative of spreading (Fig. 4b, d top).
To explore the association of 4mC and 6mA with active or repressive histone marks, we used ChIP-seq data for the euchromatic mark (H3K4me3) and two heterochromatic marks (H3K9me3 and H3K27me3) as a proxy for active and silent chromatin, respectively. The low resolution of DIP-seq precludes genome-wide extrapolations in Av-ref, allowing only initial comparisons. For 6mA-DIP-seq peaks, 13.6% intersected with regions bearing euchromatic histone modifications (H3K4me3), while only 4.4% overlapped with heterochromatic histone modifications (H3K9me3 and H3K27me3 combined). For 4mC-DIP-seq peaks, 6.5% intersected with regions bearing heterochromatic histone modifications (H3K9me3 and H3K27me3), but only a minor fraction (1.5%) overlapped with H3K4-marked regions. Following normalization and aggregation of aligned reads in ChIP-seq datasets and comparison with chromatin DNA input (log2 ratio with bamCompare; see "Methods"), we found that DIP-seq peak summits (4mC and 6mA) overlap with H3K9me3 and H3K27me3 ChIP-seq covered regions, while little if any overlap is seen with H3K4me3 (Fig. 4e).
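Comparing DIP-seq summits with ChIP-seq enrichment reduces to two steps: normalize IP coverage against the chromatin input (a per-bin log2 ratio, in the spirit of bamCompare) and then ask whether peak summits land in bins where the heterochromatic marks are enriched. The following sketch assumes binned coverage vectors are already in hand and uses simulated numbers, not the study's data.

```python
import numpy as np

def log2_ip_over_input(ip, inp, pseudocount=1.0):
    """Per-bin log2 ratio of IP coverage over input, after crude depth scaling."""
    ip = np.asarray(ip, dtype=float)
    inp = np.asarray(inp, dtype=float)
    inp_scaled = inp * (ip.sum() / inp.sum())
    return np.log2((ip + pseudocount) / (inp_scaled + pseudocount))

def summit_enrichment(summit_bins, ratio, threshold=1.0):
    """Fraction of DIP-seq summits falling in bins with log2(IP/input) >= threshold."""
    hits = sum(1 for b in summit_bins if ratio[b] >= threshold)
    return hits / len(summit_bins) if summit_bins else 0.0

rng = np.random.default_rng(2)
h3k9_ip = rng.poisson(5, 1000)
chromatin_input = rng.poisson(5, 1000)
h3k9_ip[200:260] += 40                                # one simulated enriched domain
ratio = log2_ip_over_input(h3k9_ip, chromatin_input)
print(summit_enrichment([205, 230, 600], ratio))      # 2 of 3 simulated summits
```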
In AvL1, for 4mC-DIP-seq peaks, 42.3% intersected with regions bearing heterochromatic histone modifications (H3K9me3 and H3K27me3 combined), but only 6.6% overlapped with H3K4me3marked regions. Similarly, for 6mA-DIP-seq peaks, 42.9% overlapped with heterochromatic histone modifications (H3K9me3 and H3K27me3 combined), but only 6.3% intersected with regions bearing euchromatic H3K4me3 modifications. After normalization of aligned reads in the ChIP-seq dataset and comparison with chromatin DNA input, we confirmed that DIP-seq peak summits (4mC and 6mA) are strongly correlated with H3K9me3 and H3K27me3 heterochromatic ChIP-seq reads, as seen in Fig. 4f. Examples of co-localization may be seen in Fig. 4h and Supplementary Fig. 8. Thus, the presence of DNA methyl marks is preferentially associated with silent chromatin in both strains. A similar pattern is observed in AvL1 SMRT analysis, where the 4mC and 6mA marks are more frequently associated with inactive chromatin domains marked by H3K9me3 and especially H3K27me3 (Fig. 4g). Collectively, these results support the view that, in addition to any intrinsic target preferences of N4CMT, its action in the genome may be directed by the CBX moiety, targeting MTase activity to chromatin regions with repressive histone marks.
Methylomes and transcriptomes in the chromatin context.
To associate histone marks with transcriptionally active or repressed genes in A. vaga, we plotted our RNA-seq data for genes co-localizing with either active or repressive H3K-me3 histone marks ("Methods"). As expected, genes near H3K4me3 have significantly higher RPKM (reads per kilobase of transcript per million mapped reads) values (ANOVA p-val < 0.01) than genes with heterochromatic histone marks (H3K9me3 and H3K27me3) or no marks (Fig. 5a). AvL1 displays the same pattern ( Supplementary Fig. 10a). The tentative designation of 6mA modification as an active epigenetic mark 9,13 prompted us to similarly explore its correlation with gene transcription. The A. vaga gene dataset, after removing TE-derived genes, was divided into two groups, with and without the presence of 6mA peaks within a window size of ±500 bp of each gene ID, and RPKM values were counted in both groups. We found that genes with 6mA depositions tend to have higher RPKM than genes without 6mA (t-test p-val: 2.2E−16, Fig. 5b bottom). For genes with 4mC modifications, no significant differences in expression were seen with or without 4mC marks (Fig. 5b top). A detailed analysis of 6mA distribution in genes and their promoters, which shows that only a subset of genes is affected, and rules out contribution of m 6 A from RNA, is presented in Supplementary Note 3 and Supplementary Figs. 11 and 12 (see also Source Data 1).
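The expression comparison is essentially: compute RPKM per gene, split genes by whether a 6mA peak lies within ±500 bp, and test the two groups. A minimal sketch with toy inputs follows; it uses Welch's two-sample t-test from scipy as a stand-in and does not reproduce the exact test configuration or filtering applied in the study.

```python
import numpy as np
from scipy import stats

def rpkm(counts, gene_lengths_bp, total_mapped_reads):
    """Reads per kilobase of transcript per million mapped reads."""
    counts = np.asarray(counts, dtype=float)
    kb = np.asarray(gene_lengths_bp, dtype=float) / 1_000.0
    return counts / kb / (total_mapped_reads / 1_000_000.0)

def near_peak(gene_intervals, peaks, window=500):
    """Boolean per gene: any peak within +/- window bp of the gene span."""
    flags = []
    for g_start, g_end in gene_intervals:
        flags.append(any(p_end >= g_start - window and p_start <= g_end + window
                         for p_start, p_end in peaks))
    return np.array(flags)

# Toy data: six genes, two 6mA peaks, one million mapped reads in total.
genes = [(0, 2000), (5000, 6500), (9000, 12000),
         (20000, 21000), (30000, 33000), (40000, 41500)]
lengths = [g[1] - g[0] for g in genes]
counts = [120, 300, 80, 40, 500, 60]
peaks_6ma = [(4800, 5100), (29000, 29600)]

expr = rpkm(counts, lengths, total_mapped_reads=1_000_000)
with_peak = near_peak(genes, peaks_6ma)
t, p = stats.ttest_ind(expr[with_peak], expr[~with_peak], equal_var=False)
print(expr.round(1), with_peak, round(p, 3))
```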
A different picture was observed for TEs upon examining the association of transcript levels of TE-related genes with DIPseq peaks ("Methods"). While TE-related genes with or without 6mA did not show much difference in RPKM values, TE-related genes with 4mC marks showed a decrease when compared to those without 4mC (t-test p-val: 6.8E−8, Fig. 5c, top). Thus, in expressed TEs 4mC may be regarded as a repressive mark. Note that co-localization of 4mC and 6mA is compatible with repression, as 6mA was reported to form an adversarial network preserving Polycomb silencing 23 . Alternatively, some of the 6mA marks co-localizing with 4mC-rich regions may represent a "bleed-through" signal from the nearby 4mC in SMRT-seq data, as was inferred for 5mC-rich regions in mammals 11 . Regardless of 6mA role, the transcriptionally repressed state of TEs is corroborated by a measurable overlap with small RNA profiles, observed for 4mC but not for 6mA (Supplementary Note 4; Supplementary Fig. 13c). Small RNAs play a prominent role in transcriptional repression of A. vaga TEs 50 , and bdelloids show a dramatic expansion of RNA-mediated silencing machinery, with dozens of Piwi/Ago and RdRP copies 17,51 .
Interpreting the 4mC marks. To identify possible readers of bacterial marks, we searched for candidate proteins capable of discriminating between methylated and unmethylated cytosines. All known DNA methyl groups protrude from the major groove of the B-form double helix and can be recognized as epigenetic marks. In eukaryotes, several protein domains can read 5mC (SRA/SAD/YDG; MBD/TAM; Kaiso) or 6mA (HARE-HTH; RAMA) 6,12 , usually in a preferred sequence context. We used profile-HMM searches to find candidate methyl readers in Adineta genomes. No homologs were found for the SAD_SRA domain (PF02182), which recognizes hemimethylated CpGs by embracing DNA and flipping out the methylated cytosine 52 . However, we saw the drastic expansion of MBD/TAM-containing proteins, which do not require base-flipping: 14 different alleles (originating from three quartets, Q1-Q3, plus a segmental duplication) encode seven SETDB1 variants, as opposed to only one in monogonont rotifers or other invertebrates (Fig. 6a, b; Supplementary Fig. 14a; Supplementary Data 2). These proteins share the same domain architecture, with the MBD sandwiched between the N-terminal triple-Tudor domains and the C-terminal pre-SET/SET/post-SET domains, present in all SETDB1/egglesslike H3K9me3 histone lysine MTases (KMTs) (Fig. 6a). All seven proteins are transcribed in each Adineta spp. (Supplementary Fig. 15). Additional MBD/TAM domains of BAZ2A/TIP5-like remodelers, which form heterochromatin on rDNA and satellites 53 , comprise only one quartet in A. vaga (Supplementary Fig. 14c; Supplementary Data 3).
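Candidate-reader discovery of this kind is a profile-HMM scan of predicted proteomes against domains such as MBD/TAM or SAD_SRA (PF02182). The sketch below wraps a standard HMMER3 hmmsearch call from Python, on the assumption that HMMER3 is the search engine; the paper only states that profile-HMM searches were used, and the file names and E-value cutoff here are placeholders.

```python
import subprocess
from pathlib import Path

def hmmsearch_domains(hmm_profile, proteome_fasta, out_prefix, evalue="1e-5"):
    """Run HMMER3 hmmsearch and return per-domain hit rows from the domtblout."""
    domtbl = Path(f"{out_prefix}.domtblout")
    cmd = ["hmmsearch", "--domtblout", str(domtbl), "-E", evalue,
           hmm_profile, proteome_fasta]
    subprocess.run(cmd, check=True, capture_output=True, text=True)
    hits = [line.split() for line in domtbl.read_text().splitlines()
            if line and not line.startswith("#")]
    return hits  # whitespace-delimited columns per the HMMER3 domtblout format

# Hypothetical usage: scan an A. vaga protein set for MBD/TAM domains.
# hits = hmmsearch_domains("MBD.hmm", "Avaga_proteins.fasta", "avaga_mbd")
# print(len(hits), "domain hits")
```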
To find out whether other KMTs are similarly expanded, we performed an inventory of SET domain-containing A. vaga proteins, especially those known to methylate H3K9/H3K27 (Supplementary Data 3). In addition to seven pairs of SETDB1 homologs acting on H3K9, we detected two quartets of E(z)/EZH/ mes-2-like orthologs (KOG1079, Transcriptional repressor Ezh1) known to methylate H3K27. More distantly related SET-domain proteins showed domain architectures characteristic of H3K4, H3K36, and H4K20 KMTs (Trx-G/Ash1/Set1/MLL, SETD2, SETD8) and were not expanded, comprising either a quartet or a pair. Interestingly, we found six stand-alone SET-domain homology regions resembling H3K4/H3K36 KMTs (PRDM9/7/ set-17), which were not predicted in the annotated gene set, nontranscribed, and lacked additional domains (KRAB_A-box, SSXRD) characteristic of PRDM9/7 proteins involved in localizing meiotic recombination hotspots and in male-specific expression 54,55 . Unexpectedly, we failed to identify two known KMT types acting on H3K9 or K9/K27: Su(var)3-9/SUV39H1/ set-25/Clr4, a "histone read-write" architecture consisting of chromo-and SET-domains, mediating constitutive heterochromatin formation 56 ; and G9a/EHMT2/KMT1C (ankyrin repeats plus SET), which initiates de novo methylation and silencing of repeats and developmentally regulated genes 57 . These domain architectures may have been lost and/or replaced by vastly expanded SETDB1-like variants.
We next sought to determine whether SETDB1 is similarly amplified in all bdelloids. Six species in the genus Rotaria from the family Philodinidae 19 possess the same seven variants as do Adinetidae, indicating that SETDB1 amplification occurred prior to divergence of the major bdelloid families (Fig. 6b). An unusual SETDB1 divergence pattern is seen in the bdelloid Didymodactylos carnosus, which forms the deepest-branching sister clade to other known bdelloids 51 and lacks N4CMT. While in three cases Dcar_SETDB1 forms sister clades to variants from other bdelloids, preceding quartet formation, the Q1 quartet lacks Dcar_SETDB1 homologs; moreover, an ortholog of Av_s314 shows clear evidence of loss, detected as a small 170-aa C-terminal fragment (Supplementary Data 3). This natural gene knockout is associated with an increase in LINE elements to the levels seen in monogononts, which agrees well with high concentration of 4mC over LINEs (Supplementary Fig. 16), but was not prevented by high copy number of Ago/Piwi proteins (Fig. 6c) 51 . Notably, LINE elements, due to their mostly vertical transmission, are expected to be more deleterious if sex is rare or absent 58 .
The role of MBD as a universal discriminator of 5mC marks in DNA is questioned by the presence of SETDB1 orthologs in species lacking 5mC, such as Drosophila melanogaster and Caenorhabditis elegans 6 , and many MBD proteins do not bind 5mC ( Supplementary Fig. 14c). However, the structure of human MBD1 shows its unique potential for recognizing 5mC in the major groove without encircling DNA, which makes MBD an ideal candidate for interacting with nucleosome-bound DNA without interference from core histones 59,60 . Moreover, three of the seven bdelloid SETDB1-like variants display two conserved arginines in the MBD involved in the recognition of cytosines in the DNA backbone, potentially accounting for CpG preference ( Supplementary Fig. 14a, b). However, they show extensive variation in the length of the antiparallel β1-β2 loop, which reaches across the major groove and interacts with one of the methyl groups. Since the overall structure is compatible with recognition of an asymmetrical DNA methyl group in the nucleosomal context, we sought to find out whether some of the seven SETDB1 variants may have adapted to preferentially recognize a novel methyl mark in the major groove.
To this end, we synthesized seven recombinant plasmids carrying tagged versions of the corresponding MBD/TAM domains ( Supplementary Fig. 17a). We tested these proteins in electrophoretic mobility shift assays (EMSA) with the 451-bp repeat fragment (sAvL1-451, see above), which was either unmethylated or 4mC-methylated by N4CMT in vitro, to ensure sufficient methylation density and favorable position of methyl marks. As MBD/TAM is a generic DNA-binding domain, most AvMBD's are capable of binding both unmethylated and methylated DNA fragments ( Supplementary Fig. 17b, c). We chose AvMBD_s314 to assess its binding preference for 4mC-methylated DNA since the loss of its ortholog in D. carnosus (see above) is associated with a notable increase in LINE retrotransposon content 51 . We tested four AvMBD_s314 concentrations (2.38, 3.23, 3.75, and 4.14 nM) in EMSA with 32 P-labeled sAvL1-451 and four concentrations of the unlabeled sAvL1-451 competitor, which was either unmethylated or 4mC-methylated by N4CMT_B in vitro. This approach provides a more adequate comparison than measurement of dissociation constants (K d ) for two labeled probes, as in vitro methylation is variably efficient. We observed a clear preference of s314 for binding 4mC-methylated DNA, with p < 0.05 in a one-tailed Student's t-test in four independent experiments (Source Data 2), when using >10× excess of non-labeled competing methylated or unmethylated DNA (p = 0.044 for 40×; p = 0.018 for 100×). Figure 6d shows a representative EMSA gel for the 3.75 nM s314 protein concentration, which yielded 88.3% protein-bound DNA with 0.05 nM 32 P-labeled sAvL1-451 fragment. This protein concentration was tested twice, and the average change in the amount of unbound DNA over increasing concentrations of unlabeled competitor DNA (Fig. 6e) shows that upon the increase of competitor concentration, the shift from DNA-protein complex to unbound DNA occurs faster for 4mC-modified DNA than for unmethylated DNA, indicating a preference for 4mC target.
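Competition EMSAs of this sort are quantified by measuring, at each competitor concentration, how much labeled probe remains unbound, and then asking whether the methylated competitor releases probe faster than the unmethylated one. A toy version of that comparison is given below; the densitometry values are invented, and the one-tailed paired t-test is only a stand-in for the test performed across independent experiments in the study.

```python
import numpy as np
from scipy import stats

def fraction_unbound(unbound_signal, total_signal):
    """Fraction of labeled probe left unbound at each competitor concentration."""
    return np.asarray(unbound_signal, float) / np.asarray(total_signal, float)

# Invented band intensities at 0x, 10x, 40x, and 100x unlabeled competitor.
total = [1000, 1000, 1000, 1000]
unbound_vs_meth = fraction_unbound([120, 430, 700, 860], total)   # 4mC competitor
unbound_vs_unme = fraction_unbound([115, 300, 520, 700], total)   # unmethylated

# One-tailed paired test: the methylated competitor frees more probe.
# The 'alternative' keyword requires scipy >= 1.6.
t, p = stats.ttest_rel(unbound_vs_meth, unbound_vs_unme, alternative="greater")
print(unbound_vs_meth, unbound_vs_unme, round(p, 3))
```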
Discussion
Here, we identify and characterize the N4-mC base modification in rotifer DNA, expanding the repertoire of methylated bases in Metazoa with a modification known so far only in bacteria. We confirm its presence in bdelloid rotifers, combining multiple lines of investigation and accounting for artifacts inherent to each modification detection method 38,61,62 and for bacterial contaminations. In agreement with the absence of Dnmt1/Dnmt3-like MTases, we failed to detect 5mC in bdelloids, while 4mC and 6mA were detectable by several orthogonal methods. We identified N4CMT, a horizontally transferred enzyme of bacterial origin, as responsible for the addition of 4mC marks to DNA. Expression of recombinant N4CMT in E. coli results in 4mC addition to DNA, as follows from immuno-dot-blot analysis and methyl-sensitive digests of DNA from N4CMT-expressing bacteria vs. methyl-free strains. Not surprisingly, chromodomain is not required for 4mC deposition either onto bacterial DNA in vivo, which is not packaged into chromatin, or onto preferred DNA substrates by recombinant N4CMT in vitro. However, in the context of eukaryotic chromatin, ChIP-seq and DIP-seq distributions reveal strong correlations between H3K9/27me3 silent chromatin marks and DNA methyl marks. Thus, N4CMT may contribute to epigenetic homeostasis, whereby deposition of repressive chromatin marks is ensured by passive preservation of 4mC via covalent linkage to DNA in the absence of active enzymatic demethylation, helping to maintain TE repression in eggs and adults. Over-representation of 4mC at the 5′ TE boundaries, i.e., near TE promoter regions, may affect transcription factor binding near promoters and cause transcriptional interference, as previously seen for 5mC 63 .
While the lack of candidate 4mC erasers supports 4mC role in maintaining TE silencing, other important components of epigenetic systems are the "reader" proteins, which could interpret the 4mC mark to form a regulatory loop, as is known for 5mC and 6mA 64 . The N4CMT architecture is reminiscent of plant chromomethylases (CMT), "histone-read-DNA-write" enzymes with a C5-MTase-embedded chromodomain, which reads H3K9me marks and deposits similar marks at nearby non-CG's. Together with "DNA-read-histone-write" architecture provided by KYP, an H3K9-KMT with the 5mC SRA reader domain, the CMT3-KYP pair forms a mutually reinforcing loop reading each other's epigenetic marks 64 . The crosstalk between mCpG and H3K9me in animals and plants is even more complex, requiring multiple protein factors 5 . In bdelloid N4CMT, a very simple "histone-read-DNA-write" architecture, with the chromodomain reading the repressive H3K9/27me3 marks and MTase writing the atypical 4mC marks onto DNA in the absence of an eraser, has the potential to link histone and DNA layers through a reinforcement loop which feeds back onto silent chromatin via "DNA-read-histone-write" SETDB1-like KMTs to help maintain repressive marks on histone tails throughout cell divisions for continuous TE silencing (Fig. 6f). Association of 4mC with fulllength TEs capable of transcription and the overlap of 4mC and small RNA distribution patterns further suggest that the loop may be triggered by pi-like RNAs from transcribed TEs, which are known to initiate transcriptional silencing on nascent RNAs via Piwi and perhaps SFiNX-like protein complexes [65][66][67] , or may directly affect methylation, as in mice 68 . In this scenario, epigenetic inheritance relies on overriding the normally occurring H3K9me erasure by KDM4/JMJD2 69 , which is present in A. vaga. Our finding that an amplified bdelloid-specific SETDB1-like variant prefers binding to 4mC-methylated DNA in vitro suggests that 4mC stimulates more efficient binding of SETDB1 in the nucleosomal context, linking N4CMT-mediated 4mC deposition to re-establishment of H3K9me3 that helps to preserve silent chromatin marks on TEs and other repeats.
Notably, bdelloids exhibit some of the lowest TE content among metazoans, while members of the sister class Monogononta, which lack cytosine methylation and encode a singlecopy SETDB1, show reduced ability to contain TE proliferation, which can double their genome size 70 . Earlier, we reported drastic expansion of Ago/Piwi and RdRP genes in bdelloids, which are extremely TE-poor, in contrast to the acanthocephalan Pomphorhynchus laevis (Rotifera) with 66% TE content and no expansion of Ago/Piwi 17,51 , underscoring the importance of RNA silencing pathways in TE control. Notably, the bdelloid D. carnosus, despite Ago/Piwi expansion, does not show the dearth of retrotransposons typical of other bdelloids, displaying an elevated content of LINE elements matching that of Brachionus monogononts and shifting the average bdelloid LINE content upwards 51 . Here, we find that D. carnosus lacks N4CMT and specific SETDB1 variants which may have evolved to interact with the 4mC mark, suggesting that the genome defense system in D. carnosus is missing an important layer for preventing TE expansion. The elevated LINE content in this natural knockout of the 4mC-preferring variant highlights the importance of crosstalk between genome defense layers for efficient TE control, since Adineta and Rotaria during their evolutionary history experienced a strong decrease in retrotransposon content (Fig. S3c, h, i in ref. 51 ), which coincided with the emergence of N4CMT and of 4mC-preferring SETDB1 variants.
Collectively, our findings help to unravel a fascinating evolutionary puzzle: How can a bacterial enzyme decorating DNA with non-metazoan modifications penetrate eukaryotic gene silencing systems and become preserved by natural selection for tens of millions of years? Given the importance of similar processes at the dawn of eukaryotic evolution, when MTases were recruited to create the extant epigenetic systems, the bdelloid case spans a unique time interval in the evolutionary history, when its advantages have been fully manifested and validated by natural selection, but its resemblance to bacterial counterparts has not yet been completely erased. Losses of DNA methylation have occurred multiple times throughout the eukaryotic tree of life; however, de novo recruitment of a bacterial mark into an existing epigenetic system has not been observed in more recent metazoan history. A synthetic "DNA read-write" 6mA system in cultured human cells, based on E. coli Dam MTase and bypassing chromatin states through artificial targeting, has been created 71 , however, such a "shortcut" is unlikely to persist over evolutionary time scales. In the liverwort Marchantia polymorpha, a recently duplicated 4mC methyltransferase of bacterial origin was recruited in spermiogenesis to modify over one-half of all CpG sites, however without additional N- or C-terminal domains it acts genome-wide, without recognition of specific features 72 . Our system helps to discern the selectively advantageous features in epigenetic control systems and emphasizes that the addition of a DNA epigenetic layer to the histone layer demands enhanced inter-connection of components between layers via acquisition of extra domains for efficient operation. Finally, it demonstrates that horizontal gene transfer, the role of which in eukaryotic regulatory evolution is a subject of intense debate 73,74 , can re-shape complex regulatory circuits in metazoans, thereby driving major evolutionary innovations that include epigenetic control systems.
Additional discussion can be found in Supplementary Discussion.
Methods
Rotifer cultures. A clonal culture of A. vaga, started in 1995 from a single individual, was maintained continuously in filtered spring water and fed with E. coli M28. Rotifers were grown in 150 × 20 mm untreated Petri dishes and transferred into new ones until the desired biomass was reached. The A. vaga L1 natural isolate 35 was collected in 2012, and the clonal culture was maintained in the laboratory under the same conditions. Plasmid construction. N4CMT ORFs from scaffold_23 (GSADVT00006927001, allele N4CMT_A) and scaffold_179 (GSADVT00035445001 allele N4CMT_B) (http://www.genoscope.cns.fr/adineta/cgi-bin/gbrowse/adineta/) were amplified from cDNA to eliminate introns. The first exon in the annotation is variable in different bdelloids, thus it was omitted from primer design, so that the N-terminus coincides with that used by bacterial MTases. Briefly, RNA was extracted from adult rotifers starved for 24 h, using Direct-zol ™ RNA Miniprep kit (Zymo Research), and cDNA was synthesized from 2 µg of RNA with SuperScript ® IV Reverse Transcriptase (Invitrogen) and random hexamers, following the manufacturer's protocols. N4CMT was then amplified by PCR from 5% of cDNA reaction with Q5 ® Hot Start High-Fidelity DNA Polymerase (NEB). All primers used in this study are listed in Supplementary Table 7. PCR fragments were cloned into pET29b(+) vector (Novagen) using BamHI and XhoI sites and were propagated in E.coli NEB5-alpha (NEB). Catalytically inactive mutants were obtained using Gen-Edit ™ site-directed DNA mutagenesis kit (First Biotech). To obtain substrate plasmids pUC19-m97 and pUC19-m119, the insert sequence was amplified from AvL1 genomic DNA with primers A11motif-Hind3-F and A11motif-BamH1-R (Supplementary Table 7) and OneTaq ® Hot Start DNA Polymerase (NEB). Amplicons were treated with HindIII (Anza ™ 16) and BamHI (Anza ™ 5) in 1× Anza ™ Red Buffer (Thermo Fisher Scientific) and purified through 1.5% agarose gel using Zymoclean Gel DNA Recovery kit (Zymo Research). The pUC19 vector was prepared in the same way, ligated with insert using Instant Sticky-end Ligase Master Mix (NEB), and transformed into NEB5α competent cells (NEB). Plasmid purifications were done with Zyppy Plasmid Miniprep (Zymo Research). Inserts were verified by Sanger sequencing on the ABI3730XL at the W.M. Keck Ecological and Evolutionary Genetics Facility at the Marine Biological Laboratory. Expression plasmids carrying AvMBD inserts in pET29b(+) vector were synthesized by GenScript. All DNA sequences were optimized with Gen-Smart™ service to yield soluble recombinant proteins in E. coli.
Protein expression and purification. Recombinant proteins were expressed in E. coli Rosetta 2(DE3) (Novagen) in LB medium, Miller formulation (Amresco) supplied with 50 μg/ml kanamycin (Fisher Scientific), and 34 μg/ml chloramphenicol (Acros Organic). First, cells were grown at 37°C, 200 rpm until OD = 0.4. After that, cultures were heat-shocked as follows: 10 min at 42°C, 20 min at 37°C, 30 min on ice, and 20 min at 37°C. After the final OD check, expression of recombinant proteins was induced by supplying the growth medium with IPTG (Gold Bio) to 500 μM, and the culture was grown for an additional 4 h at 32°C, 350 rpm for N4CMT versions or for an additional 3 h at 34°C, 300 rpm for AvMBD's. Bacteria were pelleted by centrifugation at 4°C, 4000g for 30 min and stored at −80°C. Induction of recombinant proteins was confirmed by sodium dodecyl sulphate-polyacrylamide gel electrophoresis (SDS-PAGE) followed by Western blotting as we described in ref. 75 . For protein purification, cellular lysates were prepared using xTractor ™ Buffer (Clontech), supplemented with lysozyme (Sigma), DNase I (Promega) or Benzonase ® Nuclease (Sigma), and Roche cOmplete ™ EDTA-free Protease Inhibitor Cocktail (Sigma), according to the manufacturers' instructions. Soluble proteins were separated from insoluble debris by centrifugation at 4°C, 4000g for 30 min. Recombinant N4CMT were purified using TALON ® Single Step Columns (Clontech), following the manufacturer's protocol. Proteins were concentrated using Pierce ™ 9 K MWCO Protein Concentrators (Thermo Scientific), and the buffer was exchanged to 50 mM phosphate buffer, 300 mM NaCl, pH 7.0, supplemented with Roche cOmplete ™ EDTA-free Protease Inhibitor Cocktail (Sigma). Protein concentrations were equalized based on concentration of the full-length His-tagged protein, as detected by Western blotting with His-tag-specific antibodies (Aviva Systems Biology OAEA00010), using Image Studio ™ Lite 5.2.5 software (LI-COR). Purified proteins were stored at 4°C for up to 2 weeks. Recombinant AvMBD's were purified on ÄKTA Pure M2 with HiTrap TALON 1 ml columns (Cytiva), concentrated with Pierce ™ 3 K MWCO Protein Concentrators PES (Thermo Scientific), supplied with EDTA, glycerol, and protease inhibitors to the final buffer composition of 40 mM sodium phosphate, pH 7.4; 240 mM NaCl; 102 mM imidazole; 20% glycerol; 4 mM EDTA; 1× cOmplete protease inhibitor cocktail; 1× Halt protease inhibitor cocktail; pH 7.4. Proteins were stored in single-use aliquots at −80°C. Proteins were quantified using Micro BCA ™ Protein Assay Kit (Thermo Scientific), and their purities were verified by SDS-PAGE in 15% resolving gel followed by staining with InstantBlue Protein Stain (Expedeon) and Western blotting with S-tag (Sigma-Aldrich 71549-3) and His-tag (Aviva Systems Biology OAEA00010) specific antibodies, both at 1:5000 dilutions, as we described in ref. 75 .
DNA substrate preparation for methylation assays. The A. vaga cultures were maintained as above but fed with dam−/dcm− E. coli (C2925, NEB) strain instead of M28 for a month. Genomic DNA was extracted from adult rotifers starved for 48 h, following the standard phenol-chloroform extraction protocol 76 . To obtain control DNA from different E. coli strains (Supplementary Table 3), bacteria were grown overnight in LB medium Miller formulation (Amresco) at 37°C and 200 rpm, and total DNA was extracted using UltraClean ® Microbial DNA Isolation Kit (MoBio Labs).
For N4CMT in vivo activity assays, plasmids carrying N4CMT inserts were transformed into Rosetta 2(DE3) strain. Bacteria were grown as above, pelleted and stored at −80°C until expression of recombinant proteins was confirmed by Western blot hybridization with His-tag-specific antibodies. After that, bacterial pellets were incubated in lysis buffer (10 mM Tris, pH 8.0, 100 mM NaCl, 5 mM EDTA, 120 µg/ml Proteinase K (ThermoFischer), 0.6% SDS) at 53°C overnight. Total DNA was purified using the standard phenol-chloroform extraction protocol 76 , including treatment with RNaseONE (Promega). DNA quantity and quality were inspected by agarose gel electrophoresis and NanoDrop 2.0 measurements. Cleavage of gDNA by McrBC (NEB) was performed overnight at 37°C as recommended by the manufacturer, followed by separation in 0.8% TAEagarose gel electrophoresis. Plasmids (pUC19, pBlueScript SK+, etc.) for methyltransferase assays were transformed into methyl-free C2925 competent cells (NEB) and purified using Zyppy Plasmid Miniprep (Zymo Research). To obtain 4mC-positive control for immunoassays, pUC19 was methylated with M.BamHI MTase (NEB). To obtain a positive control for 6mA, pUC19 was purified from NEB5α (dam+) E. coli strain. Oligonucleotides were ordered from Eurofins Genomics and annealed in 1× annealing buffer (10 mM Tris, pH 7.5, 50 mM NaCl, 1 mM EDTA) as follows: the mix was incubated at 95°C for 3 min and allowed to cool down to RT for 1 h. Other dsDNA substrates were obtained by PCR and purified using Monarch PCR clean-up kit (NEB) or Zymoclean Gel DNA Recovery kit (Zymo Research).
In vitro methyltransferase activity assays. Reactions were carried in 1× M.BamHI Methyltransferase Reaction Buffer (NEB) supplemented with 80 µM S-adenosyl-L-methionine (SAM) provided with the buffer. Optimal results were obtained with 500 µg/ml as a final concentration of N4CMT recombinant proteins. Reactions were initially incubated at 25°C for 4 h, and incubation was continued for another 16 h after supplementing with additional 80 µM SAM.
DNA dot blot immunoassays. Samples were spotted on BioTrace ™ NT Nitrocellulose Transfer Membrane (Pall Corporation), air-dried and UV-cross-linked with 120,000 μJ/cm 2 exposure using Spectrolinker ™ XL-1500 UV crosslinker (Spectronics Corporation). The cross-linked membrane was blocked in 3% non-fat milk in TBST (containing 0.05% v/v Tween) and incubated with 1:40,000 anti-N4methyl-C antibody or with 1:60,000 anti-N6-methyl-A antibody at 25°C for 1 h. Rabbit primary antibodies raised against 4mC-or 6mA-modified DNA 77 were a kind gift from Dr. Iain Murray (NEB), and were re-checked for the absence of cross-reactivity, as well as for lack of reactivity with 5mC on human DNA. The membrane was washed three times with TBST, incubated with 1:10,000 goat antirabbit horseradish peroxidase (HRP) antibody (Sigma A0545) at room temperature for 1 h, washed three times with 1× TBST, and developed using SuperSignal ™ West Dura Extended Duration Substrate (Thermo Fisher Scientific). Chemiluminescence was detected using the Amersham Imager 600 (GE Healthcare).
Electrophoretic mobility shift assays. sAvL1-451 DNA were 5′-end-labeled with [γ−32P]dATP (PerkinElmer) using T4 polynucleotide kinase (NEB) and purified from excess of radioactive nucleotides using Oligo Clean & Concentrator kit (Zymo Research) following the manufacturer's protocols. Binding reactions were set up in 10 µl total volume in a buffer with final concentrations 100 mM KCl, 10 mM Tris, pH7.4, 0.1 mM EDTA, 0.1 mM DTT, supplied with 500 ng LightShift ™ Poly (dI-dC) (Thermo Scientific). Addition of 2.5 µl of AvMBD proteins provided 5% glycerol per reaction. Proteins were first pre-incubated with non-radioactive DNA for 15 min at RT. Then, 32 P-labeled DNA was added to a final concentration of 0.05 nM, and reactions were incubated for additional 30 min at RT. After supplying with 6× EMSA gel-loading solution (Thermo Scientific), samples were loaded onto 6% DNA Retardation gels. Samples were run at 90 V in 0.5× TBE buffer (44.5 mM Tris-HCl, pH 8.3, 44.5 mM boric acid and 1 mM EDTA) at 4°C for 90 min. Gels were dried using Model 583 Gel Dryer (BioRad), exposed with phosphorimaging plate (Fujifilm), scanned on Typhoon FLA 7000, and analyzed using Image Quant TL v8.1 software.
DNA extraction for DIP-seq. For genomic DNA extraction, animals were starved for 48 h and treated with ampicillin and tetracycline antibiotics at final concentrations of 10 mg/ml and 0.5 mg/ml, respectively, for 24 h, then harvested as described in ref. 17 . Total DNA was extracted with the DNeasy Tissue kit (Qiagen); eluates were checked by agarose gel electrophoresis and final concentrations were measured by NanoDrop. The isolated genomic DNA was diluted to ~250 ng/µl using TE buffer and sonicated on the 130 µl scale (Covaris microtubes) to 200-400 bp using a Covaris S220 focused ultrasonicator (10% duty cycle, 175 W peak, 200 cycles, 180 s, 6°C). After measuring concentration and size distribution with a Bioanalyzer High Sensitivity DNA chip (Agilent), 100 ng of fragmented DNA was used for library construction with the NuGen Ovation Ultra-Low System v2.
DIP-seq (MeDIP-seq). After adaptor ligation and purification steps (NuGen Ovation Ultra-Low System v2 protocol), DNA fragments were combined with 0.5 µg of anti-4mC or anti-6mA antibodies (see above) in 500 µl of 1× IP buffer and incubated at 4°C for 6 h. In parallel, 40 µl of Protein A magnetic beads were prepared as in ref. 78 . Protein A beads were added to the DNA-antibody mixture and incubated at 4°C overnight with rotation. Beads were washed four times with 1× IP buffer on a magnetic rack. Proteinase K (20 µl of a 20 mg/ml solution) was used to release the methylated DNA with 3 h of incubation at 50°C. The final eluate was purified using 2× phenol-chloroform-isoamyl alcohol (25:24:1) extraction and ethanol precipitation. DNA was resuspended in 35 µl H2O, followed by library amplification and bead purification (NuGen RNAClean XP magnetic beads). Quality control and concentration measurement were performed using a Bioanalyzer DNA 1000 chip (Agilent) and the Qubit dsDNA HS Assay kit (Thermo). Libraries were sequenced using the Illumina HiSeq 2500 platform (50-bp SR) at the Brown University Sequencing Core Facility. Base-calling was performed with the standard Illumina pipeline (Casava 1.8.2). Illumina adaptors were trimmed with cutadapt v1.9.2 79 , and sequences with low quality scores (<Q20) and/or <16 nucleotides in length were removed (FASTX Toolkit v0.0.13). Reads were aligned to Av-ref 17 and the AvL1 assembly (see below) using Bowtie v1.1.0 80 , with parameters permitting less than one mismatch in the first 30 bases. MACS v1.3 81 was used to locate enriched regions for 4mC and 6mA in both genomes, using the nomodel and nolambda parameters.
Genome assembly. The initial A. vaga L1 draft assembly was generated with high-quality paired-end Illumina MiSeq reads using the SPAdes assembler to yield an N50 of 18.125 kb 35 . However, the published AvL1 assembly filtered out any sequences without blastn matches to Av-ref, which may include recent horizontal transfers and strain-specific TEs. To improve the initial assembly, DNA was extracted from rotifer eggs as in ref. 42 , and a 20-kb library was constructed using BluePippin selection to sequence 15 SMRT cells on a PacBio RS II sequencer (Pacific Biosciences) at the Johns Hopkins University Deep Sequencing and Microarray Core facility with P6-C4 chemistry (accession number PRJNA558051). We used PBJelly from PBSuite 15.8.24 82 with PacBio filtered subreads to improve the initial AvL1 assembly. A total of 890,504 PacBio subreads with an N50 read length of 16,294 bp was used after SFilter (Pacific Biosciences) and spike-in control removal. The improved hybrid assembly was screened for contaminants using bacterial single-copy genes, GC content, k-mer frequencies (k = 4), and DNA coverage values (both Illumina and PacBio), as in ref. 83 . Assembled contaminant contigs, mostly of bacterial origin, were filtered out to yield a final assembly totaling 217.1 Mb in 9856 contigs (Supplementary Table 5), which is very close to the 218-Mb Av-ref assembly 17 and improves on the Illumina-only assembly by 20 Mb, increasing the N50 from 22.1 to 87.4 kb. We also identified 12 chimeric contigs, listed in Supplementary Data 4, which were mostly eukaryotic with an attached small stretch of bacterial DNA displaying high methylation density. The AvL1 assembly used in this work has been deposited at DDBJ/ENA/GenBank under the accession JAGENE000000000. The version described in this paper is version JAGENE010000000. Its accession number is GCA_021403095.1 in the NCBI Assembly database. Although a chromosome-scale A. vaga assembly is expected to become available soon 84 , the current bdelloid assemblies display adequate levels of contiguity to examine nearly all genomic regions.
PacBio modification analysis. We examined the genome-wide distribution of modified bases in SMRT-seq data 85 with SMRT Analysis Software 2.3.0. Raw data from 15 AvL1 SMRT cells were filtered by SFilter (Pacific Biosciences) to remove reads containing adapters, short reads, and low-quality reads, with cutoffs of read quality ≤ 0.75, read length ≤ 50 nt, and subread length ≤ 50 nt. Filtered reads were aligned to the AvL1 assembly using the RS_Modification_Detection.1 protocol (Pacific Biosciences). Briefly, the cleaned reads were aligned to the AvL1 curated genome assembly using blasr v1.3.1 86 . The polymerase kinetics information was processed and reported as the IPD ratio, with its fraction (the methylated portion of mapped reads) at each site. The 4mC and 6mA base modifications were identified, and the final report was extracted as csv and gff files for subsequent processing. Filtering was performed by selecting only 4mC and 6mA marks with at least 20× coverage and mQv ≥ 22 (Supplementary Table 6); any sites with coverage <10× were removed.
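The per-site filtering step described above can be reproduced from the csv report; the following is a minimal R sketch, assuming hypothetical column names (base, coverage, mQv) that may differ between SMRT Analysis versions.

# Minimal sketch of the coverage/mQv filtering of modified-base calls (assumed column names).
mods <- read.csv("modifications.csv", stringsAsFactors = FALSE)

# Keep only 4mC and 6mA calls.
mods <- subset(mods, base %in% c("m4C", "m6A"))

# Remove poorly covered sites, then require >= 20x coverage and modification QV >= 22.
mods <- subset(mods, coverage >= 10)
filtered <- subset(mods, coverage >= 20 & mQv >= 22)

write.csv(filtered, "modifications_filtered.csv", row.names = FALSE)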
Although SMRT analysis may sometimes erroneously identify 5mC as 4mC, as occurred for the fig genome 87 , which has a full complement of plant 5mC-MTases but no N4C-MTases, we are confident that multiple orthologous methods applied to A.vaga, which lacks 5mC-MTases but has the N4C-MTase, validate our SMRTseq cytosine modification calls as 4mC. Methylation fraction values were converted into bigwig file format and plotted with deepTools2 88 . Methylation fractions for DIP-seq peak summits and transposons were represented per annotation, with the y-axis as "Mean normalized fraction". Additional analyses were done with custom scripts for plotting results with R. We separated 4mC and 6mA according to their methylation levels: low-fraction sites (0.1-0.5), moderately methylated (0.5-0.8), and highly methylated (0.8-1). The upstream and downstream 10-bp sequences from 4mC and 6mA modification sites were extracted for motif identification in each group by MEME-ChIP v5.4.1 89 . The nucleotide adjacent to the methylated sites was pulled out for counting the proportion of doublets.
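Continuing from the sketch above, the grouping of sites by methylation level could be done as follows; the fraction column name (frac) is an assumption.

# Assign each modified site to a methylation-level group (assumed column 'frac').
filtered$level <- cut(filtered$frac,
                      breaks = c(0.1, 0.5, 0.8, 1.0),
                      labels = c("low", "moderate", "high"),
                      include.lowest = TRUE)
table(filtered$base, filtered$level)  # counts of 4mC/6mA sites per group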
Dot-blot immunoassays for histone marks. We first assayed, by dot-blot analysis, the reactivity of A. vaga histone methylation marks with Premium ChIP-seq grade affinity-purified rabbit polyclonal antibodies H3K4me3, H3K9me3 and H3K27me3, raised against synthetic peptides with the corresponding trimethylated lysines (Diagenode C15410003, C15410056 and C15410195, respectively). These antibodies display reactivity with a wide range of species including vertebrates, Drosophila, C. elegans and plants, and have been tested by ChIP-seq, IF, Western blotting, and ELISA. The H3 N-terminal residues 1-31 display 100% identity between A. vaga and humans; although formally cross-reactivity of K9/27 cannot be excluded for A. vaga, none was observed in human peptide arrays spanning the identical aa sequence (Diagenode). Protein extracts from Av-ref and AvL1, resuspended in 0.5 volumes of extraction buffer (10 mM Hepes, 5 mM MgCl 2 , 2 mM DTT, 10% glycerol and cOmplete protease inhibitor tablets (Roche)), were spotted on BioTrace™ NT Nitrocellulose Transfer Membrane (Pall Corporation), air-dried, blocked in 5% BSA in TBST (containing 0.05% v/v Tween) for 1 h at RT, and incubated with 1:10,000 anti-H3K4me3, H3K9me3 or H3K27me3 antibodies at RT for 1 h. The membrane was washed three times with TBST, incubated with 1:10,000 goat anti-rabbit HRP antibody (Sigma A0545) at room temperature for 1 h, washed three times with 1× TBST, then once with TBS, and developed using SuperSignal™ West Dura Extended Duration Substrate (Thermo Fisher Scientific). Chemiluminescence was detected using the Amersham Imager 600 chemiluminescence imager (GE Healthcare).
ChIP-seq. Chromatin immunoprecipitation (ChIP) was performed based on the C. elegans protocol 90 with minor modifications. Briefly, rotifers were starved for 48 h before collection, and live animal pellets were washed with PBS, followed by another wash with protease inhibitor (cOmplete Roche tablet). A 1-ml pipette tip was used to drip the mix into a porcelain mortar containing liquid nitrogen, and the frozen rotifer "popcorn" was ground to fine powder with a pestle. Nuclear proteins were cross-linked to DNA by adding 1.1% formaldehyde (Thermo) in PBS + 1× protease/phosphatase inhibitors (Halt™ Protease & Phosphatase Inhibitor Cocktail, Thermo) for 10 min at room temperature on a rocking platform. Cross-linking was stopped by adding glycine to a final concentration of 0.125 M and incubating for 5 min at RT. The medium was removed, and the cells were washed twice with ice-cold PBS containing 1 mM PMSF. The cells were then collected in FA lysis buffer (FA buffer + 0.1% sarkosyl + protease/phosphatase inhibitors); FA buffer: 50 mM HEPES/KOH pH 7.5, 1 mM EDTA, 1% Triton™ X-100, 0.1% sodium deoxycholate, 150 mM NaCl. Subsequently, the chromatin was isolated, sonicated (Covaris S220: 2% duty cycle, 105 W peak, 200 cycles, 360 s, 6°C), and immunoprecipitated with 1 µg anti-H3K4me3 antibody, anti-H3K27me3 antibody, or anti-H3K9me3 antibody (all from Diagenode as above) or no antibody (input control). After reversal of cross-links (overnight at 65°C), DNA was purified using 2× phenol-chloroform-isoamyl alcohol (25:24:1) extraction and ethanol precipitation. DNA was resuspended in 35 µl 10 mM Tris-Cl, pH 8.5. The ChIP DNA and input DNA were used to construct ChIP-seq libraries using the NEBNext Ultra II DNA Library Prep Kit (NEB) following the manufacturer's procedure. Libraries were sequenced on the Illumina NextSeq 500 platform for 75 bp single-end HT at the W.M. Keck Sequencing Facility at the MBL. After demultiplexing and adapter trimming (bcl2fastq software, Illumina), raw reads were cleaned up to obtain high-quality reads (see parameters in IP-seq). Clean reads were mapped to the Av-ref and AvL1 assemblies using bowtie2 v2.2.5 91 with default parameters. Genomic regions associated with histone modifications were identified using Model-based Analysis of ChIP-Seq (MACS2 v2.1.0) 81 using default parameters.
RNA-seq. For the Av-ref transcriptome, reads were aligned to the Av-ref assembly with default parameters and -max-intron-length 100, and aligned reads were counted by genomic feature with HTSeq-count v0.6.1 93 , using default parameters. For the AvL1 transcriptome, RNA extraction was performed following ref. 17 for the fully hydrated A. vaga L1 cultures containing animals at all life stages. Rotifers were collected by centrifugation at 4000g. After removal of the supernatant (spring water), total RNA was extracted with Trizol (Invitrogen) followed by ethanol precipitation. After DNaseI treatment (DNA-free, Ambion), 1 µg of total RNA was shipped for QC, library preparation (eukaryotic mRNA protocol), and Illumina sequencing (HiSeq X PE150 bp) to Novogene Co., Ltd. Raw reads (~3.3 Gb) from two lanes as technical replicates were processed (see parameters in IP-seq), and properly paired reads were aligned to the AvL1 assembly using TopHat v2.1.1 94 , using default parameters and -max-intron-length 100. Mapped reads were counted within each feature with HTSeq-count 93 using default parameters, and the counts were used to calculate RPKMs of annotated genes.
Prediction of protein-coding genes. BRAKER v2.1.2 95 , a combination of GeneMark-ET 96 and AUGUSTUS 97 , was used to predict protein-coding genes in the AvL1 genome using aligned RNA-seq data. TopHat alignments were used to generate UTR training examples for AUGUSTUS to train UTR parameters and predict genes. This procedure was done with soft masking enabled, after masking the genome with RepeatMasker v4.0.7 (see Repeat annotation). Total predictions comprised 74,569 gene models originating from 74,233 loci. Initial predictions were filtered for TE genes using the AvL1 TE annotations (RepeatMasker) and BLAST homology searches against known TE proteins. BLAST searches were performed for the 74,569 gene predictions using blastp (BLAST+) and blastx (DIAMOND) against the nr and uniref90 databases, respectively. BLAST descriptions with TE-related terms ("transposon", "transposable", "integrase", "reverse transcriptase", "pol", "gag") were considered to indicate TE-associated proteins. A total of 977 genes were classified as AvL1 TE-related. A further quality check of gene annotations filtered out incomplete genes. Annotations at contig boundaries were removed (n = 5205), along with CDS that carried a premature stop codon (n = 282) or lacked an appropriate termination codon at the CDS end (n = 2748, most of which fall on contig boundaries). A final filter was applied to remove annotations with no BLAST homology (to neither nr nor UniProt) and with a CDS sequence under 300 bp. A final gene set of 65,934 annotations was used for downstream analysis.
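The keyword-based removal of TE-associated predictions can be sketched in R as follows; the data frame and column names (preds, blast_description, cds_length) are hypothetical placeholders for the BLAST summary table, and the word-boundary regexes for "pol" and "gag" are an assumption to avoid matching longer words.

# Flag gene predictions whose BLAST hit description contains TE-related terms.
te_terms <- c("transposon", "transposable", "integrase",
              "reverse transcriptase", "\\bpol\\b", "\\bgag\\b")
te_regex <- paste(te_terms, collapse = "|")

preds$is_te <- grepl(te_regex, preds$blast_description, ignore.case = TRUE)

# Keep non-TE genes with a BLAST hit and CDS length >= 300 bp (other filters omitted here).
final_set <- subset(preds, !is_te & blast_description != "" & cds_length >= 300)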
Repeat annotation. We used the REPET package with default settings for initial AvL1 de novo TE identification and annotation 98 . The automated library of TE families was subjected to extensive manual curation, as was previously done for Av-ref 17 , and used as a database for searching and annotating TE copies in the AvL1 assembly with RepeatMasker 99 . We used RMBlast (National Center for Biotechnology Information BLAST modified for use with RepeatMasker) as a search engine. Initial RepeatMasker output was filtered to remove copies covering less than 5% of the reference TE length and converted into gff3 format for subsequent analysis. The TE annotation was intersected with gene models to eliminate any duplication events spanning both databases and to obtain a list of TE-encoded genes for further analysis. For tandem repeat (TR) identification, the AvL1 assembly was uploaded to the Tandem Repeats Database 100 . We generated an initial set of TRs by analyzing the sequence of each contig using Tandem Repeats Finder v4.09 101 with default parameters (match = 2, mismatch = 7, indels = 7, minimal alignment score = 50). Further searches with modified alignment scores (depending on the size of the repeat unit) were performed, and manual correction was carried out when necessary.
Small RNA analysis. A. vaga sRNA-seq data (SRA accession no. SRP070765) for two wild-type small RNA replicates were mapped to the Av-ref genome as described in ref. 50 . Heatmaps of sRNA-seq data for genes, TEs, and DIP-seq and ChIP-seq peaks were generated with deepTools2 88 for each annotation. Reads were normalized to 1× sequencing depth (RPGC, reads per genomic content) for the heatmaps.
Methylation data processing and visualization. For generation of heat maps and profile plots, the deepTools2 88 computeMatrix, bamCoverage, bamCompare, plotHeatmap and plotProfile scripts were used with specific parameters: RPGC normalization, bin size 10, effective genome size (Av 213837663 and AvL1 217117546), extendReads (IP-seq 50, ChIP-seq 75, sRNA-seq 50), interpolationMethod nearest. Methylation profiles for DIP-seq/ChIP-seq were represented per annotation as mean normalized tag signal, with the y-axis labeled "IP/ChIP occupancy". When using input DNA for comparison, the profile is represented as the mean normalized log2 ratio, with the y-axis labeled "log2 ratio". Methylation profiles for DIP-seq were represented per annotation, with the y-axis labeled "IP occupancy". The annotatePeaks function from HOMER Tools v4.11 102 was used to obtain methylation profiles of selected regions of interest, using different window and bin sizes (parameters given in figure legends). Overlaps between different annotated features (DIP/ChIP-seq peaks, base modifications) were estimated with bedtools v2.27.1 103 , either as direct intersections (bedtools intersect) or within a specified window size (bedtools window). Genome-wide 4mC/6mA visual representations were generated using Circos v0.69-6 104 as follows: Av-ref reads were plotted from two genomic Illumina libraries (SRP020364) with different insert sizes (450 and 862 bp); AvL1 reads were plotted from Illumina (SRR8134454) and PacBio (SRX6639068); RNA-seq reads were plotted from SRP228822 (Av-ref and AvL1); and Av-ref small RNA reads from SRP070765. Additional visual representations for selected contigs were obtained with the IGV viewer 105 .
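The overlap steps above used bedtools; an equivalent sketch with the Bioconductor GenomicRanges/rtracklayer packages is shown below, where peaks.bed and te_annotation.gff3 are hypothetical file names and the 1-kb window mirrors the "bedtools window" use case rather than a parameter stated in the text.

# Equivalent of 'bedtools intersect' / 'bedtools window' using GenomicRanges.
library(rtracklayer)
library(GenomicRanges)

peaks <- import("peaks.bed")            # DIP/ChIP-seq peaks
tes   <- import("te_annotation.gff3")   # TE annotation

hits_direct <- findOverlaps(peaks, tes)                 # direct intersections
hits_window <- findOverlaps(peaks, tes, maxgap = 1000)  # features within a 1-kb window

length(unique(queryHits(hits_direct)))  # number of peaks overlapping at least one TE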
Collinearity analysis. Syntenic regions within and between genomes were identified using MCScanX v0.8 106 after a blastp all-versus-all search (e-value = 1e−10, maximum number of target sequences = 5) of the protein annotations from both genomes (Av-ref and AvL1). We searched for collinear block regions with at least 3 homologous genes and 20 maximum gaps allowed. The Ks and Ka (synonymous and nonsynonymous substitution, respectively) values between pairs of collinear genes were calculated with the script add_kaks_to_MCScanX.pl (https://zenodo.org/badge/latestdoi/92963110). We also searched for collinearity breaks between adjacent homologous blocks, defined as regions where homologous blocks could not be aligned along scaffolds without some rearrangements.
Phylogenetic analyses. MTase homologs in bdelloids were identified by tblastn searches of GenBank WGS databases at NCBI, checked for the presence of metazoan genes in the vicinity, translated with validation of exon-intron structure, and used in blastp searches of REBASE (http://rebase.neb.com/rebase/) 1 to obtain MTases with known recognition sequences. Multiple sequence alignments were performed with MUSCLE v.3.8.31 107 and manually adjusted when necessary. Amino acid sequences were clustered by neighbor-joining, as MTases are not amenable to conventional phylogenetic analysis due to hypervariability of the target recognition domain, and the tree was visualized in MEGA 108 . MBD-containing bdelloid proteins were identified by profile HMM search 109 with the MBD query (PF01429). Av-ref SETDB1 homologs from the Genoscope annotation were manually re-annotated to improve quality, and full-length proteins were used as queries in blastp searches of the refseq_protein database at NCBI to obtain additional orthologs from 10 bdelloid species and representative protostome taxa. Maximum likelihood phylogenetic analysis was done with IQ-TREE v1.6.11 110 using best-fitting model selection and 1000 ultrafast bootstrap replicates. Ago/Piwi counts in AvL1 were done as in ref. 51 .
|
v3-fos-license
|
2024-01-09T16:03:36.750Z
|
2024-01-01T00:00:00.000
|
266867390
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://assets.cureus.com/uploads/original_article/pdf/207161/20240107-22911-2kvkrj.pdf",
"pdf_hash": "708915218dc9b38db894a032db4fb2cfb3e99651",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46743",
"s2fieldsofstudy": [
"Psychology",
"Medicine"
],
"sha1": "f089194ab86f4002014e86bc244db8e5bf44e5e7",
"year": 2024
}
|
pes2o/s2orc
|
Investigating the Cognitive Style of Patients With Substance Use Disorder: A Cross-Sectional Study
Background The causal attributions we make about the events in our lives reflect our cognitive style. The use of substances can be precipitated by stressful life events, and substance use can result from maladaptive coping intended to alleviate negative affect in stressful situations. Individuals with substance dependence may therefore interpret situations differently. The inferences they make about the causes of these stressful events can provide an understanding of their cognition and can further help in therapeutic interventions. Purpose The present study aims to assess the cognitive style of young patients with substance use disorder. Methods A cross-sectional research design was used, and a total of 50 participants were chosen through purposive sampling from the in-patient departments of psychiatric hospitals and de-addiction centers. The Alcohol, Smoking, and Substance Involvement Screening Test (ASSIST) was used to assess the specific substances used by the patients, and the Cognitive Style Questionnaire-Short Form (CSQ-SF) was used to assess the negative cognitive style of the patients. Results Results revealed a more negative cognitive style among young patients with dual substance use than among patients with multiple substance use, indicating that patients with substance use disorder tend to attribute stressful events to internal (because of self), global (applicable to all domains of life), and stable (consistent) causes, as well as to infer negative consequences (leading to other bad things) and self-worth implications (something wrong with the self).
Introduction
Cognitive style is a person's habitual, prevalent, or preferred way of thinking [1]. It is an individual's approach applied while undergoing any cognitive task. It could be a stable indicator of the way of perceiving, interpreting information, and responding to the environment [2]. Cognitive styles are assumed to influence people's values, attitudes, and social interactions and are considered a personality feature that reflects both nature and nurture effects. Cognitive styles are the "lenses" through which individuals usually process information and interpret their reality, and they may determine the cognitive vulnerability or cognitive risk of an individual to develop psychopathology over time [3]. Cognitive styles can prospectively predict the inferences one makes about the stressful events in one's life [4]. As compared to low-risk individuals, high-risk individuals are likely to draw negative inferences about the causes, consequences, and self-worth implications of an event [5].
The consequences of these stressful life events may initiate or maintain substance use disorder because the use of substances is frequently employed as a coping method, despite its negative impact on one's ability to fulfill one's roles and responsibilities [6]. Substance use refers to the pattern of harmful or hazardous use of any psychoactive substance or drug, including alcohol and illicit drugs [7]. The use of psychoactive substances involves acute intoxication, harmful use, dependence, withdrawal state, psychotic condition, and amnesic syndrome, as given by the International Classification of Diseases-10 [8]. A study by Arora et al. (2016) revealed that 91% of the medical students who engaged in substance abuse were aware of its ill effects, and the most common reasons for substance use were psychological stress and occasional celebrations [3].
Continuous encounters with stressful events can make an individual look at things through a colored lens and give rise to negative attributions about the occurrence of events, leaving the individual stuck in a loop of disturbed cognitions. Emotional and cognitive styles have been found to be predictors of stress responses over time. Reduced use of cognitive reappraisal, combined with stress, predicted an increase in alcohol consumption [9]. Attributional style also differs significantly between relapsers and non-relapsers after drug de-addiction, wherein relapsers scored higher on the internality domain than non-relapsers [10].
A negative cognitive style is one such cognitive attribute; it reflects a way of thinking about stressful events that increases an individual's risk for mental disorders, such as depression, after a stressful event [11]. A case-control study by Shakeri et al. (2021) [12] on patients with opioid use disorder indicated a higher prevalence of cognitive avoidance as a coping mechanism than in the control group, and Burnett et al. (2014) [5] indicated that males with external attributional styles engage in higher amounts of alcohol and drug use. People who draw negative conclusions from stressful events may attribute these events to stable and global causes; they see things as having widespread negative consequences and feel inadequate and hopeless about themselves, and consequently are more susceptible to depression [13].
Negative cognitive style has been associated with greater depressive symptoms in undergraduate students [14]. This increased susceptibility to depression, in turn, leads individuals to self-medicate and mask depressed feelings by engaging in substance abuse, as it is the most prevalent form of maladaptive coping. Many individuals use substances to deal with negative affective states arising from the severity of their mental health problems. This suggests that most people with substance use disorder are vulnerable to developing other psychiatric disorders, or vice versa. Prior literature on alcohol and tobacco use among adolescents indicates that substance use is associated with more physical and psychological symptoms, worse relationships with teachers and peers, less family support, and lower future expectations [6]. In a similar vein, a study conducted by Bravo et al. (2020) focused on the relationship between negative affect and alcohol and marijuana use outcomes among dual users [4]. It showed that stress was indirectly related to alcohol and marijuana use, whereas depressive and anxiety symptoms were indirectly related to alcohol use only. All three negative affect symptoms were also indirectly related to negative consequences. Therefore, substance use is considered a problem of altered cognition, not only from a neurological but also from a psychological perspective [15]. It has been indicated that patterns of alcohol use have a significant relationship with impulsivity and locus of control [16]. Relapses were more common in patients with a high external locus of control and impulsivity. To bridge the existing gap in the literature on cognitive style, the current study was undertaken. The present study aims to assess the cognitive style of patients with substance use disorder.
Objectives
To identify specific substances used by the patients with substance use disorder and to assess the cognitive style of the patients with dual substance use and multiple substance use.
Methodology
The study used a cross-sectional design. The study population comprised individuals diagnosed with mental and behavioral disorders due to the use of psychoactive substances (according to the International Classification of Diseases (ICD)-10). Using the purposive sampling method, 50 patients were selected from the in-patient departments of psychiatric hospitals and de-addiction centers in Lucknow, Uttar Pradesh. Consenting and cooperative patients in the age range of 18-45 years, with a minimum educational qualification of 8th standard, who had been diagnosed with mental and behavioral disorders due to the use of psychoactive substances (according to ICD-10) by the treating psychiatrist/clinical psychologist, were included in the study. Non-cooperative patients and those with a history of psychotic episodes or any other psychiatric or medical comorbidities were excluded. Patients in an intoxicated state or undergoing a withdrawal period were also excluded.
Data Collection
After obtaining ethical clearance from the Departmental Research and Ethics Committee of the Department of Clinical Psychology, Amity University Lucknow, data collection was started. The semi-structured sociodemographic and clinical data sheet was designed by the researcher, particularly for the current research. The data sheet included information about sociodemographic and clinical details such as age, gender, place of residence, education, occupation, marital status, annual income, type of substances used, duration of substance use, reason for starting substance use, and presence of any medical or psychiatric illnesses. In order to identify the specific substances used by the patients, the Alcohol, Smoking, and Substance Involvement Screening Test (ASSIST) by the World Health Organization, 2010, was administered [17]. It is an eight-item measure of lifetime and current (past three months) use of 10 substances, namely tobacco products, alcohol, cannabis, cocaine, amphetamine-type stimulants (ATS), sedatives and sleeping pills (benzodiazepines), hallucinogens, inhalants, opioids, and other drugs. The ASSIST assigns a risk score to each substance. Each substance's score falls into one of three risk categories: low, moderate, or high, determining an appropriate intervention for that degree of use, i.e., no intervention, brief intervention, or intensive treatment, respectively. It is culture-neutral, and the internal consistency reliability of the test is 0.70 to 0.95.
To assess the cognitive style of patients, the Cognitive Style Questionnaire-Short Form (CSQ-SF) by Meins et al. (2012) was used [18]. It consists of eight hypothetical scenarios in the academic, achievement, employment, and interpersonal domains. These eight questions have nine different response items, which are further scored on the five dimensions of cognitive style, i.e., the Internality, Globality, Stability, Negative Consequences, and Self-Worth Implications sub-scales. The participants rate the extent to which the cause of the event was internal (caused by something about the self or something else), stable (the cause will make the same event happen in the future), and global (causing problems in other parts of life), as well as the consequences of the event (leading to other bad things in life) and its self-worth implications (inferring that something is wrong with the self because of the event). The scale has an internal reliability of 0.85 and high face and construct validity.
Statistical Analysis
Data were collected from 50 male patients following the inclusion criteria, using the semi-structured sociodemographic and clinical data sheet, the ASSIST, and the CSQ-SF. The dataset collected through the questionnaires was analyzed using descriptive and inferential statistics, including frequency, mean, and t-tests, with the Statistical Package for the Social Sciences (SPSS), version 20.0 (IBM Corp., Armonk, NY). The results of the statistical analyses are elaborated further using adequate descriptions.
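The analyses were run in SPSS; purely for illustration, an equivalent independent-samples t-test comparing total CSQ-SF scores between dual and multiple substance users could be sketched in R as follows, where the data frame and column names (patients, user_type, csq_total) are hypothetical.

# Hypothetical data frame: one row per patient, with user_type ("dual"/"multiple")
# and csq_total (total CSQ-SF score).
aggregate(csq_total ~ user_type, data = patients, FUN = mean)   # group means
t.test(csq_total ~ user_type, data = patients, var.equal = TRUE)  # independent-samples t-test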
Results
The results were evaluated using descriptive statistics (frequency distribution) and t-tests to determine whether there was a significant difference in the cognitive style of patients with substance use disorder. Table 1 displays the frequency and percentage of substances used and the type of user (dual substance or multiple substances) among patients with substance use disorder. The results showed that 50% (N=25) of the patients used a combination of two substances (dual substance users) and 50% (N=25) used a combination of more than two substances (multiple substance users).
Discussion
The present study aimed at exploring the cognitive style of patients with substance use disorder. Substance use was assessed using the Alcohol, Smoking, and Substance Involvement Screening Test (ASSIST), and cognitive style was assessed using the Cognitive Style Questionnaire-Short Form (CSQ-SF). Appropriate statistics were then used for data analysis.
In the present study, the age of the patients ranged from 18 to 45 years, with a mean age of 31.82 years. All the participants in the study were male. However, in the last decade, there has been a shift away from viewing substance use and abuse as an exclusively adult male phenomenon, with increasing recognition in other populations [19].
With reference to the specific substances used by the patients, 50% of patients used alcohol and tobacco only and 50% used multiple substances, i.e., three or more. It was found that 20% of patients used tobacco, alcohol, and cannabis; 8% used tobacco, alcohol, and opioids; 6% used tobacco, cannabis, and opioids; 12% used tobacco, alcohol, cannabis, and opioids; 2% used tobacco, alcohol, cannabis, and inhalants; and the remaining 2% used tobacco, alcohol, cannabis, inhalants, and opioids. Use of more than one substance is on the rise and is often underreported, as use tends to be attributed to the primary substance producing the 'high' experienced by the individual and to the withdrawal symptoms experienced on abstinence.
The present study reveals that half of the patients (50%) with substance use disorder had a more negative cognitive style than the others. Having a negative cognitive style makes an individual more cognitively vulnerable, thus leading to a higher probability of developing depression-like features. In the domains of cognitive style, the majority of the patients (88%) scored more than average on internality, 48% scored more than average on globality, 40% scored more than average on stability, and 46% scored more than average on negative consequences and on self-worth implications. This reveals that most of the patients tended to attribute negative life events to more internal, global, and stable factors, which may be due to perceived insufficiency in the self and may lead to negative consequences in the future, affecting all areas of life. Liu et al. (2013) suggested in their study that negative cognitive styles may be crucial to address in clinical settings, specifically in patients with a history of adverse childhood experiences, in order to reduce the incidence of negative life events and hence the chance of depression recurrence [20]. The opposite effect was discovered in a study of explanatory style among community-dwelling older adults by Isaacowitz et al. (2003), wherein adults with an optimistic explanatory style (those who made external, temporary, and specific explanations for negative events) had the most depressive symptoms at follow-up [21].
However, the results show no significant difference in the cognitive style of patients with dual substance use and multiple substance use. Therefore, the hypothesis that there would be a significant difference in the cognitive style of patients with dual substance use and multiple substance use is not accepted. Patients with dual substance use were found to have a more negative cognitive style than those with multiple substance use, but there was no significant difference in cognitive style between the two groups. Supporting this finding, Debbie F (2007) found no considerable variation in attributional styles between addicts and non-addicts [10]. Perez-Bouchard et al. (1993) revealed that variations in attributional style between children of substance abusers and children of non-substance abusers were predominantly due to the stability and globality components (i.e., stable and global attributions for negative occurrences) [22]. Hence, individuals with a pessimistic attributional style were more likely to relapse after a period of abstinence.
This was a male-only study with a small sample size. The self-report data might have introduced reporting bias; hence, the findings may not be generalizable, which adds to the limitations of this study.
Conclusions
The present study aimed at exploring the cognitive style of patients with substance use disorder. The findings indicate that most of the patients with substance use disorder using tobacco, alcohol, cannabis, and inhalants have a negative cognitive style, and that patients with dual substance use have a more negative cognitive style than patients with multiple substance use, i.e., they tend to attribute life events in a more negative way. Assigning a cause to a behavior or an event helps people understand themselves and learn to avoid engaging in more negative behaviors. Therefore, understanding the cognitive style of these patients can be helpful in therapeutic intervention.
Table 2 depicts the scores of the cognitive style of the patients with substance use disorder.
Table 3 shows the cognitive style of dual and multiple substance users. There was no significant difference found in the cognitive style of patients with dual and multiple substance use. No significant difference was found in the domains of cognitive style (internality, stability, globality, negative consequences, and self-worth implications) between dual and multiple substance users.
|
v3-fos-license
|
2022-02-17T16:26:30.132Z
|
2021-11-20T00:00:00.000
|
246884871
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://www.lidsen.com/journals/aeer/aeer-03-01-007/aeer.2201007.pdf",
"pdf_hash": "966bed08d29e409419a771e9d68f4046e0565613",
"pdf_src": "Adhoc",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46744",
"s2fieldsofstudy": [
"Environmental Science"
],
"sha1": "7bb69233c5d4998f96e41e46eeb57c8fee0fb1ac",
"year": 2021
}
|
pes2o/s2orc
|
How Might Changing Climate Limit Cyanobacteria Growth in Shallow Prairie Lakes? An Empirical Space-For-Time Evaluation of the Potential Role of Increasing Sulfate
Cyanobacteria blooms alter aquatic ecosystems and occur frequently in shallow prairie lakes, which are predicted to increase in salinity as the regional climate becomes hotter and drier. However, flat landscapes that experience depression-bottom salinity, with high concentrations of sulfate in addition to sodium and chloride, may see nutrient increases mitigated or cyanobacteria growth even inhibited. Cyanobacteria can dominate shallow lakes with low N:P ratios because many cyanobacteria species fix dissolved N2, whether derived from in-lake denitrification or exchange with the atmosphere, a process that requires molybdenum as an enzyme cofactor. Sulfate can compete with molybdate at cellular uptake sites, potentially limiting the competitive advantage of cyanobacteria. We studied 25 lakes located in a relatively limited geographic region of southern Alberta (Canada) and used a space-for-time analysis to model scenarios of increased sulfate concentrations under changing climate. Monthly, we measured nitrogen, phosphorus, sulfate, molybdenum, and cyanobacterial pigments and used mixed effects models to identify empirical relationships. Sulfate drives conductivity in the region, and we found that most of the saline lakes we sampled are turbid lakes with high nutrients and high cyanobacteria biomass. In addition to phosphorus, molybdenum predicted cyanobacterial pigments in the top two models, showing a positive relationship with cyanobacterial biomass. We also found a negative relationship between sulfate concentration and cyanobacteria pigments, which suggests that as lakes get saltier, even with increased nutrients, there may not be an incremental increase in cyanobacteria biomass. Our results therefore suggest that competition between sulfate and molybdate may limit future cyanobacteria growth in shallow lakes and that, with a warmer and drier climate, it may not be inevitable that shallow lakes will continue to be dominated by cyanobacterial blooms, a hypothesis that could be tested directly via experimentation.
Introduction
Toxic cyanobacteria are a global public health concern [1,2] and are increasingly found in shallow lakes [3-6]. Many shallow prairie lakes are endorheic basins that are expected to increase in salinity due to the regional climate becoming hotter and drier, which would decrease water inputs, increase water evaporation, decrease water levels, and therefore increase salt concentrations [7-9]. Salts, such as sulfate, accumulate from weathering of surrounding soil and bedrock. Sulfate concentrations in shallow lakes may increase [10] as the prairie climate becomes hotter (increases of 3-5°C in annual mean temperature) and drier (decreases of 5-10% in precipitation) by the 2050s [9]. In agricultural regions, shallow prairie lakes also have increased nitrogen (N) and phosphorus (P) due to fertilizer used on surrounding croplands over the past 60-70 years [11,12]. The role of macronutrients and their supply ratios in generating cyanobacterial blooms has been well studied [13,14], but the importance of micronutrients, including enzyme co-factors, is much less clear, particularly in shallow lakes. However, increases in salinity may actually inhibit cyanobacteria growth [15].
Phosphorus has been argued to be the major driver of cyanobacteria growth [16,17], with the role of N more contentious [18,19]. Phytoplankton biomass often correlates with total P [20], yet a low N:P ratio can shift the phytoplankton to cyanobacterial dominance [21] because cyanobacterial N-fixation can reduce N-limitation [22]. However, if N-fixation alone provided enough N, primary productivity should never be N-limited [23]. Low-productivity biomes, such as open oceans, are in contact with the N2-rich atmosphere yet demonstrate that N-limitation exists even with abundant N2 [24].
During times of N-limitation, N-fixation provides a competitive advantage to N-fixing cyanobacteria provided there are sufficient micronutrients, such as molybdenum (Mo) or iron (Fe), that are enzyme cofactors of nitrogenase [25]. Mo is a trace metal predominantly found in oxic water as molybdate (MoO4 2-) [26] and derives from geologic weathering [27]. Typically, MoO4 2- concentrations are relatively conserved [28]. Cyanobacteria take up MoO4 2- through specific sites prior to synthesizing nitrogenase [26]. However, in saline water, sulfate (SO4 2-) can outcompete MoO4 2- at uptake sites because the ions share a similar charge-to-mass ratio and stereochemistry [26]. Although low Mo availability has been shown to limit cyanobacteria growth in saline coastal systems [29], it remains unknown whether this applies to all aquatic systems or whether a threshold exists where SO4 2- restricts the availability of MoO4 2- enough to limit cyanobacteria growth in freshwater systems, including shallow prairie lakes.
The goal of this study was to investigate implications of changing climate on future cyanobacteria growth in shallow prairie lakes. We employed a space-for-time analysis with 25 southern Alberta shallow lakes that span a large range in salinity, yet are within a relatively limited geographical area. We used general linear mixed effects models to test the hypothesis that MoO4 2- availability limits cyanobacteria growth and therefore late summer cyanobacteria biomass. We also looked for empirical evidence of a threshold ratio where SO4 2- outcompetes MoO4 2- and inhibits cyanobacteria growth.
Study Area
Lakes sampled were 60 to 160 kilometers east of Calgary, Alberta, in the semi-arid prairie pothole region characterized by mixed fescue grasses, black and brown chernozemic and solonetzic soils where there are natural saline conditions [30]. The dominant salts are sodium and magnesium sulfates, and salts primarily derive from bedrock and glacial till [31]. Groundwater seepages that concentrate salts are primarily local, rather than regional, and primarily depression-bottom salinity [31]. A variety of crops, such as spring and winter (non-durum) wheat and canola dominate the land cover [32] in addition to livestock operations. In 2011, 88% of total cropland in the local area had commercial fertilizer applied to it [33]. Livestock manure is typically redistributed on fields [33].
Lake Selection
Sampled lakes (Figure 1) were purposely chosen to include a salinity gradient, which was estimated from conductivity measured in the field before sampling began. Eleven lakes had been previously sampled [34,35], which provided a known range of salinity and history of cyanobacteria blooms for those lakes. Table 1 summarizes selected physical and chemical attributes of all study lakes.
[Table 1: Selected characteristics of 25 shallow prairie lakes sampled in Alberta to determine relationships between cyanobacteria pigments, sulfate, and molybdate across a broad salinity gradient, including lake location and selected biological and chemical attributes (chlorophyll-a, cyanopigment, TP, and other concentrations). Values are the mean +/- SEM of three samples taken in August 2016; *Zmax was taken from previous studies [34,35].]
Lake Sampling
Twenty-five lakes were sampled four times (once each in June, July, August and September) between June 13 and September 6, 2016. At each lake, three samples were taken in acid-bathed (minimum 3 hours in 25% HCl and triple DDW rinsed) 1L Nalgene bottles by wading 1 to 4 meters from the shore and by using a bottle holder to avoid debris suspended from wading. At each site, the bottle was triple-rinsed with lake water and then the sample was taken from immediately under the water surface, then stored in a dark cooler on ice for transport back to the lab. The depth at the sampling site never exceeded 1.5 m and occasionally was less than 30 cm. The samples were purposely taken to include a "worst-case" scenario by visually inspecting the lake's surface for suspected cyanobacteria abundance, and sampling in the bloom, if seen. The three sampling sites were at least 30 m apart, if possible. Turbidity was measured with a HACH Portable turbidity meter (model 2100Q). Conductivity and temperature were measured with a ThermoScientific Orion Star (model A325) pH/conductivity portable multi-parameter meter. The pH of each sample was measured in the lab using a Mettler Toledo, FiveEasy Plus (model FP20) pH meter.
Ion and Nutrient Analysis of Lake Water
Soluble reactive phosphorus, ammonia, nitrate + nitrite, sulfate and molybdenum samples were filtered using Pall Corporation Life Sciences GN Metricel Grid 0.45 µm filters within 7 hours of collection and stored in sterile 50 mL polypropylene conical tubes overnight at 4°C. All nutrient samples were analyzed within 24 hours of collection, except where noted.
Total nitrogen was measured on a Shimadzu TOC-L Combustion Analyzer with TNM-L module with a Shimadzu ASI-L auto sampler with dilution, if necessary. Nitrate and nitrite were measured using ion chromatography on a Metrohm 940 Professional IC Vario equipped with a Metrohm 858 Professional sample processor.
Ammonia, sulfate, total phosphorus and soluble reactive phosphorus were measured on a WestCo Scientific Instruments Inc. Smart Chem (Model 170) discrete analyzer. Ammonia was measured with the Berthelot reaction following Method AMM-001-A [36]. Total phosphorus and soluble reactive phosphorus were measured with the molybdate blue method [37] using Method PHO-004-A [38]. Sulfate was measured using Method SUL-002-A [39]. Samples were auto-diluted by the Smart Chem or manually diluted beforehand and auto-diluted when sulfate levels were above the calibration curve.
Molybdenum samples were filtered through a Pall Corporation Life Sciences GN Metricel Grid 0.45 µm filter and refrigerated at 4°C for a maximum of 5 months. At analysis, samples were acidified to 1% HNO3 to prevent precipitation and measured using an Agilent Technologies 8800 ICP-MS Triple Quad with an ASX-500 Series ICP-MS auto sampler. Molybdenum 95 and 98 were measured, and the molybdenum 98 isotope measurements were used. Typically, triplicate measurements were made from a single vial for a given sample. In five lakes (Brush, Black, Whey, Horse and Dawson), the salt concentrations created matrix effects, and in these instances the method of standard addition was performed to obtain an accurate measurement. Indium was used as the internal standard to correct for any signal drift during analysis. Total molybdenum (Mo) was measured because molybdenum is predominantly found in oxic waters as the thermodynamically stable oxyanion molybdate [26], which is the Mo form taken up by cyanobacteria.
Pigment Analyses
Chlorophyll-a and accessory pigment measurements were performed on phytoplankton collected on 47 mm VWR glass microfiber filters (model 696; 1.2 µm pore size) under low light and then frozen in aluminum packets at -20°C for up to 5 months for later analyses. Filters were first freeze-dried in the dark over 48 hours in a Labconco FreeZone 6 freeze drier to remove remaining water, then the pigments were extracted under low light by placing the filter in 10 mL of 98% ethanol, vortexing, and letting the samples extract for 24 hours. Samples were then centrifuged and measured in small batches using a Molecular Devices SpectraMax M2 plate reader at 300 wavelengths from 400-700 nm. The data were then analyzed using pigment-based chemotaxonomy and a Gaussian peak function, which predicts the range of pigments present in the sample [40,41]. The software estimates pigments from cyanobacteria, diatoms, dinoflagellates and green algae. We used chlorophyll-a, plus myxoxanthin, canthaxanthin and echinenone, the latter three of which are specific to cyanobacteria, to create a parameter called "cyanopigment" by selecting the highest concentration of the three pigments in each sample. Adding the three cyanobacteria pigments together would overestimate the concentrations because all three of the pigments can occur in the same cyanobacterial species [42]. However, using only one pigment would underestimate the cyanobacteria biomass because not every species produces all three pigments. Use of the highest pigment still underestimates the total concentration if two species in a sample contain different accessory pigments and only the highest accessory pigment is used to quantify the sample.
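The "cyanopigment" variable described above is simply the per-sample maximum of the three cyanobacteria-specific pigments; a minimal R sketch, with a hypothetical data frame and column names, is:

# Per-sample cyanopigment = highest of the three cyanobacteria-specific pigment concentrations.
pigments$cyanopigment <- pmax(pigments$myxoxanthin,
                              pigments$canthaxanthin,
                              pigments$echinenone,
                              na.rm = TRUE)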
Statistical Analyses
All analyses were performed with R version 3.3.2 [43] and RStudio version 1.0.136 [44], using the lme4 [45], lmerTest [46] and effects [47] packages. General linear mixed effects models were fitted with the lmer function in the lme4 R package [45], and generalized linear mixed effects models were fitted with the glmer function in the lme4 package [48]. Stepwise regression was performed with the lme4::drop1 function [45]. P-values were calculated with the lme4::drop1 function using the Chi-squared test and with the lmerTest package using the Satterthwaite method. Figures were made with MS Excel, ggplot2 [49] and ggmap [50].
Cyanopigment was the dependent variable of three models: one with three extreme lakes removed (lme), one with all lakes included (lme) and one with the binomial family (glmm). For the two linear mixed effects models, lake and month were included as random effects. The random effect structure reflected the study design by allowing a random slope of month within lake, with a correlated intercept [51], and the covariance was also estimated. The assumptions of normality and homoscedasticity in the residuals were met by visually inspecting the residuals vs. fitted plot [52]. Only observations of cyanopigment above the detectable limits were included in the linear mixed effects models. To minimize the spread in the fixed effects' ranges and to reduce heterogeneity, sulfate was log10-transformed and the other fixed effects and the dependent variable were natural log (ln) transformed. To test the effect of extreme values on the models, we ran the model with and without the values and report both models. In the first model, two lakes with extreme sulfate concentrations were removed, which lowered the range of sulfate from 13-17,324 mg · L -1 to 13-7,856 mg · L -1 (mean: 1460, median: 854), and one lake with extreme molybdenum concentrations was also removed, which lowered the range from 0.4-132 µg · L -1 to 0.4-38 µg · L -1 . Truncated data were included in the third model, where cyanopigment presence/absence was coded with a categorical variable (0s, 1s) to determine significant effects for detection and below detection. The binomial model was a generalized linear mixed effects model with the family as binomial and the link function as logit. This model included all lakes and tested the same fixed effects as the first model; however, the interaction term was not included. In this model, lake was included as a random effect and month was included as a fixed effect. Month was not included as a random effect on its own because there were insufficient levels (five to six levels are recommended at minimum; [51]). Variables were scaled to account for covariance.
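For concreteness, the linear and binomial mixed effects models described above correspond roughly to the following lme4 calls; the variable and data frame names are hypothetical, transformations are assumed to have been applied beforehand as described, and the exact fixed-effect structure of the final models may have differed after stepwise selection.

library(lme4)
library(lmerTest)

# Linear mixed effects model: random slope of month within lake with correlated intercept.
m1 <- lmer(ln_cyanopigment ~ log10_SO4 + ln_Mo + ln_TN + ln_TP * month +
             (month | lake),
           data = lakes_detect)   # only samples with cyanopigment above detection

drop1(m1, test = "Chisq")   # stepwise term checks, as in the analysis described

# Binomial model for cyanopigment detection/non-detection; month as a fixed effect.
m3 <- glmer(detected ~ ln_TP_scaled + month + (1 | lake),
            family = binomial(link = "logit"),
            data = lakes_all)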
General Patterns of the Study Lakes
The 25 shallow lakes sampled included broad ranges of conductivity, sulfate, total nitrogen and total phosphorus while, in contrast, molybdenum varied less (Table 1). Conductivity had a relatively even and broad range (330.9-20,760 µS/cm; Figure 2) as lakes were purposely selected to include a conductivity gradient. Sulfate encompassed three orders of magnitude (13-17,324 mg · L -1 ; Figure 2) with higher sulfate levels further east (Figure 1), except for the two large lakes farthest east (Seiu Lake and Crawling Valley). Brush Lake (Figure 2) had the highest conductivity (18.17-20.7 mS/cm) yet almost the lowest sulfate (13-93 mg · L -1 ). Molybdenum concentrations varied less across the 25 lakes (median 3.0 µg · L -1 ; range 0.4-132 µg · L -1 ). Goat Lake had a maximum molybdenum concentration 2.5 times greater than the next highest lake (132 µg · L -1 compared to 37 µg · L -1 ), which accounts for the high range and mean. Every lake had detectable molybdenum concentrations.
Figure 2. Relationship between conductivity (mS · cm -1 ), sulfate (mg · L -1 ) and cyanopigment (µg · L -1 ) for 25 shallow prairie lakes sampled in July 2016. Points represent the mean of three samples. The horizontal black dashed line represents 2 mS · cm -1 , the approximate cut-off between fresh and brackish water. The dotted line represents the regression line. The full range (a) and details for conductivity < 14 mS · cm -1 and sulfate < 7,000 mg · L -1 (b) are shown.
Three lakes had low conductivity (<0.800 mS · cm -1 ), low sulfate and low cyanobacteria biomass (Figure 2). These lakes also had low N and P, and macrophytes were visually observed during sampling. Another group of lakes between ~1.0-2.0 mS · cm -1 (170-730 mg · L -1 sulfate) consistently had higher cyanobacteria pigment (>1 µg · L -1 ), higher nutrients and higher sulfate. Previous studies have found that sulfate concentrations of 768 mg · L -1 (8 mM) and greater start to inhibit molybdate uptake [54]. Some lakes above 2.0 mS · cm -1 consistently had very low or no cyanobacterial pigments; however, a few lakes had mean cyanobacterial pigments between 1-5 µg · L -1 and two saline lakes had mean cyanobacterial pigments above 14 µg · L -1 .
All lakes were relatively small and shallow, except for Crawling Valley, which is a large reservoir with riverine characteristics. The lakes were slightly alkaline (pH 8.09-10.68) and the water temperature ranged from 11.5-26.0°C. Turbidity ranged from 1.03-800 NTU. Although not confirmed in this study, some of the shallow lakes (e.g. Mushroom) have historically been fishless ([34]; Jackson unpublished data, August 2018). Barnett Lake has Brook stickleback (Culaea inconstans), West Lake has Prussian Carp (Carassius gibelio) and presumably East Lake and Long Lake do too, based on their proximity and connecting culverts (Jackson, unpublished data). Crawling Valley has been stocked with sport fish and contains pike (Esox lucius) [55]; three species of sucker and two species of minnows have also been caught in the reservoir [55].
Patterns of Cyanobacteria Pigments with Sulfate and Molybdenum
In general, we found higher cyanobacteria pigments (our proxy for biomass) in lakes with TN:TP < 16:1 (Figure 4). Lakes with TN:TP below 16:1 typically also had low sulfate:molybdenum. The first model found that ln cyanopigment was significantly correlated to ln(TN), ln(TP) in July and September, and ln(Mo). Log10(SO4 2-), although not significant (p = 0.055), was included in the model. The linear mixed effects model used 22 lakes; the two lakes with the highest sulfate and the one lake with the highest molybdenum (and greatest variability in the sulfate and molybdenum measurements) were removed from this model, and only samples with cyanobacteria pigment >0 were used (n = 165). The model found that ln(cyanobacteria pigment) was not significantly predicted by log10(SO4 2-) (p = 0.055), with a negative relationship that had an estimate of -0.89 +/- 0.45 (intercept was 1.14 [52]). Although this relationship was not significant at α = 0.05 using the Satterthwaite method, it was closer, yet still not significant, using the chi-square test (p = 0.051). There was a significant positive relationship between ln(cyanobacteria pigments) and ln(Mo) (p = 0.005), with an estimate of 0.81 +/- 0.29, and a strong, significant relationship between ln(cyanobacteria pigments) and ln(TN) (p = 0.005), with an estimate of 1.18 +/- 0.41 (Supplementary Figure S1). There was also a significant interaction by month between ln(cyanobacteria pigments) and ln(TP), with a significant relationship in July and September. Model 3 found that cyanopigment presence was significantly predicted by ln(TP). To account for the dependent variable being truncated in Models 1 and 2, a generalized linear mixed effects model using the binomial distribution to analyze detection and non-detection of cyanobacteria pigment was analyzed to complement the previous two models.
All lakes and samples were included in this model (n = 300), which found that ln(total phosphorus) significantly predicted the presence (detection) of cyanobacterial pigments (p = 0.02), with an estimate of 0.41 ± 0.18 (intercept 1.44 [52]). The other fixed effects from Model 1 were tested in this model but were not significant: scaled log₁₀(sulfate) (p = 0.79), scaled ln(molybdenum) (p = 0.54) and scaled ln(total nitrogen) (p = 0.31). These terms were removed from the model.
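For readers who want to see how models of this structure are typically specified, a minimal sketch follows. It assumes a long-format table with one row per sample and hypothetical column names (cyano_pigment, so4, mo, tn, tp, month, lake); it uses Python's statsmodels rather than the original analysis software, and the detection model is shown without the lake random effect for brevity, so it is an approximation of Model 3 rather than a reproduction.

```python
# Sketch: approximate specifications of Models 1-3 using Python's statsmodels.
# The data frame below is synthetic and the column names are hypothetical;
# this illustrates the model structure, not the original analysis.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 150  # e.g. 25 lakes x 3 sites x 2 months
df = pd.DataFrame({
    "lake": np.repeat([f"lake{i}" for i in range(25)], 6),
    "month": np.tile(["July", "September"], 75),
    "so4": rng.lognormal(6, 1, n),      # mg/L
    "mo": rng.lognormal(0, 0.5, n),     # ug/L
    "tn": rng.lognormal(1, 0.5, n),     # mg/L
    "tp": rng.lognormal(-1, 0.5, n),    # mg/L
    "cyano_pigment": rng.lognormal(0, 1, n) * rng.binomial(1, 0.8, n),
})

# Models 1/2: linear mixed model with lake as a random intercept,
# restricted to samples where cyanobacteria pigment was detected (> 0).
detected = df[df["cyano_pigment"] > 0]
m1 = smf.mixedlm(
    "np.log(cyano_pigment) ~ np.log10(so4) + np.log(mo) + np.log(tn) "
    "+ np.log(tp) * C(month)",
    data=detected, groups=detected["lake"],
).fit()
print(m1.summary())

# Model 3: presence/absence across all samples. The paper used a binomial
# mixed model; a plain logistic GLM is sketched here, omitting the lake
# random effect for simplicity.
df["present"] = (df["cyano_pigment"] > 0).astype(int)
m3 = smf.glm("present ~ np.log(tp)", data=df,
             family=sm.families.Binomial()).fit()
print(m3.summary())
```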
Discussion
We found a significant positive relationship between cyanobacteria pigment and molybdenum concentration, and negative coefficients between cyanobacteria pigment (our proxy for cyanobacteria biomass; [56]) and sulfate concentration. These relationships suggest that increasing sulfate and decreasing molybdenum may limit cyanobacterial growth because sulfate out-competes molybdate for uptake, restricting supply to the Mo-nitrogenase enzyme system responsible for biological N-fixation [57]. Because sulfate and conductivity were strongly correlated in the shallow prairie lakes sampled, and regional climate is predicted to become hotter and drier, sulfate will increase in these shallow endorheic lakes and could limit the cyanobacterial growth that is otherwise projected to increase over the next 30-60 years from anthropogenic eutrophication.
N-fixing cyanobacteria have been shown to dominate low N:P lakes through their ability to fix atmospherically derived N₂ because other phytoplankton groups cannot [16,21,22]. Some, but not all, lakes had higher cyanobacteria growth at low N:P ratios (Figure 4), and the low N:P lakes with the highest cyanobacterial pigments also had low sulfate:molybdenum ratios, which supports the notion that when molybdate availability does not limit Mo-nitrogenase production, cyanobacteria can dominate in low N:P lakes. However, some studies suggest that the total amounts of N and P better predict cyanobacterial dominance [58,59]. Total nitrogen can be a proxy for phytoplankton biomass, so a strong relationship between cyanobacterial pigment and total nitrogen is not surprising, yet it shows that the amount of total nitrogen had the strongest and most consistent correlation with the biomass of the cyanobacterial community. The effect of total phosphorus depended on the month, indicating either a seasonal response to nutrients or an unmeasured process, such as zooplankton grazing, that affected cyanobacterial biomass. Our results provide correlations between cyanopigment, SO₄²⁻ and Mo (our estimate of MoO₄²⁻); experimental manipulations would be required to confirm causality.
High concentrations of N and P come primarily from fertilizers applied to the surrounding cropland [11], although this region also has high geologic P inputs from glacial till. Most synthetic nitrogen applied to crops is used by the plants, yet isotopic tracing reveals that nitrogen can have long residency times in the soil and that 8-12% of nitrogen can enter aquatic ecosystems and groundwater [11]. Nitrogen would leave an endorheic shallow lake water column mainly through denitrification and sedimentation; however, inputs tend to exceed losses, so nitrogen and phosphorus concentrations tend to accumulate in shallow lakes. The concentrations of total nitrogen and total phosphorus in the lakes we sampled (Table 1) even exceeded measurements in other shallow lakes in Europe or Asia [3,60], yet the concentrations we measured are similar to previous measurements in these lakes [35]. Most of the lakes range between eutrophic and highly eutrophic and therefore have favourable nutrient conditions for cyanobacterial blooms. However, as total nitrogen strongly correlated with conductivity (Figure 3), these highly eutrophic lakes are also more likely to have high sulfate concentrations, which provided an opportunity to evaluate the interacting effects of N and sulfate.
Cyanobacteria require molybdenum as an essential cofactor for the predominant form of nitrogenase [61]. Microscopy on phytoplankton communities in many of our study lakes over the last 20 years reveals that the same cyanobacterial species appear in summer blooms in these lakes. Microcystis sp. typically appears in late June or July. As summer progresses, the cyanobacterial community is typically dominated (numerically) by N-fixing species such as Aphanizomenon flos-aquae, Oscillatoria sp., Anabaena sp., Nostoc sp., Lyngbya sp., Gleotrichia sp., and Gleocapsa sp. Blooms of cyanobacteria capable of N-fixation suggest that the lakes have transitioned to N-limitation, a hypothesis that could be tested by N addition to mesocosms. Molybdenum predicted cyanobacterial pigments in both models and through all months, which supported the hypothesis that molybdate is correlated with cyanobacteria biomass. Sulfate competes with molybdate for uptake sites and showed a negative (though non-significant; p = 0.055) relationship with cyanobacterial growth. Although sulfate was not statistically significant in the model, the borderline non-significant statistical result may reveal a biologically significant relationship (additional samples from other lakes would help clarify the generality of this finding). Sulfate would derive from localized areas in the region with high salinity, where salts accumulate in groundwater and then pool at the surface in patchy distributions, making some lakes highly saline and other lakes fresh [31]. Nitrogen is also assumed to come from leaching through soil and groundwater, and the combined results of sulfate, molybdenum, N and P show that these shallow lakes currently do produce large cyanobacterial blooms, some with associated toxins [52]. Although sulfate can limit cyanobacterial growth, not every lake above 2.0 mS/cm (~750 mg·L⁻¹ sulfate) had low cyanobacterial biomass, presumably because these lakes are eutrophic and have N from sources other than the atmosphere (e.g. NH₃, urea, dissolved organic nitrogen). Our results are consistent with previous studies that have shown that sulfate and molybdate affect nitrogen fixation in estuaries [25,62,63], where rates of N-fixation are low because molybdenum is scarce due to high sulfate [25].
Seawater contains about 29 mM SO₄²⁻ [64] and about 107 nM Mo [65], a molar ratio of approximately 270,000:1 SO₄²⁻:Mo. In the study lakes we calculate an average ratio about one order of magnitude higher, at 1.308 × 10⁶, even if we assume Mo has the molecular mass of MoO₄²⁻, because molybdate is the predominant form of Mo. The SO₄²⁻:MoO₄²⁻ ratio is therefore higher in these shallow saline lakes than in seawater. Cole [64] suggested that, in marine systems, sulfate would inhibit MoO₄²⁻ uptake at about 5% of seawater concentration, or about 1.45 mM sulfate. Our lakes average (± 1 sd) 17.65 (20.8) mM sulfate. If cyanobacteria are similarly affected by sulfate in marine and freshwater systems, sulfate would limit Mo availability in all but four lakes and would be right at the limitation threshold in one lake. This could be tested experimentally by adding, e.g., Na₂MoO₄ to lower the SO₄²⁻:MoO₄²⁻ ratio and measuring the corresponding response of cyanobacteria.
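These unit conversions are easy to get wrong, so a small worked check may help. The sketch below recomputes the seawater ratio from the values cited above; the lake entry pairs a sulfate concentration near the reported 17.65 mM average with an assumed Mo concentration, since individual lake measurements are not reproduced here.

```python
# Sketch: molar SO4:Mo ratios from routine water-chemistry units.
# Seawater values are cited in the text; the lake Mo value is a hypothetical
# placeholder, since individual lake measurements are not listed here.

SO4_G_PER_MOL = 96.06   # sulfate ion
MO_G_PER_MOL = 95.95    # molybdenum

def ratio_from_molar(so4_mM: float, mo_nM: float) -> float:
    """Molar SO4:Mo ratio from mmol/L sulfate and nmol/L Mo."""
    return (so4_mM * 1e-3) / (mo_nM * 1e-9)

def ratio_from_field_units(so4_mg_L: float, mo_ug_L: float) -> float:
    """Same ratio, starting from the units a water-chemistry lab reports."""
    so4_mM = so4_mg_L / SO4_G_PER_MOL        # mg/L over g/mol = mmol/L
    mo_nM = mo_ug_L / MO_G_PER_MOL * 1000.0  # ug/L -> nmol/L
    return ratio_from_molar(so4_mM, mo_nM)

# Seawater: ~29 mM sulfate and ~107 nM Mo -> roughly 270,000 : 1
print(f"seawater SO4:Mo ≈ {ratio_from_molar(29.0, 107.0):,.0f}")

# Hypothetical lake: 1,700 mg/L sulfate (~17.7 mM) and an assumed 1.3 ug/L Mo,
# which lands near the ~1.3 million : 1 order of magnitude reported above.
print(f"example lake SO4:Mo ≈ {ratio_from_field_units(1700.0, 1.3):,.0f}")
```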
Mo has historically been thought to be essential for N₂ fixation [66]; however, recent evidence indicates that some terrestrial N-fixers have V- and Fe-only nitrogenases [67]. Together with the Mo-nitrogenase, these forms are collectively encoded by a complex system of over 80 Nif genes [61] that produce the necessary gene products for fully functional nitrogenases. In some laboratory growth studies, N-fixation has occurred despite no free Mo [68,69], and many genes responsible for V- and Fe-nitrogenases have now been identified [70,71]. Whether the same V- and Fe-nitrogenase forms identified in soil bacteria exist in freshwater cyanobacteria is not known; however, if they are present and produced, they could potentially lead to weaker correlations between Mo and N-fixing cyanobacteria in aquatic systems and could provide a mechanism for a compensatory response if increasing sulfate effectively makes molybdate less available. Furthermore, the chemical form of Fe, and therefore its availability, has been shown to create Fe-limitation in some systems, including saline prairie lakes [72]. In shallow, well-mixed polymictic lakes such as those we sampled, Fe²⁺ released from anoxic sediments would be rapidly oxidized, but cyanobacteria do produce siderophores to aid in Fe acquisition [73].
Some of the high TN, TP, chlorophyll-a and cyanopigment values we measured would be the result of the worst-case scenario sampling we employed. An integrated sample taken at various depths would give different values of TN and TP because of patchiness in the algal biomass in some lakes. The measurements would also be affected by variation in annual phytoplankton phenology: spring (2016) was about 3 weeks earlier than 'normal', with about three times higher precipitation in July 2016 (142 mm) than the monthly average for July (49 mm from 2004-2015) [74]. Even though the September sampling was at the very beginning of the month, most of the cyanobacterial blooms seen in July and August were not visible by September. The lakes were located within 100 km of each other and should share regional climate and geology. The models we ran indicate that even when interactions between independent variables are included, the bivariate relationships remain significant. Furthermore, the removal of outliers from the data in Model 1 did not lead to different results when compared to Model 2.
Thresholds and Alternate Stable States in the Lakes
Shallow lakes and aquatic ecosystems are among the most altered ecosystems on the planet [75] and continually respond to anthropogenic nutrient loading. Nutrient inputs can shift lakes from a clear, low-nutrient, high-macrophyte state to a turbid, high-nutrient, high-phytoplankton state, the latter often containing cyanobacteria [34,76-78]. We did not identify statistical thresholds that would suggest bimodality in the relationships between sulfate or molybdenum and cyanobacteria; therefore, the relationships between sulfate, molybdenum, N, P and cyanobacteria appear continuous in the lakes and year we sampled.
Despite the lack of evidence of nutrient-cyanobacteria thresholds, the lakes can still be grouped into three categories. In the two lakes with low nutrients, low salinity and low cyanobacterial biomass (Figure 2) we also noted high macrophyte abundance and low turbidity. A second group of lakes had higher nutrients, were more turbid and had more cyanobacteria biomass; while they had increased conductivity, it was below 2.0 mS/cm, which is near the molybdate uptake inhibition threshold of 768 mg·L⁻¹ sulfate found by Marino et al. [54] and also a rough distinction between fresh and brackish water. The third group of lakes had conductivity above 2.0 mS/cm and also had increasing sulfate, increasing turbidity and less cyanobacteria biomass. However, this last group of lakes also had the highest N (Figure 3). There were two lakes above this threshold where cyanobacteria bloomed above 14 µg·L⁻¹, showing that this threshold did not entirely limit cyanobacterial growth. One of these lakes (Dog, which had extreme blooms) may have an unidentified nutrient source, while the other lake (Bow) has abundant Microcystis sp., which does not fix N. Other brackish lakes had cyanobacteria pigment between 1-5 µg·L⁻¹ periodically through the season. Cyanobacterial growth in brackish lakes shows that other conditions can override sulfate inhibition; for example, high N concentrations can support cyanobacteria growth that does not require N-fixation.
Most of the lakes we studied appear to be in a turbid state. A level of ~4 NTU has been identified in these shallow lakes as a threshold between clear and turbid states [34]. Three lakes were consistently below this level, while 22 lakes had values higher than this threshold. This strongly suggests that the majority of saline lakes in this region are turbid lakes with high nutrients and higher phytoplankton growth. An increase in turbidity leads to a decline in macrophytes as lakes become more eutrophied [79]; a survey in Europe showed that macrophytes did not grow above 2.5 mg/L of TN (Jeppesen et al. 2007). This turbidity has implications for management because clear lakes are typically more desirable and are associated with higher water quality. Turbid lakes can also be challenging to manage because shallow lakes tend to resist returning to a clear state even when nutrient levels are reduced [80]; nutrient levels may need to be reduced below the level that originally tipped the lake into the turbid state. Furthermore, turbid lakes might exist as either intermediate-turbidity, mixed-assemblage phytoplankton communities or high-turbidity, cyanobacteria-dominated communities [80]. Cyanobacterial blooms can also produce microcystins, which pose additional challenges for drinking water, recreational use, watering livestock and management.
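The thresholds discussed above lend themselves to a simple screening rule. As a rough sketch (not part of the original analysis), lakes could be binned as shown below using the ~4 NTU clear/turbid cut-off, the 2.0 mS/cm fresh/brackish boundary and the ~768 mg·L⁻¹ sulfate level associated with molybdate-uptake inhibition; the example lake records are hypothetical.

```python
# Sketch: binning lakes with the thresholds discussed in the text.
# The cut-offs come from the cited studies; the example lakes are invented.
from dataclasses import dataclass

TURBID_NTU = 4.0            # clear vs turbid state [34]
BRACKISH_MS_CM = 2.0        # rough fresh vs brackish boundary
SULFATE_INHIBITION = 768.0  # mg/L, molybdate uptake inhibition [54]

@dataclass
class Lake:
    name: str
    turbidity_ntu: float
    conductivity_ms_cm: float
    sulfate_mg_l: float

def classify(lake: Lake) -> str:
    state = "turbid" if lake.turbidity_ntu >= TURBID_NTU else "clear"
    salinity = "brackish/saline" if lake.conductivity_ms_cm >= BRACKISH_MS_CM else "fresh"
    mo_flag = "above" if lake.sulfate_mg_l >= SULFATE_INHIBITION else "below"
    return f"{state}, {salinity}, sulfate {mo_flag} the inhibition threshold"

for lake in (Lake("Example A", 1.5, 0.7, 120),
             Lake("Example B", 35.0, 1.6, 500),
             Lake("Example C", 120.0, 8.2, 4300)):
    print(lake.name, "->", classify(lake))
```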
Zooplankton help to stabilize clear lakes by grazing phytoplankton; however, zooplankton have historically been thought to be unable to handle large colonial cyanobacteria and to lack critical nutrients when grazing cyanobacteria [81-83]. More recent research suggests that zooplankton can graze large filamentous and colonial cyanobacteria [84] and that successive generations of zooplankton living in a cyanobacteria-dominated lake can co-exist with, but not control, increased cyanobacterial blooms [85]. With increasing salinity, the species richness of the zooplankton community may decrease; however, if salinity increases to a level where fish cannot survive, zooplankton may respond positively to the lack of predators [86]. We did not measure zooplankton abundance, yet their grazing could affect our results because it would reduce the phytoplankton biomass, and specifically the cyanobacteria biomass, that we measured, which would in turn affect relationships with other variables. This unmeasured factor could also affect the significance of interaction terms by month, because zooplankton predation pressure may rise and fall over the summer months, as evidenced by zooplankton-related phenomena such as the spring clear-water phase [87].
Conclusions
The negative relationship between sulfate and cyanobacteria biomass in our space-for-time analysis suggests that as prairie lakes become more saline, even with increased nutrients, cyanobacteria biomass will decrease. Climate change is predicted to increase annual mean temperatures across the prairies by 3-5 °C and reduce precipitation by 5-10% by the 2050s [9]. While warm temperatures can favour cyanobacterial growth [88], they can also decrease water levels in evaporation basins, increasing sulfate concentrations and potentially limiting cyanobacteria growth. Knowledge gained from this space-for-time analysis suggests that increasing sulfate affects cyanobacteria growth, although not all lakes with high sulfate will have low cyanobacteria.
|
v3-fos-license
|
2014-10-01T00:00:00.000Z
|
2010-09-08T00:00:00.000
|
1778021
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://aidsrestherapy.biomedcentral.com/track/pdf/10.1186/1742-6405-7-35",
"pdf_hash": "40488541fbc629a3ec5e7e0527957a0a573a5bd2",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46746",
"s2fieldsofstudy": [
"Medicine",
"Education"
],
"sha1": "5fb9fb3f49723375c215822806995fecd80bb269",
"year": 2010
}
|
pes2o/s2orc
|
Mobile learning for HIV/AIDS healthcare worker training in resource-limited settings
Background We present an innovative approach to healthcare worker (HCW) training using mobile phones as a personal learning environment. Twenty physicians used individual Smartphones (Nokia N95 and iPhone), each equipped with a portable solar charger. Doctors worked in urban and peri-urban HIV/AIDS clinics in Peru, where almost 70% of the nation's HIV patients in need are on treatment. A set of 3D learning scenarios simulating interactive clinical cases was developed and adapted to the Smartphones for a continuing medical education program lasting 3 months. A mobile educational platform supporting learning events tracked participant learning progress. A discussion forum accessible via mobile connected participants to a group of HIV specialists available to back up the medical information. Learning outcomes were verified through mobile quizzes using multiple choice questions at the end of each module. Methods In December 2009, a mid-term evaluation was conducted, targeting both technical feasibility and user satisfaction. It also highlighted user perception of the program and the technical challenges encountered using mobile devices for lifelong learning. Results With a response rate of 90% (18/20 questionnaires returned), the overall satisfaction of using mobile tools was generally greater for the iPhone. Access to Skype and Facebook, screen/keyboard size, and image quality were cited as more troublesome for the Nokia N95 compared to the iPhone. Conclusions Training, supervision and clinical mentoring of health workers are the cornerstone of the scaling-up process of HIV/AIDS care in resource-limited settings (RLSs). Educational modules on mobile phones can give HCWs the flexibility to access learning content anywhere. However, lack of software interoperability and the high investment cost of purchasing smartphones could limit the widespread use of this kind of mLearning program in RLSs.
Background "Mobile learning" or "mLearning" is learning that occurs across locations, benefiting of the opportunities that portable technologies offer. The term is most commonly used in reference to using PDAs, MP3 players, notebooks and mobile phones for health education and knowledge sharing. One definition of mobile learning is: Any sort of learning that happens when the learner is not at a fixed, predetermined location, or learning that happens when the learner takes advantage of the learning opportunities offered by mobile technologies [1] but another definition might be learning in motion. One issue that became clear is that mobile learning is not just about learning using portable devices, but learning across contexts, within diverse target groups, according to different learning design, development and implementation [2].
Healthcare workers (HCWs) have indicated the need for an autonomous mobile solution that would enable access to the latest medical information for continuing professional development using low-cost devices and facilitate exchange of ideas about difficult clinical cases with peers through social media [2,3]. As the most important social technology used worldwide, mobile devices in particular play a major role in stimulating this information exchange, and the advent of mobile and wireless technology has changed the level of information and communication technology (ICT) penetration in the resource-limited setting (RLSs) [4][5][6][7].
Peru does not have an adequate health care workforce to meet the population's demand for services and for the management and development of new human resources.
Major challenges identified by national, regional, and local governments for healthcare human resource development in Peru include limited development of health personnel competencies; health personnel in remote areas who lack access to training opportunities; poor coordination with training institutions whose training does not meet regional needs; training programs carried out in settings different from the actual work context; no performance evaluation based on competencies; and high turnover rates for trained staff [8]. At present, the vast majority of health care professionals are operating in isolation from vital health information [9]. Access to reliable health information has been described as one of the most effective strategies for sustainable improvement in health care [10,11]. In this context, the Peruvian Ministry of Health (MOH) approved the Policy Guidelines on Human Resources in Health, which include tailoring training to the needs of the country, building competencies, decentralizing the management of human resources, and generating motivation and commitment. The training of service providers in all areas of HIV prevention, treatment and care is a significant component of the MOH programme to develop human potential [12].
The goal of this mLearning project was to enable HCWs involved in HIV/AIDS care in urban and periurban stations in Peru to access the state-of-the-art in HIV treatment and care. To achieve this aim, in 2008 the Institute of Tropical Medicine Alexander von Humboldt (IMTAvH) in Lima and the Institute of Tropical Medicine (ITM) in Antwerp set up an educational mobile application, allowing knowledge sharing and data contribution through a mobile-based educational platform.
Materials and methods
Of 24 Peruvian department capitals, 20 were already involved with the IMTAvH in a distance-learning project begun in 2004 and lasting a year, with the aim of scaling up access to antiretroviral treatment in the Peruvian peripheral regions. Some of these facilities were included in the mLearning pilot project. Health centers in the department capitals are run by medical doctors and staffed with 5-10 HCWs, such as social workers, counselors, and data clerks. Individual Smartphones (10 iPhones, a touch-screen phone, and 10 Nokia N95, a phone with a physical keypad), each equipped with a portable solar charger, were delivered to the 20 physicians based in the peri-urban HIV centers. A router connected to a DSL or cable modem, available in all stations, allowed wireless connection, facilitating surfing and the downloading of didactic material in any area of the clinic. This access also guaranteed wire-free interaction without participants having to purchase a complete computer to connect, and reduced the cost of communications by using Skype via mobiles (Figure 1).
The training program consisted of a set of "clinical modules" simulating interactive clinical cases that were adapted to mobile devices and sent to physicians working in the 20 peri-urban clinical stations. The case series involved five topic areas, the most common being the use of new drugs for HIV/AIDS treatment and their safety and side-effect profiles (see Additional file 1). The mLearning program was delivered from November 2009 to January 2010. A half-day training session on how to operate the mobile equipment was given at IMTAvH to all participants before the launch of the mLearning program.
The didactic material used in this project was developed with 3D animations using iClone [13] and Moviestorm [14], reproducing specific scenarios (e.g., a clinical consultation) (Figure 2), while the module revision at the end of every case discussion was provided through multimedia files (developed with ScreenFlow [15], which makes it possible to start from PowerPoint presentations, add audio and video to screenshots, and publish everything in a mobile-accessible format).
Learning outcomes of the acquired knowledge were tested through mobile-based multiple choice questions (pre-and post-test) issued at the beginning and end of each module (Figure 3). A functional mobile platform (MLE Moodle) was offered to support the learning events, tracking student progress over time. The platform also provided access to Facebook for peer-to-peer learning sharing in clinical case discussions with a network of experts, which assured feedback content quality. The suggested readings were distributed within the timeframe of the 2-week clinical module discussion mainly in PDF format using Google Docs (Figure 4).
In December 2009, a mid-term user satisfaction survey delivered through a standardized anonymous questionnaire, coupled with a focus group discussion, was performed. The satisfaction survey sought to gain feedback on tutorial quality, usefulness of the information, and its applicability to the daily context of HIV treatment and care. The focus group discussion sought to identify general barriers to program adherence and the technical difficulties encountered during the implementation phase of the program.
Results
Of the 20 participants, 18 returned the standardized questionnaires (response rate, 90%). Participant median age was 48.5 years (range, 34-55 years), with a median of 6 years of experience treating HIV patients. Most participants had no prior mobile learning experience, and their social media literacy was also limited ( Figure 5).
Over half of the iPhone users (66.7%) indicated that Skype was easy to access compared to 22.2% using the Nokia N95; in addition, 88.9% of the iPhone respondents found it easy to access Facebook via mobile compared to the 44.4% using the Nokia N95. The results indicated similar usability of iPhone and Nokia N95 (88.9% and 87.5% respectively) for the download of podcasts and access to MLE Moodle for pre-and post-testing ( Figure 6).
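The paper reports these device comparisons only as raw percentages. With roughly nine respondents per device group, a quick exact test (not part of the original analysis) illustrates how fragile such differences are at this sample size; the 6-vs-2 split below corresponds to the reported 66.7% vs 22.2% for Skype access, assuming nine users per group.

```python
# Sketch: exact test on the reported Skype-access proportions.
# 6/9 iPhone users vs 2/9 Nokia N95 users found Skype easy to access
# (66.7% vs 22.2%); this test is illustrative and was not part of the study.
from scipy.stats import fisher_exact

iphone = (6, 3)  # (easy, not easy)
nokia = (2, 7)
odds_ratio, p_value = fisher_exact([iphone, nokia])
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")
```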
The freedom to plan educational activities according to each individual user's personal agenda was indicated as an added value by 86.6% of the participants, while 94.4% indicated that access to the educational content without needing a computer was an added value. All respondents had positive opinions about the quality of the received information, the applicability of the content to clinical practice, and the appropriate relevance of the suggested readings.
The main advantages participants identified during the focus group discussion were the portability of the equipment and easy access to the educational content at the time and location of their choice. Some of the Nokia N95 users reported as problematic the screen size of the equipment, the keyboard size, and the quality of the images. The topics covered by the program were graded as pertinent to daily clinical practice and highly regarded by the participants.
Discussion
Many developing countries would move towards the use of distance-learning programs to avoid leaving peripheral health stations unstaffed when HCWs are absent for short or long training programs [16,17]. Because Peru is a developing country, there is limited access to information and teaching resources and a great need to enhance learning and teaching environments. Mobile phones can create an inexpensive and reliable learning environment between HCWs in one-to-one personal learning and between colleagues in a network [18]. Some of the mobile devices are relatively low cost, powerful, small, and lightweight, and they can perform well in difficult environments because of the limited power required by the battery, which can be recharged using inexpensive solar panels.
HCWs can learn to use mobile devices, search for information, and upload and download information in a relatively short time frame [19][20][21]. Smartphones enable users to upload and download information using a wireless network. The Smartphone can be very useful in distance learning, giving users the opportunity to contact a mentor by phone, receiving immediate feedback and helping to establish a network. This study showed the value of the use of mobile phones for personal education in RLSs. In addition, it attempted to compare performance of two different devices (touch-screen versus digit buttons) looking at screen and keyboard size and interoperability of the software applications of two different operating systems.
There was not a single mobile application able to provide all the different learning activities for both mobile devices, so different applications had to be used (e.g., MLE Moodle to provide pre-and post-test and Facebook for the discussion forum, Google Docs for document delivery).
After the pre-test on a specific subject, the participants were challenged with a clinical case mirroring a real clinical situation developed in 3D (Figure 2). According to the learning objectives of every module, the participants had to discuss questions related to the topic using the Facebook discussion forum or a Skype call. The most important points discussed were noted down, and a final movie summarizing the most relevant information could be generated and made available on the mobile phones together with the recommended reading links. A post-test was taken at the end of every module using MLE Moodle.
The overall satisfaction of using iPhone or Nokia N95 as expressed by the participants was generally greater for iPhone: the Nokia N95 users described access to Skype and Facebook as being more complicated, also expressing less satisfaction with the screen and the keyboard size and the quality of the images on this equipment.
The unique feature of this project is that technology was used to bridge the gap between formal and experiential learning.
Three limitations need to be acknowledged and addressed. The first concerns the relatively high investment cost for purchasing the mobile devices, the phone service fee, and the need for an IT help desk to solve technical problems. The second is that we cannot measure the extent to which these findings can be generalized beyond the pilot project, or whether these educational modules would be interoperable with other, more basic phones.
This pilot project is a single case and we do not attempt to generalize our results. More research is needed to understand whether what we observed can be applied to other mLearning programs, particularly in RLSs. Our next step in this research will be to develop a survey with data triangulation using in-depth interviews, group discussions and participant validation.
Conclusions
Educational modules available via mobile computing give flexibility to the healthcare workers who can carry and access content anywhere. Mobile devices enhance the learning environment and strengthen the ability to share knowledge through online discussion via social media or directly by phone. The sharing of experiences in a network facilitates the transformation of learning outcomes into permanent and valuable knowledge assets.
These preliminary results show that the delivery of up-to-date modules on comprehensive treatment and care of people living with HIV/AIDS can be contextualized and customized for some of the most-used mobile devices. Particular attention should be given to adapting the educational material to small screen sizes and to how the program performs across different operating systems.
Additional material
Additional file 1: List of CME modules and learning objectives
|
v3-fos-license
|
2018-04-03T03:29:15.074Z
|
2016-06-23T00:00:00.000
|
5045102
|
{
"extfieldsofstudy": [
"Biology",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://doi.org/10.1038/srep28571",
"pdf_hash": "29458ffab8aa42f93b43cdbdc29f0b9648000d4d",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46748",
"s2fieldsofstudy": [
"Biology"
],
"sha1": "29458ffab8aa42f93b43cdbdc29f0b9648000d4d",
"year": 2016
}
|
pes2o/s2orc
|
Transcriptional quiescence of paternal mtDNA in cyprinid fish embryos
Mitochondrial homoplasmy signifies the existence of identical copies of mitochondrial DNA (mtDNA) and is essential for normal development, as heteroplasmy causes abnormal development and diseases in human. Homoplasmy in many organisms is ensured by maternal mtDNA inheritance through either absence of paternal mtDNA delivery or early elimination of paternal mtDNA. However, whether paternal mtDNA is transcribed has remained unknown. Here we report that paternal mtDNA shows late elimination and transcriptional quiescence in cyprinid fishes. Paternal mtDNA was present in zygotes but absent in larvae and adult organs of goldfish and blunt-snout bream, demonstrating paternal mtDNA delivery and elimination for maternal mtDNA inheritance. Surprisingly, paternal mtDNA remained detectable up to the heartbeat stage, suggesting its late elimination leading to embryonic heteroplasmy up to advanced embryogenesis. Most importantly, we never detected the cytb RNA of paternal mtDNA at all stages when paternal mtDNA was easily detectable, which reveals that paternal mtDNA is transcriptionally quiescent and thus excludes its effect on the development of heteroplasmic embryos. Therefore, paternal mtDNA in cyprinids shows late elimination and transcriptional quiescence. Clearly, transcriptional quiescence of paternal mtDNA represents a new mechanism for maternal mtDNA inheritance and provides implications for treating mitochondrion-associated diseases by mitochondrial transfer or replacement.
The mitochondrion (MT) is a membrane-bound organelle present in all eukaryotic organisms. MT converts the energy of food molecules into ATP to support cellular and organismal metabolism, and is also involved in regulating diverse processes such as apoptosis and innate immunity 1,2. MT is a unique organelle in possessing a multicopy genome, namely mitochondrial DNA (mtDNA). The human mtDNA is a double-stranded circular molecule, 16,569 bp in length, with a D-loop as the control region for replication and transcription and 37 genes for 13 proteins, 22 transfer RNAs and 2 ribosomal RNAs 3. These mtDNA features are highly conserved in diverse animal phyla including fish 4,5. Exceptions exist. For instance, medusozoan animals such as those in the genus Hydra have linear mtDNA molecules 6, and the mytilid bivalve (Musculista senhousia) has mtDNA that shows differences in size and gene number between male and female origins 7. In addition, mtDNA of certain vertebrates such as fish may show size variations due to the presence and copy number of repetitive sequences in the D-loop region 5. In humans, MT dysfunction and mtDNA mutation are causative of diseases such as diabetes mellitus and cancers [8][9][10][11]. Replacement of a mutant mtDNA by its wild-type version via pronuclear transfer has the potential to prevent transmission of mtDNA-associated diseases in primates including human 12,13.
Many organisms are homoplasmic, because their cells possess a pool of homogeneous mtDNA molecules. Homoplasmy is very important for normal development, because heteroplasmy (mixing of even two different normal mtDNAs) may lead to genetic instability in mice 14 and even human diseases 15. One of the most important mechanisms to maintain homoplasmy is uniparental inheritance of mtDNA. Maternal uniparental inheritance (MUI) of mtDNA has been reported in a wide variety of organisms examined so far, including many invertebrates and all vertebrate species such as humans and other mammals 3,16. Exceptions are certain bivalve mollusks, which show doubly uniparental inheritance (DUI) 17. These mollusks have two distinct mtDNAs, namely female-type (F-type) mtDNA and male-type (M-type) mtDNA. The F- and M-type mtDNAs display more than 20% nucleotide sequence divergence. The transmission of the two mtDNA types is, however, independent and uniparental, because the F-type mtDNA is transmitted through eggs to both female and male progeny, whereas the M-type mtDNA is transmitted through sperm to male progeny only. Consequently, female mollusks possess only F-type mtDNA and are thus homoplasmic, and males are heteroplasmic because they have F-type mtDNA in their somatic organs and M-type mtDNA in their gonads. In these DUI organisms, M-type mtDNA plays an essential role in male sex determination, germline establishment, spermatogenesis and sperm function [18][19][20].
Different degrees of paternal inheritance or leakage of mtDNA may occur even in organisms with demonstrated MUI, such as Drosophila 21,22. In humans, paternal inheritance of mtDNA has been controversial. Paternal inheritance of mtDNA was suggested by linkage disequilibrium and recombination in mtDNA 23. The best case of paternal inheritance of human mtDNA has been reported in a patient carrying a pathogenic mtDNA mutation 24. Subsequent studies of patients with various mtDNA defects have, however, argued against paternal inheritance of human mtDNA 2,25. Although the M-type is transcribed not only in male germ cells 19,20 but also in somatic cells 26, it has remained unknown whether paternal mtDNA is transcribed in MUI organisms. In fish, we and others have reported MUI in medaka 27 and recombination between maternal and paternal mtDNAs in hybrid triploids between goldfish and common carp 28.
This study was aimed at investigation of the fate and behavior of paternal mtDNA in reciprocal hybrids between goldfish (Carassius auratus red var.) and blunt snout bream (Megalobrama amblycephala) as a model of cyprinid fishes. We show that MUI of mtDNA operates in both species by the elimination of paternal mtDNA during embryogenesis. Interestingly, we demonstrate that paternal mtDNA can persist to fairly advanced stages of embryogenesis and remains transcriptionally quiescent, excluding its phenotypic contribution to the developing embryos.
Results
Hybrid analysis system. We make use of cyprinid hybrids as a model system to analyze the mtDNA behavior of different parental origins in developing embryos. Certain species of even distantly related taxa of the family Cyprinidae can easily be mated by artificial insemination procedures to produce hybrid embryos and even adults 29. Examples are the goldfish (Fig. 1a; left panel) and blunt-snout bream (Fig. 1a; right panel), which belong to subfamilies Cyprininae and Cultrinae, respectively. Previously we have shown that the cross between female goldfish and male blunt-snout bream leads to the production of hybrid adult fish, whereas the embryos from the reciprocal cross, namely the cross between female blunt-snout bream and male goldfish, develop abnormally and die shortly after hatching 30, indicating a possibility that nucleocytoplasmic incompatibility would play a key role in distant hybridization between these two species.

Figure 1 (legend excerpt). Genomic DNA mixtures between goldfish and blunt-snout bream were prepared at various ratios and used for PCR analysis with species-specific cytb primers; notably, an amount as low as 1‰ is easily detectable. (e) PCR analysis of mtDNA origins, showing the absence of sperm cytb in the hybrid larvae between female goldfish and male blunt-snout bream and the coexistence of maternal and paternal cytb in the zygotes from reciprocal hybridization. Asterisks depict sperm mtDNA. DNA was isolated from 20 pooled zygotes and embryos at each stage from parental species and reciprocal hybridization and analyzed by PCR at representative stages indicated. β-actin was used as a loading control. PCR and gels were run under the same conditions. B, blunt-snout bream; G, goldfish; BG, blunt-snout bream female × goldfish male; GB, goldfish female × blunt-snout bream male.
A pair-wise comparison revealed that goldfish and blunt-snout bream shared 85% identity in mtDNA sequence. Specifically, they show an 85% sequence identity in cytb as a representative of mtDNA genes (Fig. S1), and an 89% sequence identity in tfam as a representative of nuclear genes (Fig. S2). Sequence alignment allows for designing PCR primers common or specific to mtDNAs of distinct parental origins (Figs S1 and S2). The PCR primers were designed in such a way that amplicons of different parental origins differed in size, with those common to both species being intermediate in size between the paternal and maternal products (Fig. 1b). A semi-quantitative PCR analysis of serially mixed DNA samples from both species revealed a sensitivity of as low as 1‰ for detecting the blunt-snout bream mtDNA in the presence of bulk goldfish mtDNA and nuclear DNA (Fig. 1c). A similar result was obtained for the goldfish mtDNA serially diluted in bulk blunt-snout bream mtDNA and nuclear DNA (Fig. 1d). Therefore, reciprocal hybrids between goldfish and blunt-snout bream provide a suitable model system to quantify mtDNAs of different parental origins by sensitive PCR assays.
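For context on what a 1‰ detection limit means in practice, the short sketch below works out the masses to mix for a dilution series within the 50 ng total template used per reaction (per the Methods); the specific ratios are illustrative, since the exact series is not listed in the text.

```python
# Sketch: masses to mix for a serial-dilution sensitivity test.
# The 50 ng total template per reaction follows the Methods; the mixing
# ratios below are illustrative, since the exact series is not listed.

TOTAL_NG = 50.0

for minority_fraction in (0.5, 0.1, 0.01, 0.001):  # down to 1 per mille
    minority_ng = TOTAL_NG * minority_fraction
    majority_ng = TOTAL_NG - minority_ng
    print(f"{minority_fraction:>7.1%}: {minority_ng:6.3f} ng bream DNA + "
          f"{majority_ng:6.3f} ng goldfish DNA")
```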
Maternal inheritance and sperm delivery of mtDNA in cyprinid fish. Since MUI of mtDNA exists in the majority, but not all, of the species examined so far, we examined the mtDNA origin in the hybrids between goldfish and blunt-snout bream. No detectable sperm mtDNA was present in the hybrid larva between female goldfish and male blunt-snout bream (Fig. 1e; lanes 1-3), suggesting strict MUI of mtDNA in the hybrid of cyprinid fishes. It has been reported that paternal MT undergoes uneven distribution in mouse embryos 31, which indicates a possibility that paternal mtDNA may be present in certain adult organs. In order to test this possibility, we examined the adult organs of three germ layers from the hybrid between female goldfish and male blunt-snout bream. Only maternal mtDNA was detected, whereas paternal mtDNA was absent, as species-specific cytb primers generated the PCR product of merely maternal origin from goldfish but not of paternal origin from blunt-snout bream in all of the seven representative organs examined (Fig. S3). As expected, nuclear gene tfam of both maternal and paternal origins was easily detected in all of the organs (Fig. S3), which is in accordance with the hybrid identity of the organism. Thus, MUI operates in goldfish as in other cyprinid species 32.
Two major modes operate to ensure MUI of mtDNA. One is paternal mtDNA exclusion, where sperm mitochondria do not enter the egg but remain outside and are thus prevented from mtDNA inheritance. This mode has been thought to be exceptional because it has so far been limited to the Chinese hamster (Cricetulus griseus) 1. The other is paternal mtDNA elimination, where sperm mitochondria within the intact mitochondrial sheath do enter the egg together with the tail at fertilization but become selectively eliminated. This mode has been reported in most invertebrates and vertebrates, including the fish medaka (Oryzias latipes) 27. To distinguish the exclusion and elimination modes, we examined the mtDNA origin in zygotes from reciprocal hybridization between goldfish and blunt-snout bream. Paternal mtDNA from either goldfish or blunt-snout bream was easily detected in the zygotes (Fig. 1e). Clearly, sperm mtDNA is delivered into the egg at fertilization but subsequently eliminated to ensure MUI in both goldfish and blunt-snout bream.
Late elimination of paternal mtDNA in cyprinid embryos. It is well known that paternal mtDNA elimination takes place in early developing embryos of human and diverse animals such as mouse 33,34, pig 35, medaka 27 and C. elegans 36. For example, disappearance of paternal mtDNA occurs at the 4- to 8-cell transition in mouse 34 and at the 2-cell stage in medaka 27. The fact that goldfish and blunt-snout bream make use of paternal mtDNA elimination prompted us to examine the fate of paternal mtDNA during critical stages of embryogenesis. As expected, egg mtDNA was evident throughout embryogenesis (Fig. 2). Surprisingly, sperm mtDNA was still easily detected in embryos at the blastula, gastrula and even heartbeat stages before disappearing at hatching (Fig. 2). Consistent with a hybrid nature, tfam of paternal origin also existed throughout embryogenesis (Fig. 2). Therefore, paternal mtDNA persists to advanced stages of embryogenesis and undergoes late elimination in goldfish and blunt-snout bream, which suggests that embryos until the heartbeat stage in both species apparently have mitochondrial heteroplasmy.
Transcriptional quiescence of paternal mtDNA. Delayed elimination of paternal mtDNA described above prompted us to examine the transcriptional status of maternal and paternal mtDNA at critical stages of development, using cytb and tfam as representatives of mtDNA and nuclear genes. The cytb transcript of maternal origin was easily detectable in embryos of goldfish and blunt-snout bream at the blastula, gastrula, heartbeat and fry stages (Fig. 3a). In contrast, the cytb transcript of paternal origin was never detected in reciprocal hybrid embryos at any stage examined (Fig. 3b). The lack of paternal mtDNA expression was further confirmed by three additional mtDNA genes, namely nd6, atp6 and 16s rRNA (Fig. 3b). For comparison, the tfam transcript of both maternal and paternal origins was readily detected in embryos of goldfish and blunt-snout bream as well as their reciprocal hybrids (Fig. 4). Taken together, fertilization-delivered sperm mtDNA is transcriptionally quiescent throughout fish embryogenesis, which excludes any effect and phenotypic contribution by sperm mtDNA to developing embryos and thus allows for paternal mtDNA persistence and ensures MUI of mtDNA.
Discussion
In the present study, we have performed a hybrid analysis of the germline transmission and behavior of fertilization-delivered paternal mtDNA in goldfish and blunt-snout bream as a model of cyprinid fishes. We show that MUI operates in both cyprinid fishes as in the majority of organisms examined so far 1,3,16,32,36. Furthermore, we present two lines of evidence supporting that MUI is the consequence of paternal mtDNA elimination rather than exclusion. One is the easy detectability of paternal mtDNA in zygotes and its disappearance around the hatching stage, demonstrating the delivery of paternal mtDNA at an easily detectable level by sperm at fertilization and its subsequent elimination during embryogenesis. The other is the absence of paternal mtDNA in all of the 7 examined adult organs of three germ layers, which largely excludes the possibility that paternal mtDNA may persist in certain organs through uneven distribution. Uneven distribution of paternal mtDNA has been reported in mouse embryos as an indicator of the possible presence of paternal mtDNA in certain adult organs 31.

Figure 2. Late elimination of paternal mtDNA in cyprinid embryos. Clearly seen is persistence of sperm mtDNA in developing embryos until the heartbeat stage (24 hpf) and its disappearance in fry around hatching (34 hpf). Asterisks and hashes depict sperm mtDNA from blunt-snout bream and goldfish, respectively. DNA was isolated from 20 pooled embryos at each stage from parental species and reciprocal hybridization and analyzed by PCR at representative stages indicated. β-actin was used as a loading control. PCR and gels were run under the same conditions. Nuclear gene tfam was used for comparison. For abbreviations see legend to Fig. 1.
A surprising observation in this study is the persistence of paternal mtDNA in developing embryos until the heartbeat stage when many major organ systems have already been established. This observation demonstrates that paternal mtDNA is eliminated late during embryogenesis in both goldfish and blunt-snout bream. This late elimination is in sharp contrast to early elimination as has been reported in all MUI organisms examined to date, including vertebrates such as the fish medaka 27 and many mammals [33][34][35] , and invertebrates such as C. elegans 36 , where paternal mtDNA elimination occurs early in cleavage embryos. We have previously recorded recombination between maternal and paternal mtDNAs in hybrid triploids between goldfish and common carp 28 . Late elimination of paternal mtDNA revealed in this study may allow for paternal mtDNA persistence and thus favour recombination between maternal and paternal mtDNAs. Future work is needed to see whether late elimination of paternal mtDNA operates also in other animal species.
Mitochondrial heteroplasmy is usually associated with abnormal embryogenesis and diseased phenotypes in diverse organisms such as mammals 1,[37][38][39]. Persistence of paternal mtDNA due to its late elimination indicates mitochondrial heteroplasmy in hybrid embryos between goldfish and blunt-snout bream up to advanced stages. We have previously shown that embryos between female goldfish and male blunt-snout bream are capable of normal development, as evidenced by the production of normal adult fish, whereas embryos between male goldfish and female blunt-snout bream are characterized by abnormal development and perinatal mortality 30. These observations lead to the notion that mitochondrial heteroplasmy has little adverse effect on embryogenesis in goldfish and blunt-snout bream and cannot be held responsible for the abnormal development and perinatal death of hybrid embryos between female blunt-snout bream and male goldfish.
A striking finding obtained in this study is the transcriptional quiescence of paternal mtDNA in cyprinid embryos, which is in sharp contrast to the situation in DUI mollusks, where paternal mtDNA transcription occurs in both male germ cells 19,20 and somatic cells 26. This quiescence may prevent paternal mtDNA from contributing its effect and function to the cell and embryo, which in turn allows for paternal mtDNA persistence and the normal development of heteroplasmic embryos, as we have observed in goldfish and blunt-snout bream. In this context, transcriptional quiescence of paternal mtDNA represents a new mechanism for maternal mtDNA inheritance. Transcriptional quiescence may result from the incompatibility of the mitochondrial transcription machinery and/or the inaccessibility of paternal mtDNA. In this study, we have shown that incompatibility of the mitochondrial transcription machinery is unlikely to be causative for transcriptional quiescence, because tfam RNA, whose protein product mitochondrial transcription factor A acts as a key player in mtDNA replication and transcription 40,41, does not show any difference in embryonic transcription between maternal and paternal alleles. Although work is needed to test any difference between maternal and paternal mtDNAs in transcriptional inaccessibility, our finding that paternal mtDNA is transcriptionally quiescent has important implications for treating MT-associated diseases by MT transfer or replacement, as has been attempted in primates including human 12,13.
Materials and Methods
Fish. Fish work was performed in strict accordance with the recommendations in the Guidelines for the Care and Use of Laboratory Animals of the National Advisory Committee for Laboratory Animal Research in China and approved by the Animal Care Committee of Hunan Normal University (Permit Number: 4237). Goldfish (red variety; Carassius auratus) and blunt-snout bream (Megalobrama amblycephala) were maintained at the National Education Ministry Breeding Center of Polyploidy Fish, Hunan Normal University as described 32. Reproduction and reciprocal hybridization were performed by using the dry method of artificial insemination. Embryos were placed on nylon meshes in water for mass production or in Petri dishes for experimentation. Embryos in Petri dishes were regularly monitored, snap-frozen in liquid nitrogen at different stages and stored at −80 °C before use.

Sequence analysis. Sequences were analyzed by using BLAST search and aligned by using Vector NTI.
DNA and RNA extraction. DNA was extracted from freshly dissected organs of adult fish or 20 pooled embryos at each stage by using the TaKaRa MiniBEST Universal Genomic DNA Extraction Kit (TaKaRa, Japan) as described 32 . RNA was extracted by using the E.Z.N.A. Total RNA Kit II (OMEGA).
Polymerase chain reaction. Genomic DNA PCR was run for 35 cycles (94 °C for 30 s, 58 °C for 30 s and 72 °C for 30 s) in a 25-μ l volume containing 50 ng of template DNA and appropriate primers for cytb, tfam and β -actin as described 32 . Template DNA samples used were goldfish DNA, blunt-snout bream DNA or their mixtures with serial dilutions. For RT-PCR, first-strand cDNA was synthesized by using the PrimeScript TM RT reagent Kit with gDNA Eraser (TaKaRa), and PCR was run for 35 cycles (94 °C for 30 s, 58 °C for 30 s and 72 °C for 30 s) in a 25-μ l volume containing 10 ng of template cDNA and appropriate primers for cytb and tfam, or for 30 cycles for β -actin as a loading control. Primers used are listed in Table S1. PCR products were separated on 1.5% agarose gels and documented on the White/UV Transilluminators (UVP, Upland, CA 91786).
|
v3-fos-license
|
2017-06-18T21:47:08.783Z
|
2014-11-19T00:00:00.000
|
16535898
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://bmcpublichealth.biomedcentral.com/track/pdf/10.1186/1471-2458-14-1184",
"pdf_hash": "b11c1a91fe3857421d2e5e85d8b3e00f8842cbd1",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46749",
"s2fieldsofstudy": [
"Sociology"
],
"sha1": "048bd3f31b20a379fb29e361eb80ca3251ec1c18",
"year": 2014
}
|
pes2o/s2orc
|
Differences in spousal influence on smoking cessation by gender and education among Japanese couples
Background Previous studies have reported that spousal non-smoking has a spillover effect on the partner’s cessation. However, discussion is lacking on the factors modifying that association. We examined whether the spillover effect of spousal non-smoking was associated with the couple’s educational attainment. Methods We used paired marital data from the Japanese Study on Stratification, Health, Income, and Neighborhood (J-SHINE), which targeted residents aged 25–50 years in four Japanese municipalities. We selected a spouse smoker at the time of marriage (target respondent), and set his/her smoking status change (continued or quit smoking after marriage) as an outcome, regressed on the counterpart’s smoking status (continued smoking or non-smoking) and combinations of each couple’s educational attainment as explanatory variables using log-binomial regression models (n =1001 targets; 708 men and 293 women). Results Regression results showed that a counterpart who previously quit smoking or was a never-smoker was associated with the target male spouse’s subsequent cessation. However, for women, the association between husband’s non-smoking and their own cessation was significant only for couples in which both spouses were highly educated. Conclusions Our findings suggest that a spouse’s smoking status is important for smoking cessation interventions in men. For women, however, a couple’s combined educational attainment may matter in the interventions.
Background
Smoking is a risk factor associated with the incidence of various non-communicable diseases. Owing to secondhand smoke, smoking is harmful even to non-smokers, making it a major target of public health intervention, not only for smokers but also their families and neighbors.
In common with developed Western countries, the smoking rate in Japan has declined markedly since the 1950s. However, the male smoking rate in Japan is the sixth highest among OECD (Organization for Economic Cooperation and Development) countries [1], and tobacco remains the largest contributor to the nation's burden of diseases [2]. There is a notable trend in the prevalence of smoking among women with respect to age: although the female smoking rate in Japan is relatively low compared with Western countries, the rate among young women is increasing [3].
Among individual characteristics affecting smoking initiation and cessation, researchers have focused on age [4], gender [5], educational background [6,7], and other sociodemographic factors [8]. In parallel with the studies of individual factors, many researchers have also studied the influence of family, especially a spouse, on smoking cessation among married people [9][10][11][12]. Spousal smoking status is regarded as a significant factor affecting a partner's smoking cessation. A longitudinal study by Falba and Sindelar [13] found that, if a spouse quit smoking, the odds of the partner's cessation showed an increase of up to 7.5-fold among men and 8.5-fold among women. Using self-reported life histories of smoking behavior, Monden et al. [10] found that respondents living with an ex-smoker or never-smoker spouse were more likely to quit smoking than respondents living with a current smoker.
Despite a number of publications linking a spouse's smoking cessation to that of their partner, discussion of the factors modifying these associations is lacking. As Ross et al. [14] discussed in their review paper, a couple's key characteristic in affecting the concordance of health-related behaviors is educational background. Many studies suggested that an individual's education affects their own health through material circumstances, behavioral factors (e.g., lifestyle), and psychosocial factors (e.g., social support) [14][15][16]. A spouse's education may also affect their partner's health by the same mechanisms as the individual's own education [17]. Therefore, taking a spouse's education into account may well explain the health of married people. For example, using cross-sectional data, Bloch et al. [18] found that highly educated couples showed a high concordance rate with respect to absence of smoking behavior. Monden et al. [17] also examined the effect of a participant's own and their spouse's educational attainment on smoking behavior using data from about 40,000 Dutch people. Although the results suggested that a spouse's education was significantly associated with their partner's smoking behavior, the effect was weaker than that of the participant's own educational level. However, because these studies did not examine when the couples started and stopped smoking, they could not conclude whether the observed association was the result of a real spillover effect of the partner's behavior change or just the reflection of assortative mating. Based on social exchange theory, assortative mating proposes that mate selection is not random and that individuals are likely to choose a partner who is similar in personality, behavior, physical features, and health [19][20][21][22].
To find a leverage point of behavioral intervention to reduce smoking, it would be beneficial for public health practitioners to know whether couples' behavioral interactions and their educational backgrounds affect the likelihood of smoking cessation. If the spillover effect exists and is influenced by the couple's educational backgrounds, smoking cessation interventions can be modified more effectively according to the target's own and their partner's educational statuses; whereas if the assortative mating explanation is the dominant factor in a couple's concordance in smoking behavior, approaches tailored to the target's educational background would not work. Thus, this paper examined how an individual's smoking cessation is affected by a spouse's prior smoking behavior status, and how these associations are altered by the couple's educational attainment combinations.
Data
Data from the Japanese Study on Stratification, Health, Income, and Neighborhood (J-SHINE) project were used for this study, details of which are described elsewhere [23]. The wave 1 survey was conducted from July 2010 to February 2011 in four municipalities in Japan (two in the Tokyo metropolitan area and two in a nearby prefecture), with a probabilistic sample of community-dwelling men and women aged 25-50 years. The sample size of the wave 1 survey was 8,408, and 4,357 respondents replied (response rate was 51.8%). Among them, 3,027 with a spouse or partner were invited to take part in a spouse/partner survey from August to December 2011; this involved asking similar questions as in the wave 1 survey questionnaire to make a pair-wise comparison. The questionnaire was filled out by the spouses themselves. Data from the spouse/partner survey were merged into the wave 1 data, and the paired data of 1,500 couples were available for the analysis. All couples were asked when they married, and when each member of the couple initiated/quit smoking. Thirty-two couples in which spouses were living together but were not legally married were not included in our analyses because there was no question identifying the starting date of their partner relationship. Additionally, 545 couples in which both spouses were non-smokers at the time of marriage were also omitted from our analyses. The present study analyzed 839 eligible couples who had no missing values in the measurement variables (as described below, the number of individual "targets" included in our analyses was 1001).
The study protocol and informed consent were approved by the ethics committees of the Graduate School of Medicine of The University of Tokyo.
Smoking status
Respondents were asked to identify their smoking status from three categories (1 = current smoker, 2 = ex-smoker, 3 = never-smoker). Respondents who were categorized as current smokers or ex-smokers were then asked when they initiated smoking according to a year-month format. Ex-smokers were additionally asked when they quit smoking. In the spouse/partner survey, spouses were asked about their smoking status and the date of smoking initiation and cessation.
We extracted information about both spouses' smoking status at marriage and their change in smoking status after marriage, based on wave 1 and spouse survey data. Then, we included all subjects who were smokers at the time of marriage as the targets. In couples who both smoked at the time of marriage, each person in the couple was counted in the analysis as a target, and their counterpart's status at the time of behavioral change as an exposure. As a result, the number of individual targets in our analyses was 1001 (708 men and 293 women). Our main exposure variable was dichotomous, indicating whether the counterpart continued smoking after marriage or was a non-smoker (a never or ex-smoker at marriage, or quit after marriage). The outcome variable was also dichotomous, indicating whether the target respondent continued smoking after marriage or quit smoking during marital life (Table 1).
Demographic variables
Educational attainment of high school graduation or lower was coded as 0 and that of college graduation or higher as 1. Educational attainment was measured as completion of the level of schooling at the time of the survey. In the analyses, combinations of the couple's educational attainment were used as explanatory variables, i.e., we used four dummy variables indicating couples with a low-educated target and low-educated counterpart (LOW-low couple), a low-educated target and high-educated counterpart (LOW-high couple), a high-educated target and low-educated counterpart (HIGH-low couple), and a high-educated target and high-educated counterpart (HIGH-high couple). Additionally, the target respondent's age and gender and household's presence of children were also used as sociodemographic covariates.
Statistical analysis
We adopted log-binomial regression models rather than logistic regression models because the prevalence of outcome events (change in smoking behavior after marriage) was relatively large [24,25]. We set the target respondent's change in smoking behavior as the outcome, and their counterpart's non-smoking status as the main exposure.
First, we analyzed a model that included the main effect of the counterpart's non-smoking status and its interaction effect with the target's sex to check whether there was a gender difference in the effect of the counterpart's smoking status. Because this analysis included some couples twice, the independence assumption of regression was violated. To account for potential underestimation of errors, we adopted robust error estimation to take within-couple clustering into consideration [26]. Because the interaction was significant, we further conducted analyses stratified by gender, regressing the target's behavioral change in smoking after marriage on the main effects of the counterpart's non-smoking status and the couple's educational attainment combinations (Model 1). In subsequent analyses, we added to Model 1 interaction terms between the counterpart's non-smoking status and the couple's educational attainment combinations to determine whether the spillover effect of the counterpart's non-smoking varied according to the couple's educational levels (Model 2). Table 2 presents descriptive statistics of the targets in the analysis.
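As an illustration only, the following is a minimal sketch of how a log-binomial model with couple-clustered robust errors could be fitted in Python with statsmodels; the statistical software actually used is not stated here, and all column names (target_quit, counterpart_nonsmoking, the educational dummies, couple_id) are hypothetical placeholders rather than J-SHINE variable names.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical data layout: one row per target respondent.
df = pd.read_csv("couples.csv")

# Outcome: 1 if the target quit smoking during marital life, 0 if they continued.
y = df["target_quit"]

# Main exposure, educational-combination dummies (LOW-low as the reference),
# and sociodemographic covariates, roughly in the spirit of Model 1.
X = sm.add_constant(df[["counterpart_nonsmoking", "edu_low_high",
                        "edu_high_low", "edu_high_high",
                        "age", "female", "has_children"]])

# Log-binomial model: binomial family with a log link, so exp(coefficient)
# is interpreted as a risk ratio rather than an odds ratio.
model = sm.GLM(y, X, family=sm.families.Binomial(link=sm.families.links.Log()))

# Cluster-robust standard errors by couple, because both members of a
# dual-smoker couple enter the data as targets.
result = model.fit(cov_type="cluster", cov_kwds={"groups": df["couple_id"]})

print(np.exp(result.params))          # risk ratios
print(np.exp(result.conf_int()))      # 95% confidence intervals on the RR scale
```

Log-binomial models can fail to converge when predicted probabilities approach 1; in that case a Poisson model with robust errors is a common fallback.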
Results
About half of the target respondents were in their 40s, and about 80% had children; 31.2% of men and 47.4% of women achieved high school graduation or less. The percentages of couples matched in terms of the counterpart's educational attainment ("LOW-low" or "HIGH-high" couples) were 75.2% among men and 66.9% among women. Whereas 42.5% of male targets continued smoking after marriage despite having a non-smoking counterpart, only 10.6% of female targets did so. Only 4.1% of males quit smoking despite their wife's continuing smoking, whereas 37.2% of females did so. The proportion of couples in which both target and counterpart continued smoking after marriage was 16.0% among men and 38.2% among women.
Next, we examined whether the interaction term between the target's sex and the counterpart's non-smoking was associated with the target's cessation after marriage to check for a gender difference in the spillover effect (data not shown). The results showed that, in contrast to the non-significant main effect of the counterpart's non-smoking on the target's postmarital cessation (risk ratio (RR) = 1.20, 95% CI: 0.94-1.52), the interaction term was significant (RR = 1.84, 95% CI: 1.22-2.76). A log-likelihood ratio (LR) test between the models with and without the interaction term showed that inclusion of the term was significant (LR chi2(1) = 9.96, p < .01). Because these results indicated that there was a significant gender difference in the effect of the counterpart's non-smoking on the target's cessation after marriage, the following analyses were stratified by gender.
Model 1 in Table 3 shows the log-binomial regression estimates for the effects of the counterpart's non-smoking status and the couple's educational attainment combinations on the target's cessation after marriage. The couple's educational attainment combinations did not have significant main effects on smoking cessation in either men or women. As an additional analysis, we re-categorized the counterpart's non-smoking status into "never-smoker" and "ex-smoker" and examined whether these non-smoking categories showed different associations with the target's cessation. The RRs of never-smoker counterparts were 2.04 for men and 1.02 for women, whereas those of ex-smoker counterparts were 3.28 for men and 1.33 for women (continuing smoker as a reference, data not shown). Model 2 in Table 3 examined whether the association between a counterpart's non-smoking and a target's cessation varied depending on the couple's educational combination. Targets in couples where both spouses had low levels of education (i.e., LOW-low couples) and the counterpart was a non-smoker were set as the reference category to examine whether the effect of a counterpart's non-smoking depended on the couple's educational combination. For men, no combination of educational attainment showed a significant interaction with the counterpart's non-smoking. However, for women, the interaction of high educational attainment in both spouses and their husband's non-smoking showed a significant association with their own smoking cessation after marriage (RR = 1.48, 95% CI: 1.05-2.08). This result suggested that women in couples where both spouses were highly educated and the husband was a non-smoker were 1.48 times more likely to stop smoking than women in couples where both spouses had lower educational levels and the husband was a non-smoker. LR tests between Models 1 and 2 demonstrated that the interaction terms between educational pairs and spousal non-smoking as a whole were not significant either for men (LR chi2 = 1.39, n.s.) or for women (LR chi2 = 1.03, n.s.).
Discussion
This study indicated that a counterpart's non-smoking had a major association with a target's subsequent cessation in men; that is, there was a spillover effect of the wife's non-smoking only among men. A husband's non-smoking was not associated with a female target's cessation. In women, a significant association between the counterpart's non-smoking and the target's own cessation was observed only in couples in which both spouses were highly educated. The results suggested that the spillover effect from highly educated husbands quitting tobacco use was effective only for highly educated wives. Among men, however, the combinations of the couples' educational levels had no influence on the spillover effect. (In Table 3, the counterpart's "non-smoking" includes cessation before marriage, cessation after marriage but before the target's cessation, and never having smoked.) The strength of the present study is that the possibility of assortative mating was excluded to some extent by analyzing the target respondents' postmarital behavioral changes in smoking.
Intrinsically, one person in a partnership may regulate the partner's health behaviors through direct physical intervention in an effort to improve the health of their partner [27]. Although many spouses generally monitor and attempt to control their partner's health behaviors, women are more likely to attempt to control their spouse's health than men [16]. Our finding that men benefit from their female counterpart's non-smoking, regardless of her educational background, could be explained by the general influence of women on the daily habits of their husbands through monitoring health and social behavior and/or providing support for behavioral change. Meanwhile, our analyses suggested that a husband's non-smoking was significant only for women in highly educated couples, suggesting that a couple's educational level may modify the impact of a husband's non-smoking on women.
One plausible reason is that the amount of social influence and/or support that husbands exert on their wives depends on their educational backgrounds. A woman who marries a husband of low educational attainment may receive relatively little social support from him to stop smoking (e.g., emotional encouragement), or may have a negative influence from him (e.g., emotional pressure to smoke) [28]. Likewise, a woman's tendency to accept a husband's influence may also depend on her own educational level. For example, a high education increases knowledge about health and helps people accept the positive influences from family health behaviors. The analyses that split never-smoker and ex-smoker counterparts into independent categories indicated that the spillover effect for a male target from his counterpart may depend on whether the counterpart was a never-smoker or ex-smoker, and this was consistent with the findings of Monden et al. [10]. Ex-smoker counterparts may be more likely to dislike their spouse smoking and to intervene in their spouse's cessation than never-smoker counterparts, or ex-smoker counterparts may be able to provide more appropriate support for their smoking spouse because they know the difficulty of quitting smoking.
One possible reason that the main effect of a couple's educational level was not significant in this study may be attributed to the failure of previous studies to exclude the possibility of assortative mating [17,18]. That is, there is a possibility that a highly educated person marries a highly educated partner and, in such couples, the probability of being a non-smoker is high because of their high educational level. In fact, when examining the cross-sectional smoking status in our data, it was shown that the target's own and the counterpart's educational levels were associated with the target's smoking status. In this paper, we focused on smoking cessation after marriage and treated the respondent who was a smoker at the start of their marital life as the target, thus it is reasonable to believe that the main effect of educational attainment is weakened in our analyses, with a reduced influence of assortative mating.
Several limitations should be noted in this study. First, we measured the respondents' self-reported date of smoking cessation, which may be susceptible to measurement errors. Second, although partners who were living with, but were not legally married to the respondents in the wave 1 survey (e.g., common-law husband/wife) were also invited to participate in the spouse/partner survey, they were omitted from our analyses because of the lack of a defined starting date of the partner relationship. However, an unmarried partner can also be seen as influential in the issue of the spillover effect from intimate others. Future research should include unmarried couples and examine the effect of the intimate partner's smoking behavior. Third, the sample was derived from four municipalities in the Greater Tokyo metropolitan area, which may affect the generalizability of our findings. Finally, we simply assumed in our analysis that the initial cessation of a spouse affected the subsequent cessation of the partner; however, temporal initiation does not necessarily signify a causal association between the behaviors, and our estimates of the spillover effect may have been exaggerated.
The implications of our results for public health practice are that smoking cessation programs targeting both spouses may be more effective than those targeting individuals in some couples. For example, for couples who both smoke, if either one can successfully quit smoking, the eventual likelihood of both spouses quitting can be increased. Or, if a practitioner finds that one spouse's likelihood of cessation is higher than that of the other spouse, initial intervention with the former can increase the probability of cessation in both spouses. Thus, both spouses should be involved in the intervention. This is especially the case for men. However, for women, the husband's smoking status may not enhance the effectiveness of the intervention if the educational attainment of both spouses is not high; our result implied that the spillover effect between spouses may not be strong in such couples.
Conclusion
Our findings suggest that a spouse's smoking status is associated with men's smoking cessation. For women, however, a couple's combined educational attainment may matter in that association. The present paper implies that cessation programs should involve both members of a couple, and such programs should take into account the educational backgrounds of the couple in the case of women smokers.

Authors' contributions (fragment)
design and editing of data. HH revised the drafted manuscript critically and contributed to revising the manuscript through the review process for important intellectual content. All authors read and approved the final manuscript.
|
v3-fos-license
|
2017-09-15T11:31:15.854Z
|
2012-02-01T00:00:00.000
|
39887295
|
{
"extfieldsofstudy": [
"Environmental Science"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://cdn.intechopen.com/pdfs/27383.pdf",
"pdf_hash": "8376f6f9edee1455968a919793c7fa075f47c30e",
"pdf_src": "Adhoc",
"provenance": "20241012_202828_00062_s4wg2_08f3ca33-e262-4f97-a832-04b691aafe33.zst:46750",
"s2fieldsofstudy": [
"Materials Science"
],
"sha1": "abe0de563ea58903b80b4d4515d5fd4d2bcebc76",
"year": 2012
}
|
pes2o/s2orc
|
Micro and Nano Corrosion in Steel Cans Used in the Seafood Industry
Introduction
The use of metal containers for food preservation dates from the early nineteenth century and has been important in the food industry ever since. This type of packaging was developed to improve on food preserved in glass jars, manufactured for the French army at the time of Napoleon Bonaparte (eighteenth century); the jars were very fragile and difficult to handle on battlefields, so it was decided to produce metal containers instead (Brody et al, 2008). Peter Durand invented the metallic can in 1810 to improve the packaging of food. In 1903 the English company Cob Preserving carried out studies to develop coatings that prevent internal and external corrosion of the cans and maintain the nutritional properties of the food (Brody, 2001). Currently, cans are made from steel sheets treated with electrolytic processes to deposit tin. In addition, a variety of plastic coatings are used to protect the steel from corrosion and to produce adequate brightness for printing legends on the outside of the metallic cans (Doyle, 2006). This type of metal container does not affect the taste or smell of the product; the insulating layer between the food and the steel is non-toxic and avoids deterioration of the food. The differences between metal and glass containers, as well as the negative effects that cause damage to the environment and human health, are presented in Table 1. The wide use of steel packaging in the food industry, from its initial experimental process, has been very helpful in keeping food in good condition, with advantages over other materials such as glass, ceramics, iron and tin. The mechanical and physicochemical properties of steel allow a quick and easy manufacturing process (Brown, 2003). At present there exists a wide variety of foods conserved in steel cans, but in harsh environments the cans corrode. Aluminum is used because of its better resistance to corrosion, but it is more expensive. With metal packaging, food reaches the most remote places of the planet and stays for longer times without losing its nutritional properties, which are established and regulated by the Mexican Official Standards (NOM). The comparison between metal cans and glass (Table 1) indicates greater advantages for steel cans (Finkenzeller, 2003). In coastal areas where some food companies operate using steel cans, three types of deterioration are detected: atmospheric corrosion, filiform corrosion and microbiological corrosion. Even with the implementation of protection techniques and the use of metal and plastic coatings, corrosion is still generated, although it is lower with the use of plastics (Lange et al, 2003). Variations of humidity and temperature deteriorate steel cans (Table 2).
Steel
Steel is the most widely used metal in industrial plants because of its mechanical and thermal properties and its ease of manufacture. It is an alloy of iron and carbon. Steel manufacturing is a key part of the Mexican economy. Altos Hornos is the largest steel company in Mexico, with a production of more than 3,000,000 tons per year, located in Monclova, Coahuila, near the U.S. border (AHMSA, 2010). Steel is used in the food industry, especially in the packaging of sardines and tuna (Lord, 2008).
Metallic cans
Steel cans consist of two parts, body and ring, or of three parts: body, joint and ring (Figures 1a and 1b). When a steel can is not properly sealed, it is damaged by drastic variations of humidity and temperature, allowing microorganisms to develop that can injure the health of consumers (Cooksey, 2005). Millions of cans are produced every day, and the companies express their interest in research studies to improve their designs. There are two main types of steel cans: tin plated and plastic coated. Plastic coatings have good resistance to compression, and their resistance to corrosion is better than that of tin plate. Since the oxide layer that forms on the container surface is not completely inert, the container should be covered internally with a health-compatible coating (Nachay, 2007).
Production stages
The manufacturing stages in a food plant are shown in Figure 2 (Avella, 2005):
Washing: cans are cleaned thoroughly to remove bacteria that could alter the nutritional value of the food.
Blanching: the product is immersed in hot water to remove the enzymes that produce food darkening and the microorganisms that cause rancidity.
Preparation: before placing food in the can, the non-consumable parts of the sardine and tuna are removed, and then the ingredients required to prepare the food in accordance with consumption requirements are added.
Packaging: the food is placed in the can, adding preservatives such as vinegar, syrup, salt and others to obtain the desired flavor.
Air removal: the can passes through a steam tunnel at 70 °C to avoid bad taste and odor.
Sealing: by soldering or with seams.
Sterilization: of great importance for the full elimination of microorganisms that might remain from the previous stages; the can is treated at temperatures of 120 °C.
Cooling: once sterilized, the cans are cooled from the outside under running cold water or by cold water immersion, without affecting the food quality.
Labeling: legends with product ingredients, expiration dates and production lot numbers are placed on the can label.
Packing: the food steel cans are organized in boxes.
Food technology specialists consider that an adequate manufacturing process of canned foods helps to keep certain products for up to several months or years, as in the case of milk powder (nine months) and some vegetable and meat foods (two and up to five years). A diagram summarizing all these stages is displayed in Figure 2.
Seafood industry in Mexico
The main coastal cities in Mexico with installed companies that fabricate metallic cans for sardine and tuna conservation are Acapulco, Guerrero; Ciudad del Cabo in the State of Baja California Sur; Ensenada, Baja California; Campeche, Campeche; Mazatlan, Sinaloa; and Veracruz, Veracruz (Bancomext, 2010). The sardine is a blue fish and a good source of omega-3, helping to lower cholesterol and triglycerides and to increase blood flow, decreasing the risk of atherosclerosis and thrombosis. Because of these nutritional properties it is widely consumed in Mexico; it contains vitamins B12, niacin and B1, and its energy nutrients (carbohydrates, fats and proteins) make it part of a good diet. This food is important in the biological processes of formation of red blood cells, synthesis of genetic material and production of sex hormones. Tuna is an excellent food with high biological value protein, vitamins and minerals. It contains minerals such as phosphorus, potassium, iron, magnesium and sodium and vitamins A, D, B, B3 and B12, which are beneficial for the care of the eyes, and it also provides folic acid to pregnant women. Its fat, rich in omega-3, is ideal for people who suffer from cardiovascular disease (FAO, 2010).
Atmospheric corrosion
Atmospheric corrosion is an electrochemical phenomenon that occurs in the wet film formed on metal surfaces by climatic factors (Lopez et al, 2011; AHRAE, 1999). One factor that determines the intensity of damage in metals exposed to the atmosphere is the corrosive chemical composition of the environment. Sulphur oxides (SOx), nitrogen oxides (NOx), carbon monoxide (CO) and sodium chloride (NaCl), which generates chloride ions (Cl−), are the most common corrosive agents. NaCl enters the atmosphere from the sea; SOx, NOx and CO are emitted by vehicle traffic. The joint action of the pollution sources and the weather determines the intensity and nature of the corrosion processes, acting simultaneously and increasing their effects. Other factors such as exposure conditions, the metal composition and the properties of the oxide formed also have a combined influence on the corrosion phenomena (Lopez, 2008). The most important atmospheric feature directly related to the corrosion process is moisture, which is the source of the electrolyte required in the electrochemical process. In spite of existing corrosion prevention and protection systems and the application of coatings to steel cans, corrosion control is not easy in specific climatic regions, especially marine regions. Ensenada, a marine region of Mexico on the Pacific Ocean, has a marine climate with cold winter mornings around 5 °C and summer temperatures up to 35 °C. Relative humidity (RH) is around 20% to 80%. The main climate factors analyzed were humidity, temperature and wind, used to determine the time of wetness (TOW) and the periods of formation of thin films of SOx and Cl−, which were analyzed to determine the corrosivity levels (CL) indoors and outdoors of seafood industry plants (Lopez et al, 2010).
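ISO 9223 commonly defines the time of wetness as the number of hours in which the relative humidity is at or above 80% while the temperature is above 0 °C. A minimal sketch under that assumption is shown below; the column names and the hourly data file are illustrative, not the study's actual records.

```python
import pandas as pd

def time_of_wetness_hours(weather: pd.DataFrame) -> int:
    """TOW per the usual ISO 9223 criterion: hours with RH >= 80% and T > 0 degC.

    Expects one row per hour with columns 'rh_percent' and 'temp_c'
    (illustrative names, not from the study's data files).
    """
    wet = (weather["rh_percent"] >= 80.0) & (weather["temp_c"] > 0.0)
    return int(wet.sum())

# Example with hypothetical hourly observations for an Ensenada test site:
# weather = pd.read_csv("ensenada_hourly.csv")
# print(time_of_wetness_hours(weather), "hours of wetness per year")
```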
Corrosion of steel cans
Corrosion of tinplate for food packaging is an electrochemical process that deteriorates the metallic surfaces (Ibars et al, 1992). The tin layer has a discontinuous structure because of its porosity and the mechanical damage or defects resulting from handling the can. The lack of continuity of the tin layer allows the food product to be in contact with the various constituents of the steel, with the consequent formation of galvanic cells inside the cans. The solder alloy used in the conventional container side seam is a further element in the formation of galvanic cells. Corrosion of tin plate by acidic food produces dissolution of tin and formation of hydrogen gas from the cathodic reaction, which accumulates in the cans. At present, problems arising from the simultaneous presence of an aggressive environment, mechanical stress and localized corrosion (pitting) are very frequent (CGB, 2007).
Coatings
The food in a steel can is protected by a metallic or plastic coating regulated by the FDA (Food and Drug Administration, USA) so that it does not generate any health problems in consumers (Weiss et al, 2006). The coating is adhered to the metal plate and its function rests on three main features. Thermal and chemical resistance: it protects the steel surface when a food produces a chemical attack through rancidity, which changes the food taste. Adherence: the coating attaches easily to the inside surface of the can. Flexibility: it resists the mechanical operations that modify the structure of the can during the manufacturing process, such as molding of shapes, and bad handling.
Currently, new materials and coatings are being analyzed to fit them to the variety of foods, beverages and other canned products (Table 3). The coatings used in the food industry are of the organosol type, with high solids content, creating dry films of 10 to 14 g/m2, suitable for manufacturing or recycling and allowing large deformation (Soroka, 2002). To improve the resistance of the steel, two layers of epoxy-phenolic are applied with the organosol film. If the food suffers decomposition, it generates deformation of the can (Yam et al, 2005). Coatings are applied to the cans on the inside and outside. Since the early twentieth century, coatings manufacturers have supported the food and beverage industries, first using oleoresin and phenolic resins; later, in 1935, vinyl coatings were applied in beer cans. Afterwards came the epoxy-phenolic, organosol, acrylic and polyester coatings (Ray, 2006).
The coating types, functions, characteristics and compositions can be summarized as follows:
Interior coatings ("health coatings"): in direct contact with the packaged product.
Exterior pigmented coatings ("white enamel" or "white lacquer"): underlie the decorative printing of the packaging.
Exterior transparent coatings ("coatings of hitch", "clear coatings"): underlie the print and protect the printing inks from defective manipulations.
Functions: protect the metal from the food; protect the product from contamination when steel particles detach from the can; facilitate production; provide a basis for decoration; act as a barrier against external corrosion and abrasion.
Characteristics: compatible with the packaged product and resistant to its aggressiveness; high adhesion to tin or other metals; free of toxic substances; no effect on the organoleptic characteristics of the packaged product; no items prohibited by health legislation; resistant to the sterilization and/or treatment to which the packaged product is subjected; adequate support for the body welding of two- and three-piece containers.
Types: metallic (tin compounds) and plastic (oleoresins, phenolic, epoxy, vinyl, acrylic, polyester).
Composition: acrylic-polyester copolymer (the polyester is inserted in the acrylic); unsaturated polyester resin; crosslinker (the polyester contains acrylic acid, or maleic anhydride with styrene).
Climate factors
The climate is composed of several parameters; RH and temperature are the most important factors in the damage to steel cans. Scientists who analyze atmospheric corrosion consider that the degree of deterioration of steel cans is due to the drastic changes in humidity and temperature at certain times of the year, as expressed in ISO 9223 (ISO 9223, 1992). Managers and technicians of companies, and members of health institutions in Mexico, are concerned in some periods of the year about the quality of the seafood contained in steel cans (Moncmanova, 2007).
Corrosion testing
Pieces of steel rolls were prepared for corrosion testing to simulate steel cans and were exposed to indoor conditions of seafood plants for periods of one, three, six and twelve months in Ensenada, following ASTM standards G 1, G 4 and G 31 (ASTM, 2000). The results were correlated with the RH, TOW and temperature parameters. The concentration levels of SOx and Cl− were evaluated with the sulfation plate technique (SPT) and the wet candle method (WCM) (ASTM G 91-97, 2010; ASTM G 140-02, 2008). The industrial seafood plants in this city are located at distances of 1 km to 10 km from the seashore. Steel plates used to fabricate steel cans, with dimensions of 3 cm × 2 cm and 0.5 cm thickness, were cleaned by immersion in an isopropyl alcohol ultrasound bath for 15 minutes (ISO 11844-1, ISO 11844-2; Lopez et al, 2008). Immediately after cleaning, the steel probes were placed in sealed plastic bags, ready to be installed at the indoor and outdoor test sites. After each exposure period the steel specimens were removed, cleaned and weighed to obtain the weight loss and to calculate the corrosion rate (CR).
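The weight-loss values obtained after each exposure period translate into a corrosion rate in the usual ASTM G1 fashion. The sketch below is illustrative only: the specimen area is taken from the stated 3 cm × 2 cm coupon faces, while the steel density and the example mass loss are assumed values, not measurements from the trial.

```python
def corrosion_rate_mm_per_year(mass_loss_g, area_cm2, hours, density_g_cm3=7.86):
    """ASTM G1-style rate: CR = K*W/(A*T*D), with K = 8.76e4 giving mm/year."""
    K = 8.76e4
    return K * mass_loss_g / (area_cm2 * hours * density_g_cm3)

def mass_loss_rate_mg_per_m2_year(mass_loss_g, area_cm2, hours):
    """Mass-loss rate in mg/(m^2*year), the CR unit quoted in this chapter."""
    area_m2 = area_cm2 / 1.0e4
    years = hours / 8760.0
    return (mass_loss_g * 1.0e3) / (area_m2 * years)

# Example: both 3 cm x 2 cm faces of a coupon (edges ignored), six months of
# exposure, and a hypothetical mass loss of 0.1 mg.
area_cm2 = 2 * (3.0 * 2.0)
hours = 6 * 730
print(corrosion_rate_mm_per_year(1.0e-4, area_cm2, hours))
print(mass_loss_rate_mg_per_m2_year(1.0e-4, area_cm2, hours))
```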
Examination techniques
The morphology of the corrosion products was examined by scanning electron microscopy (SEM) and Auger electron spectroscopy (AES). SEM was used to determine the morphology of the corrosion products formed by chemical agents that react with the internal and external steel surfaces. The SEM technique produces very high-resolution images of a sample surface, with a wide range of magnifications, from about 10 times to more than 500,000 times. The SEM model SSX-550 was used, revealing details smaller than 3.5 nm, with magnifications from 20 to 300,000 and accelerating voltages from 0.5 kV to 30 kV in steps. AES determines the chemical composition of elements and compounds in the steel cans and rolls, and analyzes the air pollutants deposited on the steel. With this technique the structural form and location of corrosion at the surface level were determined in detail, quickly and with good precision, which established the type of corrosion (Clark et al, 2006). AES analysis was performed with Bruker Quantax and ESCA/SAM 560 instruments, and spectra were obtained by bombarding the samples with an electron beam with an energy of 5 keV. The surfaces of the analyzed steel specimens were cleaned with an Ar+ ion beam with an energy of 5 keV and a current density of 0.3 µA/cm3 to remove CO2 from the atmosphere (Asami et al, 1997). The sputtering process indicates the type of film formed on the metallic surface of the steel and the corrosion at separated points, such as pitting corrosion.
Numerical analysis
A mathematical correlation was made using MatLab software to determine the CL in the indoor environments of the seafood industry in Ensenada in summer and winter (Duncan et al, 2005). With this simulation we determined the degree of deterioration of the steel probes, correlating the climate factors (humidity and temperature) and air pollutants (CO, NOx and SOx) with the corrosion rate (CR).
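The correlation itself was done in MATLAB; purely as an illustration of the idea, the sketch below fits a least-squares model of corrosion rate against humidity, temperature and pollutant levels in Python, using invented numbers rather than the measured Ensenada data.

```python
import numpy as np

# Hypothetical observations (not the measured data): one row per exposure period.
rh   = np.array([35.0, 45.0, 55.0, 65.0, 75.0, 80.0])     # relative humidity, %
temp = np.array([20.0, 22.0, 25.0, 28.0, 30.0, 32.0])     # temperature, deg C
so2  = np.array([10.0, 12.0, 15.0, 20.0, 25.0, 28.0])     # SO2 deposition index
cl   = np.array([ 5.0,  8.0, 10.0, 14.0, 18.0, 20.0])     # chloride deposition index
cr   = np.array([30.0, 50.0, 75.0, 110.0, 160.0, 150.0])  # corrosion rate, mg/(m^2*year)

# Ordinary least squares: CR ~ b0 + b1*RH + b2*T + b3*SO2 + b4*Cl
X = np.column_stack([np.ones_like(rh), rh, temp, so2, cl])
coef, residuals, rank, _ = np.linalg.lstsq(X, cr, rcond=None)

print("fitted coefficients:", coef)
print("predicted CR:", X @ coef)
```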
Results
The generation of corrosion in steel cans is promoted by the formation of a thin film of corrosion products on their surface and by exposure to chlorides and sulfides. The seafood industry is concerned with the economic losses caused by the bad appearance of the containers and the loss of the nutritional properties of sardine and tuna.
Deterioration of steel cans
Humidity and temperature levels higher than 75% and 35 °C accelerated the CR. In summer the CR was higher after one year. For temperatures in the range from 25 °C to 35 °C and RH levels of 35% to 75%, the CR was very high. Furthermore, in winter, at temperatures around 10 °C to 20 °C and RH levels from 25% to 85%, water condenses on the metal surface and the CR increases very quickly. Variations of RH in the range from 25% to 75% and temperatures from 5 °C to 30 °C, together with concentration levels of air pollutants such as sulfides and chlorides that exceed the permitted levels of the air quality standard (AQS), increase the corrosion process. In autumn and winter, corrosion is generated by a film formed uniformly on the steel (Lopez et al, 2010). Exposure to SO2 produced more damage than the effect of the chlorides on the steel surface. The maximum CR, representing the deterioration of steel exposed to SO2, occurred in winter at high RH levels, and the minimum occurred in spring. The major effect of Cl− on the deterioration of the metallic surface occurred in winter and the minimum in spring, as with the exposure to SO2 (Table 4).
Corrosivity analysis
A computer model of atmospheric corrosion was used to simulate steel exposed to the air pollutants Cl−, SO2, NO2, O3 and H2S from a thermoelectric station located between Tijuana and Ensenada. The RH range correlated with the highest CR was 35% to 55%, at temperatures of 20 °C to 30 °C. The CR in summer was different from that in winter, in both environments (Figures 3 and 4). Air pollutants such as Cl−, NO2 and sulfides penetrate through defects of the air conditioning systems. Figure 3 shows the CL analysis of indoor environments in summer, indicating level 1 as the most aggressive environment and level 4 as the lowest aggressiveness grade; the aggressive environments generate a high degree of deterioration of this type of material. Some sections of Figure 4 represent the different grades of aggressiveness, with large areas of levels 1 and 2, while levels 3 and 4 exist in smaller percentages. RH and temperature ranges were from 25% to 80% and 20 °C to 30 °C with CR from 30 mg/m2·year to 100 mg/m2·year, and with RH and temperatures from 40% to 75% and 20 °C to 35 °C the CR ranged from 10 to 160 mg/m2·year.
SEM analysis
The steel samples exposed for 1, 3, 6 and 12 months show localized corrosion with small spots during the summer period and larger corroded areas with uniform corrosion in the winter. Air pollutants that react with the steel surface form corrosion products in some zones of the steel cans and rolls, with chloride ions (light color) and others with sulfides (dark color), as shown in the AES analysis. Some corrosion products in the interior of the steel cans appeared on the surface, contaminating the sardine (Figures 5 and 6). Various microorganisms and microbial metabolites that are human pathogens were detected in sardine and tuna conserved in steel cans (Figures 7 and 8). According to the most common source of these organisms, they can be grouped as follows:
1. Endogenous: originally present in the food before collection, including the food animal, which produces zoonotic diseases transmitted from animals to humans in various ways, including through the digestive tract via food.
2. Exogenous: not present in the food at the time of collection, at least in its internal structures, but coming from the environment during production, transportation, storage and industrialization.
Fungi are of uni- or multicellular eukaryotic type; their most characteristic form is a mycelium or thallus with hyphae that are like branches.
AES examination
AES analyses were carried out to determine the corrosion products formed on the indoor and outdoor surfaces of the steel cans. Figure 9a shows scanning electron micrograph (SEM) images of the areas selected for AES analysis, covered by the principal corrosion products, which are rich in chlorides and sulfides, in the evaluated tin plate steel cans. The Auger mapping process was performed to analyze punctual zones, indicating the presence of Cl− and S2− as the main corrosive ions present in the steel corrosion products. The Auger spectra of the steel cans were generated using a 5 keV electron beam (Clark et al, 2006), which provides an analysis of the chemical composition of the thin films formed on the steel surface (Figure 9b). The AES spectra of steel cans in the seafood plants show the surface analysis of two points evaluated in different zones of the steel probes. The steel peaks appear between 700 and 705 eV, together with the chlorides and sulfides. In Figure 10, the spectra reveal the same process as in Figure 9 but with plastic coatings, with variable concentrations in the chemical composition. In the two regions analyzed, the principal pollutant was the Cl− ion. In the region of the steel surface, different concentrations of sulfide, carbon and oxygen were observed, with low concentration levels of H2S, which damages the steel surface.
The standard thickness of 300 nm for the tin plate and plastic coatings on the internal and external surfaces of the steel cans was determined by the AES technique with the sputtering process.
Conclusions
Corrosion is a general cause of the destruction of most engineering materials; this destructive force has always existed. The development of thermoelectric plants that generate electricity and the increased vehicular traffic have changed the composition of the atmosphere of industrial centers and large urban centers, making it more corrosive. Steel production and improved mechanical properties have made steel a very useful material; even so, great economic losses remain, because 25% of annual world steel production is destroyed by corrosion. The corrosion of metals is one of the greatest sources of economic loss of modern civilization. Steel used in the cannery industry for seafood suffers from corrosion. The majority of seafood industries in Mexico are on the coast, as in Ensenada, where chloride and sulfide ions are the most aggressive agents promoting the corrosion process in the steel cans. The air pollutants mentioned come from vehicle traffic and from the thermoelectric plant located around 50 km from Ensenada. Plastic coatings are better than tin coatings because microorganisms do not develop on the plastic coatings and they do not damage the internal surface.
|
v3-fos-license
|