Formulation and optimization of gastroretentive bilayer tablets of calcium carbonate using D-optimal mixture design
Gastroretentive bilayer tablets of calcium carbonate (CC) were developed using D-optimal mixture design. The effect of formulation factors such as levels of HPMC K100 M (X1), sodium bicarbonate (X2), and HPMC E15 LV (X3) on responses like floating lag time (R1) and release of CC at 1 h (R2) and 6 h (R3) was elucidated. The optimized formulations developed by numerical optimization technique were found to have short floating lag time (2.85 ± 0.98 min), minimum burst release (27.02 ± 1.18%), and controlled yet near complete release (88.98 ± 2.75%) at 6 h. In vivo radiographic studies in rabbits indicated that optimized batch displayed a mean gastric retention time (GRT) of 5.5 ± 1 h which was significantly prolonged (P < 0.05) compared to the conventional tablets that displayed a GRT of less than 1 h. The studies proved that the gastroretentive tablets can be a promising platform to improve bioavailability of nutrients having absorption window in upper gastrointestinal tract.
Introduction
Calcium is the major component of the skeletal system and accounts for about 1-2% of the adult body weight (1). The recommended dietary allowance (RDA) of calcium varies from 800 to 1,300 mg/day for adolescents, 1,000 mg/day for adults, and 1,200 mg/day for the elderly. Globally, more than 800 million people are undernourished and about 3.5 billion people are at risk of calcium deficiency due to inadequate dietary supply (1). It has been estimated that more than 6% of global mortality and morbidity burdens are associated with undernourishment and micronutrient deficiencies. Approximately 90% of those at risk of calcium deficiency reside in Africa and Asia, and nearly 75-100% of Indians are calcium-deficient. Sensitive populations include children, the elderly, pregnant women, and postmenopausal women. Calcium deficiency can retard growth and cognitive development, impair immunological functioning, and increase the risks of noncommunicable diseases including skeletal, cardiovascular, and metabolic disorders (2). Calcium deficiency may lead to brittle or weak bones, bone fractures, delays in children's growth and development, impaired blood clotting, weakness and fatigue, heart problems involving blood pressure and heart rhythms, osteoporosis (3), etc.
Oral calcium is considered the first-line therapy for calcium deficiency (4,5). Calcium supplementation (6) is currently done with conventional tablets containing calcium salts such as CC or calcium citrate (7). Calcium carbonate is the least expensive and most widely used salt of calcium; nearly 85% of all calcium supplements sold in the US contain CC. However, only about 30% of the available elemental calcium is actually absorbed and bioavailable following oral administration (8). The likely reason for the poor bioavailability is that calcium absorption is pH-dependent, site-specific, and limited by carrier-mediated transport (7). Soluble calcium is normally well absorbed from the duodenum due to the presence of the carrier protein calbindin at active absorption sites (9). However, conventional calcium tablets exhibit poor bioavailability as they may quickly pass the absorption sites, allowing only a fraction of the dose to be absorbed. Moreover, conventional tablets are likely to saturate the carrier proteins located in the duodenum and therefore hamper the complete absorption of the whole dose of calcium, resulting in poor oral bioavailability. In this context, there is a need to develop a gastroretentive drug delivery system (GRDDS) for CC that has the potential to overcome the above-mentioned limitation of conventional tablets, as no such product is available in the Indian or global market. The GRDDS, by virtue of its buoyancy, is likely to be retained proximal to the absorption site and stays afloat in the gastric fluid, in which CC is known to possess good solubility.
Various technologies have been developed for gastroretention of drug delivery systems, including low-density or floating systems (<1 g/cm³) that remain buoyant above the gastric contents, high-density systems (>1 g/cm³) that are retained at the antrum of the stomach, bioadhesive systems that adhere to the gastric mucosa, expandable systems that swell or unfold to a large size to prevent passage of the dosage form through the pyloric sphincter, and magnetic and superporous systems (10,11).
The development of a GRDDS tablet for CC seemed quite challenging considering its high dose (200 mg) and high density (∼2.71 g/cm³). To meet the challenge, we aimed to develop a gastroretentive system for CC using a D-optimal design. The GRDDS tablets are the first of their kind to have both floating and bioadhesive properties for site-specific delivery of calcium in the upper part of the gastrointestinal tract. In this context, the objective of the work was to model the effect of the composition of the bilayer tablets, namely, the proportions of binder (hydroxypropyl methylcellulose E15 LV), matrix material (hydroxypropyl methylcellulose K100 M), and effervescent agent (sodium bicarbonate), on dissolution and floating lag time. In addition, we planned to validate the polynomial models by preparing the optimized formulation with the most desirable attributes using regression analysis and analysis of variance (ANOVA). Finally, image analysis of the optimized bilayer tablet formulation in rabbits to assess in vivo gastroretention was an integral part of the investigation. The present work is 'first of its kind' as, to the best of our knowledge, no extensive work has been undertaken to develop a bilayer GRDDS for calcium.
Materials
Calcium carbonate (conforming to IP) and barium sulfate (X-ray grade) were purchased from Loba Chemie Pvt. Ltd, Mumbai. Sodium bicarbonate, potassium dihydrogen orthophosphate, sodium hydroxide pellets, hydrochloric acid, and talc were supplied by S.D. Fine Chemicals, Mumbai. Magnesium stearate was supplied by Central Drug House Pvt. Ltd, New Delhi. Hydroxypropyl methylcellulose K100 M and hydroxypropyl methylcellulose E15 LV were supplied by Colorcon Asia Pvt. Ltd, Goa.
Fourier transform infrared spectrometry
Infrared spectrophotometry is a useful analytical technique to check for chemical interactions between the drug and the excipients in a formulation. The sample was powdered and intimately mixed with 10 mg of powdered potassium bromide (KBr). The powdered mixture was taken in a diffuse reflectance sampler and the spectrum was recorded by scanning in the wavenumber region of 4,000-400 cm⁻¹ in an FTIR spectrophotometer (Jasco 460 plus, Japan). The IR spectrum of CC was compared with that of the physical mixture to check for any interaction of CC with any of the excipients used.
Preparation of gastroretentive floating bilayer tablets of CC: design of experiment (DoE)
A 3-factor, 3-level D-optimal mixture design generated in Design-Expert software (version 10.0.6.0) was employed to study the effect of critical formulation variables on the product attributes of the floating bilayer tablets. The experimental design contained three factors or components, namely, the amounts of HPMC K100 M (X1), sodium bicarbonate (X2), and HPMC E15 LV (X3). The sum of the three components was fixed at 100%, with the proportions of X1, X2, and X3 ranging from 50.00% to 79.00%, 20.00% to 49.00%, and 1.00% to 3.00%, respectively. The effect of the formulation variables on responses like friability (R1), floating lag time (R2), percent release at the end of 1 h (R3), and percent release at the end of 6 h (R4) was systematically investigated. The compositions of the formulations as per the D-optimal mixture design and the constraints set on each component are shown in Table 1.
The bilayer tablets contained two layers, i.e., an effervescent floating layer and a CC layer. All the ingredients were passed through a 250 µm sieve. The floating layer was prepared by direct compression of the blend of HPMC K100 M and sodium bicarbonate. The calcium carbonate layer was produced by wet granulation. In brief, CC was blended with a solution of HPMC E15 LV in water; the quantity of HPMC E15 LV to be incorporated was predetermined by the experimental design. The wet mass was passed through a 12 mesh sieve of aperture size 1.67 mm and the wet granules produced were dried at 60°C for 30 min in a hot air oven. The dried granules were passed through the same sieve to break the lumps. The blend of the floating layer and the dried granules of CC were separately lubricated with magnesium stearate (1.5% w/w) and talc (2.5% w/w) for 2-3 min. The lubricated blends were finally compressed into bilayer tablets weighing 420 mg on a bilayer rotary tablet press (Cronimach, Ahmedabad, Gujarat) using a 9 mm diameter die to a hardness of 5-7 kg/cm². The formulation variables employed to produce 16 batches of bilayer tablets as per the experimental design are portrayed in Table 2.
Evaluation of floating bilayer tablets
Weight variation
Weight variation of the bilayer tablets from each batch was determined as per official method (12). Twenty tablets were selected at random and individual weight of the bilayer tablets was determined in an analytical balance (Model 220A XB, Precisa, Switzerland). The weights were recorded in mg; the mean and standard deviation values were computed. The average weight of the bilayer tablets and the acceptable limit were deduced.
Thickness and diameter
Tablet thickness and diameter of ten randomly selected bilayer from each batch were determined (13). The values were recorded in mm using a digital caliper (Mitutoyo digimatic caliper, Mitutoyo Corporation, Kawasaki, Japan). The mean and standard deviation of the thickness and diameter were calculated.
Hardness
The resistance of tablets to shipping or breakage under conditions of storage, transportation, and handling before usage depends on their hardness. Hardness of ten randomly selected bilayer tablets from each batch was measured using a Stokes Monsato hardness tester (M/s Cambell Electronics, India) (14). The hardness was measured in terms of kg/cm 2 . The mean and standard deviation values were computed.
Friability
The friability of the bilayer tablets was determined by following the official procedure (15). Friability was determined by subjecting twenty randomly selected tablets of each batch to abrasion in an automated USP friabilator (Electrolab, Mumbai, India) for 100 rotations. The de-dusted tablets were weighed and the % friability was calculated using Eq. 1 for each batch of bilayer tablets and expressed as the mean of three determinations. Tablets that lose less than 1% of their weight are generally considered acceptable.
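As a hedged illustration, the percent-weight-loss calculation referenced above can be sketched in Python; the tablet weights below are hypothetical values chosen only for the example:

```python
def percent_friability(initial_weight_mg: float, final_weight_mg: float) -> float:
    """Percent weight loss on abrasion:
    ((W_initial - W_final) / W_initial) * 100."""
    return (initial_weight_mg - final_weight_mg) / initial_weight_mg * 100.0

# Hypothetical example: 20 tablets weighing 8,400 mg before and
# 8,392 mg after 100 rotations in the friabilator.
loss = percent_friability(8400.0, 8392.0)
print(f"{loss:.3f}%")  # 0.095%, well below the 1% acceptance limit
```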
Content uniformity
The content uniformity test was performed as per the USP procedure by randomly sampling ten tablets from each batch (16). The tablets were crushed and allowed to equilibrate with pH 1.2 buffer for 24 h. Subsequently, the solutions were filtered through a 0.45 μm membrane filter (Millipore) and suitably diluted to determine the content of CC using a flame photometer (Systronics, Flame photometer 128, Ahmedabad, Gujarat).
Floating lag time
The time required for the tablet to rise to the surface and remain afloat was considered the floating lag time (17). To record the floating lag time, the bilayer tablets were transferred to 900 mL of pH 1.2 buffer in a USP Type II dissolution apparatus operated at 50 rpm and 37 ± 0.5°C. The floating lag time was recorded in triplicate for each batch of bilayer tablets produced.
In vitro release studies
The dissolution studies of the bilayer floating tablets were performed for a period of 6 h in a USP dissolution apparatus-2 (Electrolab, Mumbai, India) at a paddle speed of 50 rpm in 900 mL of pH 1.2 buffer maintained at 37 ± 0.5°C (18). Samples of 5 mL were withdrawn at 1, 2, 3, 4, 5, and 6 h and immediately replaced with an equal volume of fresh dissolution medium maintained at the same temperature in order to maintain sink conditions. The aliquots sampled were filtered through 0.45 µm filters and analyzed using a flame photometer to determine the amount of CC released at different time points. The dissolution data, recorded in triplicate, were analyzed to calculate the percentage cumulative calcium released at different time intervals.
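The cumulative-release calculation implied above, which corrects each time point for the calcium removed in earlier 5 mL samples, can be sketched as follows; the concentration readings are hypothetical, while the 900 mL vessel volume, 5 mL sample volume, and 200 mg dose come from the methods described:

```python
def cumulative_release_percent(concs_mg_per_ml, vessel_ml=900.0,
                               sample_ml=5.0, dose_mg=200.0):
    """Cumulative % of the dose released at each sampling time,
    adding back the drug withdrawn in all previous samples."""
    released = []
    withdrawn = 0.0  # mg of drug removed in earlier samples
    for c in concs_mg_per_ml:
        amount = c * vessel_ml + withdrawn  # mg in vessel + mg already sampled out
        released.append(100.0 * amount / dose_mg)
        withdrawn += c * sample_ml
    return released

# Hypothetical concentrations (mg/mL) measured at 1 h and 2 h
profile = cumulative_release_percent([0.06, 0.10])
print(profile)  # [27.0, 45.15]
```

The correction matters because without it the drug carried away in each 5 mL aliquot would be silently lost from the cumulative total.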
In vitro release kinetics
In order to investigate the kinetics and mechanism of release of calcium from the prepared tablets, the release data were examined using zero-order (19), first-order (20), and Higuchi (21) kinetic models.
For the zero-order kinetic, data obtained were plotted as cumulative amount of calcium released versus time whereas for the first-order kinetic, the obtained data were plotted as log cumulative calcium remaining versus time. For Higuchi kinetic, the obtained data were plotted as cumulative percentage calcium release versus square root of time.
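The three fits above can be sketched as ordinary least-squares regressions on the transformed data; the release values below are hypothetical illustrations, not the study's data:

```python
import math

def linfit(x, y):
    """Least-squares slope, intercept, and R^2 for y = a*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    a = sxy / sxx
    b = my - a * mx
    ss_res = sum((yi - (a * xi + b)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return a, b, 1.0 - ss_res / ss_tot

# Hypothetical cumulative % calcium released at 1..6 h
t = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
rel = [27.0, 41.0, 52.0, 62.0, 71.0, 89.0]

zero_r2 = linfit(t, rel)[2]                                    # % released vs t
first_r2 = linfit(t, [math.log10(100.0 - r) for r in rel])[2]  # log(% remaining) vs t
higuchi_r2 = linfit([math.sqrt(ti) for ti in t], rel)[2]       # % released vs sqrt(t)
```

The model with the highest R² on its linearized plot is taken as the best descriptor of the release mechanism.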
Stability study
The optimized formulation was wrapped in aluminium foil and subjected to real-time stability conditions (25 ± 2°C/60 ± 5% RH) for 6 months. The samples were analyzed at 1, 3, and 6 months against the tablets on day 0 for physical characteristics, floating lag time, and dissolution.
In vivo X-ray imaging studies
In vivo animal studies were performed in normal rabbits using an X-ray imaging technique to evaluate the gastroretentive potential of the optimized tablet formulation as per the protocol approved by the Institutional Ethical Committee (IE-52, dated 12/10/2019) at in vivo Biosciences, Magadi Road, Bengaluru, India. New Zealand White rabbits of either sex weighing 2-2.5 kg were housed under standard laboratory conditions at 25 ± 2°C and 55 ± 5% RH with a standard diet and tap water ad libitum (two groups of four animals each were used for the studies). Prior to initiation of the studies, the animals were kept fasting overnight in order to avoid difficulties during imaging. The first group of animals was orally administered the optimized batch of the bilayer tablet formulation containing barium sulfate as a marker, while the control group was orally treated with conventional tablets containing the same marker. The animals were held in the upright position for imaging to locate the position of both the control and the floating tablets in the GI tract under an X-ray machine (Skanray Model: Microskan DR) at predetermined time intervals (0, 2, 4, and 6 h).
Statistical analysis
The data generated during the in vitro and in vivo studies were statistically analyzed by ANOVA in GraphPad 5.0 Instat demo version software (GraphPad Inc. CA, USA). The probability value (P) of less than 0.05 was considered to be significant.
Results and discussion
The aim of the investigation was to produce bilayer tablets that were able to float for at least 6 h and, at the same time, ensure release of the complete dose of CC within the stipulated floating time of 6 h. Considering this, we initially developed an effervescent floating matrix tablet of CC using HPMC K4M as the matrix material, sodium bicarbonate as the effervescent agent, and HPMC E15 LV as the binder. Even though these effervescent matrix tablets of CC floated for the period of 6 h, the release of calcium was substantially hampered even under sink conditions at pH 1.2, allowing only a fraction of the calcium dose to be released in the stipulated time span of 6 h. We therefore planned to separate the floating layer from the CC layer in order to ensure a floating time of 6 h and, at the same time, a near complete release of calcium in a controlled fashion at pH 1.2 within the stipulated floating time. In order to systematically accomplish these goals, we planned to develop and optimize the composition of the bilayer effervescent floating tablet of CC using a D-optimal design. Mixture experimental designs are generally used to analyze the impact of formulation variables on the responses. The D-optimal design is a mixture design that is used to evaluate the effect of changes in composition on the responses and allows statistical optimization of the formulation with the least number of experiments. The design comprised a total of 16 points, including 6 points for modelling, 5 points to estimate lack of fit, and 5 points to estimate the pure experimental error (22). The buoyant layer was produced by direct compression of the blend of HPMC K100 M and sodium bicarbonate, whereas the CC layer was produced by wet granulation using an aqueous solution of HPMC E15 LV as binder.
Fourier transform infrared spectroscopy
The FTIR spectra of CC, the physical mixture of CC with the other excipients used, and the bilayer tablet are portrayed in Figure 4a-c, respectively. The IR spectrum of CC displayed the characteristic absorption peak at 1,796 cm⁻¹ that can be assigned to C═O stretching. In addition, an intense band owing to OH stretching was probably due to the moisture content in the compound. Similarly, the IR spectrum of the physical mixture depicted the broad band assignable to OH stretching along with the characteristic absorption peak at the same position, though the peak intensity differed, indicating the absence of any interaction between CC and the other excipients in the physical admixture. Likewise, the IR spectrum of the CC bilayer tablet did not reveal any significant shift in the peaks, though the peak intensity decreased, indicating the absence of any interaction between CC and the other excipients during tablet processing and thereby proving the integrity of CC in the bilayer tablet.
Characterization of tablets
All the batches of the bilayer tablets were found to comply with the official tests for content uniformity and weight variation. The average diameter of the different batches of tablets ranged from 8.82 ± 0.09 mm to 8.89 ± 0.06 mm, and the average thickness ranged from 4.31 ± 0.12 mm to 4.42 ± 0.07 mm. The hardness and friability of the different batches of bilayer floating tablets ranged from 4.27 ± 0.38 kg/cm² to 7.53 ± 0.19 kg/cm² and from 0.02 ± 0.01% to 2.77 ± 0.64%, respectively. The hardness of the tablets was kept just sufficient so as not to hamper the complete release of CC, as observed with some batches. The release of CC from most formulations was found to follow first-order kinetics, and the mechanism of release could be characterized by the Higuchi diffusion model.
Data analysis of the D-optimal mixture design
The Design-Expert® v10 software was used to systematically analyze the experimental data obtained and generate mathematical models that define the relationship between the proportions of the three components (X1, X2, and X3) and the four responses, namely, friability, floating lag time (FLT), Rel1h, and Rel6h. The experimental data were analyzed by fitting the data to Scheffe polynomial equations (23). These equations are modified from the general polynomial equations to lack intercept and squared terms in order to fit mixture designs. An attempt was made to fit the four responses, namely, friability, FLT, Rel1h, and Rel6h, simultaneously to quadratic, special cubic, and cubic models, and to statistically analyze the data by performing ANOVA. The statistical parameters used to analyze and select the best-fit model included the p value of the model (must be <0.05), lack of fit (needs to be insignificant), coefficient of determination (R²), adjusted R², predicted R², adequate precision, and predicted residual sum of squares (PRESS). The backward elimination procedure was employed to eliminate the insignificant terms from the models and retain only the significant ones. On eliminating the insignificant terms, the sequential p values for the four responses were found to be <0.0001, indicating that the models generated were significant. Likewise, the lack of fit was insignificant (p > 0.05) for the models selected for Y1, Y2, Y3, and Y4. Moreover, the selected models showed high R² values, displaying a strong agreement between the adjusted R² and the predicted R². A signal-to-noise ratio exceeding 4 suggested an adequate signal.
The actual responses and polynomial equations for friability, FLT, Rel 1h , and Rel 6h in terms of the actual factors that are used as predictive models are represented in Tables 3 and 4.
Terms like X1X2 in the polynomial equations represent the nonlinear interaction between the factors on the response. Positive signs of the coefficients of the interaction terms indicate synergism, where each factor potentiates the effect of the other. On the other hand, a negative sign indicates an antagonistic effect, where each factor counteracts the effect of the other. Curvilinear lines reveal nonlinearity, suggesting an interaction between the two factors on the response, whereas straight lines rule out an interaction of the two factors on the response.
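The form of these predictive models can be illustrated with a small sketch; the coefficient values below are hypothetical and only meant to show how a positive interaction term raises the response above the linear blend of the component effects:

```python
def scheffe_quadratic(x1, x2, x3, b1, b2, b3, b12, b13, b23):
    """Scheffe quadratic for a 3-component mixture: no intercept and no
    squared terms, only linear and two-way interaction terms."""
    assert abs(x1 + x2 + x3 - 1.0) < 1e-9, "mixture proportions must sum to 1"
    return (b1 * x1 + b2 * x2 + b3 * x3
            + b12 * x1 * x2 + b13 * x1 * x3 + b23 * x2 * x3)

# Hypothetical coefficients: with b12 > 0, a 50:50 blend of components
# 1 and 2 responds above the 15.0 expected from the linear terms alone.
y = scheffe_quadratic(0.5, 0.5, 0.0, b1=10.0, b2=20.0, b3=0.0,
                      b12=8.0, b13=0.0, b23=0.0)
print(y)  # 17.0 = 15.0 (linear blend) + 8.0 * 0.25 (synergistic term)
```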
Friability
A friability limit of less than 1% is considered acceptable for compressed tablets as per the pharmacopoeia (15); however, effervescent tablets may have different limits for friability. The friability of the bilayer gastroretentive tablets ranged from 0.02 ± 0.01% for F12 to 2.77 ± 0.64% for F14, as displayed in Table 3. Most of the batches of bilayer tablets produced complied with the friability test, except batches F4, F6, F9, and F14, which exceeded the friability limit with values of 2.53 ± 0.31%, 2.64 ± 0.28%, 2.68 ± 0.18%, and 2.77 ± 0.64%, respectively. Coincidentally, the hardness of these batches did not exceed 4.5 kg/cm². The high friability values can be directly related to the low binder levels, as all four batches contained low binder levels (1% w/w).
The statistical analysis indicated that, of the three factors, the influence of X3 was the greatest, followed by X2, whereas the effect of X1 was the least. The amount of HPMC E15 LV (X3) had a high negative coefficient (−17897.32), implying that the factor had a substantial negative impact on friability. This can be explained by the fact that a decrease in binder concentration produced tablets with low hardness and hence higher friability. It was observed that the CC layer, and not the buoyant layer, was the major contributor to the tablet friability. It is generally accepted that the hardness of tablets increases as the binder amount increases (24). It could be concluded that the friability could be minimized by using moderate to high levels of HPMC E15 LV. The negative impact of the binder on the friability is clearly visible in the 3D plots captured in Figure 1a.
Of the three factors studied, HPMC K100 M and sodium bicarbonate displayed low positive coefficients of 0.18 and 0.32, respectively, signifying a negligible influence on the friability. The likely reason for the weak effect of these two factors is that HPMC K100 M and sodium bicarbonate are components of the buoyant layer and not of the CC layer.
Floating lag time
The FLT of the bilayer tablets ranged from 2.85 ± 0.18 min for F9 to 36.55 ± 0.47 min for F15. The values are captured in Table 3 (each data point represents the mean ± SD, n = 3; X1, X2, and X3 represent the amounts of HPMC K100 M, sodium bicarbonate, and HPMC E15 LV, respectively; X3 was used as an 8% w/v binding solution in the bilayer tablets) and representative pictures are portrayed in Figure 2. Leaving out batches F4, F6, F9, and F14, which failed to comply with the friability test, the batches F1, F5, F7, F15, and F16 were associated with high FLTs exceeding 30 min, whereas batches F3, F11, and F12 displayed FLTs of more than 10 min. The rest of the batches, namely, F2, F6, F8, F10, and F13, displayed acceptable lag times of less than 10 min. A short lag time is preferable, as a prolonged lag time could eventually lead to system failure due to unanticipated or accidental rapid gastric clearance by the peristaltic action of the stomach and forcible gastric housekeeping waves. Generally, batches with high FLT contained higher levels (≥2%) of the binder HPMC E15 LV. Mathematical modelling of the experimental data suggested that all three factors investigated had a substantial influence on FLT. Among the three factors explored, the effect of X3 was the greatest, followed by X2, while the effect of X1 was the least. The amount of HPMC E15 LV had the largest coefficient (25289.06), implying that this factor had the most significant influence on the floating lag time. This can be related to the fact that higher binder amounts could result in more compact tablets with reduced porosity. The decreased tablet porosity is likely to substantially hinder the penetration of the dissolution medium into the tablet matrix, which in turn would delay the generation of the carbon dioxide required to initiate floatation (25). It could be summarized that the FLT could be minimized by using moderate levels of HPMC E15 LV.
The impact of the binder on the FLT is clearly visible in the 3D plots captured in Figure 1b. The amount of sodium bicarbonate, with a positive coefficient of 1.19, displayed a mild impact on the FLT. The effect of bicarbonate can likely be attributed to its ability to generate carbon dioxide on reaction with the gastric fluid, which is efficiently entrapped in the polymeric gel layer, thereby decreasing the FLT (26). Of the three factors investigated, the amount of HPMC K100 M had a minor effect, with a positive coefficient of 0.24. This effect can be attributed to HPMC K100 M, a high-viscosity hydrophilic material, forming a layer of strong gel matrix in the gastric fluids (27). The strong gel barrier in turn effectively entraps the carbon dioxide liberated in situ, thereby reducing the tablet density below unity to render the tablet buoyant (28). However, the FLT in the present study was invariably affected by the composition of the CC layer rather than the floating layer.
Release at 1 h
The percentage calcium release at the end of the first hour ranged from 15.87 ± 2.54% for F5 to 55.61 ± 1.28% for F14, as per Table 3. The dissolution profiles of the model formulations are presented in Figure 3. The three formulation factors investigated had a significant influence on the release of calcium at the end of 1 h. Among the three factors, the effect of X3 was the most significant, followed by X2, while the effect of X1 was the least. The batches F4, F6, F9, and F14 were deemed unsuitable as they exhibited burst releases of 53.85 ± 2.58%, 54.82 ± 2.13%, 56.60 ± 1.59%, and 55.61 ± 1.28%, which coincidentally corresponded well with their high friability values of 2.53 ± 0.31%, 2.64 ± 0.28%, 2.68 ± 0.18%, and 2.77 ± 0.64%, respectively. The binder concentration displayed a high negative coefficient (−64542.84), indicating that it had the most significant effect on the release of calcium at 1 h. Higher binder levels produced more compact tablets that effectively prevented the initial burst release. Literature citations in the past have indicated that an increase in binder concentration in matrix tablets substantially reduces the burst release during the first hour (29). In contrast, sodium bicarbonate, with a coefficient of 4.38, displayed a mild positive effect on the release at 1 h. The tendency of bicarbonate to produce effervescence that renders the tablet porous could be the likely reason for the better calcium release at 1 h. Previous reports have indicated that an increase in the bicarbonate amount increases the drug release from matrix tablets (30). In summary, the burst release could be minimized by using moderate to high levels of HPMC E15 LV. The negative effect of the binder on the burst release is clearly evident in the 3D plots captured in Figure 1c.
Of the three factors investigated, HPMC K100 M had a negligible influence on the release at 1 h, likely because HPMC K100 M was not a part of the matrix in the CC layer. The rest of the batches, which were devoid of an initial burst release, can be considered more suitable as they displayed a controlled pattern of calcium release.
Release at 6 h
The percentage calcium release by 6 h ranged from 54.08 ± 0.63% for F7 to 88.72 ± 0.92% for F13, as presented in Table 3. A total of 12 batches, including F1-F3, F5, F7, F8, F10-F13, F15, and F16, were devoid of an initial burst release and displayed a controlled pattern of calcium release. The batches F4, F6, F9, and F14 exhibited a burst release exceeding 50% as they displayed low hardness and high friability. However, the subsequent release of calcium from these tablets appeared to be retarded. The likely reason is that a dissolution medium already saturated with dissolved CC (>50% of the CC in the dissolved state) is less likely to provide a sink condition and generate the concentration differential needed for further dissolution of CC from the matrix tablet.
The three factors investigated significantly influenced the release at 6 h. Of the three, the influence of X3 was the greatest, followed by X2, whereas the effect of X1 was the least. The amount of the binder HPMC E15 LV (X3) had a high negative coefficient (−1539.68), implying that the factor had a considerable influence on the release at 6 h. An increase in the concentration of binder effectively controlled the release of calcium from the matrix tablets. The negative influence of the binder could be explained by the fact that higher binder amounts produced compact and denser tablets that displayed controlled release of calcium over 6 h. The results also imply that the release rate could be modulated by merely varying the concentration of the binder HPMC E15 LV, which alone is reported to effectively control drug release from matrix tablets (31). To conclude, the complete release of calcium could be ensured by using moderate levels of HPMC E15 LV. The negative influence of the binder on the release at 6 h is clearly observed in the 3D plots captured in Figure 1d. On the contrary, sodium bicarbonate, with a coefficient of 0.88, exerted a mild positive influence on the release at 6 h. The ability of bicarbonate to render the tablet porous, especially in tablets with lower binder levels, might be the probable reason for the higher release observed (32). Of the three factors studied, HPMC K100 M had a negligible influence on the calcium release at 6 h. As described earlier, the poor impact of HPMC K100 M could be due to the fact that this high molecular weight polymer did not constitute the matrix material in the CC layer.
At the end of the studies, it could be concluded that the batches F2, F8, and F13, which contained moderate amounts of binder, were the most suitable formulations as they complied with the official friability limits, were devoid of the initial burst effect, displayed a short FLT, and resulted in a controlled yet complete release of calcium by 6 h.
Optimization
A numerical optimization technique using the desirability approach was employed to develop two new floating bilayer tablet formulations with the desired responses. The compositions of the optimized batches of floating bilayer tablets, along with the predicted and experimental values for the response parameters, are presented in Table 5. The prediction error for the response parameters was found to range from −14.29 to +12.50.
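A minimal sketch of the desirability approach (Derringer-Suich style): each response is mapped onto a 0-1 desirability and the formulation maximizing their geometric mean is chosen. The linear desirability shapes and constraint limits below are assumptions for illustration, not the paper's exact settings; the response values are the reported ones for an optimized batch.

```python
def d_minimize(y, low, high):
    """Linear 'smaller is better' desirability: 1 at/below low, 0 at/above high."""
    if y <= low:
        return 1.0
    if y >= high:
        return 0.0
    return (high - y) / (high - low)

def d_maximize(y, low, high):
    """Linear 'larger is better' desirability: 0 at/below low, 1 at/above high."""
    if y <= low:
        return 0.0
    if y >= high:
        return 1.0
    return (y - low) / (high - low)

# Responses of an optimized batch (from the text) and assumed constraint
# limits (illustrative only):
r1, r2, r3 = 2.85, 27.02, 88.98      # FLT (min), burst (%), release at 6 h (%)
d1 = d_minimize(r1, 0.0, 5.0)        # minimize floating lag time
d2 = d_minimize(r2, 0.0, 30.0)       # minimize burst release
d3 = d_maximize(r3, 60.0, 100.0)     # maximize release at 6 h
D = (d1 * d2 * d3) ** (1.0 / 3.0)    # overall desirability (geometric mean)
```

In practice the optimizer evaluates D over the fitted response-surface models across the mixture space and selects the composition with the highest overall desirability.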
The low values of prediction errors prove the validity of the mathematical models generated by ANOVA and regression analysis. The in vitro calcium release from the optimized formulation of bilayer tablets was found to follow first-order kinetics.
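The stated first-order kinetics can be checked by linearizing the model, since for first-order release Q(t) = 100(1 − e^(−kt)), so ln(100 − Q) is linear in t. The cumulative release values below are invented for illustration, not the study's measurements.

```python
import numpy as np

# Illustrative cumulative calcium release data (invented, not the study's):
t = np.array([0.5, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0])          # time (h)
Q = np.array([15.0, 27.0, 45.0, 58.0, 68.0, 80.0, 89.0])   # % released

# First-order model Q(t) = 100*(1 - exp(-k*t)), i.e.
# ln(100 - Q) = ln(100) - k*t, so a straight-line fit gives k.
y = np.log(100.0 - Q)
slope, intercept = np.polyfit(t, y, 1)
k = -slope                                     # release rate constant (1/h)

# Coefficient of determination of the linearized fit
y_hat = intercept + slope * t
r2 = 1.0 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
```

A high r2 for this linearization (relative to competing zero-order or Higuchi fits) is the usual criterion for concluding first-order behavior.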
Stability study
The real-time stability studies for the optimized formulation batch, carried out as per ICH guidelines, did not show any physical change in the tablets during the study period. The characteristic peaks of CC were clearly evident in the spectra of the tablets as well, proving the integrity of CC and ruling out the possibility of any chemical interaction between CC and the other excipients used in the formulation. The representative spectra of CC and the tablet mixture are captured in Figure 4. No significant difference was noted in the content uniformity, FLT, burst release, or the amount released at 6 h, proving the stability of the formulation (Table 6).
In vivo radiographic studies
The representative images of the in vivo radiographic studies (33) with the bilayer floating tablets are captured in Figure 5. The in vivo studies revealed that the mean gastric retention time of the tablets from the optimized batch correlated well with the in vitro floating time. The studies indicated that the bilayer floating tablets from the optimized batch remained in the stomach for a mean period of 5.5 ± 1.0 h in rabbits, which was significantly higher (p < 0.05) than that of the conventional tablets, which displayed a mean gastric retention time of less than 2 h. By virtue of their floating properties, the bilayer tablets were well retained in the stomach despite the action of peristalsis and forcible housekeeping waves, in contrast to the conventional tablets. As the tablets are well retained in the stomach proximal to the absorption window and probably release their contents in a controlled manner, they are less likely to saturate the calcium transporters situated in the duodenal region of the gastrointestinal tract and therefore may exhibit superior bioavailability compared to the conventional tablets.
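The significance claim (p < 0.05) can be illustrated with a Welch two-sample t statistic. The individual retention times below are hypothetical values chosen to match the reported group means; only the means are from the study.

```python
import math

# Hypothetical gastric retention times (h) in rabbits, invented to match the
# reported means (about 5.5 h floating vs < 2 h conventional):
floating = [5.0, 6.5, 4.5, 6.0, 5.5, 5.5]
conventional = [1.5, 1.0, 2.0, 1.0, 1.5, 1.0]

def welch_t(a, b):
    """Welch's t statistic for two independent samples with unequal variances."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    return (ma - mb) / math.sqrt(va / len(a) + vb / len(b))

t_stat = welch_t(floating, conventional)
# With these sample sizes, |t| far above ~2 corresponds to p < 0.05.
```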
Conclusion
Floating bilayer tablets of CC were successfully developed employing a D-optimal design. Of the three formulation factors investigated, the level of HPMC E15 LV used as a binder in the CC layer significantly affected the friability, FLT, and release of calcium. The batches F2, F8, and F13, which contained moderate to high amounts of binder, were found to be the most suitable, as they complied with the official friability limits, were devoid of the initial burst effect, displayed a short FLT, and resulted in a controlled yet complete release of calcium by 6 h. A numerical optimization technique was successfully employed to develop optimized formulations by setting constraints on the responses. The experimental data for the optimized formulations agreed well with those predicted by the polynomial models, proving the validity of the models generated. In vivo radiographic studies of the optimized bilayer tablet formulations in rabbits revealed that the floating tablets were retained in the stomach for 5.5 ± 1 h. The studies collectively proved that bilayer gastroretentive tablets possessing floating properties would be a highly promising drug delivery platform for nutrients and therapeutic agents with an absorption window in the upper part of the gastrointestinal tract.
Immune system modulation in aging: Molecular mechanisms and therapeutic targets
The function of the immune system declines during aging, compromising its response against pathogens, a phenomenon termed "immunosenescence." Alterations of the immune system undergone by aged individuals include thymic involution, defective memory T cells, impaired activation of naïve T cells, and weak memory response. Age-linked alterations of the innate immunity comprise perturbed chemotactic, phagocytic, and natural killing functions, as well as impaired antigen presentation. Overall, these alterations result in chronic low-grade inflammation (inflammaging) that negatively impacts the health of elderly people. In this review, we address the most relevant molecules and mechanisms that regulate the relationship between immunosenescence and inflammaging and provide an updated description of the therapeutic strategies aimed at improving immunity in aged individuals.
Introduction
The worldwide population of older people is growing at an accelerated rate, bringing formidable healthcare and socioeconomic challenges. From a biological perspective, aging is a complex and multisystemic process that adversely impacts organism function, with the nervous, endocrine, hematopoietic, and immune systems being the most affected by this process. Specifically, aging elicits a decline in the immune system, affecting both the innate and adaptive immunity responses (immunosenescence), which results in increased vulnerability to toxins and pathogens and the establishment of a chronic inflammation state (inflammaging). An in-depth study of the immune system has regained relevance due to the public health emergency caused by the coronavirus disease 2019 (COVID-19) pandemic, which preferentially affects older individuals. In this scenario, the present review is focused on the mechanisms underlying both immunosenescence and inflammaging, providing also a description of current therapeutic strategies aimed at ameliorating the impact of aging/senescence on immunity.
Immunosenescence
The term "immunosenescence" comprises several humoral and cellular events that generate age-related dysfunction of the immune system (1). This condition is associated with a higher risk of developing different age-related pathologies, including infections, cardiovascular and neurodegenerative diseases, autoimmunity, and cancer (2). The main determinants of immunosenescence include genetics, nutrition, sex, race, exercise, and pathogen exposure (2,3). To better understand immunosenescence, it is necessary to consider the age-driven physiological changes that are related to immunity. Physical body barriers are the first line of defense against pathogens, and in elderly people, the skin becomes thinner and drier, which in turn reduces the amount of fat-soluble defensins. Likewise, the mucosal barrier loses efficiency during aging because the ciliary function is impaired, which consequently facilitates pathogen settlement (4). With respect to the cellular events that underlie immunosenescence, thymic involution, decreased numbers of T and B lymphocytes, impaired telomerase activity, increased inflammatory mediators, and a weak immune response to vaccination have been consistently observed in aging (5)(6)(7)(8)(9). It is worth mentioning that these deficiencies are worsened by exposure to pathogens (2,3). Furthermore, alterations in the innate immune cells (neutrophils, macrophages, natural killer, dendritic, and mast cells) have been found in aged individuals (Figure 1A). Neutrophils display diminished killing capacity, even though their production in the bone marrow remains unchanged and their number in blood is even slightly higher in aging (10). Similarly, the function of natural killer cells is disturbed in aging, although their turnover in the bone marrow diminishes and their baseline number increases (11).
Macrophages and dendritic cells show similar phagocytic activity between young and old people; however, both the total number of these cells in peripheral blood and their ability to present antigens and stimulate T cells are defective in elderly people (12,13). In addition, macrophages show an increased inflammatory response (14). Finally, the activation and function of mast cells are altered in aged individuals (15). On the other hand, the repertoire of naïve T and memory B lymphocytes is abundant during childhood, whereas in old age, a decline in B-cell production in the bone marrow (16) and a reduction in the number of T lymphocytes due to thymic involution occur (13, 17). Altogether, these events result in an overall reduction of immunity in older individuals (18).
Inflammaging
Inflammaging is defined as a systemic proinflammatory state caused by an imbalance between pro- and anti-inflammatory mechanisms, which in turn provokes increased cytokine production (19). This imbalance elicits a prolonged state of low-grade inflammation (19) characterized by augmented levels of pro-inflammatory mediators, including IL-1b, IL-6, TNF-a, IL-8, and CRP (1) (Figure 1B). This phenomenon is a hallmark of aging and is even considered a biomarker of accelerated aging (20). Inflammaging is modulated by a multitude of interrelated processes; at the physiological level, some relevant factors that can promote inflammaging include physical inactivity, obesity, psychological stress, early life adversity, exposure to xenobiotics, and chronic infections (1,21). Inflammaging is also considered a risk factor for several pathologies, including cardiovascular, kidney, and neurodegenerative diseases, type 2 diabetes mellitus, cancer, depression, sarcopenia, frailty, and infectious diseases (20,22). Furthermore, several studies correlate inflammaging with the susceptibility of older people to develop COVID-19 with severe complications (1, 23), due to a hyperreactive response to the infectious agent through a massive release of chemical mediators (20,24,25). Inflammaging has recently been regarded as an adaptive process that can lead either to healthy aging or to a pathological state, depending on genetic and environmental conditions and lifestyle factors (1,19,22). This idea has been reinforced by studies on centenarian populations, in which high levels of inflammatory biomarkers were found to favor longevity via their interaction with anti-inflammatory molecules (26,27). Inflammaging is a dynamic and complex process driven by several age-related molecular mechanisms, rather than having an exclusive connection with the immune system (22).
For instance, oxidative stress induces age-related transcriptional changes in genes encoding key components of inflammatory pathways (19). Specifically, the pro-inflammatory secretome of senescent cells can exert paracrine effects on nearby tissues extending the inflammatory state at the organismal level (19). Finally, dysregulation of the microbiome is another important contributor to inflammaging (21). It is believed that amelioration of age-related dysbiosis by probiotic clinical intervention might in turn alleviate inflammaging (14).
Effect of chronic infections on the development of inflammaging and immunosenescence
Chronic infections are a major health problem that affects millions of people worldwide. The innate immune system of aged people loses the ability to respond to viral infections; it initiates a local inflammatory response but fails to eliminate the virally infected cells (28, 29). Chronic infections trigger persistent adaptive immune responses that generate a proinflammatory environment. As this state persists, different alterations emerge, such as downregulation of immune responses, which further aggravates the inflammatory response (28, 29) (Figure 1C). This unresponsive immune system, also called immunosenescence, exacerbates the inflammatory response due to the accumulation of inefficient adaptive immune cells, which ultimately causes a physiological decline (30).
Mechanisms controlling inflammaging and immunosenescence
Changes in the immune system that occur during aging have just started to be understood. The intricate interplay between inflammaging and immunosenescence remains to be deciphered in order to develop therapeutic interventions aimed at improving/rejuvenating the immune system. In this section, the most relevant proteins/pathways and immune system cells that modulate the host immune response during aging are described.
NF-kB
Nuclear factor kappa B (NF-kB) is a main protagonist of the inflammatory and immune responses. NF-kB responds to various stimuli such as the T- and B-cell receptors (TCR and BCR, respectively) (31). During chronic infections, NF-kB orchestrates different T-cell responses; it induces maturation of T cells in the thymus and modulates differentiation and activation of regulatory T cells (Tregs).

Figure 1. Aging of the immune system: mechanisms and therapeutic strategies. (A, B) The immune system declines during aging, and its exposure to pathogens induces overstimulation and overreaction of immune cells (macrophages; lymphocytes B and T; natural killer and dendritic cells), releasing chemical mediators that affect their function and driving them toward immunosenescence and inflammaging. These processes accelerate the onset of age-related diseases, reducing health span. (C) Aging alters immunity, provoking an imbalance between immunostimulatory and immunosuppressive mechanisms, which in turn impairs relevant functions of the immune system, including thymic involution, altered surface markers and phagocytosis of macrophages, decreased number and activity of B and T lymphocytes, telomere shortening and DNA damage, reduced cytokine secretion, decreased mitochondrial biogenesis, and elevated ROS level. (D) All these changes ultimately cause dysfunction in different tissues and systems such as adipose, hepatic, and skeletal muscle tissues and the cardiovascular and nervous systems. (E) Therapeutic strategies aimed at rejuvenating the immune system and decreasing the risk of infectious diseases in elderly people are depicted. AP-1, activator protein 1; HIF-1a, hypoxia-inducible factor 1; IL1b, interleukin-1b; IL6, interleukin-6; NF-kB, nuclear factor kappa B; p38-MAPK, p38-mitogen activated protein kinase; PPAR-g, peroxisome proliferator-activated receptor gamma; TNF-a, tumor necrosis factor-alpha. Created with BioRender.

These activities are
required to induce or suppress the immune response, lessening inflammation (32, 33). Furthermore, NF-kB delays immunosenescence by upregulating telomerase production in T cells during chronic infections, thereby promoting an opportune clonal expansion (34). On the other hand, NF-kB elicits production of IL-6 and TNF-a in macrophages, which contributes to immune clearance but in the long term could accelerate inflammaging, provoking cell and tissue damage (Figure 1D). Overall, these studies place NF-kB as a central hub where pro-inflammatory and anti-inflammatory signals converge (35-37).
HIF-1a
Hypoxia is a pivotal modulator of immunity. It regulates immune cell proliferation and the response to pathogens through epigenetic regulation, which is largely controlled by the transcription factor hypoxia-inducible factor 1 (HIF-1a) (38). When a chronic infection is established, a large amount of reactive oxygen species (ROS), chemokines, and cytokines, such as IL-1b, is produced, which in turn increases inflammation and in parallel activates NF-kB-mediated HIF-1a synthesis (39,40). HIF-1a is a key transcription factor for modulating the inflammatory response, because it promotes the expression of proinflammatory cytokines and chemokines (39,40). Consistent with this role, HIF-1a-deficient mice were found to be resistant to developing inflammatory diseases; however, when subjected to chronic infection, they developed inflammatory responses and died early compared with control animals (41)(42)(43). Overall, these studies indicate that hypoxia-mediated HIF-1a activation promotes a chronic low-grade inflammatory state (inflammaging), which in turn can lead to immunosenescence (40). The crucial function of HIF-1a in these phenomena makes it an ideal therapeutic target to modulate immune responses during aging.
Lymphocytes B
Lymphocytes B are the humoral immune response cells responsible for antibody production. These cells provide discrimination between self and non-self antigens and the memory to evoke previous contact with specific pathogens, which results in a stronger response in subsequent host-pathogen interactions (44). During persistent viral infections, accumulation of atypical deficient B cells occurs; these cells are unable to differentiate into antibody-producing cells and also have a reduced ability to trigger production of cytokines, antibodies, and the B-cell receptor (45,46). On the contrary, the response of B cells in the germinal centers is robust and efficient as the infection progresses, in contrast to the persistent T-cell response that leads to their exhaustion (47).
The continued immune response generates in turn an exacerbated pro-inflammatory environment, with high production of autoantibodies by B cells. The formation of this inflammation-feedback loop greatly contributes to the establishment of immunosenescence (16, 48). The above-described alterations undergone by B cells, which result in a generalized reduction of overall immunity, are faithfully recreated during aging, affecting the protection of the elderly against pathogens (49) (Figure 1D).
ROS
Oxidative stress emerges as a consequence of the loss of the redox (reduction/oxidation) balance. High levels of ROS cause oxidation of lipids, proteins, and DNA and activate innate immune responses (Toll-like receptor signaling and the NLRP-3 inflammasome) (50). An enhanced expression of cytokines and chemokines (IL-1, IL-6, TNF-a, and IL-18) provokes further augmentation of ROS levels, creating a positive feedback loop of ROS production (50) (Figure 1B). Using mouse models of aging, the connection between ROS and immunosenescence has been evidenced. It has been observed that both leukocytes and macrophages from premature aging mice (PAM) lose the balance between oxidant compounds and the antioxidant defense (51). Conversely, long-lived mice maintain the redox equilibrium in macrophages (52). During aging, macrophages produce a high amount of oxidant compounds and lipofuscin and consequently enter immunosenescence (52). On the other hand, viral infections (HIV or herpes virus) also cause the immune cells to generate oxidant compounds, and when a chronic infection is established, the persistence of oxidative stress leads to chronic inflammation and later to premature immunosenescence (53), via NF-kB activation and induction of TNFa, IL6, and IL1 expression (54)(55)(56)(57)(58). Thus, elevated levels of ROS are mechanistically linked to immunosenescence and inflammaging.
P38-MAPK
The p38-mitogen-activated protein kinase (p38-MAPK) pathway regulates the balance between inflammatory and anti-inflammatory responses, preventing chronic inflammation and the further establishment of immunosenescence. A connection between p38-MAPK activation, inflammaging, and immunosenescence has been demonstrated using a human model of acute inflammation (59). This study showed that the onset of inflammation progresses in a similar manner in young and old subjects; however, the resolution of the process was clearly disturbed in elderly people, due to a decrease of T-cell immunoglobulin mucin receptor-4 (TIM4), a macrophage receptor that enables the engulfment of apoptotic bodies (efferocytosis). This alteration was found to be mechanistically associated with increased p38-MAPK activity in the macrophages of aged subjects, as TIM4 expression and the resolution of inflammation were rescued through oral administration of p38 inhibitors (59). Consistent with a crucial role for p38-MAPK in inflammaging, the sestrin-dependent activation of p38-MAPK induced a pro-aging phenotype in lymphocytes (60) (Figure 1D). Furthermore, chronic inflammation and premature immunosenescence phenotypes, induced by bacterial infections, have been found to be associated with a persistent activation of p38-MAPK (61).
Lymphocytes T
As aging progresses, T lymphocytes undergo changes that impair their function. It has been reported that the number of T lymphocytes decreases during aging (62); furthermore, they exhibit low proliferation due to replicative senescence induced by telomere shortening. Consistently, aged individuals exhibit an elevated number of T lymphocytes positive for senescence-associated beta-galactosidase activity (63). Furthermore, the presence of immunosenescent T cells has been related to chronic inflammation during aging (64). Both the accumulation of exhausted non-functional T cells and the presence of chronic infections during aging result in a hyperinflammatory state (65) (Figure 1A). Remarkably, the evolution of T lymphocytes, from their development to exhaustion, is driven by two key transcription factors, namely, transcription factor 7 (TCF7) and thymocyte selection-associated high-mobility group box (TOX) (66). TCF7 belongs to a DNA-binding protein family termed "HMG box"; it is highly expressed in thymocytes and peripheral naïve T cells and is involved in the development and differentiation of T-lineage cells (67). TCF7 exerts its function by assembling with b-catenin into an active transcription complex, which results in WNT/b-catenin signaling pathway activation and the expression of genes implicated in embryonic development and self-renewal of stem cells at the adult age (68). During chronic viral infections, TCF1 is present in T cells with an exhausted phenotype; interestingly, chronically stimulated T cells, which are positive for TCF7, have the ability to either survive for a long time, self-renew, or proliferate (69). As for TOX, it has been associated with CD8+ T-cell exhaustion as well (70). TOX is predominantly expressed in hematopoietic and immune tissues, specifically in CD4+ T and natural killer cells, and its expression is activated by the chronic stimulation of CD8+ T cells (70).
TOX activity in turn promotes CD8 + T-cell exhaustion via chromatin remodeling and upregulation of T-cell inhibitory receptors, including protein disulfide isomerase (PDI) (71).
Trends in therapeutic modulation of inflammaging during immunosenescence
During aging, the immune system declines due to dysregulation and overactivation of its innate and adaptive responses, leading to the onset of inflammation-related chronic diseases frequently observed in elderly people (72). For that reason, several pharmacological and cellular/genetic strategies have been developed to slow down or reverse the deleterious effects of immunosenescence on health (73): (a) induced pluripotent stem cells (iPSC) have been employed to generate hematopoietic cells and/or various specific immune cells; (b) administration of cytokine and growth factor cocktails boosted macrophage function; (c) bone marrow transplantation is a widely used therapy for thymus regeneration (74) (Figure 1E); (d) the use of Cdc42 and BATF inhibitors or antioxidants enhances the number and function of lymphoid-biased hematopoietic stem cells (75,76); (e) inhibition of dual-specificity phosphatase 4 boosts memory CD4+ T-cell function (77,78); (f) administration of fibroblast growth factor 7 (FGF7) stimulates naive T-cell production and promotes the removal of dysfunctional cells, thereby restoring thymus function (79,80); and (g) administration of rapamycin improves CD8+ T-cell function (81,82) (Figure 1). Finally, a relevant non-pharmacological strategy that has been proven to enhance immunity is caloric restriction; it delays the accumulation of senescent T cells and stimulates thymopoiesis through the activation of IGF-1 and/or PPAR pathways (83,84). On the other hand, recent studies have unveiled the relevance of functional foods in ameliorating oxidative stress and inflammation and improving the metabolism of lipids associated with metabolic diseases, via Nrf2 and/or NF-kB signaling pathways (85,86).
Some of the molecules/pathways that modulate immunosenescence have therapeutic potential. Owing to the crucial role of the activator protein 1 (AP-1) signaling pathway in macrophage-mediated inflammation, targeting of AP-1 has been approached to attenuate inflammation. Transfection of lentiviral siRNA against AP-1 in mice fed a high-fat diet resulted in the alleviation of systemic and hepatic inflammation (87). Interestingly, the use of rosiglitazone, a PPARg agonist, was found to exert a positive effect on animals with sepsis, decreasing cell death and cardiac inflammation; furthermore, increased fatty acid oxidation and improved insulin resistance were also observed in human skeletal muscle (88). Since aging is a very complex process that involves different biological processes, therapies aimed at modulating inflammaging have to focus on the synergistic effect of more than one compound, to simultaneously regulate different pathways. For instance, a combinatory treatment using three different compounds, rapamycin, acarbose, and 17a-estradiol, converges on the regulation of both the ERK1/2 and p38-MAPK pathways (89).
Conclusions
Inflammation is a key factor for the onset and progression of almost all chronic diseases affecting aged individuals, with immunosenescence and inflammaging being two relevant phenomena that modulate the immune system during aging. Therefore, identification and characterization of the molecular and cellular mechanisms underlying the immune system dysfunction will surely help to develop effective therapeutic strategies to prevent the negative outcomes of infectious diseases on aged individuals. Recent scientific evidence indicates that different immune system cells, including hematopoietic stem cells, T cells, B cells, NK cells, thymocytes, macrophages, microglia, granulocytes, and dendritic cells, are suitable targets for cellular and genetic therapies. An effective therapy must combine in a balanced manner immunostimulatory and immunosuppressive strategies, toward a reasonable immune rejuvenation. Given the intricate network of the molecular events involved in the regulation of inflammation/immunosenescence, the therapeutic approaches described herein are focused on the improvement of the immune system in aged individuals rather than longevity.
Funding
This work was supported by grants from CONACyT CF2019-514879 to BC and 258043 to JM.
Proteomic insight into arabinogalactan utilization by particle-associated Maribacter sp. MAR_2009_72
Abstract Arabinose and galactose are major, rapidly metabolized components of marine particulate and dissolved organic matter. In this study, we observed for the first time large microbiomes for the degradation of arabinogalactan and report a detailed investigation of arabinogalactan utilization by the flavobacterium Maribacter sp. MAR_2009_72. Cellular extracts hydrolysed arabinogalactan in vitro. Comparative proteomic analyses of cells grown on arabinogalactan, arabinose, galactose, and glucose revealed the expression of specific proteins in the presence of arabinogalactan, mainly glycoside hydrolases (GH). Extracellular glycan hydrolysis involved five alpha-l-arabinofuranosidases affiliated with glycoside hydrolase families 43 and 51, four unsaturated rhamnogalacturonyl hydrolases (GH105), and a protein with a glycoside hydrolase family-like domain. We detected expression of three induced TonB-dependent SusC/D transporter systems, one SusC, and nine glycoside hydrolases with a predicted periplasmic location. These are affiliated with the families GH3, GH10, GH29, GH31, GH67, GH78, and GH115. The genes are located outside of and within canonical polysaccharide utilization loci classified as specific for arabinogalactan, for galactose-containing glycans, and for arabinose-containing glycans. The breadth of enzymatic functions expressed in Maribacter sp. MAR_2009_72 in response to arabinogalactan from the terrestrial plant larch suggests that Flavobacteriia are main catalysts of the rapid turnover of arabinogalactans in the marine environment.
Introduction
Marine environments contain many different polysaccharides as dissolved organic matter (DOM) or in particulate organic matter (POM). These are a vital carbon source for microorganisms, released from algae as exudates or during lysis by zooplankton predation or viral infection. Monosaccharide analysis of planktonic biomass from the North Sea revealed already in 1982 a dominance of glucose followed by arabinose, galactose, and mannose (Ittekkot et al. 1982, Urbani et al. 2005, Alderkamp et al. 2007, Scholz and Liebezeit 2013, Huang et al. 2021). These monomers are the building blocks of algal polysaccharides: the abundant beta-homoglycans laminarin, cellulose, and xylan are often complemented with species-specific glycans such as agar, alginate, carrageenan, fucoidan, mannan, pectin, porphyran, and ulvan. The degradation of these glycans has been studied intensively in marine systems; however, details for arabinogalactan are missing (Bäumgen et al. 2021). Recently, arabinogalactan was detected in the high molecular weight dissolved organic matter (HMWDOM) and POM fractions using monoclonal antibodies during the algal spring bloom in the North Sea (Vidal-Melgosa et al. 2021). This coincides with the high arabinose and galactose content of Phaeocystis spp., a haptophyte blooming in the North Sea (Alderkamp et al. 2007, Sato et al. 2018). The antibody-based quantification also showed a decrease in arabinogalactan content towards the end of the spring bloom, suggesting a fast turnover of the compound, contrasting with the accumulation of fucose-containing sulfated polysaccharides (Vidal-Melgosa et al. 2021). The major source of arabinose and galactose in algae are likely arabinogalactan proteins, which anchor polysaccharide cell walls in the outer membrane of plants and algae (Silva et al. 2020, Leszczuk et al. 2023). The model compound for arabinogalactan type II is arabinogalactan from larch wood. It contains d-galactose and l-arabinose in a 6:1 molar ratio as well as traces of rhamnose, fucose, mannose, xylose, and d-glucuronic acid (Fujita et al. 2019, Villa-Rivera et al. 2021, Leszczuk et al. 2023). Type II arabinogalactans have a complex backbone structure consisting of a β-1,3-linked galactan backbone with β-1,6-linked galactan side chains (Kelly 1999, Wang and LaPointe 2020). Type I has a β-1,4-linked galactan backbone, whereby C3 can be linked with l-arabinofuranose (Hinz et al. 2005).
Plant arabinogalactan is degraded by aerobic bacteria and fungi as well as by anaerobic fermenting bacteria in gut systems, including Bifidobacterium and Bacteroidetes (Shulami et al. 2011, Ndeh et al. 2017, Cartmell et al. 2018, Luis et al. 2018, Wang and LaPointe 2020, Sasaki et al. 2021). The latter phylum encompasses also aerobic Flavobacteriia that have been identified as specialists for polysaccharide degradation in marine systems (Sidhu et al. 2023). For this first study on the degradation of arabinogalactan by marine microorganisms, we selected a flavobacterial strain with a published genome and a particle-associated lifestyle, Maribacter sp. MAR_2009_72 (Kappelmann et al. 2018, Heins et al. 2021a). Strains of the genus Maribacter are rarely isolated from sea water, but they are more abundant in particle fractions (Nedashkovskaya et al. 2004, Heins and Harder 2023, Lu et al. 2023, Sidhu et al. 2023). Abundances of up to 4% were detected in the oxic surface layer of sandy sediments (Probandt et al. 2018, Miksch et al. 2021). Even higher abundances were observed in micro- and macroalgae phycosphere populations (Heins et al. 2021b, Lu et al. 2023). This makes Maribacter strains ideal candidates for studying the degradation of algal cell wall polysaccharides.
The uptake and degradation of polysaccharides in Bacteroidetes is often encoded in polysaccharide utilization loci (PULs). The first PUL was described for Bacteroides thetaiotaomicron for starch utilization (Shipman et al. 2000). Polysaccharide utilization starts with the extracellular hydrolysis of polysaccharides into oligosaccharides on the surface of the cell. The oligosaccharides are transported into the periplasm via the SusC/D transport system, which is energized by a proton gradient via an ExbB/D-TonB system in the cytoplasmic membrane and by a domain in the periplasm to open the β-barrel channel of SusC for the transport (Noinaj et al. 2010). The hydrolysis of polysaccharides is achieved by glycoside hydrolases (GH), glycoside transferases, polysaccharide lyases, and carbohydrate esterases with a high specificity, sometimes assisted by carbohydrate binding modules. These five groups of proteins are classified as carbohydrate active enzymes (CAZymes) (Bäumgen et al. 2021, Drula et al. 2022). For the degradation of arabinogalactan from larch wood, PULs were so far characterized for gut bacteria including Bifidobacterium longum ssp. longum NCC2705, Bacteroides caccae ATCC 43185, and Bacteroides thetaiotaomicron (Ndeh et al. 2017, Cartmell et al. 2018, Luis et al. 2018, Wang and LaPointe 2020). Here, we analyzed Maribacter sp. MAR_2009_72 proteomes using cells grown on arabinogalactan, arabinose, galactose, and glucose. Those proteomes were compared to identify the proteins induced by arabinogalactan. This study expands a recent in silico study that did not report on arabinogalactan-specific PULs (Kappelmann et al. 2018) and provides experimental observations for a better interpretation of marine metagenomes.
Growth experiments
Maribacter sp. MAR_2009_72 (DSM 29384), originally isolated from a phytoplankton catch in the Wadden Sea near the island Sylt, Germany, was revived from glycerol stocks that had been preserved in the laboratory since the initial isolation (Hahnke and Harder 2013). The strain was grown in the liquid medium HaHa_100V with 0.3 g/l of casamino acids as the sole carbon source (Hahnke et al. 2015). This limited growth to an optical density (OD) at 600 nm below 0.2. Growth beyond an OD of 0.3 was achieved by adding 2 g/l of a carbohydrate source, here arabinose, galactose, glucose (Sigma Aldrich/Merck KGaA, Darmstadt, Germany), and larch arabinogalactan (The Dairy School, Auchincruive, Scotland). The supplier of arabinogalactan had specified the monosaccharide composition as 81% galactose, 14% arabinose, and 5% other, whereby the other fraction was not defined. For proteomics, three cultures of 50 ml were inoculated with 0.4% v/v of a pregrown culture in the same medium and incubated at room temperature at 110 r/m. A fourth culture per substrate was maintained to monitor bacterial growth by measuring OD at 600 nm beyond the harvest point. Cells were harvested at an OD of 0.25. Cells were pelleted by centrifugation in 50 ml tubes at 3080 × g for 30 min at 4°C. Pellets were resuspended in 1 ml medium and centrifuged in 1.5 ml tubes at 15870 × g for 15 min at 4°C. The wet biomass was weighed and stored at −20°C.
For microbiome size determinations, colony-forming units (CFU) were determined with 4 g/l larch wood arabinogalactan as organic carbon source on marine plates (Hahnke and Harder 2013), using 4 g/l glucose or ZoBell's 2216 marine agar plates as reference. Inoculation of serially diluted sea or sediment pore water was performed with a 96-pin holder. Inoculations were at room temperature. Partial 16S rRNA gene sequences of strains were obtained by colony PCR and Sanger sequencing (Hahnke and Harder 2013). Partial 16S rRNA gene sequences have been deposited at GenBank under the accession numbers PP600029 to PP600099.
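The CFU determinations above convert a plate count at a given serial dilution into a cell density of the undiluted water sample. A minimal sketch of that calculation; the colony count and plated volume below are illustrative assumptions, not data from this study:

```python
def cfu_per_ml(colonies, dilution_exponent, plated_volume_ml):
    """Convert a plate count to CFU per ml of the undiluted sample.

    colonies: colonies counted on one plate
    dilution_exponent: n for a 10^-n serial dilution step
    plated_volume_ml: volume of the dilution spotted or spread on the plate
    """
    return colonies * (10 ** dilution_exponent) / plated_volume_ml

# Hypothetical example: 23 colonies from the 10^-3 dilution, 0.1 ml plated
density = cfu_per_ml(23, 3, 0.1)
```

Counts are usually averaged over replicate plates of the same dilution before conversion; the sketch keeps only the core arithmetic.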
Protein preparation and mass spectrometry
Proteins were extracted from cells using a bead-beating method following the protocol by Schultz et al. (2020). A pellet of wet weight ranging from 20 to 200 mg was disrupted using 0.25 ml glass beads in 500 μl of lysis buffer. The protein content was quantified using the Roti Nanoquant assay (Carl Roth, Karlsruhe, Germany). For protein purification on denaturing polyacrylamide gels (SDS-PAGE), 50 μg of protein was combined with 10 μl of 4x SDS buffer [composed of 20% glycerol, 100 mM Tris/HCl, 10% (w/v) SDS, 5% β-mercaptoethanol, 0.8% bromophenol blue, pH 6.8] and loaded onto Tris-glycine-extended precast 4%-20% gels (Bio-Rad, Neuried, Germany). Electrophoresis was conducted at 150 V for 8 min. Subsequently, the gel was fixed in a solution of 10% v/v acetic acid and 40% v/v ethanol for 30 min, stained with Brilliant Blue G250 Coomassie, and the desired protein band was excised. The proteins were extracted from the gel in one piece and then washed with a solution of 50 mM ammonium bicarbonate in 30% v/v acetonitrile. The gel pieces were dried using a SpeedVac (Eppendorf, Hamburg, Germany), and then rehydrated with 2 ng/μl trypsin (sequencing grade trypsin, Promega, USA). After a 15-min incubation at room temperature, excess liquid was removed, and the samples were digested overnight at 37°C. Following digestion, the gel pieces were covered with water suitable for mass spectrometry (MS), and peptides were eluted using ultrasonication. The peptides were subsequently desalted using Pierce™ C18 Spin Tips (Thermo Fisher, Schwerte, Germany) in accordance with the manufacturer's guidelines. The eluted peptides were dried using a SpeedVac and stored at −20°C. For MS analysis, the samples were thawed and reconstituted in 10 μl of Buffer A (99.9% acetonitrile + 0.1% acetic acid).
Tryptic peptides of Maribacter sp. MAR_2009_72 were analyzed using an EASY-nLC 1200 system coupled to a Q Exactive HF mass spectrometer (Thermo Fisher Scientific, located in Waltham, USA). Peptides were loaded onto a custom-packed analytical column containing 3 μm C18 particles (Dr. Maisch GmbH, Ammerbuch, Germany). The loading was performed using buffer A (0.1% acetic acid) at a flow rate of 2 μl/min. Peptide separation was achieved through an 85-min binary gradient, transitioning from 4% to 50% buffer B, composed of 0.1% acetic acid in acetonitrile, at a flow rate of 300 nl/min. Samples were measured in parallel mode; survey scans in the Orbitrap were recorded with a resolution of 60 000 over an m/z range of 333 to 1650. The 15 most intense peaks per scan were selected for fragmentation. Precursor ions were dynamically excluded from fragmentation for 30 s. Singly charged ions as well as ions with unknown charge state were rejected. Internal lock mass calibration was applied (lock mass 445.12003 Da).
The MS files were analyzed in MaxQuant version 2.2.0.0 in the standard settings against the strain-specific protein database.

Figure 1. Growth curve of Maribacter sp. MAR_2009_72 in presence of four different carbon sources: arabinogalactan, arabinose, galactose, and glucose. MAR_2009_72 was grown in 50 ml of modified HaHa_100V with 2 g/l of the respective carbon source at room temperature at 110 r/m. The OD was measured at 600 nm.
For the visualization of the data the following programs and packages were used: R version 4.3.2 (R Core Team 2023), ggplot2 (Wickham 2016), gggenes (Wilkins 2023), and Proksee (Grant et al.).
The MS proteomics data have been deposited to the ProteomeXchange Consortium via the PRIDE (Perez-Riverol et al. 2022) partner repository with the dataset identifier PXD049074 and 10.6019/PXD049074.
Growth on arabinogalactan
Maribacter sp. MAR_2009_72 grew in presence of larch wood arabinogalactan to a maximum OD of 0.338 and at a maximum growth rate μ = 0.06 h−1 (Fig. 1). When 2 g/l of galactose or arabinose were provided in the medium, a maximum OD of 0.419 and 0.446 was measured with respective growth rates of 0.07 h−1 and 0.06 h−1. Glucose supported the largest biomass formation, with an OD of 0.526 and μ = 0.05 h−1. The arabinogalactan cultures required more time to enter the exponential growth phase than the cultures with monosaccharides as substrates.
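Specific growth rates like those quoted above follow from pairs of exponential-phase OD600 readings via μ = ln(OD₂/OD₁)/(t₂ − t₁). A minimal sketch; the OD values and time points are illustrative, not data read from Fig. 1:

```python
import math

def specific_growth_rate(od1, od2, t1, t2):
    """Specific growth rate (per hour) between two OD600 readings
    taken during exponential growth."""
    return math.log(od2 / od1) / (t2 - t1)

# Hypothetical readings: OD rising from 0.10 to 0.18 over 10 h
# gives mu of about 0.059 h^-1, in the range reported above.
mu = specific_growth_rate(0.10, 0.18, 0.0, 10.0)
```

In practice the maximum rate is taken from the steepest linear stretch of ln(OD) versus time rather than from a single pair of points.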
Protein expression in Maribacter sp. MAR_2009_72
The comparative proteomic analysis was based on glucose as reference against arabinose, galactose, and arabinogalactan. We identified 1874 proteins in the arabinogalactan proteome (Fig. 2A). Overall, these four conditions shared 1636 proteins. Only a small number of proteins were found to be unique to a particular growth condition. The glucose proteome had 36 unique proteins, the arabinose proteome 17 proteins, and the galactose proteome 19 proteins. Arabinogalactan had 52 unique proteins. We used the expression data, here label-free quantification (LFQ) intensities, to visualize the difference between the four conditions in a principal component analysis (PCA) (Fig. 2B). The PCA plot indicated that the arabinogalactan proteome had the most contrasting expression pattern. The PCA analysis documented that the differences between monosaccharide proteomes were less pronounced than to the arabinogalactan proteome.
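The shared and unique protein counts of Fig. 2A amount to set operations on the per-condition lists of detected protein identifiers. A sketch with toy identifiers; the real input would be the MaxQuant protein groups detected in each proteome:

```python
# Toy protein-ID sets per growth condition (illustrative only).
proteomes = {
    "glucose":         {"p1", "p2", "p3", "p4"},
    "arabinose":       {"p1", "p2", "p3", "p5"},
    "galactose":       {"p1", "p2", "p3", "p6"},
    "arabinogalactan": {"p1", "p2", "p3", "p7", "p8"},
}

# Proteins detected under every condition (the core of the Venn diagram)
shared = set.intersection(*proteomes.values())

# Proteins unique to exactly one condition
unique = {
    name: ids - set.union(*(s for n, s in proteomes.items() if n != name))
    for name, ids in proteomes.items()
}
```

The same sets feed directly into a four-way Venn diagram; the PCA in Fig. 2B instead works on the quantitative LFQ matrix rather than on presence/absence.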
Maribacter sp. MAR_2009_72 has a genome of 4.35 Mb encoding 3635 proteins (Fig. 3). Nine PULs contain one or several SusC/D transporters and neighboring CAZymes. We labelled the PULs based on the arrangement in the genome, with PUL 1 being closest to the origin of replication (Table S1, Supporting Information). The expression values revealed a proteomic response to arabinogalactan in PULs 1, 7, and 8 and outside of PULs.
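The working definition of a PUL used here (SusC/D transporter pairs with CAZymes in the neighborhood) can be approximated by a simple scan over the gene annotations in genome order. The sketch below is a hypothetical illustration of that logic, not the curation procedure behind Table S1:

```python
def find_pul_seeds(genes, window=5):
    """Return indices of adjacent SusC/SusD pairs that have at least one
    CAZyme annotated within `window` genes on either side.

    genes: list of (locus_tag, annotation) tuples in genome order, where
    the annotation is e.g. 'susC', 'susD', 'GH43', or 'other'.
    """
    seeds = []
    for i in range(len(genes) - 1):
        pair = {genes[i][1], genes[i + 1][1]}
        if pair == {"susC", "susD"}:
            neighborhood = genes[max(0, i - window): i + 2 + window]
            # CAZyme classes: glycoside hydrolases, lyases, esterases
            if any(a.startswith(("GH", "PL", "CE")) for _, a in neighborhood):
                seeds.append(i)
    return seeds

# Toy annotation list: one SusC/D pair next to a GH43 gene
toy = [("g1", "other"), ("g2", "susC"), ("g3", "susD"),
       ("g4", "GH43"), ("g5", "other")]
```

Real PUL callers additionally merge overlapping seeds into locus boundaries and handle tandem SusC/D arrangements such as the one in PUL 7.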
PUL 1 encodes 13 proteins of which three out of four CAZymes and one SusC/D pair were expressed in arabinogalactan-grown cells (Fig. 4). The SusC/D pair (JM81_RS00910 and JM81_RS00905) was only expressed in the arabinogalactan proteome. Four other proteins were clearly induced by arabinogalactan, arabinose, and galactose. The α-l-arabinofuranosidase GH43_1 (JM81_RS00875) was 10-fold induced relative to the glucose proteome. A GH10, an endo-β-1,4-xylanase, showed a similar expression pattern with a 5-fold difference to glucose. The third enzyme was a GH67, an α-glucuronidase, which had the strongest induction in arabinose and galactose proteomes. The fourth induced protein of the operon with an expression in the arabinogalactan proteome affiliated to the superfamily of protein or cofactor modifying RimK-type glutamate ligases with an ATP-grasp binding domain (JM81_RS00865).
PUL 7 contains a single SusC/D pair and a tandem of SusC/D pairs in one genetic region. It encodes 42 enzymes, 13 being classified as CAZymes, three SusC/D pairs, and one sulfatase (Fig. 5). One SusC/D pair and 6 CAZymes were expressed in arabinogalactan-grown cells. SusC (JM81_RS13730) and SusD (JM81_RS13725) were expressed in the galactose and arabinogalactan proteomes 100-fold and 10-fold stronger than in the arabinose proteome, respectively, and not in the glucose proteome. The tandem SusC/D pairs were not detected in any of the proteomes. An α-l-fucosidase of the GH29 family (JM81_RS13700) had the highest expression among the CAZymes in this PUL. The GH29 was expressed in similar intensities in all four conditions, suggesting a constitutive expression of this periplasmic enzyme. Less intense, but also expressed in all proteomes was a xylan-α-1,2-glucuronidase belonging to the GH115 family (JM81_RS13820), with the strongest expression on galactose. Two GH105 unsaturated rhamnogalacturonyl hydrolases (EC 3.2.1.172) (JM81_RS13845 and JM81_RS13890) were expressed in all four growth conditions, with the exception of JM81_RS13890, which was not detected in the arabinose proteome. A GH43_18 (JM81_RS13895) was expressed in all four proteomes with similar expression intensities. An α-l-rhamnosidase GH78 (JM81_RS13900) was expressed under all growth conditions. During our analysis a hypothetical protein (JM81_RS13825) with a six-hairpin GH-like family domain sparked our interest. It was expressed in all four proteomes, with higher intensities in arabinogalactan, arabinose, and galactose proteomes.
PUL 8 encodes a total of 58 proteins, including 11 CAZymes and two SusC/D pairs (Fig. 6). A total of five CAZymes, two SusCs, but only one SusD were expressed in arabinogalactan-grown cells. SusC (JM81_RS16585) and SusD (JM81_RS16590) were expressed in the arabinose and arabinogalactan proteome. Another SusC (JM81_RS16455) showed expression, slightly lower than the other SusC, in the arabinose proteome and slightly less for arabinogalactan. Two GH105 proteins (JM81_RS16470 and JM81_RS16475) annotated as unsaturated rhamnogalacturonyl hydrolases were expressed similarly in all proteomes. JM81_RS16510 includes two domains, GH43_19 and GH43_34. It was expressed in the arabinose, arabinogalactan, and galactose proteome, whereby the highest intensities were measured for arabinose. Another α-l-arabinofuranosidase, a GH51 (JM81_RS16515), was expressed in a similar pattern to the GH43_19 + GH43_34 protein. These two genes are followed by genes of the arabinose metabolism to the pentose phosphate pathway (ribulokinase, l-ribulose-5-phosphate 4-epimerase, and l-arabinose isomerase) and a gene for a galactose mutarotase. All proteins in this operon were expressed in the arabinose, arabinogalactan, and galactose proteome, with highest intensities in arabinose proteomes. Unknown is the function of a GH109, a member of the Gfo/Idh/MocA superfamily of NAD(P)-dependent oxidoreductases, that had the highest expression in the arabinogalactan proteome. The expression of a mannonate dehydratase (JM81_RS16615) hinted at a sugar acid metabolism. Interestingly, PUL 8 is preceded by an operon with sugar acid metabolizing enzymes. The following enzymes were induced in the arabinogalactan proteome in comparison to glucose: 5-dehydro-4-deoxy-d-glucuronate isomerase, gluconate-5-dehydrogenase, a sugar kinase, 2-dehydro-3-deoxyphosphogluconate aldolase, and tagaturonate reductase.
An analysis with dbCAN3 identified 153 CAZymes in the genome, of which 106 were detected in the proteomes. Outside of the PULs 1, 7, and 8, several CAZymes were expressed in arabinogalactan degradation. Many expressed CAZymes had a signal peptide for export out of the cytosol (Table S1 and Fig. S2, Supporting Information). Three of the CAZymes were annotated as GH family 3 enzymes. JM81_RS00095 was expressed in all four conditions; the highest intensities were measured in the arabinogalactan proteome (Fig. S2A, Supporting Information). The second GH3 (JM81_RS08450) was expressed in all four conditions, but with a three to four times larger expression in arabinogalactan, arabinose, and galactose (Fig. S2B, Supporting Information). A third GH3 (JM81_RS18250) was as well expressed in all four conditions, but the highest intensities were measured for arabinose and galactose. It was part of an operon also including an endo-1,4-β-xylanase (GH10) expressed only in arabinose- and galactose-grown cells (Fig. S2C, Supporting Information). All three GH3 were annotated as galactosidases. A GH43_26 (JM81_RS08585) was expressed in all four datasets, whereby the highest intensities were recorded for arabinose and nearly identical LFQs for glucose and arabinogalactan (Fig. S2D, Supporting Information). A GH115 xylan-α-1,2-glucuronidase (JM81_RS03245) was only expressed in arabinose- and arabinogalactan-grown cells (Fig. S2E, Supporting Information).
The transport of the monosaccharides across the inner membrane may be facilitated by an ABC transport system consisting of ABC substrate-binding (JM81_RS03610), ABC permease (JM81_RS16840), and ABC ATP binding proteins (JM81_RS01625).
Marine glycans are often decorated with sulfate. We identified 13 sulfatases in the genome of MAR_2009_72, of which three were expressed in arabinogalactan-grown cells. JM81_RS05685, JM81_RS05692, and JM81_RS076760 were equally expressed in all four proteomes. All three were previously affiliated with the utilization of mucin, which contains to some extent galactose (Tailford et al. 2015, Glover et al. 2022).
Discussion
Galactose belongs to the four abundant monosaccharides in planktonic organic matter, mainly as part of polysaccharides and more complex molecules, i.e. arabinogalactan proteins. Plating sea and sediment pore water on arabinogalactan medium showed a large microbiome with the capacity to utilize arabinogalactan for growth. Together with the recent finding that particle-associated bacteria dominate the readily culturable fraction of seawater microbiomes (Heins and Harder 2023), this observation indicates that arabinogalactan is a common carbon source for particle-associated bacteria.
Arabinogalactan degradation pathways were so far only described for bacteria from gut and plant systems, but not for marine bacteria (Shulami et al. 2011, Ndeh et al. 2017, Cartmell et al. 2018, Luis et al. 2018, Fujita et al. 2019, Wang and LaPointe 2020, Sasaki et al. 2021). These studies provided information regarding enzymes involved in arabinogalactan utilization, which includes GH families GH43, GH51, GH27, and GH28, often organized in PULs (Shulami et al. 2011, Cartmell et al. 2018, Luis et al. 2018). Hence, we inspected first the upregulated proteins in arabinogalactan-grown cells in comparison to glucose-grown cells. After a discussion of the SusC/D systems, we analyzed the uniqueness of marine PULs for arabinogalactan degradation in Maribacter sp. MAR_2009_72.
The transport of the oligosaccharides involved several SusC/D pairs. PULs 1, 7, and 8 encode the three SusC/D systems that had the highest expression intensities of all SusC/Ds in the arabinogalactan proteome. On the basis of the dedicated substrate specificity of SusC/D transport systems, we propose two explanations for the induction of several SusC/D pairs: (i) the extracellular hydrolysis of larch wood arabinogalactan generates a mixture of structurally different oligosaccharides which need dedicated transport systems and (ii) a signal molecule derived from larch wood arabinogalactan may induce the expression of proteins that may not be necessary for larch wood arabinogalactan, but for the degradation of marine arabinogalactans. The structural diversity of arabinogalactans in terrestrial systems is well characterized (Fujita et al. 2019, Villa-Rivera et al. 2021, Leszczuk et al. 2023), but marine arabinogalactans are understudied.
In the periplasm the oligosaccharides are further hydrolyzed by a range of CAZymes. Some PULs (1 and 7) expressed enzymes that can generate monomers. Furthermore, the proteome detected CAZymes that are not encoded in PULs and are predicted to be periplasmatic. The GH10 of PUL 1 was annotated as an endo-1,4-β-xylanase, which indicates that arabinoxylans may also be a substrate for the enzymes of PUL 1. The expression of an α-glucuronidase annotated to GH67 coincides with the presence of glucuronic acid in side chains of arabinogalactan. GH67 removes glucuronic acid from side chains by a single displacement mechanism using an inverting mechanism (Shulami et al. 1999, Biely et al. 2000, Nagy et al. 2002). But it only removes glucuronic acid from nonreducing ends of the oligo- and polysaccharides. A broader substrate range is known for GH115 proteins, which remove glucuronic acid from terminal and internal regions of oligosaccharides (Ryabova et al. 2009, Aalbers et al. 2015). The presence of both GH families, GH67 and two GH115, suggests that glucuronic acid is part of the decoration of arabinogalactans. The expression of the GH29 argues for fucose as a decorating sugar. Enzymes of the family GH29 are exo-α-fucosidases and cleave via a retaining mechanism (Grootaert et al. 2020). Also, rhamnose as specific substrate is supported by expression of a GH78, α-l-rhamnosidase. This GH family solely includes rhamnosidases, which use an inverting mechanism to hydrolyze bonds in cooperation with their catalytic residues (Cui et al. 2007). The galactan backbone hydrolysis requires a β-d-galactosidase. This enzymatic function is frequent among members of the GH family GH3. The proteome detected three expressed GH3 proteins. Final steps of the arabinogalactan pathway include the translocation through the inner membrane, likely via an ABC transport system, and cytoplasmic transformations to channel galactose, arabinose, glucuronic acid, rhamnose, and fucose into the pentose phosphate pathway and glycolysis.
We investigated the distribution of PULs 1, 7, and 8 of Maribacter sp. MAR_2009_72 in the PULDB database using the expressed CAZymes (Terrapon et al. 2018). Homologs of PUL 1 have been characterized for human gut bacteria and Bacteroides spp. for the utilization of a range of xylan polysaccharides including arabinoxylan (Martens et al. 2008, Rogowski et al. 2015, Wang et al. 2016). The PUL was in silico detected in genomes of a large variety of Bacteroidota. In contrast, PUL 7 has so far not been studied experimentally. An in silico search detected a homologous PUL structure in Maribacter sedimenticola DSM 19840 (Nedashkovskaya et al. 2004). PUL 8 also has a homolog in M. sedimenticola DSM 19840 and other Bacteroidota.
A recent metagenomic study of particle-associated bacteria detected a GH43-rich PUL in a Maribacter MAG, which the authors annotated as an arabinogalactan PUL (Wang et al. 2024). This PUL is different to the PULs we identified for arabinogalactan in the genome of Maribacter sp. MAR_2009_72.
Our observations revealed a substrate specificity of the three PULs. In PUL 1, arabinogalactan is the only inducer for SusC/D, and the expression of a glucuronidase and a xylanase suggests that also glycans with these sugars are substrates for the PUL (Fig. S3, Supporting Information). This hypothesis is supported by previous studies with gut bacteria (Martens et al. 2008, Rogowski et al. 2015, Wang et al. 2016). PULs 7 and 8 have so far not been experimentally observed. PUL 7 is characterized by a very strong induction of SusC/D by galactose and arabinogalactan (Fig. S4, Supporting Information). Galactose is for several proteins the strongest inducer, suggesting galactans as substrate. The presence of fucosidase, glucuronidase, and rhamnosidase suggests a decoration of the marine galactans with the corresponding monosaccharides. PUL 8 is dedicated to arabinose-containing glycans. The SusC/D is induced by arabinose and arabinogalactan (Fig. S5, Supporting Information). Besides GHs, the genetic region of PUL 8 includes also monosaccharide-transforming cytoplasmatic enzymes for arabinose and sugar acids. This PUL shows that the consideration of cytosolic carbohydrate-transforming enzymes in the bioinformatic analysis of PULs may improve predictions of substrate specificity.
The comparative proteomic analysis of larch wood arabinogalactan degradation by Maribacter sp. MAR_2009_72 identified expressed proteins encoded in three PULs and outside of PULs (Fig. 3). In summary, members of the GH families 43, 51, and 105 may produce a variety of oligosaccharides. At least three SusC/D systems are involved in the transport into the periplasm, where enzymes belonging to the GH families 3, 10, 29, 67, 78, and 115 produce monosaccharides. The interplay of all these enzymes allows for the utilization of arabinogalactan, which we have summarized in a graph (Fig. 7). The plant polysaccharide structure is expected to be less complex than the variety of arabinogalactans present in the marine habitat (Pfeifer et al. 2020). This may explain why not all CAZymes of each PUL were detected as expressed proteins. A difference between this study of a marine bacterium and previous studies on gut- and plant-associated bacteria was the presence of GH105 enzymes and the absence of GH27 and GH28 enzymes. Future studies might characterize marine arabinogalactans and enzymatic studies will resolve the individual functions of the induced proteins to provide further information on the microbial utilization.
Figure 2. Comparison of the number of detected proteins in arabinogalactan, arabinose, galactose, and glucose. (A) Venn diagram showing the overlap of detected proteins in at least one of three biological replicates. (B) Principal component analysis shows the differences between the expression intensities of the four proteomes of MAR_2009_72.
Figure 3. Full genome overview of Maribacter sp. MAR_2009_72 showcasing the GC content (ring one, the innermost ring), all annotated coding genes (CDS, rings two and three) in forward and reverse direction, CAZymes identified by dbCAN3 (ring four), SusC/D (ring five), sulfatases (ring six), and PULs (ring seven). Furthermore, we highlighted CAZymes and SusC/Ds that might be important for arabinogalactan utilization.
Figure 4. Gene organization and expression of polysaccharide utilization locus 1 of Maribacter sp. MAR_2009_72 grown in the presence of arabinogalactan, arabinose, galactose, and glucose. Expression intensities in the plot are the mean values of three biological replicates of each condition shown in LFQ values [log10].
Figure 5. Gene organization and expression of polysaccharide utilization locus 7 of Maribacter sp. MAR_2009_72 grown in the presence of arabinogalactan, arabinose, galactose, and glucose. Expression intensities in the plot are the mean values of three biological replicates of each condition shown in LFQ values [log10].
Figure 6. Gene organization and expression of polysaccharide utilization locus 8 of Maribacter sp. MAR_2009_72 grown in the presence of arabinogalactan, arabinose, galactose, and glucose. Expression intensities in the plot are the mean values of three biological replicates of each condition shown in LFQ values [log10].
Site-selective couplings in x-ray-detected magnetic resonance spectra of rare-earth-substituted yttrium iron garnets
Site-selective x-ray detected magnetic resonance (XDMR) spectra were recorded in transverse detection geometry on two iron garnet thin films grown by liquid phase epitaxy (LPE) on oriented gadolinium gallium garnet (GGG) substrates: whereas the stoichiometry of the first film corresponded to pure yttrium iron garnet (1 = YIG) used as reference, yttrium was partly substituted with lanthanum and lutetium in the second film (2 = La–Lu–YIG). Surprisingly, the XDMR spectra of film 2 recorded at either the Fe K-edge or the La L3-edge revealed well-resolved structures that had fairly different relative intensity depending on whether we probed the tetrahedral (S4) sites of iron or the dodecahedral (D2) sites of lanthanum. The narrow XDMR lines measured at the Fe K-edge also contrast with the broad, foldover distorted lineshapes of the ferrimagnetic resonance spectra measured in the same scan. Further XDMR experiments were carried out with a thin, disc-shaped, single crystal of gadolinium iron garnet (3 = GdIG). At temperatures slightly above the gadolinium ordering temperature (T > TB = 69 K), the Gd L2-edge XDMR spectra were dominated by two well-resolved lines of nearly equal intensities. Similarly, the Fe K-edge XDMR spectra recorded under identical conditions did also split into several narrow lines but of strongly unequal intensity. These results suggest that, in the exchange-enhanced paramagnetic regime, spins precessing at the dodecahedral (D2) sites of gadolinium do not couple in the same way with spins precessing at either the tetrahedral (S4) or octahedral (S6) sites of iron. On the other hand, destructive interferences between modes of opposite helicities were also observed in Fe K-edge XDMR spectra recorded far above the compensation temperature (T ≫ Tcp = 290 K). This looks like a typical signature of nonlinear four-magnon scattering processes at a very high pumping power.
Magnetic resonance with element and edge selectivity
In recent years, XDMR emerged as a novel spectroscopy in which x-ray magnetic circular dichroism (XMCD) is used to probe the resonant precession of either spin or orbital magnetization components in a strong pump field typically oscillating at microwave frequencies [1][2][3][4][5][6][7][8][9][10][11][12][13][14][15][16][17]. XDMR can be seen as a double-resonance experiment since two independent conditions need to be simultaneously satisfied: (i) either the frequency of the microwave pump field or the effective magnetic field acting locally on spin and orbital magnetization components has to be tuned to magnetic resonance; (ii) the energy of the monochromatic, circularly polarized (CP) x-ray photons has to be adjusted in order to maximize the amplitude of the XMCD probe signal. This can happen only in the close vicinity of one of the multiple absorption edges of a given absorbing element. In this respect, XDMR spectroscopy is inherently element-selective.
Edge-selectivity stems from the conservation of angular momentum in the photoionization process of deep atomic core levels [18]: the angular momentum carried by a CP x-ray photon (+ħ for a right-handed circular polarization and −ħ for a left-handed circular polarization) is transferred to the excited photoelectron in a way that primarily depends on spin-orbit coupling in the excited core level. For L2,3 absorption edges that are split by the spin-orbit, the Fano effect implies that a part of the photon angular momentum is converted into spin moment of the photoelectron via spin-orbit coupling (+s at the L3 edge; −s at the L2 edge). Obviously, no such conversion is possible at a K-edge (or L1-edge) due to the absence of spin-orbit coupling in the core state and the photon angular momentum is entirely converted into ±ħ orbital moments of the photoelectron. Next, the excited photoelectron will probe the spin and orbital polarization of the final states, which is expected as a consequence of magnetic exchange splitting under the additional constraints of crystal field and spin-orbit interactions. In this respect, XMCD spectra simply reflect the difference in the density of final states that are allowed by the electric dipole (E1) selection rules owing to the symmetry of the initial core state. It immediately appears that an XMCD signal measured at a K-edge can only be assigned to the orbital polarization of the final states. The interpretation of the XMCD spectra recorded at spin-orbit split edges is not as straightforward since one has to disentangle the intricate contributions of spin and orbital polarizations.
By summing up the integrated dichroisms resulting from the excitation of electrons originating from conjugated sub-levels, the residual signal should be assigned to the orbital polarization of the final states; in contrast, a properly weighted difference between the integrated dichroic intensities measured at the L 3 and L 2 edges should reflect the spin imbalance in the excited states, given that the orbital momentum transferred to the photoelectron has strictly the same sign at both edges. This is the physical content of the XMCD sum rules [19][20][21].
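As a back-of-the-envelope illustration of this sum/difference logic, the sketch below evaluates the two standard combinations of integrated edge dichroisms. The prefactors, sign conventions and the hole count n_h are illustrative assumptions (they depend on normalization choices not reproduced here), so only the structure, orbital moment from the sum, effective spin moment from the weighted difference, should be taken literally:

```python
def orbital_moment(dA_L3, dA_L2, A_iso, n_h):
    """Orbital sum rule: m_orb is proportional to the SUM of the
    integrated dichroisms at the two conjugated edges."""
    return -4.0 / 3.0 * (dA_L3 + dA_L2) / A_iso * n_h

def effective_spin_moment(dA_L3, dA_L2, A_iso, n_h):
    """Spin sum rule: m_spin_eff is proportional to the weighted
    DIFFERENCE (L3 minus twice L2); the magnetic dipole term <T_z>
    is absorbed into the 'effective' moment."""
    return -2.0 * (dA_L3 - 2.0 * dA_L2) / A_iso * n_h
```

With exactly opposite integrated dichroisms at the two edges the orbital moment vanishes while a finite effective spin moment survives, which is the disentangling mechanism described above.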
However, in field-scan XDMR experiments, the spectra are recorded at a fixed x-ray photon energy (E RX ) maximizing the sensitivity of the XMCD probe: under such conditions, there is no point in referring anymore to integrated dichroism intensities. Moreover, orbital (L z ) and spin (S z ) magnetic moments are energy-integrated quantities which may well vanish even though a large dichroic signal can be measured at selected photon energies: this is typically the case for XMCD spectra recorded at the L-edges of the diamagnetic Y 3+ cations in yttrium iron garnet (YIG) thin films [22]. In this context, a differential formulation of the XMCD sum rules as proposed by Strange [23] and others [24][25][26] clearly looks more appropriate for XDMR, even though one should admit that it was never established on firm theoretical grounds.
Let us first consider the case of an XDMR experiment carried out at a K-edge. Ignoring first electric quadrupole (E2) transitions, one may write [2,5]

Δσ_K(E) = 3C_p (d/dE)⟨L_z⟩_p(E) = 3C_p ⟨ℓ_z⟩_p(E)    (1)

in which Δσ_K is the difference in absorption cross-sections for left- and right-CP x-rays measured in the vicinity of the K-edge, C_p being a constant factor. Note that such a differential formulation of the sum rule refers to a fixed energy of the photoelectron: E = E RX + E 0 − E F , where E RX , E 0 and E F , respectively, denote the energy of the x-ray photons, the binding energy and the reference Fermi level. Whereas ⟨L_z⟩_p is the expectation value of the orbital angular momentum operator integrated over all states featuring p-type symmetry, ⟨ℓ_z⟩_p defines the orbital polarization of the p-projected densities of states (DOS) at energy E. Taking the spin-orbit splitting of the core levels into account leads to analogous differential sum rules at the L 2,3 edges (equations (2)-(4)), in which N b denotes the statistical branching ratio. At least three operators (⟨ℓ_z⟩, ⟨s_z⟩, ⟨t_z⟩) may then be required to describe the dichroic signal at a fixed energy: those clearly exhibit some analogy with another set of operators (w 101 , w 011 , w 211 ) used by Van der Laan to describe the line shape of XMCD spectra at L-edges [27]. Note that the validity of equations (3) and (4) was only established for cubic systems in which ⟨t_z⟩ and w 211 both vanish [25]. A similar situation should still prevail for iron garnet thin films or single crystals as considered in this paper. Moreover, it will be shown in the next section that the XDMR spectra recorded at the L-edges of yttrium, lanthanum or gadolinium essentially reflect the precession dynamics of the spin-polarized magnetization components ⟨s_z⟩, the relevant contribution of ⟨ℓ_z⟩ being systematically found to be negligible.
Site-selective ferrimagnetic resonance in iron garnets
It is the aim of this paper to check how far the information extracted from XDMR spectra recorded at different absorption edges could provide us with a refined picture of the precession dynamics of local spin and orbital magnetization components. In this respect, ferrimagnetic iron garnets look like an excellent testing ground for such challenging investigations. Recall that yttrium iron garnet (YIG) is the prototype member of a rich family of ferrimagnetic compounds that all have the same cubic crystal structure (space group Ia3d, no. 230) [30] and in which the yttrium ions (Y 3+ ) can be substituted in variable proportions with nearly all the trivalent rare earth (RE) cations, the generic formulation being RE 3 Fe 5 O 12 .

Figure 1. (A) The respective contributions of the tetrahedral (S 4 ) and octahedral (S 6 ) coordination sites of iron to the Fe K-edge XMCD spectrum of YIG: all the displayed spectra were simulated in the energy range of the Fe K-edge pre-peak using the PY-LMTO-LSDA code. Site selectivity arises because the XMCD signal due to electric dipole transitions (E1) is much stronger at the S 4 sites than at S 6 sites. The contributions of electric quadrupole transitions (E2) are anyhow one order of magnitude weaker. (B) Simulated spectra of the spin- and orbitally polarized d-projected DOS ⟨s_z⟩_4d and ⟨ℓ_z⟩_4d at the yttrium (D 2 ) sites. Note that ⟨ℓ_z⟩_4d is very weak when ⟨s_z⟩_4d reaches its extrema; there is no significant contribution of ⟨t_z⟩_4d.
Below the magnetic ordering temperature (T C ≈ 550 K), the two iron sublattices get magnetized antiparallel to each other according to the ferrimagnetic model of Néel, with an unbalanced magnetization (about 5 µ B ) in favor of the tetrahedral sites. It has long been recognized that the two iron sites are indeed coupled by a strong superexchange interaction mediated by the oxygen anions [22,30].
As a useful preamble, we would like to draw attention to ab initio simulations of the contributions of each individual site to the whole dichroic signal of YIG in its ferrimagnetic state. These simulations were carried out with the fully relativistic PY-LMTO-LSDA code [31]. Typically, we have reproduced in figure 1(A) the specific contributions of the tetrahedral (S 4 ) and octahedral (S 6 ) coordination sites of iron to the Fe K-edge XMCD spectrum in the spectral range of the XANES pre-peak. As commonly expected, it is clearly seen that the contributions of the electric quadrupole (E2) transitions are very weak. However, the point of considerable importance for this work is that the contribution of the electric dipole (E1) transitions to the XMCD signal is much stronger for the tetrahedral (S 4 ) than for the octahedral (S 6 ) coordination sites of iron: this is because E1 transitions are allowed from the 1s core level to final states that belong to the same representations (b, e) of group S 4 as the 4p atomic orbitals, whereas E1 transitions are forbidden to final states belonging to the a g or e g representations of group S 6 . In other words, XDMR spectra recorded at the maximum intensity of the Fe K-edge XMCD spectrum should benefit from a strong site-selectivity favoring the S 4 Fe sites: this is indeed a considerable advantage over ferrimagnetic resonance (FMR).
The latter simulations also provided us with an opportunity to check the limits of validity of the differential sum rules for XMCD experiments carried out at the L-edges of yttrium. The results have already been detailed elsewhere [22]. As suspected, the integrated ground state spin moment S z (s,d,f) ≈ 0.03 µ B is very small, whereas both L z (s,d,f) and T z (s,d,f) simply vanish. In contrast, the spectra reproduced in figure 1(B) confirm that quite a substantial and well-structured XMCD signal can perfectly well be measured at the L-edges of yttrium. Actually, the corresponding dichroic signals have already been used to measure a weak XDMR signal at the Y L-edges [5]. It is clearly seen from figure 1(B) that such an XDMR signal should essentially probe the precession dynamics of the spin magnetization component ⟨s_z⟩_4d, which largely exceeds ⟨ℓ_z⟩_4d, especially when the dichroic signal is maximized.
Paper content and organization
Following the present introduction, section 2 is dedicated to a brief review of a variety of experimental and instrumental constraints that apply to the XDMR experiments reported in this paper. In particular, we feel that it is important to introduce the reader to the superheterodyne detection scheme that was developed at the ESRF to record high-quality XDMR spectra in the so-called transverse detection geometry (TRD). We would also like to highlight some efforts made to upgrade the performance of our XDMR spectrometer.
In section 3, we shall compare the Fe K-edge XDMR spectra collected on two iron garnet thin films that were grown by liquid phase epitaxy (LPE) on oriented gadolinium gallium garnet (GGG) substrates: film 1 = Y 3 Fe 5 O 12 (YIG no. 520) was grown on a [111] substrate; film 2 = [Y 1.3 La 0.47 Lu 1.3 ]Fe 4.84 O 12 (Y-La-LuIG) was grown on a [001] substrate. In film 2, we deliberately selected 'diamagnetic' ( 1 S 0 ) RE cations (La 3+ , Lu 3+ ) to substitute for Y 3+ in the dodecahedral (24c) sites (point group: D 2 ). Note that a careful characterization of films 1 and 2 and of their magnetic properties has already been reported elsewhere [22]. In the latter reference, we also reported detailed analyses of the XMCD spectra recorded not only at the Fe K-edge, but also at the yttrium, lanthanum and lutetium L-edges for film 2: these analyses included the evaluation of equations (3) and (4) as well as a useful comparison of the spectra associated with the magnetically polarized ⟨s_z⟩ and ⟨ℓ_z⟩ 4d-DOS of yttrium and 5d-DOS of lanthanum. In particular, it was found that the magnetically polarized 5d-DOS at the lanthanum sites were structured in quite the same way as the polarized 4d-DOS of yttrium in YIG. It was also confirmed that, at the lanthanum sites, 2⟨s_z⟩_5d largely exceeded ⟨ℓ_z⟩_5d [22].
Keeping in mind that XDMR spectra recorded at the Fe K-edge are largely dominated by the precession dynamics of orbital magnetization components at the S 4 sites, we found it very attractive to look for subtle differences in XDMR spectra recorded at the La L-edges which we expect to reflect mostly the precession dynamics of very weak, induced spin magnetization components located at the D 2 sites. Unfortunately, even for film 1, the XDMR signal measured at the Y L-edges was too weak to make such a comparison meaningful.
In section 4, we shall report further XDMR spectra collected on a thin, polished disc of a gadolinium iron garnet single crystal (sample 3 = Gd 3 Fe 5 O 12 = GdIG). Even though YIG and GdIG have identical crystal structures and nearly the same Curie temperatures (T C = 551-556 K), their magnetic properties are fairly different due to the weak coupling of the Gd sublattice with the iron S 4 sublattice: recall that the Gd spins get fully ordered only below a further ordering temperature (T B ≈ 69 K) [32]. Above T B , the Gd magnetization can be described by a temperature-dependent Brillouin function for spin 7/2 in a field proportional to the net magnetization of the strongly coupled ferric ions. The most spectacular consequence is the existence of a compensation point, i.e. a temperature (T cp = 290 K) at which the spontaneous magnetization of GdIG passes through zero [30]. It has long been known that major changes can be observed in FMR spectra at the compensation point, one of these being the inversion of the Larmor precession helicity [33]. This point stimulated our curiosity and encouraged us to check whether the Fe K-edge XDMR spectra would be similarly sensitive to such a change of precession helicity. However, we would like to show that XDMR experiments revealed even more spectacular spectral changes that cannot be seen in FMR, e.g. near the gadolinium ordering temperature.
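The compensation mechanism can be illustrated with a toy Néel-type model: three Gd 3+ (S = 7/2) moments follow a Brillouin function in an exchange field proportional to the net iron magnetization and antiparallel to it. The temperature profile assumed here for the iron moment and the molecular-field constant lam are illustrative guesses, so this sketch only reproduces the existence of a compensation point, not the actual T cp = 290 K:

```python
import math

def brillouin(J, x):
    """Brillouin function B_J(x); B_J(0) = 0 and B_J(x -> inf) -> 1."""
    if abs(x) < 1e-12:
        return 0.0
    a = (2.0 * J + 1.0) / (2.0 * J)
    b = 1.0 / (2.0 * J)
    return a / math.tanh(a * x) - b / math.tanh(b * x)

def net_moment(T, T_C=556.0, lam=24.0):
    """Net magnetization of a toy GdIG model, in Bohr magnetons per
    formula unit. m_fe is the unbalanced iron moment (5 mu_B at 0 K)
    with an assumed (1 - (T/T_C)**1.5) temperature profile; the three
    Gd3+ (S = 7/2, 7 mu_B each) ions are polarized antiparallel to it
    by an exchange field proportional to m_fe (lam is an illustrative
    molecular-field constant with the units folded in)."""
    if T >= T_C:
        return 0.0
    m_fe = 5.0 * (1.0 - (T / T_C) ** 1.5)
    m_gd = 3.0 * 7.0 * brillouin(3.5, lam * m_fe / T)
    return m_gd - m_fe
```

At low temperature the Gd sublattice dominates (up to 21 µ B against 5 µ B for iron); as it collapses faster with temperature, the net moment changes sign at the compensation point, exactly the situation exploited in section 4.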
Experimental and instrumental constraints
The experiments reported in this paper were all carried out at the ESRF beamline ID12, where an XDMR spectrometer has now been permanently installed for several years [3,5]. Recall that beamline ID12 was optimized for x-ray circular dichroism studies over the entire energy range 2-20 keV. Owing to the limited beamtime available for projects running on beamline ID12, XDMR experiments clearly suffer from the serious handicap that long data acquisition times are most often needed due to the weakness of the XMCD probe signal.
Sample-related constraints
It should be immediately noted that the oriented GGG substrates used to grow films 1 and 2 by LPE are far too thick (d = 457 ± 50 µm) and too absorbing to let the XMCD probe signal be measured in a transmission mode. The same situation prevails for the small disc cut from a GdIG single crystal. The XMCD probe signal can nevertheless be detected by measuring the x-ray fluorescence total yield. Several factors, unfortunately, concur to lower the detection sensitivity: (i) the limited solid angle over which the x-ray fluorescence photons can be collected; (ii) the rather poor fluorescence yield, especially at the L edges of yttrium or even lanthanum; (iii) a substantial re-absorption of the fluorescence photons by the sample itself (e.g. the lutetium L α or L β lines are strongly reabsorbed at iron sites). One should also worry about a dramatic loss of sensitivity whenever one has to extract a weak XDMR signal from a strong background contributing only to the statistical noise: this typically happens at the Gd L 2 absorption edge where there is a fairly intense fluorescence background due to the large residual absorptions associated with the Gd L 3 and Fe K edges. There is indeed a price to be paid in terms of data acquisition time.
There is an additional constraint that becomes crucial when working with x-ray photons of high energy: the skin depth of the microwave radiation should (greatly) exceed the penetration depth of the x-ray photons. This condition is fully satisfied by iron garnets, which are known to be excellent insulators with no skin depth restriction.
XDMR geometries
Recall that XDMR spectra can be recorded in two distinct detection geometries:
1. In the transverse geometry (TRD), the wavevector k RX of the incident, CP x-rays is set perpendicular (⊥) to both the external bias field B 0 and the microwave pump field b p : the XMCD probe signal is then proportional to a weak transverse magnetization m ⊥ which oscillates at the microwave pump frequency.
2. In the longitudinal geometry (LOD), k RX is set parallel (∥) to B 0 . In this geometry, what is measured is mainly a time-invariant XMCD signal proportional to the steady-state change Δm z of the projection of the magnetization along the precession axis (z).
For a ferromagnetic thin film with uniaxial anisotropy and perpendicular magnetization, the opening angle of precession θ 0 is quite small. However, θ 0 is a constant of motion which characterizes the precession dynamics and can be determined by normalizing the XDMR cross-sections σ XDMR (k RX ) with respect to the equilibrium XMCD cross-sections measured in the absence of any microwave pumping [2,5]. For XDMR measurements in the longitudinal detection (LOD) geometry, the normalized signal scales as 1 − cos θ 0 ≈ θ 0 ²/2 (equation (5)), whereas in the TRD geometry it scales as sin θ 0 ≈ θ 0 (equation (6)). It is immediately seen that Δm z is only a second-order effect with respect to the opening angle of precession (θ 0 ), whereas m ⊥ is first order. Moreover, any information on the phase and helicity of the precession gets lost in the LOD geometry. One should also keep in mind that the relaxation processes directly affect θ 0 : the shorter the relaxation times, the broader will be the resonance lineshapes and the weaker the intensity of the XDMR signal at resonance [2].
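The normalized LOD and TRD signals are simple projections of a precession cone of half-angle θ 0 . The sketch below makes the first-order/second-order distinction explicit, assuming the standard 1 − cos θ 0 and sin θ 0 forms (consistent with the second-order statement in the text; overall normalization constants are omitted):

```python
import math

def lod_signal(theta0):
    """LOD geometry: steady-state reduction of the longitudinal
    projection, 1 - cos(theta0) ~ theta0**2 / 2 (second order)."""
    return 1.0 - math.cos(theta0)

def trd_signal(theta0):
    """TRD geometry: oscillating transverse projection,
    sin(theta0) ~ theta0 (first order)."""
    return math.sin(theta0)
```

For a small cone of a few degrees the TRD signal exceeds the LOD signal by more than an order of magnitude, which is one reason the TRD geometry is preferred despite its detection difficulties.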
Superheterodyne detection scheme
The TRD geometry unfortunately suffers from the considerable handicap that there is, as yet, no x-ray detector that can measure a small dichroic signal oscillating at microwave frequencies, especially in the x-ray fluorescence excitation mode. At the ESRF, high-quality XDMR spectra could nevertheless be recorded in TRD geometry using a novel heterodyne detection scheme [3,5]. The underlying concept becomes fully transparent if one converts into the frequency domain the time-structures of the incident x-ray beam and of the related fluorescence intensities. Typically, the time-structure of the excited x-ray fluorescence intensity I f (t) consists of a series of discrete bunches, with a periodicity T = 1/RF = 2.839 ns defined by the RF frequency (352.202 MHz) of the ESRF storage ring. Let us admit that all bunches have a Gaussian shape with an average full-width at half-maximum of about 50 ps. On Fourier-transforming I f (t), one then obtains in the frequency domain a Gaussian envelope of harmonics of the RF frequency. One can easily check that the half-width at half-maximum of this Gaussian envelope, F 1/2 ≈ 25 × RF ≈ 8.79 GHz, falls in the microwave X-band. Since the ESRF storage ring inherently provides us with a microwave local oscillator (LO) at a frequency close to the XDMR pumping frequency, it seems most attractive to measure the resulting low-frequency beating signal. A further gain in sensitivity was obtained by exploiting a superheterodyne detection scheme relying on a 180° bi-phase modulation technique (BPSK: bi-phase-shift keying) [5]. Defining the XDMR pumping frequency as F p = N × RF + IF, the superheterodyne detection consists in catching the modulation satellites at frequencies IF ± F bpsk . A block diagram summarizing the entire detection scheme used for XDMR experiments in TRD geometry is reproduced in figure 2.
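The numbers quoted above follow from the standard time-bandwidth relation for Gaussian pulses; the sketch below weights the comb of RF harmonics by the Fourier transform of a single bunch, using the 50 ps FWHM assumed in the text:

```python
import math

RF = 352.202e6        # ESRF storage-ring RF frequency (Hz)
FWHM_T = 50e-12       # assumed bunch length, full width at half maximum (s)

def envelope(f, fwhm_t=FWHM_T):
    """Weight of the harmonic comb at frequency f: Fourier transform of a
    single Gaussian bunch of the given FWHM (normalized to 1 at f = 0)."""
    sigma = fwhm_t / (2.0 * math.sqrt(2.0 * math.log(2.0)))
    return math.exp(-2.0 * (math.pi * sigma * f) ** 2)

# half-width at half-maximum of the envelope: 2*ln(2) / (pi * FWHM_T)
f_half = 2.0 * math.log(2.0) / (math.pi * FWHM_T)
```

With a 50 ps bunch length, f_half comes out near 8.8 GHz, i.e. roughly the 25th harmonic of the ring RF, which is why the bunch structure can serve as a microwave local oscillator in the X-band.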
Whereas the heterodyne or superheterodyne detection schemes clearly belong to the group of time-average measurement methods, a time-resolved approach was developed quite independently by Arena, Bailey et al [7][8][9][10][11][12]. In their XDMR experiments, the pumping frequency was systematically selected as being a low-order harmonic of the RF signal at the Advanced Photon Source at Argonne National Laboratory (USA), so that the XMCD signal could be directly sampled stroboscopically by the x-ray pulses. High-quality XDMR spectra were apparently recorded in this way, but mostly in the soft x-ray range where XMCD signals turn out to be rather large.
Vector detection of XDMR spectra
Whereas any information on the phase of the precessing magnetization component is definitely lost in XDMR experiments carried out in LOD geometry, XDMR experiments performed in TRD geometry let us benefit from the great advantage that one can compare the phases of two resonant lines recorded under strictly identical conditions. This, however, requires a careful vector analysis of the corresponding XDMR signals. At this stage, it is essential to realize that the phase information is preserved in the heterodyne detection, which is basically a translation in the frequency domain. Thus, all that we need to do is to carry out a vector decomposition of the XDMR signal at the beating frequency (IF) on an orthogonal basis that consists of two reference signals oscillating in phase and in quadrature at frequency IF. It is precisely the role of a microwave IQ mixer (labeled 1 in figure 2) to provide us with such ultrastable references. Two RF mixers (labeled 1 and 2 in figure 2) make such a vector decomposition possible. Actually, our superheterodyne detection adds one more translation in the frequency domain and the final phase determination is ultimately carried out at the modulation frequency (F bpsk ) using two separate channels of the multichannel vector signal analyzer (VSA) operated in a standard time-average mode.
Under the conditions of magnetic resonance, the signal intensities of the IQ channels, formally ascribed to the real (Re) and imaginary (Im) parts of a complex signal, can be properly recombined in order to recover the true profiles of the absorptive and dispersive components of a resonant complex susceptibility. Throughout this paper, we shall make use of the usual criterion that, at resonance (zero detuning), the dispersive lineshape should cross the zero axis while the absorptive lineshape should reach its maximum. In particular, we would like to compare the phase shifts associated with XDMR spectra recorded at various sites. On the other hand, let us recall that two magnetic modes featuring opposite precession helicities should exhibit identical absorptive lineshapes but inverted dispersive lineshapes: this is a direct consequence of the even (odd) parity of the absorptive (dispersive) part of the complex resonant susceptibility with respect to the angular precession frequency (ω).

Figure 2. Block diagram of the superheterodyne detection scheme used in TRD geometry. Key components include an ultra-low phase noise microwave generator (Anritsu) and a high-sensitivity multichannel vector spectrum analyzer (Agilent Technologies) operated in the time-average mode. Both the Anritsu generator and the VSA are locked to the same 10 MHz RF master oscillator. What makes the superheterodyne detection possible is the 180°-biphase microwave modulator (BPSK) operated at low modulation frequency (F bpsk ). A combination of two quadrature (IQ) microwave mixers and of two RF mixers allows us to carry out vector analyses of FMR and XDMR spectra.
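The zero-crossing criterion can be sketched on a model Lorentzian susceptibility: apply an arbitrary 'instrumental' phase to the complex signal, then recover it by demanding that the signal be purely absorptive where its modulus peaks. The detuning variable and the 37° offset below are illustrative, not values from the experiment:

```python
import numpy as np

def lorentzian_chi(delta):
    """Complex resonant susceptibility vs detuning (linewidth units):
    Im part = absorptive (even, peaks at delta = 0),
    Re part = dispersive (odd, crosses zero at delta = 0)."""
    return 1.0 / (delta - 1j)

def instrumental_phase(signal):
    """Estimate the unknown instrumental phase offset using the criterion
    of the text: after correction, the dispersive (real) channel must
    cross zero where |signal| peaks, i.e. the signal is purely
    absorptive (imaginary) at resonance."""
    k = int(np.argmax(np.abs(signal)))
    return float(np.angle(signal[k])) - np.pi / 2.0

delta = np.linspace(-5.0, 5.0, 1001)
measured = lorentzian_chi(delta) * np.exp(1j * np.deg2rad(37.0))
corrected = measured * np.exp(-1j * instrumental_phase(measured))
```

After the rotation, the real channel vanishes at resonance while the imaginary channel is maximal, which is exactly how the absorptive and dispersive lineshapes are disentangled in the vector analyses that follow.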
Upgrade of the XDMR spectrometer
We would like to highlight here further efforts made to upgrade the performance of the XDMR spectrometer that was described in previous works [4,5]. As shown in figure 2, we inserted a second quadrature (IQ) microwave mixer on the exit port of the microwave circulator. This rather simple modification now allows us to perform vector analyses of both XDMR and FMR spectra recorded under strictly identical conditions. This is essential if one wishes to access phase shifts between XDMR and FMR spectra. The key to such an advanced option lies in the capability of operating the high-performance vector spectrum analyzer (VSA 89600-S, Agilent Technology Inc.) in a fully synchronous multichannel mode.

Figure 3. A Gordon coupler makes it possible to operate the cavity in either the overcoupling or the critical coupling mode. A thin beryllium (Be) window that is totally transparent to the x-ray fluorescence photons ensures the electrical continuity inside the cavity and prevents the microwave from perturbing the photodiode located outside the cavity. The direction of the external magnetic field B 0 is normal to the Be window and to the photodiode.
There is another critical point which concerns the design of tunable microwave cavities optimized for XDMR experiments in TRD geometry. As illustrated in figure 3, the rectangular cavity operated in the TE 102 mode was split into three parts: in addition to the fixed central part (in which the sample is inserted), two sliding parts that move in opposite directions make a small adjustment of the cavity length possible. It is well documented in old textbooks [34] that only a minor perturbation of the cavity Q factor is to be expected when the length of the moving parts is close to λ g /4, λ g being the standard notation for the wavelength inside the rectangular waveguide sections. A high-precision translation stage (Schneeberger) was used to make the displacements very accurate and highly reproducible. This new cavity also benefits from the design of a semi-automated Gordon coupler which allows us to freely vary the coupling of the cavity over a wider range [35]: in practice, this turned out to be very helpful in recording XDMR spectra of YIG thin films which exhibit extremely narrow resonance lines and for which large overcoupling proved to be preferable. At critical coupling, a loaded Q-factor (Q L ) in excess of 3000 was measured with such a tunable cavity. As suggested by figure 3, the cavity is inserted inside a non-magnetic vacuum chamber that fits perfectly into the magnetic gap of the Bruker BE 15V electromagnet.
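The λ g /4 rule involves the guide wavelength of the rectangular waveguide sections. A minimal sketch, assuming standard WR-90 (X-band) dimensions since the actual waveguide size is not given in the text:

```python
import math

C = 299_792_458.0   # speed of light (m/s)

def guide_wavelength(f_hz, a=22.86e-3):
    """TE10 guide wavelength in a rectangular waveguide of broad-wall
    width `a` (default WR-90, the common X-band size, an assumption here):
    lambda_g = lambda_0 / sqrt(1 - (lambda_0 / lambda_c)**2),
    with cutoff wavelength lambda_c = 2a."""
    lam0 = C / f_hz
    lam_c = 2.0 * a
    if lam0 >= lam_c:
        raise ValueError("frequency is below the TE10 cutoff")
    return lam0 / math.sqrt(1.0 - (lam0 / lam_c) ** 2)

# target length of each sliding end section near lambda_g / 4 at the
# pump frequencies used in this work (~8.45 GHz)
quarter_guide = guide_wavelength(8.452e9) / 4.0
```

At 8.45 GHz in WR-90 the guide wavelength is a few centimetres, so the sliding sections only need millimetre-scale travel around λ g /4, consistent with the use of a high-precision translation stage.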
The sample is glued at the extremity of a lossless sample holder made of a sapphire rod (Ø = 4.5 mm) carefully pre-machined in order to avoid any unwanted tilt of the magnetic film or disc-shaped GdIG platelet. A translation of the sapphire rod along the vertical (Y) axis makes it easy to insert (remove) the sample into (from) the cavity. Since the sapphire rod is itself attached to the cold finger of a constant-flow helium cryostat, the sample can be cooled down to very low temperatures (T ≈ 20 K), at least when the microwave pumping power is kept below 1 mW.
X-ray fluorescence photons are collected over a wide solid angle through a well-polished (metallic) beryllium window (Ø = 31 mm; thickness: 25 µm) that preserves the electrical continuity inside the cavity but also prevents any leak of microwave radiation from perturbing the x-ray photodiode located outside the cavity. Recall that XDMR experiments in TRD geometry require fast photodiodes featuring a large active area (300 mm²): special photodiodes were carefully optimized for such a highly demanding application [36]. The benefit of using a tunable microwave cavity becomes immediately obvious if one keeps in mind that the bandpass of the photodiode and preamplifier assembly is restricted to about 2 MHz. Even though the magnetic bias field (H 0 ) is always directed along the normal to the beryllium window and to the photodiode, its orientation with respect to the sample can be accurately adjusted by a rotation (β Y ) of the sample around the vertical axis as sketched in figure 3. Whereas the sample can be freely rotated in a conventional FMR experiment, XDMR experiments in TRD geometry are most conveniently performed with θ H = −β Y ≈ 45°, θ H denoting the polar angle of the magnetic bias field in the rotating frame (x 1 ; y; z 1 ) of figure 3. Note that the sensitivity of the XDMR experiments in TRD geometry becomes rather poor for either β Y = 90° (in-plane magnetization but grazing incidence of the incident x-ray photons) or β Y = 0° (perpendicular magnetization but restricted solid angle for the detection of x-ray fluorescence photons).
YIG film 1
XDMR spectra of high quality have been recorded in TRD geometry on YIG film 1 [6]. The spectra reproduced in figure 4 were obtained for a microwave pumping power as low as 1 mW. For this experiment, the microwave pumping frequency was F p = 8452 MHz, the beating frequency with the LO at frequency 24 × RF being thus IF = 856.4 kHz. The phase modulation frequency F bpsk = 2.6948 kHz was selected as being a very-low-order sub-harmonic of the RF frequency: F bpsk = RF/(31 × 31 × 17 × 8). The VSA triggering frequency was even lower: F Trigger = F bpsk /16. Recall that the energy of the CP x-ray photons was tuned to the maximum of the Fe K-edge XMCD spectrum (E 1 = 7113.91 eV) and that the film was rotated by β Y ≈ 42° in order to minimize the demagnetizing field anisotropy. Under such conditions, the XDMR peak intensity was found to increase linearly with the square root of the pumping power up to about 10 mW [5,6].
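The frequency plan can be checked arithmetically. The sketch below verifies that the quoted F bpsk values (for this run and for the film 2 run of section 3.2) are indeed the stated sub-harmonics of RF, and that the beat of the rounded pump frequency with the 24th RF harmonic falls in the few-hundred-kHz range handled by the IF chain (the small mismatch with the quoted 856.4 kHz is consistent with F p being rounded to the MHz):

```python
RF = 352.202e6   # ESRF ring RF frequency (Hz)

# BPSK modulation frequencies chosen as exact sub-harmonics of RF:
F_bpsk_1 = RF / (31 * 31 * 17 * 8)   # film 1 run, quoted as 2.6948 kHz
F_bpsk_2 = RF / (31 * 27 * 16 * 7)   # film 2 run, quoted as 3.75707 kHz

# VSA trigger for the film 1 run, a further sub-harmonic
F_trigger_1 = F_bpsk_1 / 16

# beat of the quoted (rounded) pump frequency with the 24th RF harmonic
beat = abs(8452.0e6 - 24 * RF)
```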
In figure 4(A), we compare first the FMR and XDMR power spectral density (PSD) spectra that were recorded simultaneously. It is quite obvious that the intensity of the sharp BMSW modes is much stronger in the FMR PSD spectrum than in XDMR. It also appears that the two PSD spectra do not peak at the same resonance field and have different lineshapes. No definitive explanation can yet be given for such differences. One might argue, for instance, that it cannot yet be taken for granted that there is no small tilt angle between the true precession axes of the spin and orbital magnetization components at the tetrahedral (24d) iron sites. More generally, as pointed out by several authors [37,38], one may also question how far it is legitimate to systematically reduce the 20 sublattices of YIG to only two rigidly coupled iron sublattices and to neglect the thermal excitation of magnetoelastic modes that may not affect sites of different symmetries in the same way.
Since the experiment was performed in TRD geometry, a complex vector analysis allowed us to recover some phase information. This is illustrated in figure 4(B), in which the absorptive (χ″) and dispersive (χ′) XDMR components of film 1 can be identified with the real and imaginary parts of the vector detection scheme. A small (instrumentation-dependent) phase shift (≈ 6°) was added in order to let χ′ pass through zero when χ″ reaches its maximum, as does |XDMR|.
Arrows in figure 4 point to very weak satellite resonances that could possibly be assigned to magnetostatic spin waves. As reported elsewhere [5], those signatures grow rapidly with the microwave pumping power. This supports our view that, locally, the orbital magnetization components couple to nonuniform magnetostatic spin waves through dipole-dipole interactions. However, we have also pointed out the reasons that made us expect the relative amplitude of the forward/backward MSW satellites to be much weaker in XDMR than in conventional FMR spectra. Recall that there is no chance to excite and detect standing-wave resonances associated with magnetostatic modes unless there is a net transverse magnetization component interacting with both the microwave pump field and the CP x-rays: in the YIG thin film 1, this can be envisaged only for standing waves of rather low order featuring an odd number of semiperiods [39].
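The odd-semiperiod selection rule follows from the spatial average of the standing-wave profile across the film: a uniform pump field only couples to the net transverse moment, which vanishes for an even number of semiperiods. A minimal numerical check, with the film thickness normalized to 1:

```python
import numpy as np

def net_transverse_moment(n, npts=20001):
    """Spatial average of a standing spin-wave profile sin(n*pi*x/L)
    across the film thickness: analytically (1 - cos(n*pi)) / (n*pi),
    i.e. 2/(n*pi) for an odd number n of semiperiods and 0 for even n."""
    x = np.linspace(0.0, 1.0, npts)
    return float(np.mean(np.sin(n * np.pi * x)))
```

Only odd, low-order modes retain a sizeable net moment, and its amplitude falls off as 1/n, which is why just a few weak satellites are expected in the XDMR spectra.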
Y-La-LuIG film 2
A priori, one would guess that the XDMR spectra recorded at the Fe K-edge should look fairly similar for films 1 (YIG) and 2 (Y-La-LuIG). In reality, we had some reasons to be suspicious because previous XDMR spectra recorded on film 2 in LOD geometry had already led to puzzling results [5,6] when we tried to evaluate the precession cone angle (θ 0 ) using equation (5): (i) from XDMR measurements carried out at the Fe K-edge with the bias field normal to the film, we found that the apparent precession cone angles of the orbital magnetization components were much larger for film 2 (θ 0 [Fe] ≈ 13-19°) than for film 1 (θ 0 [Fe] ≈ 7.2°); (ii) for film 2, the opening cone angle of the orbital components precessing at the iron S 4 sites would also largely exceed the opening cone angle of the spin components precessing at the lanthanum D 2 sites (θ 0 [La] ≈ 4.7°).
It should be kept in mind, however, that XDMR experiments carried out in LOD geometry require a high pumping power (630 mW). It was therefore our interpretation that many of these unexpected results could be caused by nonlinear processes. For example, Suhl's second-order instability process is typically associated with a four-magnon scattering process in which two uniform magnons are annihilated whereas two non-uniform magnons are created [40,41]. In this respect, XDMR experiments carried out in TRD geometry benefit from a higher sensitivity so that, in principle, one may hope to lower the pumping power. Moreover, we already emphasized elsewhere that rotating the film by 45° could help considerably in minimizing the foldover lineshape distortions [5]. Unfortunately, due to a much stronger reabsorption of the x-ray fluorescence photons in film 2, we could not preserve the same signal-to-noise ratio as for film 1 without increasing the pumping power by nearly one order of magnitude. As a preamble to any discussion of the XDMR spectra collected in TRD geometry on the Y-La-LuIG film 2, we would like to insist that the operating conditions were pretty much the same as for the previous experiments on film 1: F p = 8453.7 MHz; IF = 843.6 kHz; F bpsk = RF/(31 × 27 × 16 × 7) = 3.75707 kHz; F Trigger = F bpsk /12.
3.2.1. Fe K-edge XDMR spectra.
We have regrouped in figure 5(A) a whole series of XDMR spectra of film 2 recorded under pumping powers ranging from 99 mW up to 378 mW. More precisely, we compare in figure 5(A) the XDMR PSDs with the PSD of the microwave absorption measured simultaneously during the same scan. It is immediately seen that the peak intensity of the XDMR spectra never coincides with the maximum absorption of the foldover distorted FMR line. We would like to draw attention, however, to the quite remarkable splitting of the XDMR lines at low pumping power. This is particularly obvious in figure 5(A)(a), in which the XDMR spectrum (pumping power: 99 mW) exhibits a very sharp signal (labeled 1) peaking at 2891.6 Oe together with a series of well-resolved low-field satellites (marked with arrows) that we tentatively assigned to magnetostatic modes. Note that the linewidth of the latter sharp XDMR signal does not exceed 6 Oe whereas the foldover distorted FMR line is considerably broader. Quite remarkable is also the existence of another sharp signal (labeled 2) peaking at 2915.3 Oe, i.e. very close to the foldover jump of the FMR line. Note that peaks 1 and 2 get slightly shifted toward higher fields at increased pumping power. The corresponding vector analyses are illustrated in figure 5(B)(a); note that the dispersion spectra associated with Im[XDMR] are rather weak. The relative phase shifts obtained for peak 1 do not seem to vary significantly with the pumping power as long as the two lines are well resolved. Given that peaks 1 and 2 are no longer resolved in figure 5(A)(c), it is not really surprising that vector analyses of the experiment performed under a pumping power of 378 mW yield a meaningless phase shift (+53°). On the other hand, it is worth underlining here that there should be a phase difference approaching 180° between peaks 1 and 2.
3.2.2. La L 3 -edge XDMR spectra.
XDMR experiments were also performed on film 2 at the La L 3 -edge, but only for pumping powers of 246 and 378 mW: unfortunately, not enough beamtime was left available to carry out a similar experiment at low pumping power (99 mW), i.e. under conditions where the resonance lines could have been better resolved. We have regrouped in figure 6(A) the PSD spectra recorded under pumping powers of 246 mW (a) and 378 mW (b), respectively. If one compares figures 5(A) and 6(A), it immediately appears that the XDMR spectra recorded at the La L 3 -edge look different from the spectra recorded under identical conditions at the Fe K-edge. In order to make such differences more directly perceptible, arrows were added in figure 6(A) which refer to the resonance fields of peaks 1 and 2 in the Fe K-edge XDMR PSD spectra. Whereas peak 1 had been found to largely dominate the Fe K-edge XDMR spectra at low pumping power, it contributes at best to a weak structure in figure 6(A)(a). The signal labeled 2 now looks much more intense, whereas additional signatures tentatively labeled 3 or 4 grow at higher fields, i.e. very close to the foldover jump of the microwave absorption PSD spectrum. Vector analyses of the latter XDMR experiments are displayed in figure 6(B): whereas this high-field contribution is still weak in figure 6(B)(a), it becomes much stronger and highly structured at high pumping power. As discussed in much more detail in the next section, this could be a valuable indication that nonlinear four-magnon scattering processes start perturbing the resonant precession.
At this stage, we would like to summarize below a few points that definitely call for more discussion in section 5: (i) the differences between the FMR and XDMR PSD spectra are considerably more marked for film 2 (Y-La-LuIG) than for film 1 (YIG); (ii) in addition to the uniform mode which contributes to a surprisingly narrow resonance line (peak 1) in the Fe XDMR spectra, there is another well-resolved signal peaking at higher field (peak 2) and which we did not observe in YIG; (iii) this additional mode, which dominates over the uniform mode at the La L 3 -edge, is clearly in antiphase with the latter. Recall that XDMR experiments performed at the La L-edges essentially probe the precession of (induced) spin components which may well couple by exchange to the spins located at the tetrahedral (S 4 ) or octahedral (S 6 ) sites of iron. Indeed, one should expect the latter coupling to be much stronger in the case of GdIG, which is considered in the next section.
XDMR spectra near the Gd ordering temperature T B
Indeed, one may anticipate that the replacement of all diamagnetic Y 3+ cations with paramagnetic Gd 3+ ( 8 S 7/2 ) cations in the dodecahedral (24c) sites should cause a much more severe perturbation than that which resulted from a partial substitution with La 3+ or Lu 3+ in film 2. In particular, we found it attractive to check whether a similar line splitting could be observed (or even enhanced) in site-selective XDMR spectra of GdIG. We would like to show in this section that this seems to be the case, but only in a restricted temperature range. On the other hand, it should be kept in mind that our previous attempts to record XDMR spectra at the Gd L 2 -edge either failed or resulted in very noisy spectra [4]. What makes XDMR experiments at the Gd L 2 -edge rather challenging is the poor signal-to-noise ratio, which results from the contamination of the XMCD signal by a large and noisy background at photon energies slightly exceeding the Fe K-edge and Gd L 3 -edge. We tried to compensate for such a loss of sensitivity by increasing the microwave pumping power up to 650 mW. Unfortunately, under such a high pumping power, we failed to cool the sample below 100 K and there is some doubt left regarding the true temperature of the sample when the temperature monitoring of the cold finger was set to 100 K. This resulted in a severe limitation owing to the fact that the gadolinium ordering temperature in GdIG was reported to be in the following range [32]: 69 K ≤ T B ≤ 100 K.
The XDMR PSD spectrum recorded at the Fe K-edge (figure 7(B)(b)) similarly splits into several resolved resonance lines which, however, peak at slightly higher fields than in the Gd XDMR PSD spectrum: H R (1) = 2848 Oe, H R (2) = 3148 Oe and H R (1s) = 2945 Oe. There could be an additional shoulder at H R (2s) = 3283 Oe. Let us emphasize that the resonance lines labeled 1 and 1s are considerably weaker than the resonance line labeled 2, which exhibits a narrow linewidth (ΔH = 80 Oe) and largely dominates the XDMR spectra recorded at the Fe K-edge. On the other hand, it appears from figure 7(D) that vector analyses of the FMR and Fe K-edge XDMR spectra require significantly different phase shifts. Moreover, it looks like the resonances labeled 1 and 2 now contribute to signatures with opposite signs in the dispersive spectral component Im [XDMR]: this might indicate that the relevant magnetization vectors could precess with opposite angular velocities.
The narrow linewidths of the XDMR signatures support our guess that the experiments were performed only slightly above the Gd ordering temperature (T B ): following Belov [32], the gadolinium sublattice might thus be in a so-called exchange-enhanced paramagnetic regime in which the resonance lines could undergo some exchange-narrowing effect. What makes this regime peculiar is that exchange should be dominated by intersublattice interactions which are strongly site-selective: molecular field predictions [42] as well as NMR experiments [43,44] on GdIG lead us to expect the exchange integral between the tetrahedral d-sites of iron (S 4 ) and the dodecahedral c-sites of gadolinium (D 2 ) to be considerably smaller (J dc ≈ −4 K) than the exchange integral between the octahedral a-sites (S 6 ) and the tetrahedral d-sites (S 4 ) of iron (J ad ≈ −36 K); however, J dc itself would be one order of magnitude larger than the exchange integral between the octahedral (S 6 ) a-sites of iron and the c-sites of gadolinium (J ac ≈ −0.3 K). Recall that the exchange integral between the gadolinium sites (J cc ≈ −0.13 K) should be even smaller, as inferred from Mössbauer spectroscopy [45]. This appears a propitious situation for the excitation of non-uniform precession modes subject to unequal anisotropy fields: it is our interpretation that this is the primary cause of the line splitting of the Gd L 2 -edge XDMR spectra. A priori, it seems natural to envisage that the line labeled 1, which is slightly more intense and exhibits the narrowest linewidth, could be assigned to gadolinium sites that are more strongly exchange coupled to the iron d-sites.
A very preliminary test experiment tentatively carried out at lower temperature under lower pumping power (35 mW) would suggest that the XDMR line would shift down toward even lower resonance fields, just like the microwave absorption PSD spectrum. Whereas the XDMR experiments performed at the Gd L-edge probe essentially the precession of spin magnetization components that directly experience exchange interactions, one would expect the XDMR spectra recorded at the Fe K-edge to look different for two reasons: (i) the precessing magnetization components being of orbital nature are not intrinsically affected by exchange interactions; (ii) the iron d-sites (S 4 ) should contribute to much larger XDMR signatures than the a-sites (S 6 ) as a consequence of the site selectivity illustrated in figure 1(A). Regarding exchange, the reality is, however, more subtle since most of the XDMR signal arises from spin-orbit interactions which implicitly are affected by exchange through the relevant spin component. It is therefore tempting to assign the strong peak labeled 2 in figure 7(B)(b) mostly to orbital components precessing at the tetrahedral d-sites of iron and that would be indirectly coupled by exchange and spin-orbit interactions with the gadolinium c-sites. The assignment of the much weaker signatures labeled 1 and 1s in figure 7(B)(b) is more ambiguous: our guess is that those lines might possibly be assigned to iron sites that would be weakly coupled (or even uncoupled) to the gadolinium sites: indeed, this includes iron ions in octahedral a-sites that should be poorly coupled by exchange interactions to the gadolinium sites, and which would contribute to only very weak XDMR signatures through electric quadrupole (E2) allowed transitions. 
Let us stress that, regarding XDMR experiments carried out at the Fe K-edge, there is actually no chance to pick up a signal from the octahedral (S 6 ) coordination sites of iron unless the magnetic resonances of the S 4 and S 6 sites are fully resolved: this is totally hopeless in the case of YIG films, but this could become possible near or below the ordering temperature T B of GdIG if the exchange interactions between the gadolinium c-sites and iron d-sites turn out to be strong enough to cause the excitation of a non-uniform mode subject to a strongly perturbed anisotropy field. In such a case, the broad band encompassing the weak signatures labeled 1 and 1s in figure 7(B)(b) might well be assigned to contributions of the octahedral (S 6 ) coordination sites of iron.
If the latter picture holds true, we would face a totally unanticipated situation where the resonance line labeled 1 in the Gd L 2 -edge XDMR spectrum and the resonance line labeled 2 in the Fe K-edge XDMR spectrum would correspond to cross-coupled sites. This could be the case if the spins located at the gadolinium D 2 sites and the orbital magnetization components located at the iron S 4 sites were to precess around slightly tilted axes. This may happen if the anisotropy fields are significantly different at the gadolinium and iron sites. On the other hand, our interpretation is supported by the rather similar lineshapes of the whole bands [1 + 1s] in figure 7(A)(b) and [2 + 2s] in figure 7(B)(b). In contrast, one would certainly expect the somewhat broader resonance labeled 2 in figure 7(A)(b) to be much more intense than the weak signatures labeled [1 + 1s] in figure 7(B)(b) because the spectroscopic selection rules at the Gd L-edges do not cause any difference between gadolinium c-sites coupled with iron d- and a-sites.
Saturation of XDMR spectra above T cp
We have reproduced in figure 8(A) standard PSD analyses of the Fe K-edge XDMR spectra measured in TRD geometry with sample 3. Vector analyses of the same data are displayed in figure 8(B). It was our initial goal to compare XDMR spectra recorded below the compensation point, e.g. at T ≈ 150 K < T cp = 290 K (figures 8(A)(a) and (B)(a)), and above the compensation point, e.g. at T = 450 K > T cp (figures 8(A)(b) and (B)(b)). Indeed, the microwave pumping conditions were kept strictly identical for both experiments. Obviously, the two sets of XDMR spectra are fairly different: everything looks as if we had either some kind of saturation or a splitting of the XDMR PSD spectrum at high temperature (450 K). With FMR linewidths in excess of 200 Oe, the pumping power could be increased up to 500 mW (or even higher) without any detectable foldover lineshape distortion. Note that these experiments were performed with a rather high BPSK modulation frequency: F p = 8452.3 MHz; IF = 556.4 kHz; F bpsk = RF/(34 × 32 × 3 × 2) = 53.9525 kHz; F Trigger = F bpsk /(9 × 31) = 193.378 Hz.
Since our primary motivation was to look for a possible change of the precession helicity at high temperature, we paid much attention to properly recovering the phase information. The phase shifts derived from the vector analyses were determined according to strictly identical criteria: we assumed that the imaginary (dispersive) part of the XDMR spectrum had to pass through zero at resonance, whereas the real (absorptive) part should remain positive over most of the resonance spectral range. This required us to vary the phase shift from +15° to −177° in the vector analyses displayed in figure 8(B). Recall that two modes with inverted precession helicities should differ by the sign of the time-reversal odd dispersive part (Im XDMR), the sign of the time-reversal even part (Re XDMR) remaining unchanged. A puzzling problem with figure 8(B) is that a variation of the phase shift by close to 180° implies that both Im XDMR and Re XDMR changed sign. Moreover, at T = 450 K, the contribution of the absorptive part (Re XDMR) becomes anomalously weak, with the typical consequence that the modulus (|XDMR|) and XDMR PSD spectra become largely dominated by the contribution of the dispersive part (Im XDMR). Interestingly, the microwave absorption PSD spectrum measured at T = 450 K is clearly much weaker than the corresponding PSD spectrum measured at low temperature (T ≈ 150 K).
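The phase-determination criteria described above can be sketched numerically. The following toy model is illustrative only: the field grid, linewidth and instrumental phase are hypothetical values, not the measured ones. A complex resonance line carrying an unknown instrumental phase is rotated until the dispersive part vanishes at resonance while the absorptive part stays positive there.

```python
import numpy as np

# Toy spectrum: absorptive part in Re, dispersive part in Im (Lorentzian),
# multiplied by an unknown instrumental phase phi0 (all values hypothetical).
H = np.linspace(2500.0, 3500.0, 2001)      # scanned bias field (Oe)
H_R, alpha = 3000.0, 100.0                 # resonance field, half-linewidth (Oe)
chi = alpha / (alpha + 1j * (H - H_R))     # ideal complex line
phi0 = np.deg2rad(-144.0)                  # unknown instrumental phase shift
measured = chi * np.exp(1j * phi0)

idx_res = np.argmin(np.abs(H - H_R))       # index of the resonance field
phis = np.deg2rad(np.arange(-180.0, 180.0, 0.5))
best = None
for phi in phis:
    rotated = measured * np.exp(-1j * phi)
    # criterion 1: dispersive (Im) part passes through zero at resonance
    # criterion 2 (simplified): absorptive (Re) part positive at resonance
    if abs(rotated.imag[idx_res]) < 1e-3 and rotated.real[idx_res] > 0:
        best = phi
        break

print(np.rad2deg(best))                    # recovers the assumed -144 degrees
```

The second criterion is what removes the 180° ambiguity: the opposite rotation also zeroes the dispersive part at resonance but flips the sign of the absorptive part.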
Such unexpected results called for further investigations which, unfortunately, could only be performed one year later, albeit with the same sample in the same geometry. We decided to concentrate our efforts on high-temperature measurements (T > T cp ) in order to make sure that the previous experiments were fully reproducible and to clarify the possible role of various parameters such as the microwave pumping power or the sample temperature. We deliberately kept strictly the same experimental conditions affecting the detection sensitivity. In figure 9(A), we compare the whole series of Fe K-edge XDMR PSD spectra recorded: (i) at T = 450 K under pumping powers of 67.5 mW (a), 244 mW (b) and 929 mW (c); (ii) at T = 510 K under a pumping power of 240 mW (d). The XDMR PSD spectrum recorded at low microwave pumping power (67.5 mW) is the only one which does not show any anomaly: it consists of a rather sharp line with a typical linewidth of 86 Oe. Everything looks as if the XDMR spectra measured under pumping powers exceeding 150 mW would undergo a peculiar saturation effect. Of course, we carefully checked that we could rule out any instrumental problem. There are clearly some differences between the XDMR PSD spectra displayed in figures 8(A)(b) and 9(A)(b), which refer to similar experimental conditions, but the key point is that both XDMR PSD spectra unambiguously exhibit quite a reproducible spectral distortion. Note that the corresponding spectral anomaly becomes much more spectacular at high pumping power (929 mW). A weaker effect still persists at high temperature (T = 510 K): it is our guess that a saturation effect as intense as the effect measured at 450 K would probably be observed at higher pumping power, simply because the resonance inherently becomes weaker and much broader on approaching the Curie temperature.
Vector analyses are again displayed in figure 9(B) for the whole series of XDMR spectra. Note that the phase shift determined for the XDMR experiment carried out at low pumping power (≈ −72°) is only one half of the phase shifts (≈ −144°) found for all experiments performed under high pumping power, which exhibit characteristic spectral anomalies. In this respect, the vector analysis reproduced in figure 9(B)(c) looks most typical since it confirms our previous observation that the contribution of the absorptive component, i.e. Re [XDMR], tends to vanish. There is still another detail which we initially neglected but which calls for more attention in the future: this concerns the inverted signs of all dispersive components (Im [XDMR]) if one takes figure 8(B)(b) as reference. Whereas we kept the pumping frequency (F p ), the beating frequency (IF), the modulation frequency (F bpsk ) and the trigger frequency (F Trigger ) all strictly identical for both sets of XDMR experiments, it is only much later that we realized that the small tuning mismatch (ΔF = F cav − F p ) between the cavity resonance frequency (F cav ) and the pumping frequency has opposite signs for the two sets of experiments: ΔF(2) = −0.3 MHz against ΔF(1) = +0.5 MHz. Clearly, comparing the phase information of different spectra is a risky exercise whenever the XDMR experiments are not conducted with strictly the same cavity tuning. For samples that are strongly dispersive, such as the GdIG single crystal, one should also worry about possible temperature drifts, especially over long data acquisition times. Such a small temperature drift might well explain the slow increase of the phase shift quoted in figures 9(B)(b)-(d). The key objective of this section is, indeed, to elucidate the origin of the spectral anomaly observed in the XDMR spectra recorded under high pumping power at T = 450 K or higher.
It is our interpretation that it is the typical signature of a destructive interference between two resonant modes in which the magnetization vectors would precess out of phase and with opposite angular velocities. For simplicity, let us assume that the two modes can be represented with complex conjugated Lorentzian susceptibilities χ (±) (H 0 ) = χ 0 α/[α ± i(H 0 − H R )], in which H 0 is the scanned bias field, H R denotes the resonance field and 2α defines the resonance linewidth. If we further assume that there is a 180° phase shift between the two precessing magnetization vectors, what should be probed by XMCD is then the complex vector difference Δm = m(+) − m(−) ∝ χ (+) − χ (−). Obviously, this describes a fully coherent interference process in which the real part of Δm should cancel out, whereas its imaginary part should be twice as large as the individual contribution of m. Note that this is precisely what should happen if one were to try to probe with XDMR two non-uniform modes of opposite helicities respectively associated with the wavevectors k and −k. This immediately reminds us that, at high pumping power, Suhl's famous second-order instability process is expected to annihilate two uniform magnons (k = 0) while creating a pair of non-uniform magnons with opposite wavevectors (k, −k) [40,41]. Energy and momentum conservation implies that, in the latter nonlinear process, the non-uniform modes should have the same precession frequency as the uniform mode and opposite helicity [5,6]. It is therefore our interpretation that the spectral anomaly observed in figures 8 and 9 is basically the typical signature of the four-magnon nonlinear interaction process that is usually regarded as the main source of saturation of the FMR spectra [41]. Of course, it would be totally unrealistic to expect complete annihilation of the uniform mode (m 0 ), and what should be probed by XDMR is a signal resulting from the complex vector addition (1 − β)m 0 + β Δm.
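The destructive-interference picture can be made concrete with a short numerical sketch. This is illustrative only: the Lorentzian convention (absorptive part in Re, dispersive part in Im) and all parameter values below are our assumptions, not the authors' fitted model.

```python
import numpy as np

# Toy model of two counter-precessing modes represented by a
# complex-conjugate pair of Lorentzian susceptibilities.
H = np.linspace(2500.0, 3500.0, 2001)   # scanned bias field H_0 (Oe)
H_R = 3000.0                            # resonance field (Oe)
alpha = 50.0                            # half-linewidth; linewidth = 2*alpha

chi_plus = alpha / (alpha + 1j * (H - H_R))   # one helicity
chi_minus = np.conj(chi_plus)                 # opposite helicity

# Fully coherent, 180-degree out-of-phase superposition: the real
# (absorptive) parts cancel while the imaginary (dispersive) parts double.
delta_m = chi_plus - chi_minus

# Incomplete annihilation of the uniform mode leaves a residual absorptive
# contribution weighted by (1 - beta); beta is an assumed scattered fraction.
beta = 0.8
signal = (1.0 - beta) * chi_plus + beta * delta_m
```

Running this confirms the argument in the text: `delta_m` is purely dispersive (its real part vanishes identically, its imaginary part is doubled), and the mixed `signal` is dominated by its dispersive component, mimicking the anomalously weak Re [XDMR] observed at high pumping power.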
As illustrated by figure 10(A), this causes some imbalance between the magnetization vectors precessing with opposite angular velocities and leaves a residual contribution of Re XDMR. On the other hand, one should expect the non-uniform modes to have a much longer lifetime than the uniform mode: this point, which was recognized first by Schlömann [46], is relevant here since it contributes to lowering the weight of the uniform mode and thus, in fine, it should further reduce the residual contribution of Re XDMR.
These considerations provide us with a reasonable basis to explain the typical distortions which we noticed for the XDMR PSD (or modulus) spectra recorded under high pumping power. However, we have to admit that the rather crude model used to simulate the XDMR spectra shown in figure 10(A) fails to reproduce further structures that are nicely resolved in the vector analyses of the experimental XDMR spectra, e.g. in the Re XDMR spectra. As illustrated in figure 10(B), the simulated XDMR spectra would better match the experimental reality if one further assumes that every resonance splits into two lines. The spectra reproduced in figure 10(B) were obtained under the assumption that the signal probed by XDMR results from the extended vector sum (1 − β)(m 0 (1) + m 0 (2)) + β(Δm (1) + Δm (2)). Of course, it remains to be clarified what sort of interaction might cause such a splitting of the uniform mode. Dipole-dipole interactions that are at the origin of magnetostatic modes look like the best candidate even though interlattice exchange could still be envisaged.
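The extended vector sum can be sketched numerically as well. Again, this is an illustrative toy model: the Lorentzian form, the symmetric splitting δ and the weight β are hypothetical values chosen for clarity, not fitted to figure 10(B).

```python
import numpy as np

def lorentzian(H, H_R, alpha):
    # Absorptive part in Re, dispersive part in Im (illustrative convention)
    return alpha / (alpha + 1j * (H - H_R))

# Hypothetical parameters, not fitted to the experimental spectra
H = np.linspace(2500.0, 3500.0, 2001)
H_R, alpha = 3000.0, 50.0
delta = 150.0   # assumed splitting of each resonance (Oe)
beta = 0.8      # assumed weight of the counter-precessing magnon pairs

# Each resonance split into two equal lines at H_R -/+ delta/2
m0 = 0.5 * (lorentzian(H, H_R - delta / 2, alpha)
            + lorentzian(H, H_R + delta / 2, alpha))
delta_m = m0 - np.conj(m0)              # conjugate-pair difference (purely imaginary)
signal = (1.0 - beta) * m0 + beta * delta_m

# The residual absorptive part now shows a two-peak structure with a dip
# between the split lines, i.e. extra structure in Re [XDMR] that the
# unsplit model cannot reproduce.
```

With a splitting larger than the linewidth, the residual Re component develops two resolved maxima around a central dip, which is the kind of additional structure the unsplit model misses.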
Implications and conclusion
In this paper, we report on XDMR experiments carried out in the TRD mode with three different samples corresponding to increasing levels of complexity. In the case of the YIG film 1, sharp lines could be rather easily measured at the Fe K-edge, even under a pumping power as low as 1 mW. Once again, our experiments confirm that the satellite resonances that are commonly assigned to magnetostatic modes in the FMR spectra are only very weak in the XDMR spectra. Significant differences were noticed between XDMR and FMR spectra measured simultaneously at low pumping power: typically, the peak resonance fields (H R ) and lineshapes were slightly different. At this stage, we have no firm interpretation for this experimental observation, and obviously, it would be premature to relate it to a small tilt angle that could possibly exist between the precession axes of spin and orbital magnetization components at the iron sites in YIG. On the other hand, let us insist that one should not take for granted that XDMR spectra recorded at the Y L-edges would peak at strictly the same resonance field (H R ) as the XDMR spectra recorded at the Fe K-edge. Unfortunately, there is no hope as yet of recording XDMR spectra at Y L-edges under such a low pumping power. High-quality XDMR spectra were also recorded at the Fe K-edge on the Y-La-LuIG film 2 but only under a much higher pumping power (99 mW). In these experiments, the differences between XDMR and FMR PSD spectra became much more spectacular. As opposed to the broad, foldover distorted lineshape of the microwave absorption PSD spectrum, the XDMR PSD spectrum was found to exhibit a very sharp and narrow line (ΔH ≈ 6 Oe) peaking at the typical resonance field of the uniform mode in FMR spectra recorded under very low pumping field. Satellite resonances of low-order magnetostatic modes turned out to be even better resolved than in FMR spectra.
Most intriguing, however, was the presence of a well-resolved second resonance line shifted by +14 Oe and with its phase inverted by about 180°. XDMR spectra were also successfully recorded at the La L 3 -edge: in addition to the split resonance lines which already contributed to the Fe K-edge XDMR spectrum, we noticed the presence of new features peaking very close to the foldover jump of the FMR spectrum and which strongly depend on the pumping power. In contrast with what was observed at the Fe K-edge, the relative intensity of the high field signatures now largely exceeded the contribution of the pseudo uniform mode resonance. These results, which could hardly have been anticipated, raise at least two questions: (i) how is it possible to detect narrow lines in XDMR spectra while the microwave absorption spectrum measured simultaneously is so broad and foldover distorted? (ii) what is the origin of the additional signatures observed at high field?
In film 2, strong perturbations stem from the large size of the La 3+ cations and from the small size of the Lu 3+ cations which allow lutetium to partly replace iron in the octahedral sites [22,30]. The non-uniform distribution of the RE cations induces fluctuations in the lattice parameter, i.e. dynamical strains and stresses resulting in the excitation of magnetoelastic waves [47]. This is supported by the much larger growth anisotropy of film 2 as compared to film 1. Such a large growth anisotropy incidentally enhances the probability of foldover distortions of the FMR lineshape at high pumping power. Lattice fluctuations also act as defects that strongly favor unwanted two-magnon scattering processes in which a uniform magnon is annihilated while a non-uniform magnon is created [41]. Two-magnon scattering processes have long been recognized to be a very efficient spin-spin relaxation mechanism affecting T 2 in FMR [41,48], but not T 1 . The Fe K-edge XDMR spectra carried out on film 2 suggest that two-magnon scattering processes only weakly affect the relaxation of orbital magnetization components, which would remain primarily controlled by slow orbit-lattice relaxation mechanisms. A fairly different situation is expected for spin magnetization components, which play a key role in the XDMR experiments carried out at the La L 3 -edge.
Since two-magnon processes are most often regarded as elastic scattering processes conserving energy, they cannot explain the excitation of additional, well-resolved resonance modes peaking at higher field. By analogy with the case of GdIG, we suspect that the XDMR spectra recorded at the La L 3 -edge could be dominated by the precession of spin components antiferromagnetically coupled to the spins located at the tetrahedral (S 4 ) sites of iron. If this is true, then the spin components located at the lanthanum sites should thus precess in antiphase with both the spin and orbital magnetization components located at the iron d-sites. This seems to be supported by our XDMR spectra. It cannot be taken for granted, however, that the resonance fields should be strictly identical since the magnetic anisotropy could differ for both sublattices. Furthermore, exchange interactions may weakly couple in a slightly different way the lanthanum c-sites with either the d- or a-sites of iron: this may well cause the La L 3 -edge XDMR spectrum to split into two lines of unequal intensity with a probable imbalance in favor of the d-sites, but also with antiphase resonance given that the a- and d-sites of iron are inherently antiferromagnetically coupled. Conversely, the Fe K-edge XDMR spectra may split in the same way if dipole-dipole interactions couple the orbital magnetization components precessing at the tetrahedral (S 4 ) sites of iron with the spin components located at the lanthanum c-sites, independently of which non-uniform precession mode is excited at those c-sites. Indeed, at very high pumping power, foldover lineshape distortion and further nonlinear processes (e.g. four-magnon scattering) would rapidly spoil any chance to resolve these modes.
From a technical point of view, the XDMR experiments carried out with the GdIG single crystal (sample 3) were particularly challenging owing to the broad linewidths of the resonance lines. Nevertheless, a number of subtle effects have been observed which nicely illustrate the potential of XDMR experiments. In this respect, experiments performed at low temperature (T ≈ 100 K) but slightly above the gadolinium ordering temperature provided us with a unique opportunity to investigate what happens in the so-called exchange-enhanced paramagnetic regime in which exchange is dominated by intersublattice interactions that are inherently site-selective. A careful comparison of the XDMR spectra recorded successively at the Gd L 2 -edge and at the Fe K-edge revealed a similar splitting of the resonance lines which allowed us to resolve the large signal due to the tetrahedral (S 4 ) iron sites that are strongly coupled to the gadolinium c-sites, from the weak contributions of the octahedral (S 6 ) iron sites which, in contrast, should be poorly coupled to the RE c-sites. Further work is in progress to check whether reliable quantitative information could be extracted from XDMR spectra concerning site-selective exchange integrals. If this turns out to be possible, one could extend such analyses to other REs, especially those with an empty 4f shell (La, Lu and perhaps Y) for which no alternative source of experimental information is available.
At high temperature (T ≈ 450 K), vector analyses of XDMR spectra recorded at the Fe K-edge under high pumping power produced the first experimental evidence of destructive interference developing within the nonlinear four-magnon scattering process that is classically given as the main source of saturation of the FMR lines.
In conclusion, ample demonstration is made in this paper that XDMR should not be perceived as just another exotic, time-consuming method for replicating FMR spectra. Concrete examples are given that support our claim that site-selective XDMR experiments can yield a far more detailed picture of magnetic couplings and nonlinear mode interactions in complex ferrimagnetic systems than standard FMR. In this respect, XDMR offers a valuable complement to spin-echo NMR that, unfortunately, is restricted to ultra-low temperatures (T ≲ 10 K).
Constructing “the parents” in primary schools in Greece: special education teachers’ whispers
The Greek educational system is highly centralized and the communication between parents and teachers is considered as a marginal or a grey area in the school function. Educational legislation favours parents’ involvement rather than participation and implies an imbalance between parents and professionals. Drawing on a study using a life history approach, this paper considers the ways in which special education teachers in both special education schools and integration units perceive the parents of their students. The boundaries between expert knowledge and personal experience form a contested terrain of power relationships. These relationships are shaped by circumstances and are highly contextualized. Special school teachers deem parents in a deficit discourse while Integration unit teachers deem parents in a client discourse. The construction of the “parents” seems to be a gendered and social class issue as well. This work also reveals some implications for the teachers’ training programmes in order to ensure a more equal partnership at more inclusive and democratic schools.
Discourses on constructing the other
Discourses are constructed through ways of talking about the other through the media, policies and social practices. Through discourses we see groups of 'other' people in a particular way and refer to them as if they were 'really' thus (Parker, 1992). Through discourses we also shape our opinion about how 'other' people are and what it means to be a part of a group (Fulcher, 1999). Such discourses usually support the status quo and "common understandings" (Gramsci, 1971, p. 326), while at the same time they work towards the concept of deviance, marking some groups out as different.
They 'homogenise' people within a group and create the 'norm or the 'ideal' hiding the interests of certain groups who assumes 'legitimacy' over others for example professionals over parents.The creation of a dominant discourse allows the creation of 'counter' discourses which challenge the hegemonic discourse.The social construction model of disability is an example of questioning the authority of a discourse (Moore, Beazley & Maelzer, 1998) and the 'orthodoxy' (Sikes, 1997) of the professionals.
Within the official discourse of schooling, home-school relations have been silenced in Greek legislation.
Correspondence concerning this article should be addressed to Evangelia Boutskou, e-mail ebutsuku@otenet.gr
Actually, the communication between parents and teachers/professionals is considered a marginal or grey area. The 2817 Law of 2000 avoids the term "parents" and uses the phrase "the ones who care for the people with special needs".
The only paragraph referring to parents states that "(parents) are invited by the relevant services in order to state their opinion so that the course of actions should be formed" (my translation). Parents' involvement, rather than participation, implies an imbalance of power between parents and professionals and uses parents as facilitators of the procedures (Fulcher, 1999). The work of schools is underpinned by the powerful discourses created by 'experts' and simultaneously silences the voices of parents (mainly women) who are perceived as childcarers (Smith, 1987). These boundaries are perceived by feminist writers as artificial constructions (Cole, 2004). The school is where the home-private domain meets the public-professional world (David, 1993; Sikes, 1997) and as such it is a contested terrain. The aim of this work is to report the way that special education teachers in primary schools in Greece construct the notion of the "parents" of children with special needs.
Research
This paper is based on a life history approach (Boutskou, 2006), but the purpose of this article is to present the way special education teachers refer to parents of children with special needs. I chose the words "teachers' whispers" rather than "teachers' perceptions" because teachers were not asked directly about parents; they referred to parents while talking about other topics during the interviews. I interviewed six teachers from different types of special education schooling (Special Schools and Integration Units) because there is a dynamic interplay between person and context.
The choice of school context was purposive but the choice of teachers was opportunistic, at random (Erben, 1998). I interviewed three teachers from Special Schools, each serving a different type of special educational need: a school for the blind, a school for children with motor difficulties, and a school for children with severe learning difficulties. I also interviewed three teachers from Integration Units at mainstream schools: one situated in a rural area, one in an area of low socio-economic status, and one in a high socio-economic status area.
The special education teachers who took part had working experience of between 5 and 17 years (Table 1). Plummer (1983) claims that a good informant should be someone who is fully aware of and involved in the particular culture. Working for 5 years in special education is adequate time to have the many experiences that help one build a theory of, and attitude towards, special education and difference (Erben, 1998). The analysis was based on grounded theory and the constant comparative method.
Discussion
The way teachers talk about parents depends on the context. At special schools, parents of children with disabilities seem not to have many choices. Schooling seems to be for them a privilege rather than a right. Teachers view parents of children with disabilities through a deficit approach.
"Parents think that they do their duty, they send their child to a special school. They do not have another choice since their child is not accepted anywhere else." (John)
Teachers who work at Integration Units view parents of children with learning difficulties through a client approach. This happens because parents have to give their permission for their children to withdraw from the mainstream class and attend some hours at the Integration Unit. This means that teachers have to persuade parents that their teaching will be beneficial to the child.
Teachers from special schools
Teachers who work at special schools construct the parents according to a deficit approach. It is assumed that the disability of the child per se causes hardship for the family (Todd, 2003) and that the child with disabilities "disables" the whole family. Sometimes it is implied that the more disabled the child, the more difficult the relation with the parents.
"Both children and parents are of low level, they also have problems." (John)
"The relations with the blind children are good. The relations with the parents of children with multiple disabilities are bad because some parents cannot accept their problem. I acknowledge the dual problem they face; theirs as parents and their child's as well. Because the parents face the problem too…" (Ann)
There is a vicious circle. Children's disabilities are deemed the reason for parents' deficit. On the other hand, parents' problems are seen to create obstacles to children's progress and to affect the relation with teachers. In much of the professional literature, parents (especially of children perceived as being on the autistic spectrum) have been blamed and pathologized concerning their children's disabilities (Roll-Pettersson, 2001). Some professionals argue that parents go through different stages (denial, isolation, reaction formation, projection and regression) and, if they do not, they are perceived as dysfunctional (Roll-Pettersson, 2001). Foucault (1973) talked about the professional gaze as a way to show deliberate medicalization in order to obtain power. Teachers from special schools use the deficit discourse to describe both children and parents.
The discourse of care is prominent in teachers' talk and is used in an apolitical way that implies needs rather than rights and entitlement (Blackmore, 1999). Parents are categorized by teachers into "not caring parents" and "caring parents". The criterion seems to be parents' attitude to teachers/professionals. If parents do not cooperate with the professionals/teachers they are viewed as not caring, and if they cooperate they are deemed caring.
"First of all, they (parents) do not come to school… Very few parents care, and they have better results with their children." (John)
"I cannot stand the fact that they (parents) say that their child does thousands of activities and exercises at home and the child cannot do these at school. This is something that I cannot stand and our relations are in conflict. Of course there are parents that are normal, others that are indifferent. It depends." (Ann)
Parents who are deemed "caring" are the ones who help their child at home and cooperate with the school professionals. It is also assumed that if parents do not question the expert's work, the child makes progress. It is interesting that some of the above comments reveal that the child's progress and the home-school relation are a class issue as well (Hanafin & Lynch, 2002). "Parents" seems to be a gendered term that usually means mothers (Cole, 2004). Mothers are the ones deemed responsible for children's education at home.
Teachers from integration units
The nexus of teacher-parent is deemed by the teachers at Integration Units as a nexus of service provider and service recipient. Teachers think that parents behave as consumers of the educational service and intervene in teachers' work. They negotiate about the educational process and complain as informed consumers. They are driven by individualistic concerns and private interests (Vincent, 1996) rather than by collective welfare and rights (Munn, 1993). Teachers think that parents worry about the stigma of their child.
"They do not want children of lower ability at the Integration Unit and make comparisons all the time… This derives from the fact that both children with disabilities and children with learning difficulties attend the Integration Unit. Once I discussed it with some parents and they accepted that they worried about what people would say about their child…" (Michael)
"Parents do not want their children to attend this class. They do not even want to discuss it. Although the mainstream teacher told parents that "the child has some problems and he will receive more help there (at the Integration Unit) and he can overcome his problem", they answered "No, I do not want my child to be mocked by the others and called stupid."" (Leo)
Education is also deemed an outcome that parents should be happy and satisfied with. This raises issues about the ethical dimensions of the job and reveals that teachers' work is seen not as the outcome of pedagogic choices but as the outcome of pressures outside school. The market ideology makes teachers think of working-class parents as helpless, passive or on the periphery of the school function (Corbett, 1998; Hanafin & Lynch, 2002) and of rich parents as active and energetic, confirming the social class divisions (Gillborn & Youdell, 2000).
"There is no welfare from the state.Primary education lasts for some years, after that?At this point money plays an important role.I mean rich parents can make thousand of things and interventions whereas the poor ones do not have the money."(Michael) "The parents of the children are well informed and they know their rights and the laws."(Mary) "Parents want their children to attend a mainstream school.They like it because they see that their children may be blind but they can compete with the seeing kids…Parents use their public relations and I do not know what else they do and they try their children to attend the mainstream school.This is how this system functions."(Leo) Parents and teachers are actors in social fields and they create and negotiate their boundaries all the time.There is unequal power divide between the public space of school and the private space of home (Cole, 2004).The power of the teachers as professionals lies in the possession of the expertise and specialized body of knowledge and skills because of their training.On the other hand parents who share unpaid, unlimited time and effort with their children are not acknowledged as partners.However, if parents have the power and the status their voice can be heard at schools sometimes.
Concluding thoughts
Although in the literature parents' experience is placed at the center (McIntyre, 2004; Roll-Pettersson, 2001), the gap between theoretical professional knowledge and practical experience is not narrowing. Although there is a growing literature which offers insiders' perspectives on disability (Clough & Barton, 1998; Moore et al., 1998), teachers seem to ignore it. There are professional assumptions which overestimate the problems of parents in coming to terms with the child's needs (Roll-Pettersson, 2001). However, it is not the caring for the child but the procedure (time, money, effort, information) through which the family can claim what it has the right to acquire. Through the deficit discourse, parents' practices are treated as problematic while teachers' are treated as non-problematic. Through the client discourse, teachers' and parents' practices are driven by market ideology and individualistic aims.
This work argues that listening to teachers is a way of gaining understandings and interpretations of the perceived imbalance of power. In both contexts, Special Schools and Integration Units, parents are not treated as resources with whom professionals can exchange important information about the child. Partnership was not seen as a goal to be achieved. Although it is acknowledged that the best results are achieved when home, school and professionals cooperate, such relationships are missing from the schools. Teachers should be educated in ways to question their authority and reinvent their role, showing empathy and respect to the parents. They should be willing to negotiate with a shared sense of purpose. Teacher training programmes should try to explore issues related to teacher-parent relations and their roles in an effort to ensure equal partnership between parents and professionals in more inclusive settings.
"
Parents are interested in the way you work with the child.Parents intervene in the teachers' work.Parents complain about the classmates in the Integration Unit.
Table 1 : Characteristics of participants
* Pseudonyms have been used.
Optimized Lysis-Extraction Method Combined With IS6110-Amplification for Detection of Mycobacterium tuberculosis in Paucibacillary Sputum Specimens
Background: When available, nucleic acid tests (NATs) offer powerful tools to strengthen the potential of tuberculosis (TB) diagnosis assays. The sensitivity of molecular assays is critical for detection of Mycobacterium tuberculosis (MTB) in paucibacillary sputum. Materials and Methods: The impact of targeting repetitive IS6110 sequences on the PCR sensitivity was evaluated across mycobacterium strains and reference material. Six lysis-extraction protocols were compared. Next, 92 clinical sputum specimens including 62 culture-positive samples were tested and the results were compared to sputum-smear microscopy, culture, and the Xpert MTB/RIF test. Finally, the capacity to detect low MTB DNA concentrations was assessed in 40 samples containing <1.5 × 10² copies/ml ex vivo or after dilution. Results: The lower limit of detection (LOD) of the IS6110 PCR was 107 genome copies/ml (95% CI: 83–130) using MTB H37Rv as a reference strain, versus 741 genome copies/ml (95% CI: 575–1094) using the senX3 PCR. The proportion of recovered MTB DNA after lysis and extraction ranged from 35 to 82%. The Chelex® method was the most efficient of the six protocols tested. The sensitivity and specificity in clinical sputum samples were 95.1% (95% CI: 90.7–99.6) and 100% (95% CI: 96.2–100), respectively. Among 40 samples with low MTB DNA concentration, 75% tested positive by IS6110 PCR, versus 55% using the Xpert MTB/RIF assay (p = 0.03). Conclusion: Laboratory assays based on efficient MTB lysis and DNA extraction protocols combined with amplification of IS6110 repeat sequences appear to be a sensitive diagnostic method to detect MTB DNA in sputum with low bacterial load.
INTRODUCTION
Tuberculosis (TB) is one of the deadliest infectious diseases, accounting for about 10.4 million new cases and 1.3 million deaths worldwide in 2016 (World Health Organization, 2017). A major priority, and a challenge, for TB control is to strengthen the capacity to diagnose the disease. Mycobacterial culture remains the gold standard test for TB diagnosis in high-resource settings. Culture has high sensitivity, with a limit of detection (LOD) of 10-100 cfu/ml, but the time-to-result ranges from 2 to 8 weeks (American Thoracic Society, 2000) and the method requires a BSL-3 laboratory facility. In most low-resource settings, however, bacterial culture is unavailable, leaving sputum smear microscopy as the major direct bacteriological test for TB diagnosis (Wejse, 2014). However, the LOD of the unconcentrated smear test is approximately 10,000 acid-fast bacilli (AFB)/ml, and microscopy has suboptimal specificity, partially due to possible contamination by non-tuberculosis mycobacteria (American Thoracic Society, 2000).
Nucleic acid tests (NATs) are viewed as a potential means of overcoming these barriers and as a new standard practice for TB diagnosis (Huggett et al., 2009). Accurate and sensitive detection of Mycobacterium tuberculosis (MTB) DNA in clinical paucibacillary specimens hinges on the combination of efficient lysis of the bacilli, DNA extraction, removal of PCR inhibitors, and amplification of a low concentration of the target sequence. The MTB cell wall is resistant to conventional bacterial lysis techniques due to its complex structure of lipophilic molecules, including the long-chain mycolic acids (Brennan and Nikaido, 1995) and polysaccharides. Sputum samples contain PCR inhibitors that also make DNA extraction and amplification challenging. A wide variety of sputum processing protocols has been described, using sonication, boiling, SDS treatment with lysozyme and heating, or exposure to proteinase K or chaotropic salts (Garg et al., 2003). Different approaches have also been used for NATs based on amplification of single or repeated genomic PCR targets, such as IS6110, rpoB (Meghdadi et al., 2015), and others. However, the IS6110 sequence remains probably the most frequently used and extensively studied PCR target. IS6110 is a transposase-coding sequence present in multiple copies, ranging from 1 to 20 with a mean of 10 copies per bacillus depending on the strain (Gutierrez et al., 1998), and it is only found in members of the MTB complex (Thierry et al., 1990). Previous studies have reported higher sensitivities for PCR methods based on amplification of the IS6110 multi-copy element compared to methods relying on single-copy genes (Luo et al., 2010). Cepheid has recently launched a new GeneXpert® cartridge including IS6110 and IS1081 amplification to improve the detection rate of smear-negative TB.
Interestingly, increased clinical sensitivity was reported with this new assay compared to the rpoB-based cartridge, particularly in children and in HIV co-infected individuals, populations in which TB is often difficult to diagnose (Dorman et al., 2018).
A systematic approach evaluating each step of MTB molecular assays is required to better understand the determinants of assay performance on low bacterial load specimens. In this study, we first developed and assessed in detail an in-house IS6110 assay versus single-copy gene amplification. Second, we evaluated different lysis-extraction protocols to determine the most efficient methods. Finally, we assessed the performance of the optimized molecular assay on clinical samples and compared the results to sputum-smear microscopy, culture, and a commercial automated PCR.
The efficiency of six MTB DNA extraction methods was evaluated in phosphate-buffered saline (PBS) and in spiked sputum. Discarded excess sputum, originally submitted for routine Gram staining and bacterial culture, was spiked with diluted bacterial cultures. Previously quantified aliquots of M. tuberculosis mc²7000 were thawed, suspended in PBS with 0.05% Tween 80 (Sigma-Aldrich, St. Louis, MO, United States), then forced through a 21-gauge needle with a syringe to break up cell clumps (Stokes et al., 2004). The cells were serially diluted in seven 10-fold steps in TE buffer (10 mM Tris, 1 mM EDTA, pH 8.4) and vortexed for 1 min to disrupt any residual clumps.
Two hundred microliters of each appropriate concentration of bacilli, diluted in TE buffer, was added to 1.8 ml of MTB-negative sputum to prepare the spiked sputum samples. The spiked sputum was treated according to the normal sample-processing protocol, as if it had come from a patient suspected of having TB. Ten replicates per dilution were used for each extraction method, and testing of each dilution was repeated eight times.
Ninety-two sputum samples (62 TB-positive and 30 negative controls) were collected consecutively at the Montpellier University Hospital (NCT number: NCT02898623) and stored at −20°C until used for the IS6110 PCR evaluation. Sputum samples digested and decontaminated with the MycoPrep® kit (Becton Dickinson, Baltimore, MD, United States) were cultured in both the BACTEC™ MGIT™ 960 Mycobacterial Detection System (Becton Dickinson Microbiology Systems, Sparks, NV, United States) and on Löwenstein-Jensen medium (bioMérieux, Marcy l'Étoile, France). TB culture was considered the gold standard to evaluate the clinical performance of the molecular assays. Sixty-two specimens were TB culture-positive and 30 were TB culture-negative and used as negative controls. Among the 62 culture-positive specimens, 50 tested positive by both smear and culture and 12 were culture-positive but smear-negative. Smear grading was determined according to standard criteria (Technical Guide, 2000). All smear-negative samples, and samples with M. tuberculosis DNA levels below the LOD of the smear test, were defined as paucibacillary specimens (<10,000 acid-fast bacilli (AFB)/ml).
Methods of Extracting MTBC DNA From Culture in PBS and in Sputum
For all six extraction methods, each of which was repeated eight times, the spiked respiratory specimens were centrifuged at 3,000 × g for 20 min. The supernatants were discarded, and the pellets were processed for each extraction method as follows: (i) Chelex® method: incubation with 200 µl of 20% Chelex® 100 resin (Bio-Rad, Richmond, CA, United States) prepared in TE buffer (10 mM Tris-HCl (pH 8.0), 1 mM EDTA) (Sigma-Aldrich, Germany); after vortex mixing, boiling at 100°C for 15 min, then placing in an ultrasonic water bath for 15 min; after centrifugation at 14,000 × g for 5 min, the supernatant was used for qPCR. (ii) Guanidinium isothiocyanate (GTIC) method: incubation with 200 µl of lysis buffer (10 mM Tris-HCl, 1 mM EDTA, 1 M GTIC, 0.5 M NaCl) for 20 min, combined with 3 cycles of freeze-thawing (−80°C for 5 min and 100°C for 5 min) and boiling at 100°C for 15 min. (iii) Tween 20 method: suspension in 200 µl of lysis buffer (0.45% Tween 20, 50 mM Tris-HCl (pH 8.0), 50 mM KCl, 2.5 mM MgCl2) containing 70 µl of 10 mg/ml lysozyme, with incubation at 37°C for 1 h; 30 µl of proteinase K (10 mg/ml, Qiagen, Germany) and 2% SDS were then added, followed by incubation for 1 h at 56°C to remove PCR inhibitors and heating for 15 min at 100°C to ensure complete mycobacterial lysis. (iv) Nonidet P-40 method: the cell pellet was processed as in method (iii) but with NP-40 instead of Tween 20. (v) Triton method: incubation with 200 µl of lysis buffer (100 mM NaCl, 10 mM Tris-HCl (pH 8.0), 1 mM EDTA and 1% Triton X-100) for 20 min at 95°C. (vi) NaOH method: incubation with 200 µl of lysis buffer (10 mM Tris-HCl (pH 8.0), 1 mM EDTA, 50 mM sodium hydroxide (NaOH) and 2% SDS) at 95°C for 5 min; a total of 1,800 µl of pure water was added, followed by vortex mixing and boiling for 15 min at 100°C; after vortex mixing, the tubes were placed in an ultrasonic bath for 15 min.
For methods (ii) to (vi), after centrifugation at 14,000 × g for 5 min, the supernatant was transferred to a new tube and purified by traditional nucleic acid precipitation: DNA was precipitated with 2 volumes of ice-cold ethanol and 1/10th volume of 3 M sodium acetate and kept at −20°C for 20 min. After centrifugation at 14,000 × g for 10 min at 4°C, the pellet was washed with 70% ethanol, air dried, resuspended in 50 µl TE buffer, and 5 µl was used in the PCR. In the Chelex® method, the resin acts as a chelating agent, inactivating nucleases and protecting DNA by binding polyvalent metal ions such as magnesium (Mg2+); after boiling, the resin and cell residues are pelleted and the supernatant containing the DNA is recovered. The pellet was eluted in 100 µl of TE and used for PCR. The efficiency of cell lysis and DNA recovery for each method was assessed using the IS6110 PCR assay, based on the proportion of DNA recovered relative to the estimated input quantity of DNA. The performance of the six DNA extraction methods was compared using mean differences in Ct values and the end-point PCR for each extraction method.
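The recovery assessment above reduces to simple qPCR arithmetic: under roughly 100% amplification efficiency, each cycle of delay relative to the theoretical input corresponds to a twofold loss of template. A minimal sketch; the function name and the example Ct values are illustrative, not taken from the study:

```python
def recovery_percent(ct_extracted, ct_input, efficiency=1.0):
    """Estimate the % of DNA recovered by an extraction protocol from
    the Ct shift between the extracted sample and a reference PCR on
    the full theoretical input. Assumes (1 + efficiency)-fold
    amplification per cycle (efficiency=1.0 -> a doubling per cycle)."""
    fold_loss = (1.0 + efficiency) ** (ct_extracted - ct_input)
    return 100.0 / fold_loss

# Illustrative values: a ~0.29-cycle delay corresponds to ~82% recovery,
# and a ~1.51-cycle delay to ~35% -- the extremes of the range reported
# in this study.
best = recovery_percent(ct_extracted=30.29, ct_input=30.0)   # ~81.8
worst = recovery_percent(ct_extracted=31.51, ct_input=30.0)  # ~35.1
```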
Quantitative Real-Time PCR
All PCRs were performed using a LightCycler 480 Real-Time PCR System (Roche Applied Science, Germany) in a 20 µl final reaction volume containing 5 µl of DNA and 5× DNA polymerase mix (Omunis, Clapiers, France). The following thermal profile was used: 95°C for 15 min, then 50 cycles of amplification at 95°C for 15 s followed by 60°C for 1 min. A heterologous internal control (IC) using a Cy5 probe and having a target Ct value ranging from 32 to 34 was added to control DNA extraction and amplification (Omunis, Clapiers, France). The standard curve was calculated automatically by plotting the Ct values against four dilutions of the standard and extrapolating the linear regression line of this curve.
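The automatic standard-curve calculation amounts to a linear regression of Ct on log10(input copies); the slope gives the amplification efficiency (a slope of about −3.32 corresponds to 100% efficiency) and the regression line is inverted to quantify unknowns. A sketch with a hypothetical four-point dilution series (the LightCycler software performs the equivalent computation internally):

```python
import numpy as np

def fit_standard_curve(log10_copies, ct_values):
    """Linear regression of Ct on log10(input copies). Returns slope,
    intercept and amplification efficiency; a slope of -3.32 means the
    template doubles every cycle (100% efficiency)."""
    slope, intercept = np.polyfit(log10_copies, ct_values, 1)
    efficiency = 10.0 ** (-1.0 / slope) - 1.0
    return slope, intercept, efficiency

def quantify(ct, slope, intercept):
    """Interpolate an unknown sample's input copies from its Ct."""
    return 10.0 ** ((ct - intercept) / slope)

# Hypothetical four-point 10-fold dilution series
logs = [6, 5, 4, 3]                  # log10 input copies
cts = [18.0, 21.32, 24.64, 27.96]    # measured Ct per dilution
slope, intercept, eff = fit_standard_curve(logs, cts)
```

With these values the slope is −3.32 and the efficiency is ≈100%; `quantify(24.64, slope, intercept)` returns ≈10⁴ copies, as expected from the dilution series.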
Three PCRs targeting IS6110 elements were developed. Two IS6110 PCRs were based on primers and probes previously reported (El Khéchine et al., 2009; Armand et al., 2011), shown as blue and red nucleic acid sequences (Figure 1 and Table 1). A third set of primers and probe was designed with the Primer3Plus software within the targeted IS6110 insertion (green nucleic acid sequence), based on the complete genome sequence of M. tuberculosis H37Rv (GenBank accession NC_000962.3). The M. tuberculosis H37Rv international standard (Advanced Biotechnologies Inc., Eldersburg, MD, United States) was used to generate an accurate PCR standard curve. The senX3 PCR was performed using primers and a probe designed to target the senX3-regX3 intergenic region, as previously described (Queipo-Ortuño et al., 2009).
The specificity of the primers was first verified using the NCBI BLAST algorithm, followed by real-time PCR specificity testing with DNA extracted from reference strains. Genomic DNA from six mycobacterial species (Mycobacterium fortuitum, Mycobacterium avium, Mycobacterium xenopi, Mycobacterium gordonae, Mycobacterium intracellulare, and Mycobacterium abscessus) was used as template for the specificity testing. The IS6110 probes incorporate a 5′ FAM reporter, whereas the senX3 probe uses a 5′ VIC reporter.
Analytical Performances of the PCR Assays on Genomic MTB DNA
Performances of the assays were assessed using genomic DNA from M. tuberculosis mc²7000 and M. bovis BCG, quantified using the Qubit® fluorescent dye quantitation method (Thermo Fisher Scientific), and using clinical specimens. Direct smears were prepared from the specimens and stained using the Ziehl-Neelsen and auramine staining methods. The linear dynamic range of the qPCR assays and the inter- and intra-run variability were evaluated by plotting separately the results of 10 replicates of 10-fold serial dilutions of the M. tuberculosis H37Rv commercial standard, Mycobacterium bovis BCG DNA and M. tuberculosis mc²7000 DNA. The lower limits of detection (LOD) of the three qPCRs targeting IS6110, and of duplex and triplex combinations of primer and probe sets, were compared. For comparisons, 40 culture-positive sputum specimens were randomly chosen and also tested for TB DNA using the Xpert MTB/RIF test, a fully automated real-time PCR endorsed by WHO in 2010 for TB diagnosis and rifampicin resistance testing (Lawn et al., 2013). Thirty-two decontaminated and digested specimens were diluted in PBS to the LOD of the GeneXpert test, namely 131 cfu/ml (Helb et al., 2010), whereas eight sputum samples with TB DNA concentrations below 1.5 × 10² copies/ml were used without dilution, to evaluate the performance of the assays at low sputum TB DNA concentrations.
Xpert Protocol
Xpert MTB/RIF was used according to the manufacturer's recommendations. Xpert MTB/RIF uses a hemi-nested real-time PCR assay to amplify the RNA polymerase β subunit gene (rpoB), which is probed with molecular beacon technology. Sample treatment and cartridge loading were done according to the manufacturer's instructions. Briefly, each diluted sediment of 500 µl was mixed with 1.5 ml of a commercial NaOH- and isopropanol-containing sample reagent (Sample Reagent; Cepheid, Sunnyvale, CA, United States). The mixture was incubated for 15 min at room temperature with vigorous shaking, then added to the sample-loading chamber of the cartridge for automatic processing; the result was available within 2 h.
Statistics
Regression analysis between assigned and observed values was used for linearity assessment. The median values, interquartile ranges of each concentration and regression coefficients were determined. The probit method was used to determine the LOD, which was read off the generated graph at the 95% probability of response. SPSS software (Statistical Package for Social Sciences; IBM, Chicago, IL, United States) was used for probit regression analysis. Bland-Altman bias plots were used to assess differences between repeated- and single-target PCR assays on MTB strains. For each plot, the mean bias and 95% confidence interval of the bias were calculated, and the mean biases were compared using Student's t-test. Statistical analyses were done with MS Excel and GraphPad Prism 6.0 (GraphPad Software, Inc., San Diego, CA, United States).
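The probit LOD analysis models the hit rate at each concentration as Φ(a + b·log10(concentration)) and reads off the concentration at which the fitted detection probability reaches 95%. A pure-Python sketch using a coarse maximum-likelihood grid search on a hypothetical replicate panel (SPSS fits the same model with proper standard errors; all counts here are illustrative):

```python
import math

def _phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def fit_probit_lod(log10_conc, n_pos, n_total, z_prob=1.6448536269514722):
    """Fit hit rate ~ Phi(a + b*log10(conc)) by maximum likelihood
    (coarse two-parameter grid search) and return the concentration at
    which the fitted detection probability reaches 95%
    (z_prob = Phi^-1(0.95))."""
    def nll(a, b):
        total = 0.0
        for x, k, n in zip(log10_conc, n_pos, n_total):
            p = min(max(_phi(a + b * x), 1e-9), 1.0 - 1e-9)
            total -= k * math.log(p) + (n - k) * math.log(1.0 - p)
        return total

    best = (float("inf"), 0.0, 1.0)
    for ia in range(-200, 201):          # intercept a in [-10, 10]
        a = ia * 0.05
        for ib in range(1, 201):         # slope b in (0, 10]
            b = ib * 0.05
            v = nll(a, b)
            if v < best[0]:
                best = (v, a, b)
    _, a, b = best
    return 10.0 ** ((z_prob - a) / b)

# Hypothetical panel: 10 replicates per level, hit rate rising with dose
lod95 = fit_probit_lod(
    log10_conc=[1.0, 1.5, 2.0, 2.5, 3.0],
    n_pos=[1, 4, 8, 10, 10],
    n_total=[10, 10, 10, 10, 10],
)
```

For this panel the LOD95 lands between the level detected in 8/10 replicates and the first level detected in 10/10, as expected.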
RESULTS
The intra- and inter-run variability was assessed by evaluating the standard deviation of threshold cycles (Ct) in five independent PCR runs of 10 replicates. Average and range were below 10% for all strains and PCR assays (Table 2). Different formats of duplex or triplex PCR, combining two or three sets of primers targeting IS6110, were compared for their capacity to detect low concentrations of the M. tuberculosis mc²7000 genome, but without significant difference in the LOD (P = 0.067) (Figure 3C).
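Replicate Ct variability of this kind is commonly summarized as a percent coefficient of variation. A minimal sketch with hypothetical replicate values (the authors report the standard deviation of Ct; expressing it as a percentage of the mean is one common convention, assumed here):

```python
import statistics

def ct_percent_cv(ct_values):
    """Coefficient of variation (%) of replicate Ct values: sample
    standard deviation expressed as a percentage of the mean."""
    return 100.0 * statistics.stdev(ct_values) / statistics.mean(ct_values)

# Hypothetical replicate Cts from one run
cv = ct_percent_cv([24.1, 24.3, 23.9, 24.2, 24.0])  # well below 10%
```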
Impact of IS6110 PCR Versus senX3 Sequence PCR on MTB and M. bovis BCG on DNA Quantification
Serial dilutions of genomic DNA isolated from M. tuberculosis mc²7000 or M. bovis BCG were tested with the IS6110 and senX3 PCRs. Results were compared using Bland-Altman bias plots (Figure 4). The differences in DNA levels between the IS6110 and senX3 assays were 4.03 Ct (95% CI: 1.6-6.4) for M. bovis BCG (Figure 4A) and 7.45 Ct (95% CI: 5.9-9.0) for M. tuberculosis mc²7000 (Figure 4B), respectively.
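These observed biases can be set against the theoretical expectation: a target present in c copies per genome supplies c-fold more template, so under equal per-cycle efficiency it should cross the threshold about log2(c) cycles earlier than a single-copy target. A small sketch (the 16-copies-per-genome figure for the MTB strain is taken from the Discussion; deviations from the theoretical value also reflect assay-specific efficiencies):

```python
import math

def delta_ct_from_copies(copies, efficiency=1.0):
    """Theoretical Ct head start of a target present in `copies` copies
    per genome over a single-copy target, assuming both amplify with
    the same per-cycle efficiency ((1 + E)-fold growth per cycle)."""
    return math.log(copies, 1.0 + efficiency)

# 16 IS6110 copies per genome -> ~4 cycles earlier than single-copy
# senX3 under ideal (100%) efficiency; 1 copy (as in M. bovis BCG)
# gives no theoretical head start.
```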
Comparison of the Efficiency of MTB DNA Extraction Protocols
The efficiency of the cell disruption methods was compared by evaluating the quantities of MTB DNA recovered, quantified by IS6110 PCR relative to the theoretical input of M. tuberculosis mc²7000 (cfu/ml). The proportion of recovered MTB DNA ranged from 35 to 82% (Figure 5). Next, seven dilutions (10⁻¹ to 10⁻⁷) of the M. tuberculosis mc²7000 strain in PBS with 0.05% Tween 80 and in experimentally contaminated sputum were tested. The Chelex® method appeared to be the most efficient of the six protocols tested. Ct values of the internal control were within the recommended range (32-34) in all samples. The lowest concentration detected by the IS6110 PCR was at dilution 10⁻⁶ using the NaOH, Tween 20, Triton X-100 and NP-40 protocols, and at dilution 10⁻⁷ using the Chelex® method (p = 0.002; Figure 6). The efficiency of DNA extraction was comparable in spiked sputum and in PBS (data not shown).
Performances of IS6110 PCR Assays Using the Chelex® Extraction Method
All sputum samples were tested by real-time IS6110 PCR. The performance of the Chelex® method combined with the IS6110 PCR was evaluated on the 62 culture-positive sputum samples. All controls tested negative for TB DNA. We observed an inversely proportional relationship between the Ct values and the number of AFB detected in culture-positive samples (Figure 7A). The sensitivity, specificity, PPV and NPV of the in-house IS6110 real-time PCR in clinical samples were 95.1% (95% CI: 90.7-99.6), 100% (95% CI: 96.2-100), 100%, and 93.7%, respectively. The sensitivity of the IS6110 PCR was 100% for smear-positive and 75% for smear-negative samples.
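The accuracy figures above follow from the standard 2×2 contingency-table formulas against the culture gold standard. A small helper, shown with hypothetical counts since the study's full 2×2 table is not reproduced here:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV (as percentages) from a
    2x2 table of test results against the gold standard (culture)."""
    return {
        "sensitivity": 100.0 * tp / (tp + fn),   # true-positive rate
        "specificity": 100.0 * tn / (tn + fp),   # true-negative rate
        "ppv": 100.0 * tp / (tp + fp),           # precision among positives
        "npv": 100.0 * tn / (tn + fn),           # reliability of a negative
    }

# Hypothetical example: 90 true positives, 10 false negatives,
# 95 true negatives, 5 false positives.
m = diagnostic_metrics(tp=90, fp=5, fn=10, tn=95)
```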
Forty specimens that tested positive with the IS6110 PCR were randomly selected for comparison with the Xpert assay. Of the 40 specimens, 32 were serially diluted to a target concentration close to the Xpert assay LOD (131 cfu/ml) (Helb et al., 2010), whereas eight sputum samples with TB DNA concentrations below 1.5 × 10² copies/ml were used without dilution. Samples tested positive more frequently with the IS6110 PCR than with the Xpert assay (75 vs. 55%, p = 0.03): 20 specimens tested positive for TB DNA with both PCR assays (50%), 10 were positive only with the IS6110 assay (25%), and 10 were negative with both PCR assays (25%). The median threshold cycle (Ct) value for the IS6110 PCR was 38.64 when sputum tested negative for TB DNA using the Xpert (Figure 7B).
DISCUSSION
The sensitivity of molecular assays is critical for MTB detection in low bacterial load specimens. In this study, we comprehensively analyzed the different steps of PCR methods and identified the most efficient combination of MTB lysis, DNA extraction, and amplification protocols to obtain rapid and cost-effective MTB DNA detection. Testing clinical samples characterized by microscopic examination, culture, and a commercial NAT confirmed the high sensitivity of the IS6110-specific PCR when used in combination with the Chelex® method.
The analytical sensitivity is an essential characteristic of molecular assays that should be consistently determined (Burd, 2010). Few studies have assessed the LOD of M. tuberculosis PCR (Barletta et al., 2014; Reed et al., 2016). In addition, LODs were infrequently tested by repeated measurements in narrow dilution ranges around the threshold value, as recommended (Tholen et al., 2003). Our study confirmed and accurately determined the gain in analytical sensitivity related to targeting the repeated IS6110 sequence. A fourfold difference (0.38 log₁₀ genomes/ml) was observed in the LOD of the IS6110 PCR assay testing an MTB strain containing 16 sequences per genome versus one copy for M. bovis BCG. The LOD of the IS6110 PCR was estimated at around 100 genome copies/ml using the M. tuberculosis mc²7000 strain, which is sevenfold lower (0.83 log₁₀ genomes/ml) than the senX3 LOD, confirming the gain related to targeting the repeated sequence. The comparison of different IS6110 PCRs and of different multiplex PCR combinations did not further improve the sensitivity. A gain in analytical sensitivity was expected, since the numbers of primers and probes were multiplied (Armstrong et al., 2012). This result was somewhat disappointing but may be explained by crowding of primers and polymerase on the target sequence.
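The LODs quoted here come from probit analysis: the replicate hit rate at each dilution is modeled as a cumulative normal in log₁₀ concentration, and the LOD is read off where the fitted curve reaches 95% detection probability. A sketch with invented hit rates, assuming SciPy is available:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Hypothetical fraction of positive replicates at each dilution (log10 copies/ml).
log_conc = np.array([1.0, 1.5, 2.0, 2.5, 3.0])
hit_rate = np.array([0.05, 0.30, 0.70, 0.95, 1.00])

def probit_model(x, mu, sigma):
    # Detection probability modeled as a cumulative normal in log10 concentration.
    return norm.cdf(x, loc=mu, scale=sigma)

(mu, sigma), _ = curve_fit(probit_model, log_conc, hit_rate, p0=[2.0, 0.5])
# LOD at 95% probability of detection: where the fitted CDF reaches 0.95.
lod95 = mu + sigma * norm.ppf(0.95)
print(f"LOD95 ≈ {lod95:.2f} log10 copies/ml")
```

In practice each dilution is tested in many replicates and the fit is done on the observed hit counts; the numbers above are assumptions for the sake of the sketch.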
Besides nucleic acid amplification, DNA extraction is the other critical step for detection of the low mycobacterial bacilli concentrations observed in sputum smear-negative specimens. Methods dedicated to mycobacterial DNA extraction have to fulfill four key objectives: (1) lysis of the thick and waxy cell wall; (2) removal of non-nucleic-acid organic and inorganic molecules that may impair DNA amplification; (3) minimizing nucleic acid loss; and (4) maintaining DNA integrity throughout the extraction/purification process. We compared six lysis and extraction protocols adapted from previous studies (Honore et al., 2001; Heginbothom et al., 2003; Honoré-Bouakline et al., 2003; Bahador et al., 2004; Leung et al., 2011) to select the best-performing method. According to our results, the Chelex® method provides the best efficiency for recovering M. tuberculosis DNA from sputum samples. Sputum specimens pose challenges for specific microbial detection because of the presence of endogenous PCR inhibitors and contaminating DNA from the normal flora (Amicosante et al., 1995; Böddinghaus et al., 2001). Interestingly, comparable MTB detection in both PBS and sputum samples confirmed the (near) absence of PCR inhibition using the Chelex® method. Moreover, being rapid and not requiring detergents, this method is reliable, reproducible, and less labor-intensive than chemical and enzyme-based protocols. The whole procedure can be carried out in a single tube, reducing the risk of laboratory-induced contamination. The expense of the Chelex® method is negligible compared to that of commercial methods based on silica columns or magnetic beads, which cost from $2 to $6 per test, making it particularly advantageous for future developments in low-income countries.
We also explored the clinical performance of our in-house IS6110 PCR using the Chelex® DNA extraction method in comparison to microscopy, culture, and the Xpert test. Results of the IS6110 PCR correlated well with the semi-quantitative smear microscopy results (stratified from − to +++). The minimum concentration of M. tuberculosis DNA in microscopy-positive samples was estimated at 4 log₁₀ genome equivalents/ml, in agreement with the minimal bacilli concentration required for smear microscopy (Palomino, 2005). Importantly, two thirds of the sputum smear-negative/culture-positive samples tested positive for MTB DNA using the IS6110 PCR. Previous studies reported clinical sensitivities ranging from 91 to 97% in AFB-positive specimens and between 40 and 76% in AFB-negative specimens, with specificities ranging from 77 to 100% in both groups (Broccolo et al., 2003; El Khéchine et al., 2009; Lira et al., 2013). An IS6110 assay is available on the Abbott m2000 system; Tam et al. (2017) recently reported high clinical performance in smear-negative sputum using this assay. Other IS6110 assays for use on open polyvalent PCR platforms are also available (Bhembe et al., 2014; Obasanya et al., 2017).

FIGURE 3 | Detection limits of IS6110 and senX3 PCR assays. Curves determined by probit analysis (95% probability of detection). (A) IS6110 and senX3 PCR using M. tuberculosis mc²7000 DNA. LODs were estimated at 2.03 log₁₀ (107 copies/ml) and 2.89 log₁₀ (741 copies/ml), respectively. (B) The detection limits for M. bovis BCG were 2.40 log₁₀ and 2.90 log₁₀ for the IS6110 and senX3 PCR assays, respectively. (C) IS6110 PCR detection limits using two or three sets of primers compared to a single set, based on the M. tuberculosis mc²7000 strain, were estimated at 2.1, 2.2, 2.3, 2.35, and 2.4 log₁₀, respectively, for ISP, Triplex, ISP+ISM, ISP+ISL, and ISM+ISL.
Our results suggest that, with an optimized lysis and extraction method, these in-house and commercial IS6110 assays may constitute a valuable option for routine molecular diagnosis. These tests also help to increase diversity and price competition between suppliers, and access to TB molecular diagnosis in resource-poor settings. Sputum specimens containing AFB concentrations close to the LOD were used to compare the sensitivity of the IS6110 PCR with the Xpert test. We focused the analysis on low bacterial load specimens because most requests from clinicians concern this type of specimen, which represents a challenge for TB control as a result of the difficulty of detecting smear-negative TB. Our results suggest that the combination of IS6110 PCR and the Chelex® method exhibits a higher sensitivity for detecting low sputum concentrations of bacilli than the Xpert test. The low sensitivity of Xpert for smear-negative specimens was previously described by Armand et al. (2011), but is discordant with Miller et al. (2011). Notably, the new version of the MTB PCR cartridge (Xpert MTB/RIF Ultra), recently launched by Cepheid, includes the IS6110 and IS1081 targets to improve the sensitivity of the assay in sputum smear-negative samples (Chakravorty et al., 2017). The limited number of sputum smear-negative/culture-positive samples tested is one of the limitations of the study. Low bacterial load specimens, obtained after serial dilution, were used for the comparison between the IS6110 PCR and the Xpert MTB/RIF assay; previous studies have likewise used diluted clinical samples to control the performance of molecular tests (Noordhoek et al., 2004; Akkerman et al., 2013).

FIGURE 4 | Bland-Altman bias plots for two different quantitative MTBC DNA real-time PCR assays. Five serial dilutions of the M. bovis BCG (A) and M. tuberculosis mc²7000 (B) strains were tested for MTBC DNA quantification by the IS6110 and senX3 PCR assays. The mean bias was 4.026 and 7.455 cycle thresholds for M. bovis BCG and M. tuberculosis mc²7000, respectively.

FIGURE 5 | Efficacy of MTB lysis using six different lysis methods combined with Chelex® resin extraction. Each column represents the average DNA copy number per microliter obtained in five independent experiments with three replicate reactions.

FIGURE 6 | Comparison of DNA extraction protocols in spiked sputum samples. The M. tuberculosis mc²7000 stock suspension was diluted and used to spike negative sputum samples. Box plots show the Ct median and the 10th, 25th, 75th, and 90th centiles of 10 replicates. Methods are indicated by colors: brown: Chelex® method; pink: guanidium isothiocyanate/Tris-HCl/EDTA + 3 cycles of freeze-thawing and boiling; black: Tween 20/Tris-HCl/EDTA/lysozyme + proteinase K/SDS + 56°C/95°C warming cycles; green: Nonidet P-40/Tris-HCl/EDTA/lysozyme + proteinase K/SDS + 56°C/95°C warming cycles; blue: Triton X-100/Tris-HCl/EDTA; purple: NaOH + boiling and sonication.
Our results indicate that the Chelex® method is highly effective for TB lysis and DNA enrichment in sputum. The IS6110 PCR combined with this optimized lysis-extraction method achieved a level of analytical performance at least equivalent to that of the widely used Xpert kit on frozen sputum samples. Highly sensitive NATs are of great interest in TB diagnosis and are required to reach an acceptable rate of MTB detection in subjects with paucibacillary specimens, a situation frequently encountered in patients with HIV and also in pediatric TB. This method should be considered a possible alternative to fully automated kits for TB DNA testing on open polyvalent PCR platforms in central laboratories because of its low cost and high-throughput potential.
L₁-maximal Regularity for a Quasilinear Second Order Differential Equation with Damped Term
We investigate a quasilinear second-order equation with a damped term on the real axis and give some sufficient conditions for the existence of L₁-maximal regular solutions of this equation.
By C₀^(k)(R) (k = 1, 2, ...) we denote the set of k times continuously differentiable functions with compact support. Let C^(j) Then y is called a solution of (1.1).
The purpose of this work is to find some conditions on r and q such that for every f ∈ L₁, the equation (1.1) has a solution y which satisfies ‖y″‖₁ + ‖r(·, y)y′‖₁ + ‖q(·, y)y‖₁ < ∞.
The separability of differential operators, introduced by Everitt and Giertz in [7,8], plays an important role in the study of second-order differential equations. Recall the Sturm-Liouville operator Ly := −y″ + q₁(x)y. Everitt and Giertz [7,8] proved that if q₁ and its derivatives satisfy some conditions, then L is separable in L₂(R). In the case where q₁ is not a differentiable function, the separability of L in L₂(R) was discussed in [3,23]. In [9], Everitt, Giertz and Weidmann gave an example of a non-separable Sturm-Liouville operator in L₂(R) with a strongly oscillating and infinitely smooth coefficient q₁. The separability of linear partial differential operators was studied in [4,15,17,21,24]. Some sufficient conditions for separability of operators on Riemannian manifolds are obtained in [1,2,12,13].
The separability is also an important tool when dealing with quasilinear equations.In [16], Muratbekov and Otelbaev used the separability to discuss the solvability of the nonlinear equation where f ∈ L 2 (R).Grinshpun and Otelbaev showed that the solvability of the equation (1.2) in L 1 implies q 0 ≥ 1 (see [6]).This method is useful for the multidimensional (Schrödinger) equation −∆u + q(x, u)u = F(x), x ∈ R n (see [17,20] for details).
In general, the expression (1.1) can be converted neither to (1.2) nor to the form −(p₂(x, y)y′)′ + q₂(x, y)y = f(x).
In [22], we considered the equation −y″ + r(x, y)y′ = f(x), f ∈ L₂(R), and found some conditions on r such that this equation is solvable. In the present paper, we discuss the more general equation (1.1) in the case f ∈ L₁. Under weaker conditions on r than in [22], the existence and regularity of solutions of (1.1) are established.
Schauder's fixed-point theorem is used to prove our main result (see [10]).
Let g and h be some functions on R. The main result of this paper is the following.
Theorem 1.2. Let r be a continuously differentiable function and q a continuous function satisfying and sup Then for any f ∈ L₁, the equation (1.1) has a solution y such that

Example 1.3. Let r = 10 + x¹⁰ + 5y⁴ and q = x³ + cos⁴ x + 2y. Then r and q satisfy the conditions of Theorem 1.2.
Lemma 2.1. Let g and h be continuous functions on R such that Moreover, γ_{g,h} is the smallest constant which satisfies (2.1).
Let r be a continuously differentiable function, and denote by l the closure of l₀ in L₁.
From Lemma 2.1, (1.3) and (2.4), the estimate (2.5) follows, so the inverse l⁻¹ of l exists.
Next, we show that R(l) = L₁. Suppose that R(l) ≠ L₁. Since l is closed, (2.5) implies that R(l) is closed. Hence there exists a nonzero element z₀ orthogonal to R(l) such that l*z₀ = −z₀″ − (r(x)z₀)′ = 0, where l* is the adjoint operator of l. Let c₂ ≠ 0; without loss of generality, we can assume that c₂ = −1. We consider the following linear equation (2.6). The function y ∈ L₁ is called a solution of (2.6) if there exists a sequence {y_n}_{n=1}^∞ ⊂ C₀^(2) approximating it in the above sense.

Lemma 2.3. Let r₁ be a continuously differentiable function such that r₁ ≥ δ₁. Assume q₁ is a continuous function and γ_{q₁,r₁} < ∞. Then for every f ∈ L₁, the equation (2.6) has a unique solution y such that where c₄ depends only on γ_{q₁,r₁}.
L will denote the closure in L₁ of the operator L₀y := −y″ + r₁y′ + q₁y, D(L₀) = C₀^(2). If the conditions of Lemma 2.3 hold, then the operator L is separable in L₁.
Proof of the main theorem
Let C(R) be the space of bounded continuous functions on R with the norm ‖y‖_{C(R)} = sup_{x∈R} |y(x)|. Let ε and A be positive numbers. Set Let v ∈ S_A. L_{v,ε} denotes the closure in L₁ of the following linear differential expression. We consider the equation L_{v,ε}y = f(x).
(3.1) The functions r̃₁,ε,v(x) := r(x, v(x)) + ε(1 + x²) and q̃_v(x) := q(x, v(x)) satisfy all of the conditions of Lemma 2.3; indeed, this follows from (1.3). Therefore, for any f ∈ L₁, the equation (3.1) has a unique solution y satisfying the estimate (3.2), where C₂ does not depend on A. By Lemma 2.1, we obtain (3.3). By using (3.2), (3.3) and Theorem 1 given in Chapter 3 of [18], we obtain (3.4), where C₅ also does not depend on A. By (3.4), the operator P_ε maps S_A into itself. Moreover, the operator P_ε maps S_A into the set Q_A. Indeed, let γ > 0; then by (3.4) there exists l ∈ N such that (3.5) holds with bound γ/2, and similarly for the tail sup_{x:|x|≥l} in (3.6). We denote T_l = {φ_l z : z ∈ Q_A}. By (3.5) and (3.6), T_l is a γ-net of Q_A. On the other hand, T_l is a subset of a Sobolev space whose embedding is compact (see [19,27]), so T_l is a compact γ-net of Q_A, and Hausdorff's theorem applies (see [10]). Next, we show that the operator P_ε is continuous on S_A. Let {v_n}_{n=1}^∞ ⊂ S_A be a sequence such that sup_{x∈R} |v_n(x) − v(x)| → 0 as n → +∞, and let y_n (n = 1, 2, ...) and y satisfy the corresponding equations. Since v and v_n (n = 1, 2, ...) are continuous, the functions r(x, v_n(x)) − r(x, v(x)) and q(x, v_n(x)) − q(x, v(x)) are continuous. Therefore, from (3.8) it follows that (3.9) holds as n → ∞, for every a > 0. On the other hand, by (3.4), we have (3.10). Since the operator L⁻¹_{v,ε} is closed, by (3.9) and (3.10) we obtain that z = y. Thus P_ε is continuous.
So P_ε is a completely continuous operator in C(R) and it maps the ball S_A into itself. By Schauder's theorem (see [10, Chapter XVI]), P_ε has a fixed point y in S_A, i.e., P_ε(y) = y, and y satisfies the equality −y″ + (r(x, y) + ε(1 + x²))y′ + q(x, y)y = f(x).
‖y_j‖₁ ≤ C₅‖f‖₁. (3.11)

Let (a, b) be an arbitrary finite interval. It is known that the space W₁²(a, b) is compactly embedded in L₁(a, b). Therefore, by virtue of (3.11), we can select a subsequence {ỹ_j}_{j=1}^∞ of {y_j}_{j=1}^∞ ⊂ W₁²(a, b) such that ‖ỹ_j − y‖_{L₁(a,b)} → 0 as j → ∞. By Definition 1.1, y is a solution of equation (1.1). By Lemma 2.3, we obtain that the estimate (1.5) holds for y.

Remark 3.1. The condition (1.3) is natural. If (1.3) does not hold, it follows from Lemma 2.1 that the domain D(L) of L is not included in L₁.
On the Growth of Some Functions Related to z(n)
The order of appearance z : Z_{>0} → Z_{>0} is an arithmetic function related to the Fibonacci sequence (F_n)_n. This function is defined as the smallest positive integer solution of the congruence F_k ≡ 0 (mod n). In this paper, we provide lower and upper bounds for the functions ∑_{n≤x} z(n)/n, ∑_{p≤x} z(p) and ∑_{p^r≤x} z(p^r).
Introduction
Perhaps the most important of the binary recurrences is the Fibonacci sequence (F_n)_n. This sequence starts with F_0 = 0 and F_1 = 1 and satisfies the second-order recurrence relation F_{n+2} = F_{n+1} + F_n (for n ≥ 0). A well-known explicit formula for the nth Fibonacci number is the Binet formula F_n = (α^n − β^n)/√5, where α := (1 + √5)/2 and β := (1 − √5)/2. It follows from this formula that the estimates α^{n−2} ≤ F_n ≤ α^{n−1} hold for all n ≥ 1.
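Both the Binet formula and the bounds α^(n−2) ≤ F_n ≤ α^(n−1) are easy to check numerically, as in the following sketch:

```python
import math

alpha = (1 + math.sqrt(5)) / 2
beta = (1 - math.sqrt(5)) / 2

def fib(n):
    """n-th Fibonacci number, with F_0 = 0 and F_1 = 1."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

for n in range(1, 40):
    F = fib(n)
    # Binet formula (rounding absorbs the floating-point error).
    assert F == round((alpha**n - beta**n) / math.sqrt(5))
    # The bounds alpha^(n-2) <= F_n <= alpha^(n-1) hold for all n >= 1.
    assert alpha**(n - 2) <= F <= alpha**(n - 1)
print("Binet formula and bounds verified for n = 1..39")
```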
The study of the divisibility properties of Fibonacci numbers has always been a popular area of research. For example, it is still an open problem to decide whether there are infinitely many primes in that sequence. In order to study such Diophantine problems, the arithmetic function z : Z_{>0} → Z_{>0} was defined by setting z(n) = min{k ≥ 1 : n | F_k}. This function is called the order of appearance in the Fibonacci sequence. For more results on z(n), see [1] and references therein.
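By this definition, z(n) can be computed by iterating the Fibonacci recurrence modulo n until the residue 0 appears (a zero must appear, since the Fibonacci sequence is periodic modulo n). A minimal sketch:

```python
def z(n):
    """Order of appearance: the smallest k >= 1 with n | F_k."""
    if n == 1:
        return 1
    a, b, k = 0, 1, 1  # F_0 mod n, F_1 mod n
    while b != 0:
        a, b = b, (a + b) % n
        k += 1
    return k

# Examples discussed in the text: z(2255) = 20, and z(n) = 2n for n = 6 * 5^k.
print(z(2255))      # → 20
print(z(6), z(30))  # → 12 60
```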
In 1878, Lucas ([2], p. 300) established that z(n) is well defined and, in 1975, J. Sallé [3] proved that z(n) ≤ 2n for all positive integers n. This is the sharpest upper bound for z(n), since, for example, z(n) = 2n if and only if n = 6 · 5^k, for k ≥ 0. (1) However, apart from these cases this upper bound is very weak. For instance, z(2255) = 20 < 10⁻² · 2255. In fact, Marques [4] gave sharper upper bounds for z(n) for all positive integers n ≠ 6 · 5^k. These upper bounds depend on the number of distinct prime factors of n, denoted by ω(n).
In the mainstream of Analytic Number Theory, we have the three functions ϑ(x) = ∑_{p≤x} log p, ψ(x) = ∑_{n≤x} Λ(n) and π(x) = ∑_{p≤x} 1, where Λ(n) is the well-known von Mangoldt function, defined as log p if n = p^r for some prime number p and r ≥ 1, and 0 otherwise (see, e.g., [5,6]). The functions ϑ(x) and ψ(x) are called the first and the second Chebyshev functions, respectively. Note that ψ(x) can be rewritten as ∑_{p^r≤x} log p.
Here (and in all that follows) ∑_{n≤x}, ∑_{p≤x} and ∑_{p^r≤x} mean that the sum is taken over all positive integers, all prime numbers and all prime powers belonging to the interval [1, x], respectively. Probably the main importance of the functions ψ and ϑ lies in the proof of the celebrated Prime Number Theorem, which states that π(x) ∼ x/log x, where π(x) = ∑_{p≤x} 1 is the prime counting function. Indeed, the prime number theorem and the statements ϑ(x) ∼ x and ψ(x) ∼ x are all equivalent. Here f(x) ∼ g(x) (asymptotic equivalence) means that f(x)/g(x) tends to 1 as x → ∞ (equivalently, f(x) = g(x) + o(g(x)), where o(g(x)) means a function h(x) with lim_{x→∞} h(x)/g(x) = 0). Actually, one has the following stronger fact (2). Here we shall use the Landau symbols in their usual meaning, i.e., f ≍ g means that f ≪ g and g ≪ f. Another function of great interest is the harmonic function H(x) = ∑_{n≤x} 1/n, whose image for x ∈ Z_{>0} is called the xth harmonic number and denoted by H_x. These numbers gained much attention through their relation to the Riemann hypothesis. In fact, the Riemann hypothesis is equivalent to proving that d(n) ≤ H_n + e^{H_n} log H_n for all n ≥ 1, where d(n) is the sum of the positive divisors of n (see [7]). We observe that the harmonic series, i.e., lim_{x→∞} H(x), is a well-studied example of a divergent series. In fact, it holds that H(x) ∼ log x, which agrees with its very slow divergence.
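Numerically, ϑ(x)/x and ψ(x)/x approach 1 quite visibly even for modest x, consistent with the Prime Number Theorem. A brute-force sketch (a simple sieve, not an efficient implementation):

```python
import math

def primes_up_to(n):
    """Sieve of Eratosthenes."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(n**0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(sieve[p * p :: p]))
    return [p for p in range(2, n + 1) if sieve[p]]

def theta(x, primes):
    """First Chebyshev function: sum of log p over primes p <= x."""
    return sum(math.log(p) for p in primes if p <= x)

def psi(x, primes):
    """Second Chebyshev function: sum of log p over prime powers p^r <= x."""
    total = 0.0
    for p in primes:
        pk = p
        while pk <= x:
            total += math.log(p)
            pk *= p
    return total

ps = primes_up_to(10_000)
print(theta(10_000, ps) / 10_000, psi(10_000, ps) / 10_000)  # both close to 1
```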
In this paper, we are interested in studying the growth of the following Fibonacci versions of H(x), ϑ(x) and ψ(x): the functions Z_H(x) = ∑_{n≤x} z(n)/n, Z_ϑ(x) = ∑_{p≤x} z(p) and Z_ψ(x) = ∑_{p^r≤x} z(p^r) (see Figure 1), for a positive real x. First, observe that since 1 ≤ z(n) ≤ 2n, some trivial estimates hold immediately. However, these bounds neglect the contribution of z(n) (which is much bigger than 1 and much smaller than 2n in almost all cases). In fact, by taking z(n) into account, we obtain sharper bounds, and with extra effort we can improve them further. Since the number of prime powers in [1, x] is bigger than π(x), a similar direct inequality (the one for Z_ϑ(x)) could be derived for Z_ψ(x). However, by using the behavior of z(p^r), we can obtain better estimates, such as Theorem 3. Note that even with a larger number of terms in the sum Z_ψ(x), its bounds are the same (in order) as the ones for Z_ϑ(x) (Theorem 2). The explanation for this follows from the fact that the contribution of higher powers is small: the number of powers of p (for example) belonging to [1, x] is ⌊log x / log p⌋. In other words, this amount is almost negligible (compared with x, in terms of order).
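The three sums can be tabulated directly from the definitions for modest x. The brute-force sketch below (all helper routines are inlined so the block is self-contained) also checks the elementary lower bound z(n) > log n / log α used later in the proofs:

```python
import math

def z(n):
    """Order of appearance of n in the Fibonacci sequence (smallest k with n | F_k)."""
    if n == 1:
        return 1
    a, b, k = 0, 1, 1
    while b != 0:
        a, b = b, (a + b) % n
        k += 1
    return k

def primes_up_to(n):
    """Primes <= n by trial division (fine for small n)."""
    return [p for p in range(2, n + 1)
            if all(p % d for d in range(2, int(p**0.5) + 1))]

x = 500
alpha = (1 + math.sqrt(5)) / 2

Z_H = sum(z(n) / n for n in range(1, x + 1))
ps = primes_up_to(x)
Z_theta = sum(z(p) for p in ps)
Z_psi = 0
for p in ps:
    pk = p
    while pk <= x:          # all prime powers p^r <= x
        Z_psi += z(pk)
        pk *= p

# Elementary lower bound: z(n) > log n / log alpha, hence
# Z_H(x) > sum over 2 <= n <= x of log n / (n log alpha).
lower = sum(math.log(n) / (n * math.log(alpha)) for n in range(2, x + 1))
print(Z_H, lower, Z_theta, Z_psi)
```

Note that Z_ψ(x) ≥ Z_ϑ(x) automatically, since the prime-power sum contains all the r = 1 terms.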
In a few words, the proofs of the results combine some new results in Number Theory (sharper upper bounds for z(n) due to Marques) and classical ones (due to Abel, Sathé and Selberg).
Auxiliary Results
In this section, we present some tools which will be very useful in the proofs. We start with some results due to Marques [4], which will be very helpful in our proof; we state them as lemmas (in what follows, the 2-adic valuation of n is ν₂(n) = max{k ≥ 0 : 2^k | n}).
where, as usual, (a/q) denotes the Legendre symbol of a with respect to a prime q > 2.
Lemma 2. Let n be an odd integer number with ω(n) ≥ 2; then Lemma 3. Let n be an even integer number with ω(n) ≥ 2; it holds that if ω(n) = 2 and 5 | n; 2n, if ω(n) = 2 and 5 ∤ n; if ω(n) = 2 and 5 | n; n, if ω(n) = 2 and 5 ∤ n; The next lemma is a powerful result in analytic number theory related to positive integers with a fixed number of distinct prime factors.
Lemma 4 (Sathé-Selberg Formula). For any positive constant A, we have
In the previous statement, Γ(z) = ∫₀^∞ x^{z−1} e^{−x} dx (for x > 0) is the well-known Gamma function. The proof of Lemma 4 can be found in [8,9]. Our last tool is a very useful formula due to Abel which provides an interplay between a discrete sum and an integral (continuous sum). More precisely:

Lemma 5 (Abel's Summation Formula). Let (a_n)_n be a sequence of real numbers and define its partial sum A(x) := ∑_{n≤x} a_n. For a real number x > 1, let f be a continuously differentiable function on [1, x]. Then ∑_{n≤x} a_n f(n) = A(x)f(x) − ∫₁^x A(t)f′(t) dt.
Remark 1.
We remark that, throughout what follows, the implied constants in ≪ and ≫ can be made explicit. Here, we decided to use asymptotic bounds in order to make the text more readable. However, we shall provide the explicit inequalities for the convenience of the reader (they can be found in [10], for example).
As usual, from now on we use the well-known notation [a, b] = {a, a + 1, . . . , b − 1, b}, for integers a < b. Now we are ready to deal with the proof of our results.
The Proof of Theorem 1
Since, by definition, n | F_{z(n)}, then n ≤ F_{z(n)} ≤ α^{z(n)−1}, and so z(n) > log n/log α. Now we use Lemma 5 with a_n = 1/n and f(x) = log x. Since H(x) = log x + O(1), it follows that Z_H(x) ≫ (log x)². For the second part, we use Lemmas 1, 2 and 3 to derive that z(n) ≤ 7 · (2/3)^{ω(n)} n for all n > 1. First, let us write Z_H(x) as a sum stratified by ω(n), where h(x) = max{ω(t) : t ≤ x}. Now, we use Lemma 4 to deal with the first sum on the right-hand side. Since G(z) converges uniformly and absolutely on any bounded set, we have max_{z∈[0,1]} |G(z)| ≤ C for some positive constant C. By Lemma 4 with A = 1, we get |G(z_k)| ≤ C (for z_k := (k − 1)/log log x < 1). For the second sum on the right-hand side of (6), we use that #P_k(x) ≤ x, together with log log x + 1 > log log x. Since 3/2 > ∛e, by combining (6), (7) and (8) we obtain the desired result.
The Proof of Theorem 2
By the Prime Number Theorem, we have that ϑ(x) ∼ x; in particular, ϑ(x) ≫ x. Since ϑ(x) = ∑_{p≤x} log p and z(p) > log p/log α, we obtain Z_ϑ(x) ≥ ϑ(x)/log α ≫ x. For the second part, we use that z(p) ≤ p + 1 ≤ 3p/2.
The Proof of Theorem 3
Note that, by Theorem 2, we have Z_ψ(x) ≥ Z_ϑ(x) ≫ x.
For the second part, since there exist exactly ⌊log x/log p⌋ powers of p in the interval [1, x], we can write Z_ψ(x) as a double sum over primes p ≤ x and exponents r ≤ log x/log p. By using Lemma 1(ii), we get the required upper bound, which completes the proof.
Conclusions
In this paper, we studied some problems related to the order (of appearance) in the Fibonacci sequence, denoted by z(n). This arithmetic function plays an important role in the comprehension of some Diophantine problems involving Fibonacci numbers (the most important being the open problem about the existence of infinitely many Fibonacci prime numbers). The problems are related to the growth of Fibonacci versions of well-known number-theoretic functions (related to the Prime Number Theorem): the first and second Chebyshev functions, ϑ(x) = ∑_{p≤x} log p and ψ(x) = ∑_{p^r≤x} log p, and the harmonic function H(x) = ∑_{n≤x} 1/n. These Fibonacci-like functions are defined as Z_ϑ(x) = ∑_{p≤x} z(p), Z_ψ(x) = ∑_{p^r≤x} z(p^r) and Z_H(x) = ∑_{n≤x} z(n)/n. In particular, we find effective bounds for these three functions. The proofs combine elementary facts related to z(n) (such as Marques' upper bounds) with some deep tools from Analytic Number Theory (such as Abel's summation formula and the Sathé-Selberg formula).
Three-dimensional hydrogen-bonded supramolecular assembly in tetrakis(1,3,5-triaza-7-phosphaadamantane)copper(I) chloride hexahydrate
The structure of the title compound, [Cu(PTA)4]Cl·6H2O (PTA is 1,3,5-triaza-7-phosphaadamantane, C6H12N3P), is composed of discrete monomeric [Cu(PTA)4]+ cations, chloride anions and uncoordinated water molecules. The CuI atom exhibits tetrahedral coordination geometry, involving four symmetry-equivalent P–bound PTA ligands. The structure is extended to a regular three-dimensional supramolecular framework via numerous equivalent O—H⋯N hydrogen bonds between all solvent water molecules (six per cation) and all PTA N atoms, thus simultaneously bridging each [Cu(PTA)4]+ cation with 12 neighbouring units in multiple directions. The study also shows that PTA can be a convenient ligand in crystal engineering for the construction of supramolecular architectures.
This work has been supported by the FCT, Portugal, and its POCI 2010 programme (FEDER funded).
Supplementary data and figures for this paper are available from the IUCr electronic archives (Reference: DN2329).
PTA and its derivatives (see Phillips et al., 2004) can be convenient building blocks for the construction of polymeric networks (Lidrissi et al., 2005; Frost et al., 2006; Mohr et al., 2006) due to several potentially available coordination sites, the protonation ability of the N atoms, and a strong affinity for hydrogen bonding. Nevertheless, the use of PTA ligands in crystal design and engineering has remained little explored. Hence, in pursuit of our recent studies directed towards the synthesis of new copper compounds, including PTA complexes and various coordination polymers, supramolecular frameworks and host-guest systems with other ligands (Karabach et al., 2006; Di Nicola et al., 2007; Kirillov et al., 2008), we have prepared compound (I), whose crystal structure and supramolecular features are reported herein.
The moiety formula of (I) consists of the [Cu(PTA)4]+ cation (Fig. 1), one chloride anion and six symmetry-equivalent crystallization water molecules. The [Cu(PTA)4]+ unit possesses a very high symmetry, being generated from only five symmetry-nonequivalent atoms (Cu1, P1, N1, C1 and C2). The CuI atom lies on a site of -43m symmetry and its coordination environment is filled by four equivalent P-bound PTA ligands, arranged in a perfect tetrahedral coordination geometry with the corresponding P-Cu-P angles of 109.47 (2)°. The Cu-P bond distances of 2.2598 (6) Å, as well as other bonding parameters within the cage-like PTA cores, are comparable to those reported for tetrahedral PTA complexes of Cu, Au (Forward et al., 1996), Pt (Darensbourg et al., 1999) and Ni (Darensbourg et al., 1997).
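As an aside, the quoted P-Cu-P angle of 109.47° is the ideal tetrahedral angle, arccos(−1/3), which follows from the geometry of four equivalent bonds directed at the vertices of a regular tetrahedron. A one-line check:

```python
import math

# Ideal tetrahedral angle: arccos(-1/3).
angle = math.degrees(math.acos(-1 / 3))
print(f"{angle:.2f} degrees")  # → 109.47 degrees
```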
An interesting feature of (I) is the extensive intermolecular hydrogen bonding that arises from only one type of O-H···N hydrogen bond (Table 1). Each crystallization water molecule (O10) repeatedly acts as a double hydrogen-bond donor, bridging two N1 atoms of two different [Cu(PTA)4]+ units. This results in the extensive interlinkage, in multiple directions, of every monomeric copper unit with twelve neighbouring ones (Fig. 2), leading to the formation of a regular three-dimensional supramolecular framework (Fig. 3). That framework has a shortest Cu···Cu separation of 13.977 (1) Å and possesses repeating channels (ca 4.8 Å in diameter) filled by water molecules.
Experimental
To an ethanolic solution (5 ml) of CuCl2 (27 mg, 0.20 mmol) was added solid PTA (126 mg, 0.80 mmol). The mixture was refluxed for 3 h, resulting in a white suspension. This was filtered off and the colourless filtrate was left to evaporate in a beaker in air at ambient temperature. A small crop of colourless X-ray-quality crystals of (I) formed within several days. 1H NMR data are similar to those reported for [Cu(PTA)4]NO3.
Refinement
All H atoms attached to C atoms were fixed geometrically and treated as riding with C-H = 0.97 Å and U iso (H) = 1.2U eq (C).
H atoms of the water molecule were located in difference Fourier maps and included in the subsequent refinement using a restraint [O-H = 0.82 (1) Å] with Uiso(H) = 1.5Ueq(O). In the final stage of refinement, they were treated as riding on the O atom.
Plateau reduction by drainage divide migration in the Eastern Cordillera of Colombia defined by morphometry and 10Be terrestrial cosmogenic nuclides
Catchment‐wide erosion rates were defined using 10Be terrestrial cosmogenic nuclides for the Eastern Cordillera of the Colombian Andes to help determine the nature of drainage development and landscape evolution. The Eastern Cordillera, characterized by a smooth axial plateau bordered by steep flanks, has a mean erosion rate of 11 ± 1 mm/ka across the plateau and 70 ± 10 mm/ka on its flanks, with local high rates >400 mm/ka. The erosional contrast between the plateau and its flanks was produced by the increase in the orogen regional slope, derived from the progressive shortening and thickening of the Eastern Cordillera. The erosion rates together with digital topographic analysis show that the drainage network is dynamic and confirms the view that drainage divides in the Eastern Cordillera are migrating towards the interior of the mountain belt resulting in progressive drainage reorganization from longitudinal to transverse‐dominated rivers and areal reduction of the Sabana de Bogotá plateau. Copyright © 2016 John Wiley & Sons, Ltd.
Introduction
Rift inversion and/or crustal thickening result in dramatic topography and drainage network changes in mountain belts that are mainly controlled by major tectonic structures (Van der Beek et al., 2002). The evolution and development of drainage systems in active orogens has attracted much attention recently, particularly with regard to transverse and longitudinal drainages, and drainage divides (Willett et al., 2001; Pelletier, 2004; Bonnet, 2009; Castelltort et al., 2012; Perron et al., 2012; Goren et al., 2014; Viaplana et al., 2015). Clear examples of drainage rearrangement in relation to progressive mountain building have been described in the inverted rifts of the High Atlas of Morocco (Babault et al., 2012) and the Eastern Cordillera of Colombia (Babault et al., 2013; Struth et al., 2015). These studies highlight that drainage divides are dynamic features that progressively migrate and result in river capture events. Providing evidence for differential erosion rates on either side of a drainage divide adds much credence to developing and substantiating drainage evolution models for mountain belts. As such, the side of a mountain belt with the greatest erosion will progressively migrate and capture drainages from the side with the least erosion. Theoretically, a dynamic drainage network will evolve towards a steady state, maintaining a steady drainage network and stationary drainage divides (Howard, 1965).
The Eastern Cordillera of Colombia is an example of an orogen with a dynamic drainage network (Babault et al., 2013;Struth et al., 2015). This N-S oriented orogenic belt has two topographic domains: (i) an axial zone with low relief associated with longitudinal rivers that have gentle gradients (the Sabana de Bogotá); and (ii) high-relief flanks with steeper transverse rivers. These domains are separated by what we refer to as the eastern and western main drainage divides. Recent studies have suggested that the increase of regional slopes due to progressive crustal thickening results in fluvial reorganization from longitudinal to transverse dominated drainages in the Eastern Cordillera (Babault et al., 2013;Struth et al., 2015). On the basis of a morphometric analysis, field observations and a summary of paleodrainage data, these studies conclude that drainage reorganization takes place by progressive drainage divide migration toward the axial zone of the orogen by a step-by-step series of river captures.
In this study, we build on the work of Struth et al. (2015) by using 10 Be terrestrial cosmogenic nuclides (TCNs) in fluvial sands to determine catchment-wide erosion rates for the Eastern Cordillera and to investigate the contrasting erosion dynamics between the Sabana de Bogotá axial plateau and the flanks of the Eastern Cordillera. TCN analysis is combined with a comparative analysis of steepness index, specific stream power and the integral of the drainage area (χ values). We also use 10 Be surface exposure dating of boulders on a fluvial terrace to estimate a local river incision rate.
Geological, morphological and climatological setting
The Eastern Cordillera of Colombia is an inverted continental rift in the northern Andes. The cordillera is composed of Precambrian-Paleozoic basement and a succession of Mesozoic and early Cenozoic sedimentary rocks within a doubly-verging thrust system (Julivert, 1970;Colletta et al., 1990;Cooper et al., 1995; Figure 1).
The Eastern Cordillera is flanked on both sides by lowlands composed of Cenozoic deposits, which infill the Magdalena Valley and the Llanos foreland basins at elevations of 200-300 m above mean sea level (asl; Figure 1(A)). The flanks of the Eastern Cordillera are dominated by alternating Cretaceous sandstone and shale formations. Isolated Precambrian-Paleozoic basement massifs with low- and medium-grade metamorphic rocks (mainly phyllite and schist) are present in the Quetame and Floresta massifs (Segovia, 1965; Ulloa and Rodríguez, 1979; Ulloa and Rodríguez, 1982; Parra et al., 2009a; Figure 1). [From the Figure 1 caption (Teixell et al., 2015): total shortening along this section is 82 km, consistent with values previously reported by Tesón et al. (2013) for the segment of the Eastern Cordillera covered by the map.] The flanking structures include large-displacement faults (Mora et al., 2008 and references therein; Figure 1(B)). The local relief in the Sabana de Bogotá plateau is small because up to 600 m of unconformable Pliocene-Quaternary fluviolacustrine deposits partially fill the synclinal depressions (Tilatá and Sabana formations; Julivert, 1963; Andriessen et al., 1993; Torres et al., 2005). A moderate amount of orogenic shortening, 25 to 30%, across the Eastern Cordillera has been calculated along serial transects by Tesón et al. (2013) and Teixell et al. (2015).
There is a strong precipitation gradient across the western divide, whereas no precipitation gradient exists across the eastern divide (Figure 2). Precipitation is greatest (>3000 mm/year) near Charalá, in the eastern and western foothills and in the Quetame Massif.
Summary of the deformation and paleodrainage evolution of the Eastern Cordillera

Paleogene data show a western and southwestern source for the Central Cordillera foreland basin sediments, which currently include the early successions of deposits in the Magdalena Valley and Llanos basins, and the intervening Eastern Cordillera. Sediments moved NNE in the Central Cordillera foreland basin following the regional slope and the contemporaneous structures within the basin (Cooper et al., 1995; Bayona et al., 2008; Silva et al., 2013). The drainage was longitudinal, parallel to the current Eastern Cordillera, following growing folds that thermochronological data show started to form during the Eocene (Parra et al., 2009b; Mora et al., 2010; Moreno et al., 2011; Saylor et al., 2011; Caballero et al., 2013a, 2013b; Silva et al., 2013).
The Magdalena and the Llanos basins became disconnected by the uplift of the Eastern Cordillera as deformation propagated eastward during the Late Oligocene to Early Miocene (Parra et al., 2009b; Mora et al., 2010). This is supported by provenance data indicating that the Eastern Cordillera developed into an effective topographic barrier that separated the Central Cordillera from the Llanos basin before the Mid to Late Miocene. The axial Eastern Cordillera was likely a closed basin south of 6°N during the late Oligocene-Miocene.
Base level started to rise in the Middle Magdalena Valley basin during the Late Oligocene to Mid Miocene, forcing its rivers to flow to the east, across the present location of the Eastern Cordillera, into the Llanos basin (Gómez et al., 2005). The drainage in the Middle Magdalena Valley basin returned to flow to the north due to the continued uplift of the cordillera during the Mid-Late and Late Miocene. Since the Late Miocene, the paleoflow pattern in the eastern foreland of the Eastern Cordillera is characterized by a transverse drainage reflecting an eastward direction and providing sediments for the Llanos and Middle Magdalena Valley basins. Most external thrust sheets of the Cordillera are strongly deformed in recent times, and there are very young apatite fission track ages for the Quetame Massif along its eastern flank (0-3 Ma; Figure 3). This attests to strong exhumation across the Eastern Cordillera margins in the latest Neogene and Quaternary (Mora et al., 2008).

Struth et al. (2015) argue that the Eastern Cordillera is experiencing fluvial drainage network reorganization by drainage divide migration. Longitudinal drainages created in the early stages of drainage development paralleled the structural grain of the growing tectonic orogen. The potential energy and power of transverse rivers was enhanced as the regional slope progressively increased during crustal thickening. This resulted in headward erosion causing drainage divide migration towards the Sabana plateau and capture of longitudinal streams, ultimately leading to drainage network reorganization.
Analysis of the digital drainage network
We used the digital elevation model (DEM) SRTM90v4, which has a horizontal resolution of 90 m (Jarvis et al., 2008), to analyze the drainage network of the Eastern Cordillera. The DEM was corrected in narrow areas that had low resolution using elevations from Instituto Geográfico Agustín Codazzi (IGAC) 1:100 000 topographic maps. Various geomorphic parameters provide information about erosion in channel networks and therefore about the dynamism of the drainage. The most important parameters that we examine are stream power, steepness index and χ. We calculated these parameters to characterize the contrast between the topographic domains of the Sabana de Bogotá plateau and its flanks, and to identify which one best correlates with the TCN-derived erosion rates. We propose that a correlation between geomorphic parameters and TCN-derived erosion rates may allow a first estimation of erosion rates in areas devoid of TCN data.

Stream power relates to the energy rate per unit distance along a river (Bagnold, 1966) and reflects the incision power of a river into bedrock under detachment-limited conditions (Howard and Kerby, 1983; Howard et al., 1994; Whipple and Tucker, 1999; Kirby and Whipple, 2001). River incision is calculated according to the stream power model expressed as:

dz/dt = U - K A^m S^n (1)

where U represents the bedrock uplift at x, K is the erosional efficiency, A is local drainage area and S is the channel slope. The positive exponents m and n describe the relative dependency of stream erosion rates on A and S.
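As a minimal sketch of the detachment-limited stream power model described above, the following uses purely illustrative parameter values (K, A, S and the exponents are assumptions, not values calibrated for the Eastern Cordillera):

```python
# Detachment-limited stream power: incision E = K * A^m * S^n,
# and bed elevation change dz/dt = U - E. All numbers illustrative.

def incision_rate(K, A, S, m=0.45, n=1.0):
    """Stream power incision rate E = K * A^m * S^n."""
    return K * (A ** m) * (S ** n)

def elevation_change(U, K, A, S, m=0.45, n=1.0):
    """Rate of bed elevation change dz/dt = U - E."""
    return U - incision_rate(K, A, S, m, n)

# Example: a steep flank reach vs. a gentle plateau reach with the
# same drainage area (1e8 m^2) and the same erodibility K.
K, A = 1e-5, 1e8
flank = incision_rate(K, A, S=0.10)
plateau = incision_rate(K, A, S=0.01)
assert flank > plateau  # steeper channels incise faster for equal A and K
```

With n = 1, the ratio of the two rates equals the ratio of the slopes, which is why steep transverse flank rivers outpace gentle longitudinal plateau rivers in this framework.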
Analysis of slope-area has frequently been used to reveal erosion trends in channel networks, defined as the steepness index or Ksn (Kirby and Whipple, 2001; Kirby, 2003; Snyder et al., 2003; Wobus et al., 2006; Dibiase et al., 2010). However, local slope calculations undertaken for regions that have low DEM resolution, such as the Eastern Cordillera of Colombia, provide scattered and noisy results. For this reason, we calculate the channel slopes using the χ gradient, referred to as Mx (Mudd et al., 2014). To calculate Mx, we use the river profile elevation instead of the slope as the dependent variable, against χ as the independent variable. This approach produces more reliable results. We plot values of Mx, i.e. the slope in χ-elevation space (Mudd et al., 2014), a parameter related to the ratio between erosion rate and erodibility, as a proxy to identify the distribution and magnitude of erosion. We defined the best concavity of the river profiles based on AICc-collinearity tests and χ-plots following the method of Mudd et al. (2014) to calculate Mx values. The AICc is a statistical method that selects a model that balances goodness of fit against model complexity (Akaike Information Criterion, AICc; Akaike, 1974; Hurvich and Tsai, 1989; Burnham and Anderson, 2002). We extract the AICc-collinearity test and the χ plots for each basin based on iteration through a range of concavity values for the main channel and tributaries to define the best-fit concavity. The concavity with the minimum AICc value is the best-fitting one using this method. The mean calculated concavity for all the basins is 0.45 (see SD1). We compared the Mx values between different catchments by fixing the obtained concavity.
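A minimal way to see what Mx measures, assuming the χ coordinate has already been computed for a profile, is an ordinary least-squares fit of elevation against χ (the profile below is synthetic, not an Eastern Cordillera channel):

```python
# Mx is the gradient of a channel profile in chi-elevation space.
# Here it is estimated with a plain least-squares slope.

def fit_mx(chi, z):
    """Least-squares slope of elevation z against the chi coordinate."""
    n = len(chi)
    mean_chi = sum(chi) / n
    mean_z = sum(z) / n
    num = sum((c - mean_chi) * (e - mean_z) for c, e in zip(chi, z))
    den = sum((c - mean_chi) ** 2 for c in chi)
    return num / den

# A perfectly linear chi-plot (the steady-state prediction):
# z = z_base + Mx * chi, built here with Mx = 0.8.
chi = [0.0, 100.0, 200.0, 300.0, 400.0]
z = [500.0 + 0.8 * c for c in chi]
print(fit_mx(chi, z))  # recovers the 0.8 used to build the profile
```

A high fitted Mx flags steep, rapidly eroding reaches (flanks), a low Mx flags gentle reaches (plateau), independently of noisy pixel-scale slope estimates.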
Stream power theory predicts that river profiles will have a linear χ-plot and Mx proportional to the erosion rates, assuming, first, a steady-state condition, where rock uplift is balanced by erosion, and second, that erosion, erodibility and uplift are constant through time and space (Royden and Taylor Perron, 2013; Mudd et al., 2014). In reality, U and K can be variable in space and time, being dependent upon the tectonic and climatic history, and rock type. To address this, we calculated the concavity for all the basins following the method of Mudd et al. (2014), extracting the steepness index and knickpoints and locating all the known active structures as well as highly erodible lithologies. In the case of stepped channel profiles associated with spatial or temporal changes in uplift rates, we used the collinearity test to identify the best concavity for each catchment (Mudd et al., 2014). This technique allows the magnitude and distribution of the erosion rates to be identified. We extract the χ-map following the method of Willett et al. (2014), with a common base level determined by the elevation of the most external tectonic structures (300 m asl) and with a critical drainage area of 1 km2, to extract information about the dynamics of the catchments. The main focus of this analysis rests on mapping differences in χ-coordinate values across drainage divides. Similar χ values on both sides of a drainage divide would suggest that the region is in equilibrium, while large differences in χ values across the drainage divide imply that river networks are in disequilibrium, where divide migration or river capture is likely to occur. Drainage divides generally migrate towards the higher χ values to achieve equilibrium, and hence catchments with high values are prone to capture and may eventually disappear.
We extract the χ-map for the entire central segment of the Eastern Cordillera as a proxy for the dynamics of the drainage divides using a modified version of χ that includes a correction factor for precipitation (Yang et al., 2015), such that the χ value is defined by the following equation:

χ = ∫ from x_b to x of [(P0 A0) / (P A)]^(m/n) dx' (2)

where P0 and A0 are arbitrary scaling factors for the precipitation rate and drainage area, respectively, P is precipitation rate, A is the upstream drainage area and m and n are empirical and non-integer constants.
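The precipitation-weighted χ integral above can be sketched numerically with a trapezoidal rule. The profile data below are synthetic, and the concavity m/n = 0.45 follows the best-fit value quoted earlier; the scaling factors are arbitrary, as the text notes:

```python
# Numerical sketch of the precipitation-weighted chi coordinate:
# chi(x) = integral from the base level x_b to x of
# (P0*A0 / (P(x')*A(x')))^(m/n) dx'.

def chi_profile(x, A, P, A0=1.0e6, P0=1.0, mn=0.45):
    """Trapezoidal integration of the chi integrand upstream from base level.

    x : distances upstream of the outlet (m), increasing upstream
    A : upstream drainage area at each x (m^2)
    P : precipitation rate at each x (same units as P0)
    """
    integrand = [((P0 * A0) / (p * a)) ** mn for p, a in zip(P, A)]
    chi = [0.0]
    for i in range(1, len(x)):
        dx = x[i] - x[i - 1]
        chi.append(chi[-1] + 0.5 * (integrand[i] + integrand[i - 1]) * dx)
    return chi

# With uniform precipitation, Equation (2) reduces to the standard
# chi integral; chi grows monotonically upstream.
x = [0.0, 1000.0, 2000.0, 3000.0]
A = [5.0e7, 3.0e7, 2.0e7, 1.0e7]   # drainage area shrinks upstream
P = [1.0, 1.0, 1.0, 1.0]
print(chi_profile(x, A, P))
```

Comparing χ at channel heads on either side of a divide (as in the analysis above) then amounts to comparing the accumulated integrals from a common base level.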
We also calculated an averaged specific stream power (SSP, Equation (3); Knighton, 1999) for each catchment following the method described in Godard et al. (2012), such that:

SSP = d g Ksn Q / W (3)

where d is density, g is acceleration due to gravity, Ksn is the steepness index slope, Q is discharge and W is the channel width.
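Using the variables listed for Equation (3), with Ksn taken as the slope term, the calculation can be sketched as follows (all input values are illustrative assumptions, not measured Eastern Cordillera data):

```python
def specific_stream_power(d, g, ksn, q, w):
    """Specific stream power per Equation (3): SSP = d * g * Ksn * Q / W.

    With SI inputs (density in kg/m^3, g in m/s^2, discharge in m^3/s,
    width in m) and Ksn as a dimensionless slope term, the result is
    in W/m^2.
    """
    return d * g * ksn * q / w

# Illustrative comparison of a steep flank reach and a gentle plateau
# reach carrying the same discharge in a channel of the same width.
flank = specific_stream_power(1000.0, 9.81, 0.05, 20.0, 10.0)
plateau = specific_stream_power(1000.0, 9.81, 0.005, 20.0, 10.0)
print(flank, plateau)  # the steeper reach carries 10x the SSP
```

This mirrors the pattern reported below: low SSP over the low-gradient axial plateau, high SSP on the steep cordillera flanks.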
Terrestrial cosmogenic nuclide analysis

Sediment samples were collected for 10 Be TCN analysis to investigate erosional contrasts between the Sabana plateau and its flanks, including the main streams and major tributaries in the Guayuriba and Turmequé (representing the eastern flank of the Cordillera), Bogotá (Sabana de Bogotá plateau) and upper Suárez catchments. We followed the sampling methods of Carretier et al. (2015), who collected fluvial sand from subcatchments to minimize issues associated with local lithologic variations. Rivers in the eastern flank and plateau have their headwaters on the same drainage divide, which allows us to make a direct comparison between them. We also analyzed four 10 Be TCN samples in the lower Suárez and Chicamocha catchments (Figure 3, Table I) to examine the effect of a different vegetation cover, bedrock erodibility and climate. These four samples are not included in the comparative analysis between the plateau domain and the eastern flank.
Larger sampled catchment areas have higher likelihoods of including mixed fluvial signatures. In such cases, the erosion rate obtained may not be representative of all the reaches of that river, which may have experienced a diverse reorganization history.
Catchments including a relict flat area in their upper parts (Chicamocha, Suárez, Bogotá and Turmequé) reflect two different erosion regimes separated by a knickpoint (white stars in Figure 6). As such, the upper part of the catchment, above the main knickpoints, should have lower erosion rates (related to lower relief, gentler slopes and lower Mx values) compared with those downstream. The Turmequé catchment illustrates this well, with an ancient plateau domain above the main knickpoint where the erosion rates are low (LUCN-45, 26) and higher erosion rates in its lower reaches (LUCN-27, 28). We only sampled the upper part of the Bogotá catchment (LUCN-19, 30, 31, 33) above the Tequendama Falls (see Figure 4 for location) to define the amount of erosion in its lowest-erosion-rate domain. In the Suárez catchment, we sampled its gently sloping upper reaches (LUCN-46, 50) and the steeper lower reaches below the knickpoint (LUCN-53, 55). Only one sample (LUCN-53) was collected in the Chicamocha valley catchment, located downstream of the main knickpoint.
We collected ~1 kg of sand for each sample. Catchment areas for each sample were large (>100 km2), as recommended by Niemi et al. (2005) and Yanites et al. (2009), to provide a good representation of the 10 Be TCN inventory and the erosion rates. We use the erosion values obtained in the upstream reaches of the rivers that flank the Eastern Cordillera to compare the erosion rates between the rivers of the Sabana and the flanks of the cordillera. These flank rivers, and those located in the Sabana, flow through alternating sandstone and shale formations in proportions that are similar in each catchment. This similarity reduces the possibility of lithological biasing in sampling between our chosen catchments. The erosion rates determined by 10 Be reflect erosion on timescales of tens of thousands of years (Von Blanckenburg, 2005). In addition, we sampled rivers across the western flank of the Eastern Cordillera, but we could not determine 10 Be concentrations in this region because we were unable to extract quartz from the samples, which were dominated by argillaceous sediment.
We also collected five quartzite samples for 10 Be TCN surface exposure dating from a well-preserved river terrace in the Guayuriba Basin (Figure 3(C), Table II). Such landforms are rare in our study area. Our sampled river terrace, which lies at ~406 m above the current stream, is not deformed, and it shows no evidence of erosion or landsliding. We collected 500 g of rock from the upper surfaces of fresh quartzite boulders inset into the river terrace. Topographic shielding was calculated by measuring the inclination to the skyline for every cardinal and intercardinal direction following the approach of Balco et al. (2008).
Quartz isolation, purification, dissolution and preparation of BeO were undertaken in the geochronology laboratories at the University of Cincinnati following the methods of Kohl and Nishiizumi (1992), described in detail in Dortch et al. (2009). All river sediment and river terrace boulder samples were sieved and crushed to 250-500 μm for the analysis. Samples were cleaned using HNO3, HCl and HF and passed through a Frantz magnetic separator. Density separation was undertaken using lithium heteropolytungstate, followed by an additional HF leach (Brown et al., 1991; Kohl and Nishiizumi, 1992; Cerling and Craig, 1994). For each sample, 15-30 g of clean quartz grains was dissolved in HF and HNO3 with 350 mg of 9 Be carrier. Beryllium was separated using anion and cation exchange columns. Beryllium hydroxide was obtained after fuming with HClO4 acid and passing through anion and cation exchange columns (Bourlès, 1988; Brown et al., 1992). The Be(OH)2 was heated in an oven at 750°C to form BeO and then loaded into a steel target mixed with niobium (Nb) powder. 10 Be/ 9 Be ratios were measured by Accelerator Mass Spectrometry in the Purdue Rare Isotope Measurement (PRIME) Laboratory at Purdue University, Indiana, USA.
We took the standard approach and assumed that the amount of 9 Be in the prepared quartz was negligible. Sometimes this may not be the case, and quartz may contain some 9 Be. Such cases are rare, and almost invariably occur when quartz is derived from beryl-bearing granites or pegmatites, so that traces of Be and/or small amounts of beryl exist in the supposedly pure quartz sample, as documented in some areas of the Himalaya (Portenga et al., 2015). However, pegmatites and beryl-bearing granites are not present in our study area. Any significant native 9 Be is therefore highly unlikely in our sampled quartz. If native 9 Be is present then denudation rates calculated from 10 Be concentrations will be overestimates, and the calculated erosion rates should be considered as apparent; however these rates can still be used for comparisons of variation across a region of similar lithologies (Corbett et al., 2013;Portenga et al., 2015).
To model catchment-wide erosion rates from 10 Be results we assumed that: (i) the sediment volume is proportional to the erosion rate, i.e. the catchment is close to steady-state; (ii) all the sediment collected at the sample locality was well mixed; (iii) the contribution of quartz was homogeneous in the catchment; (iv) that there is an isotopic equilibrium within the catchment, i.e. TCNs production in the catchment equals the transport of TCNs out of the catchment; and (v) the erosional timescale was significantly larger than the sediment transfer through the catchment (Lal and Arnold, 1985;Granger et al., 1996;von Blanckenburg, 2005).
The corrected catchment-averaged production rates for 10 Be were calculated for each catchment using the SRTM90 digital elevation model following the method of Dortch et al. (2011), using MATLAB v. 2008 and the scaling factors provided in Lal (1991) and Stone (2000). With this method, the production rate for each pixel is calculated accounting for shielding, and all pixels in the catchment are averaged to obtain a spatially averaged production rate for the entire catchment (Lal, 1991). We use a 10 Be half-life of 1.36 ± 0.07 Ma (Nishiizumi et al., 2007), the scaling model of Lal (1991) and Stone (2000) and a sea-level high-latitude production rate of 4.49 ± 0.39 10 Be atoms/g SiO2/a for catchment erosion rates and river terrace dating. 10 Be ages for the river terrace samples were calculated using the CRONUS calculator (http://hess.ess.washington.edu/math/; Balco et al., 2008). These ages will be minimum values and hence the resulting incision rates are maximum values. We are aware that the production rates and scaling models for 10 Be are being refined (Marrero et al., 2016). Borchers et al. (2016) recently revised the production rate for sea level at high latitudes to 4.01 10 Be atoms/g SiO2/a for the Lal (1991)/Stone (2000) scaling model. However, we prefer to use the established scaling model and production rates in Balco et al. (2008) until there is community-wide agreement on the appropriate production and scaling. The new production rate results in an ~10% difference in the erosion rate from our preferred values.
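Under the steady-state assumptions listed earlier, a spallation-only approximation relates a measured 10 Be concentration to an erosion rate. This is a sketch of the generic relation, not the study's full MATLAB workflow: the attenuation length (160 g/cm2), rock density (2.7 g/cm3) and sample values below are assumed, typical numbers.

```python
import math

def erosion_rate_mm_per_ka(N, P, half_life_a=1.36e6,
                           attenuation=160.0, density=2.7):
    """Steady-state, spallation-only erosion rate from a 10Be concentration.

    N : measured concentration (atoms/g quartz)
    P : catchment-averaged production rate (atoms/g/a)
    Solves the steady-state balance N = P / (lambda + density*eps/attenuation)
    for the erosion rate eps. attenuation (g/cm^2) and density (g/cm^3)
    are assumed typical values, not calibrated study parameters.
    """
    lam = math.log(2) / half_life_a            # decay constant (1/a)
    eps_cm_per_a = (attenuation / density) * (P / N - lam)
    return eps_cm_per_a * 1.0e4                # cm/a -> mm/ka

# Illustration: a lower concentration implies faster erosion, because
# rapidly eroded rock spends less time in the production zone.
fast = erosion_rate_mm_per_ka(N=5.0e4, P=6.0)
slow = erosion_rate_mm_per_ka(N=5.0e5, P=6.0)
print(fast, slow)
```

With these assumed inputs the two cases differ by roughly an order of magnitude, the same kind of contrast the study reports between flank and plateau catchments.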
Digital drainage analysis
The axial zone, which includes the Sabana de Bogotá and the southern part of the Sogamoso basin, is characterized by the lowest SSP values (Figure 4). In contrast, higher SSP values occur on the Eastern Cordillera flanks and in the northern areas of the Sogamoso basin, which argue for higher erosion rates than in the axial zone of the cordillera. We extend the Mx map for the Eastern Cordillera of Colombia of Struth et al. (2015) that was limited to the area between 4°N and 5°30′N to between 4°N and 7°N. The χ values in the axial region of Sabana de Bogotá of the Eastern Cordillera ( Figure 5) are higher than those for its flanks in the area of the drainage divide. A strong differentiation in χ values exists within the same domain: while the Sabana de Bogotá has low values near the central divide, the Sogamoso basin shows high values near the divide (Figure 5(B1)). In addition, χ values indicate disequilibrium between the Turmequé (with low values) and the Chicamocha (with higher values near the divide) basins (Figure 5(B2)).
Low SSP, Mx and precipitation, and high χ values in the headwaters of the rivers characterize the plateau, while high SSP, Mx and precipitation, and low χ values in the headwaters of the rivers characterize the flanks. In summary, the geomorphic indices clearly differentiate the plateau and flank domains within the Eastern Cordillera.
Landscape evolution rates
The large erosional contrast between the plateau and its flanks is evident from the variation in erosion rates derived from 10 Be analysis (Figure 6). The 10 Be analysis shows that, for similar drainage areas, the erosion rates are lower in the Sabana de Bogotá plateau region (<20 mm/ka) than on its flanks (locally >400 mm/ka; Table III). In more detail, the upper half of the Guayuriba catchment is eroding at a rate of ~75 mm/ka (samples LUCN-01, 03, 04, 08; Figure 6), whereas downstream the erosion rates are ~100, 479 and 670 mm/ka (LUCN-07, LUCN-05 and LUCN-06). Two samples (LUCN-05 and LUCN-06) show erosion rates an order of magnitude higher than the rest of the flank samples and provide the highest erosion rates (see discussion below). In the Turmequé catchment, the erosion rates increase progressively downstream from 5.5 mm/ka in the upper part of the catchment in the longitudinal tract (LUCN-45) to 15.7, 48, 53 and 59 mm/ka (LUCN-26, 28, 27 and 25) in the transverse tract (Figure 6).
The Sogamoso catchment drains the axial plateau of the Eastern Cordillera to the north, and is eroding at a rate of 167 mm/ka (LUCN-54; Figure 6). This catchment is divided into the Suárez catchment in the west and the Chicamocha catchment in the east, which are eroding at rates of 64 and 228 mm/ka (LUCN-55 and 56), respectively. The upper part of the Suárez catchment is eroding at rates of 8 and 10 mm/ka (LUCN-46 and 50). The erosion rate for the middle part of the Sogamoso catchment is 46 mm/ka (LUCN-53), similar to the erosion rate for the Suárez catchment.
TCN exposure ages for terrace boulders in the Guayuriba basin, assuming zero erosion and considered minimum ages, range from 257 to 697 ka (Figure 7). However, the ages will be significantly older if we estimate an erosion rate using the oldest boulder age (LUCN-12) by applying the methods of Lal (1991). The erosion rate obtained from the oldest boulder is 1 mm/ka, and if we assume that all the samples erode at this rate, the initial zero-erosion ages will increase by up to 50% for ages up to ~700 ka. However, since we sampled large boulders with no apparent weathering, we assume that a correction factor for erosion is not necessary for our ages.
The range of ages shows significant scatter, but the three oldest samples (LUCN-11 to LUCN-13) cluster particularly well given their antiquity, with a mean age of 656 ± 69 ka. We argue that the two youngest ages probably reflect exhumation of the boulders from the terrace, so we do not use those ages in our incision rate calculation. Given that the mean elevation of the three oldest samples is 2129 ± 3 m asl and the elevation in the adjacent valley is ~1723 m asl, we use a height difference of 406 ± 5 m to calculate an incision rate of ~620 mm/ka. This is comparable with the highest catchment-averaged erosion rates calculated from river sediment from the flank streams.
Discussion
Calculated erosion rates for the Eastern Cordillera using 10 Be and values of SSP, Mx and χ show clear erosional contrasts between the axial plateau and its flanks. Intra-domain and intra-basin differences are also apparent between regions.
Drainage network dynamics
Rivers across the flanks of the Eastern Cordillera need more erosion capacity than the longitudinal rivers in the Sabana plateau for river capture to occur (Struth et al., 2015). Higher erosion rates along the flanks of the Eastern Cordillera (>50 mm/ka) compared with the Sabana plateau (<20 mm/ka) suggest that retreat of the main divides and capture of the longitudinal plateau rivers by the transverse flank rivers is occurring. The Guayuriba and the Bogotá catchments provide examples: a flank catchment with a drainage area of 200 km2 yields an erosion rate of 79 mm/ka (LUCN-01), whereas plateau catchments with comparable drainage areas of 163 and 245 km2 yield erosion rates of 18 (LUCN-30) and 4 mm/ka (LUCN-31), respectively (Figure 6). For a plateau catchment with an area of 240 km2, an erosion rate of 15 mm/ka (LUCN-15) was determined, which is lower than that of a similar-size catchment (293 km2) on the eastern flank with an erosion rate of 77 mm/ka (LUCN-08). Comparisons between the Turmequé and Sogamoso catchments also illustrate this contrast: sampled catchments for Suárez, with areas of 313 and 1186 km2, provided rates of 7 and 10 mm/ka (LUCN-46 and 50). This is markedly different from the Turmequé flank samples from drainage areas of 460 and 1291 km2, which have higher erosion rates of 48 and 59 mm/ka (LUCN-28 and 25), respectively. These results confirm that the main drainage divides in the Eastern Cordillera will migrate towards the axial plateau because rivers in the flanks have more energy (indicated by SSP values) and erosion capacity (reflected in the Mx values).
A special case is illustrated by comparing the Sabana catchment where sample LUCN-19 was collected (with a drainage area of 265 km2 and an erosion rate of 11 mm/ka) with the Turmequé catchment flank where samples LUCN-45 (65 km2, 6 mm/ka) and LUCN-26 (346 km2, 15 mm/ka) were collected. The area around these samples was interpreted by Struth et al. (2015) as a drainage capture zone on the basis of topography, featuring a depressed topographic profile and a mapped reentrant toward the west of the main divide, and of the occurrence of knickpoints upstream of a fluvial elbow (a sharp change in the river channel direction) in map view. Similar erosion rates on both sides of the drainage divide in the Turmequé area argue in favor of drainage capture. In addition, the χ-values map shows that the upper part of the Turmequé catchment was part of the plateau area in the past and is now incorporated into the flank domain (Figure 5(B1)).
As documented by Struth et al. (2015), the Eastern Cordillera of Colombia has clear geomorphic differences between its flanks and axial zone, which includes the Sabana de Bogotá and the southern low-relief part of the upper Sogamoso catchment. Comparison of the geomorphic indices and the erosion rate results with the drainage divide features argue for capture processes. This confirms the view of Struth et al. (2015) and Babault et al. (2013) who proposed drainage divide migration and a longitudinal to transverse drainage rearrangement in the Eastern Cordillera.
Correlation of erosion rates with Mx values (Figure 9) is stronger than with the SSP (Figure 8). The SSP is, at first order, a function of precipitation, slope and drainage area. Calculating the SSP from a series of raster measurements, using low-resolution DEM and precipitation data, yields equivocal results. Mx, in contrast, is solely a function of the elevation and distance along the river profile, providing a more realistic description of the steepness of the river reaches.
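For reference, SSP is commonly computed as unit stream power, ω = ρgQS/w. A sketch of that generic formulation is shown below, assuming discharge is crudely approximated by precipitation × drainage area; the paper's exact raster implementation is not specified, so the function and its parameters are illustrative only:

```python
# Generic unit stream power (W/m^2). Discharge is approximated here as
# precipitation x drainage area, ignoring evapotranspiration and routing;
# this is an illustrative sketch, not the paper's raster calculation.
RHO_WATER = 1000.0  # water density, kg/m^3
G = 9.81            # gravitational acceleration, m/s^2

def unit_stream_power(precip_m_per_s, area_m2, slope, width_m):
    """Return unit stream power in W/m^2.

    precip_m_per_s : mean precipitation rate (m/s), a crude discharge proxy
    area_m2        : upstream drainage area (m^2)
    slope          : local channel slope (dimensionless, m/m)
    width_m        : channel width (m)
    """
    discharge = precip_m_per_s * area_m2  # m^3/s
    return RHO_WATER * G * discharge * slope / width_m
```

The dependence on low-resolution precipitation and DEM-derived slope rasters is what makes SSP noisier than Mx, which needs only the river long profile.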
Calibration of geomorphic parameters
The links between erosion rates and SSP or Mx have been examined in previous studies (Kirby and Whipple, 2012;Perron and Royden, 2013;Safran et al., 2005;Bookhagen and Strecker, 2012). Our data reveal high erosion rates derived from samples located on the flanks of the Eastern Cordillera (which show high SSP) and in the northern parts of the Sogamoso basin, and low rates for the axial plateau (Sabana de Bogotá, upper part of Turmequé basin and southern part of the Sogamoso basin). Geomorphic indices cluster into four groups (A, B, and C in Figure 8(B)). Group A comprises Sogamoso catchment samples from the most incised northern part of that catchment 54,55,56). Group B is associated with flanks that are characterized by high relief, and group C includes the lowest relief area samples. Group C is subdivided into C′, including samples with flank characteristics but with lower relief 27,28 and 01), and C″, with plateau characteristics 26,31,33,45,46,50).
Erosion rates and the catchment-wide average Mx values correlate positively (Figure 9). Four samples with different lithologies, climate and uplift conditions stand apart (dashed box in Figure 9); for the remaining samples, which share similar lithologies, precipitation and uplift, a regression (Trend A, red line in Figure 9) is defined. This trend is linear and can be expressed in the form:

ε = a₁ · Mx (4)

where ε is the erosion rate (mm/ka) and a₁ is 12.28. Equation (4) might be used as a proxy for estimating erosion rates when TCN data are not available or cannot be obtained. However, applying Equation (4) to the shale-rich western flank of the Eastern Cordillera might be problematic. The western flank is a shale-dominated area with more erodible rocks, while the eastern flank is composed of alternating sandstone and shale formations that are more resistant; hence application of Equation (4) to the western side will provide underestimates of the erosion rates.
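As a worked example, Equation (4) can be applied directly; a minimal sketch (the function and symbol names are ours):

```python
# Equation (4): first-order erosion-rate proxy from the catchment-wide mean
# steepness index Mx, with a1 = 12.28 as calibrated for Trend A (layered
# sedimentary rocks, similar precipitation and uplift).
def erosion_rate_from_mx(mx, a1=12.28):
    """Return the estimated erosion rate in mm/ka for a mean Mx value."""
    return a1 * mx

# For the Suarez reach (mean Mx ~ 5.92) this predicts ~73 mm/ka, close to
# the measured 63.8 +/- 8.6 mm/ka. The proxy is not valid for the more
# erodible, shale-dominated western flank, where it would underestimate
# erosion.
```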
Local climatic, bedrock and tectonic effects

Classifying our samples into four groups according to their erosion rates provides an additional means of differentiating between the axial plateau and the cordillera flanks (Figure 10). Group I represents the plateau domain, with the lowest erosion rates (<20 mm/ka). This area is a relatively little tectonically deformed belt of the cordillera (Mora et al., 2008) and is associated with longitudinal, structure-controlled fluvial drainage. Samples LUCN-26 and 45 are not located on the plateau, but the low relief of these catchments and the longitudinal trend of the rivers suggest they were part of the plateau in the past. Group II samples represent an incised domain, including some of the samples located on the flanks and in the Sogamoso catchment (Suárez River). The samples in this group are from high-energy rivers that are transverse to the chain and cut across the structural grain.
Group III includes only two samples, LUCN-54 (167 mm/ka, Sogamoso river) and LUCN-56 (228 mm/ka, Chicamocha river). The difference between the erosion rates for these samples and those of the adjacent catchments, 46 mm/ka (LUCN-53) and 64 mm/ka (LUCN-55), correlates with differences in climate, vegetation cover and rock strength. Basement composed of low- and medium-grade metamorphic rocks is exposed in the upper and middle parts of the Chicamocha basin (LUCN-56), and the region has little vegetation. In contrast, the Suárez river basin (LUCN-55) mainly traverses alternating Cretaceous sandstone and shale formations. In terms of erodibility, and according to Castro (1992) and González and Jiménez (2015), basement rocks are more erodible than the Cretaceous sedimentary rocks. Moreover, erosion is enhanced by the lack of vegetation cover in the upper and middle parts of the Chicamocha basin. The Chicamocha valley has an arid microclimate unlike any other part of the Eastern Cordillera. The valley is very narrow and the data obtained (Figure 2) from the few pluviometer stations do not reflect the real conditions inside it, resulting in overestimates of precipitation. The Cocuy-Santander massif to the east and the Floresta massif to the west bound the Chicamocha valley (Figure 1). These massifs form an orographic barrier and create a rain-shadow zone within the Chicamocha valley. The Chicamocha river is characterized by a very high mean Mx (~8.13) with steep hillslopes and an erosion rate of 228 ± 32 mm/ka. The Suárez river (sample LUCN-55) mainly flows across Cretaceous formations and Mx values are low (~5.92), with an erosion rate of 63.8 ± 8.6 mm/ka. The Chicamocha catchment, with the higher erosion rate, has lower precipitation than the Suárez catchment, which is similar to the Sabana de Bogotá.
If high precipitation resulted in large erosion rates, then higher erosion rates would be expected in the wetter Suárez catchment than in the Chicamocha catchment; but this is not the case. We propose that the high erosion rate in the Chicamocha valley is related to the local 'semiarid' conditions, low vegetation cover and more erodible bedrock, all of which enhance the erosion capacity. We underline the importance of rock type in determining the magnitude of erosion within similar topographic domains, as shown in other studies in dynamic settings where higher denudation rates have been related to high erodibility of the rocks (Salgado et al., 2008; Chadwick et al., 2013; Bierman et al., 2014; Pupim et al., 2015). Safran et al. (2005) showed that rock type might play a secondary, non-dominating role in catchment-averaged erosion rates in the Bolivian Andes, where high erosion rates were found in catchments with either weak or resistant bedrock. That work proposed that uplift had a first-order effect on erosion rates, rock type a second-order effect, and climate little if any influence. In the Eastern Cordillera of Colombia, however, rock type, vegetation cover and climate have a first-order effect on erosion.
Group IV is composed of only two samples, LUCN-05 (479 mm/ka) and LUCN-06 (670 mm/ka), which yielded markedly higher erosion rates than the rest of the Guayuriba catchment samples (for example, sample LUCN-03, with an erosion rate of 73 mm/ka). Group IV samples are located in the easily eroded Quetame basement massif, characterized by higher precipitation and younger exhumation ages (Mora et al., 2008; Parra et al., 2009a) than the upper part of the catchment, as indicated by greater erosion and incision (see Figure 2). The combination of greater precipitation and exhumation defines a positive erosional feedback cycle, in accordance with the high erosion rates determined for this area.
Differences of up to 150 mm/ka in erosion rates are observed between catchments within the same topographic domain (Figure 6). The difference between the Suárez (64 ± 9 mm/ka) and Chicamocha (228 ± 32 mm/ka) catchments is probably due to a combination of precipitation and lithologic effects. Erosion rates in the eastern flank catchments are not homogeneous either, as reflected by the Guayuriba basin showing higher erosion rates, relief and incision values than the Turmequé catchment (Figures 2 and 11).
The contrast in erosion rates described above argues for different fluvial dynamics between the Guayuriba and Turmequé catchments. The upper part of the Turmequé catchment has clearly been captured, as suggested by the geomorphic analysis of Struth et al. (2015). Our current work adds new analysis using the χ and SSP values, suggesting a rearrangement of the axial longitudinal drainage by transverse rivers that traverse the flanks of the cordillera. The disequilibrium in the χ values on both sides of the eastern and central divides suggests drainage divide migration toward the plateau interior and to the north, respectively. This drainage divide migration is more evident for the Turmequé catchment than for the Guayuriba catchment, supporting the view of Struth et al. (2015) that the Turmequé area was a captured reentrant.
The different geomorphic characteristics of the eastern drainage divide and of the two catchments are interpreted as the result of the competition between uplift and erosion. In the Guayuriba catchment, the erosive potential results from the high slope contrast generated during mountain building and from active tectonics in the basin, which has young thermochronologic ages (Mora et al., 2008; Parra et al., 2009a; Figure 2); high erosive potential is needed to compensate the high uplift at the foothills of the flanks. Upstream propagation of erosion, and hence capture and divide migration, is therefore less probable in a catchment with low exhumation such as the Turmequé catchment. Struth et al. (2015) interpret the different river dynamics between the flanks and the axial plateau of the Eastern Cordillera as a product of mountain building and drainage development: a progressive increase in the regional slope caused by the accumulation of crustal shortening and thickening, as documented for the Moroccan High Atlas (Babault et al., 2012). Essentially, the contrast in regional slopes results in different erosion rates and topographic dynamics across the Eastern Cordillera. Variations in local precipitation, tectonics and bedrock within a basin may also influence its erosion dynamics, and may have played a secondary role in the dynamics of divide migration and landscape evolution.
Conclusions
A smooth axial plateau flanked by steep topographic belts characterizes the Eastern Cordillera of Colombia. New 10Be TCN data reveal high erosion rates for catchments along the high-relief flanks of the Eastern Cordillera, with a mean value of 70 ± 10 mm/ka (exceeding 400 mm/ka in some catchments). In contrast, the mean erosion rate is 11 ± 1 mm/ka for the low-relief axial plateau. This argues for erosional contrasts between the two domains and for migration of the N-S-oriented plateau-flank drainage divides towards the plateau. Results of digital morphometric analysis, including specific stream power, steepness index and χ values, confirm the view that the drainage divide in the Eastern Cordillera is asymmetric and is moving by processes of river capture. Drainage reorganization from longitudinal to transverse, accomplished by a series of river capture events, will lead to a progressive reduction in the extent of the axial plateau of the Eastern Cordillera. The erosional contrast between the two morphologic domains of the Eastern Cordillera was primarily driven by the increase in the orogen's regional slope through progressive accumulation of crustal shortening and thickening. Local climate, tectonics and rock type play a secondary role in controlling the erosion rates and the basin dynamics at a local scale.
Comparison of the TCN-derived erosion rates with the digital geomorphic parameters shows positive correlations with SSP and/or Mx. The Mx-erosion rate relation derived for the Eastern Cordillera (Equation (4)) can be applied to obtain first-order estimates of erosion rates in areas with similar lithologic (layered sedimentary rocks in alternating competent and incompetent formations) and pluviometric characteristics, where no TCN data are available.
Identification of novel SHANK2 variants in two Chinese families via exome and RNA sequencing
Background SHANK2 encodes a postsynaptic scaffolding protein involved in synapse formation, stabilization and homeostasis. Variations or microdeletions in the SHANK2 gene have been linked to a variety of neurodevelopmental disorders, including autism spectrum disorders (ASD) and mild to moderate intellectual disability (ID) in humans. However, the number of reported cases with SHANK2 defects remains limited, with only 14 unrelated patients documented worldwide. Methods In this study, we investigated four patients from three families with ID. Whole-exome sequencing (WES) was performed to explore the genetic causes, while Sanger sequencing was used to confirm the identified variants. Furthermore, RNA sequencing and functional enrichment analysis were performed on patients with likely pathogenic variants to gain further insights into the molecular landscape associated with these variants. Results Two novel variants in the SHANK2 gene were identified by WES and confirmed by Sanger sequencing: a heterozygous splicing substitution (NM_012309.5:c.2198-1G>A p.Pro734Glyfs*22) in Family 1, and a heterozygous nonsense variant [NM_012309.5:c.2310dupT p.(Lys771*)] in Family 2. RNA sequencing and cohort analysis identified a total of 1,196 genes exhibiting aberrant expression in three patients. Functional enrichment analysis revealed the involvement of these genes in protein binding and synaptic functions. Conclusion We identified two novel loss-of-function variants that broaden the spectrum of SHANK2 variants. Furthermore, this study enhances our understanding of the molecular mechanisms underlying SHANK2-related disorders.
Introduction
Neurodevelopmental disorders (NDDs) are a group of mental health disorders resulting from disruptions in crucial neurodevelopmental processes, leading to abnormal brain function that can affect emotions, cognition, learning, self-regulation, and memory (Morris-Rosendahl and Crocq, 2020). The severity and behavioral phenotypes observed in NDD patients vary widely, with diagnoses commonly including autism spectrum disorder (ASD), intellectual disability (ID), developmental delay (DD), and epilepsy (Zablotsky et al., 2019). Previous research has implicated various genetic variants in NDDs, including chromosomal rearrangements, copy number variants (CNVs), and coding-sequence variants. Although numerous genes have been associated with these disorders, each gene or genomic alteration typically accounts for less than 1% of cases. Many of the genes implicated in NDDs play a role in the development or functioning of neuronal circuits. Among the most extensively studied biological pathways in NDDs are those involving synaptic genes (Toro et al., 2010; Guilmatre et al., 2014; Hu et al., 2014; Leblond et al., 2014; Parenti et al., 2020). The SH3 and multiple ankyrin repeat domains 2 (SHANK2) gene is located on chromosome 11q13.3 and belongs to the SHANK gene family. SHANK2 encodes a pivotal scaffold protein in the postsynaptic density (PSD) complexes of glutamatergic synapses. The PSD is a specialized structure of the postsynaptic membrane that plays a critical role in neuronal signaling. The SHANK2 protein contains multiple domains facilitating protein-protein interactions and is vital for organizing the PSD through a complex network of molecular interactions (Sheng and Kim, 2000; Sasaki et al., 2020). In Shank2 knock-out mice, both the ionotropic glutamate receptors at the synapse and the level of Shank3 are upregulated. The mutant mice exhibit reduced dendritic spines and basal synaptic transmission. Moreover,
they display remarkably hyperactive behavior and manifest significant autistic-like behavioral alterations, including repetitive grooming and deviations in vocal and social behaviors (Schmeisser et al., 2012; Won et al., 2012; Yoo et al., 2014).
Variants in the SHANK2 gene have been implicated in individuals with ASD and ID. The initial discovery of de novo CNVs in the SHANK2 gene in two unrelated patients was reported by Berkel et al. (2010), using microarray analysis. Subsequent investigations involved sequencing the SHANK2 gene in a larger cohort of individuals, including 396 ASD cases, 184 cases of ID, and 659 unaffected individuals, leading to the identification of additional variants specific to ASD and ID (Berkel et al., 2010). In a study by Leblond et al., SHANK2 was sequenced in 455 patients with ASD and 431 controls, and the findings were integrated with the previous research. A notable finding was the significant enrichment of variants affecting conserved amino acids in affected patients compared to controls. Furthermore, functional studies demonstrated a reduction in synaptic density at dendrites when neuronal cells were transfected with the variants identified in patients, as opposed to those exclusively detected in controls. These extensive investigations provide compelling evidence that certain SHANK2 variants may confer an increased risk of ASD (Leblond et al., 2012).
Recently, there has been growing interest in utilizing total RNA sequencing in conjunction with whole-genome sequencing (WGS) or whole-exome sequencing (WES) to enhance our understanding of variant pathogenicity. This integrated approach enables the detection of outliers in both expression and splicing, facilitating the interpretation of functional consequences (Kremer et al., 2017; Liu et al., 2022; Pan et al., 2022; Peymani et al., 2022). Moreover, it provides a valuable opportunity to investigate the molecular mechanisms underlying loss-of-function (LOF) variants in the SHANK2 gene. In the present study, we investigated two novel SHANK2 variants identified in three patients with ID from two families. Both variants are LOF variants. Additionally, RNA sequencing and cohort analysis were performed on these patients to gain further insights into the impact of these LOF variants on gene expression. Through comprehensive analysis, we identified numerous genes with aberrant expression, which significantly contributed to our understanding of the molecular mechanisms associated with LOF variants in the SHANK2 gene. These findings provide valuable insights into the pathogenicity of SHANK2 variants and shed light on the underlying molecular processes involved in ID.
Ethical compliance
Prior to their participation in this study, informed consent was obtained from all patients or their legal guardians. This research was conducted in accordance with ethical guidelines and regulations established by the ethics committee of the Second Affiliated Hospital of Chongqing Medical University (Approval No. 2022-549, dated 7 March 2022).
DNA isolation, whole-exome sequencing, and variant analysis

Peripheral blood samples were collected from the patients using EDTA tubes, and genomic DNA was isolated using the DNeasy Blood & Tissue kit (Qiagen, Hilden, Germany) according to the manufacturer's instructions. A total of three micrograms of genomic DNA was randomly fragmented and captured using the Agilent SureSelectXT V5 capture kit (Agilent Technologies, Santa Clara, CA). Sequencing was performed on an Illumina HiSeq2000 (Illumina, San Diego, CA) with 100-bp paired-end reads, following the recommended protocols. To ensure data quality, the raw sequencing reads were filtered using Fastp (Chen et al., 2018b) to obtain clean reads. FastQC was employed to evaluate the quality of the sequencing data in each sample (Trivedi et al., 2014). The clean DNA sequencing reads were aligned to the human reference genome hg19 (GRCh37) using the BWA-MEM algorithm (Li and Durbin, 2009). Ambiguously mapped reads (MAPQ < 10) and duplicated reads were removed using SAMtools (Li et al., 2009) and PicardTools, respectively. Single nucleotide polymorphisms (SNPs) and small insertions and deletions (INDELs) were identified following the best practices recommended by the Genome Analysis Toolkit software (McKenna et al., 2010). Variants were annotated using the Ensembl Variant Effect Predictor (McLaren et al., 2016). The ACGS Best Practice Guidelines for Variant Classification in Rare Disease 2020 were followed (Ellard et al., 2020). Classification of the variants into pathogenic (P), likely pathogenic (LP), benign (B), likely benign (LB), or variants of uncertain significance (VUS) was performed in accordance with the ACMG/AMP and ACGS guidelines (Richards et al., 2015; Ellard et al., 2020). All identified variants were further validated by Sanger sequencing.
RNA isolation, sequencing and data preprocessing
Peripheral blood samples from the patients were collected using EDTA tubes. Red blood cells were removed by centrifugation after incubation with a red blood cell lysis solution. Total RNA was isolated within 24 h of collection and enriched using oligo-dT bead capture. Complementary DNA synthesis was performed following the manufacturer's instructions, and libraries were prepared using the Illumina TruSeq stranded mRNA sample prep kit (Illumina, San Diego, CA). Sequencing of the pooled samples was conducted on a NovaSeq 6000 sequencing system. To obtain high-quality data, the raw sequencing reads were processed using Fastp to obtain clean reads (Chen et al., 2018a). Quality assessment of the sequencing data was performed using FastQC and MultiQC, evaluating factors such as per-base sequence quality, sequence duplication level, and quality score distribution for each sample. The average quality score for the RNA sequences exceeded 30, indicating a substantial portion of high-quality sequences (Ewels et al., 2016). The clean RNA-sequencing reads were then aligned to the human reference genome (hg19) using STAR (2.4.2a) in conjunction with the Gencode v19 annotation (Dobin et al., 2013). Mapping evaluation metrics, including sequencing depth, percentage of mapped reads, and the number of expressed genes, were computed using DROP v1.21 (Yepez et al., 2021). Furthermore, the match between each RNA sequencing sample and its annotated DNA sample was assessed using DROP v1.21, with a cutoff of 0.8. Aberrant gene expression was detected using DROP v1.21 (Yepez et al., 2021).
The clean RNA-sequencing reads were aligned to the human reference genome (hg19) using STAR (2.7.8a) along with the Gencode v29 annotation (Dobin et al., 2013). The "summarizeOverlaps" function of the GenomicAlignments R package was used for read counting. To enhance statistical power, we performed aberrant expression analysis by combining our data with 367 blood samples from the GTEx dataset. Genes with a 95th percentile Fragments Per Kilobase of transcript per Million mapped reads (FPKM) of less than 1 were considered lowly expressed and were excluded from downstream analysis. OUTRIDER was employed to identify expression outliers (Brechtmann et al., 2018). Technical and biological covariates such as sex, age, and sequencing batch were controlled automatically by OUTRIDER, which utilizes an autoencoder implementation. Genes were considered to have aberrant expression if they had an adjusted p-value < 0.05.
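The outlier call itself reduces to flagging genes whose observed count deviates strongly from expectation, after multiple-testing correction. The study used OUTRIDER, which fits a negative-binomial autoencoder; the sketch below only mimics the "adjusted p < 0.05" decision rule with simple z-scores on log-counts and Benjamini-Hochberg adjustment, and is not the actual OUTRIDER model:

```python
# Simplified illustration of expression-outlier calling: z-scores on
# log-transformed counts for one gene across samples, with a separate
# Benjamini-Hochberg adjustment helper. Not the OUTRIDER model.
import math

def bh_adjust(pvals):
    """Benjamini-Hochberg adjusted p-values, returned in input order."""
    n = len(pvals)
    order = sorted(range(n), key=lambda i: pvals[i])
    adj = [0.0] * n
    prev = 1.0
    for rank_from_end, i in enumerate(reversed(order)):
        rank = n - rank_from_end          # BH rank of this p-value
        prev = min(prev, pvals[i] * n / rank)
        adj[i] = prev                     # enforce monotonicity (step-up)
    return adj

def outlier_pvalues(counts):
    """Two-sided normal p-values for one gene's counts across samples."""
    logs = [math.log(c + 1) for c in counts]
    mean = sum(logs) / len(logs)
    sd = (sum((x - mean) ** 2 for x in logs) / (len(logs) - 1)) ** 0.5
    # P(|Z| > z) for a standard normal is erfc(z / sqrt(2))
    return [math.erfc(abs((x - mean) / sd) / math.sqrt(2)) for x in logs]
```

In this toy scheme a gene-sample pair would be called aberrant when its BH-adjusted p-value falls below 0.05, mirroring the cutoff described above.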
Pathway enrichment analysis
To further explore the functional implications of the identified aberrations, we performed functional enrichment analysis using the KOBAS-i service (Bu et al., 2021). This comprehensive tool provides pathway enrichment analysis by leveraging various databases including GO, KEGG, Reactome, and GWAS catalogs. Pathways with an adjusted p-value < 0.05 were considered significant, providing valuable insights into the biological relevance of the aberrant genes and their involvement in key pathways and biological processes.
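Over-representation tests of this kind are typically hypergeometric; a minimal sketch of the underlying statistic, independent of KOBAS-i's actual implementation:

```python
# Hypergeometric upper-tail p-value for pathway over-representation:
# probability of observing >= k pathway genes among n aberrant genes drawn
# from a background of N tested genes, of which K are annotated to the
# pathway.
from math import comb

def enrichment_pvalue(k, n, K, N):
    """Return P(X >= k) for X ~ Hypergeometric(N, K, n)."""
    tail = sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(n, K) + 1))
    return tail / comb(N, n)
```

In practice the per-pathway p-values are then corrected for multiple testing (e.g., Benjamini-Hochberg) before applying the adjusted p < 0.05 cutoff used above.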
Clinical presentation
This study included three unrelated Chinese families (Figure 1; Supplementary Figure S1). Proband 1 (individual II-1 from Family 1), a 27-year-old male, was the second child of non-consanguineous healthy parents. He had an uneventful full-term birth, walked at 3 years old, and began speaking at 7 years old. Proband 1 exhibited poor learning ability and limited mathematical skills, and discontinued education after the first grade of elementary school (Figure 1). Proband 2 (individual II-1 from Family 2), a 10-year-old female, was the first child of unrelated parents. The pregnancy and delivery were normal. The primary phenotype observed in this patient was mild ID. At the time of diagnosis, she was attending a regular primary school. Her brother was unaffected, but her mother had a diagnosis of mild ID (Figure 1).
Proband 3 (individual II-1 from Family 3) is a 4-year-old boy born to non-consanguineous healthy parents as their only child. He presented with global developmental delay and ID, and exhibited tendencies toward ASD features including impaired social interactions, repetitive behaviors, and delayed speech development. There was no reported family history of similar conditions or disorders (Supplementary Figure S1).
WES analysis
Because proband 1 (individual II-1 from Family 1) was raised by his grandparents while his parents worked in another city, only singleton WES was performed, along with collection of peripheral blood from the proband's sister. Through analysis of the WES data and variant pathogenicity classification following the ACMG guidelines, only one variant in the SHANK2 gene (NM_012309.5:c.2198-1G>A) was identified in the proband (Figure 1). The variant was classified with criteria PVS1 + PM2 + PP3 and annotated as "LP" (Table 1). We have submitted this variant to ClinVar; it can be referenced under Submission Number SUB13920791. In Family 2, trio WES was conducted for proband 2 (individual II-1) and her parents, since the mother was also affected. This revealed only one variant in the SHANK2 gene in the proband: NM_012309.5:c.2310dupT p.(Lys771*). Individual II-2, the younger brother of proband 2, resides in a different region, and we were unable to obtain a sample from him (Figure 1). This variant was classified with criteria PVS1 + PP1_moderate + PM2 and annotated as "LP" (Table 1). We have submitted this variant to ClinVar; it can be referenced under Submission Number SUB13920833.
Sanger sequencing was used to confirm the presence of the variants identified in the SHANK2 gene (Figure 1; Supplementary Figure S1). The variants identified in proband 1 (individual II-1 from Family 1) and proband 2 (individual II-1 from Family 2) were novel. The NM_012309.5:c.2198-1G>A variant is a classical splice-site variant. The NM_012309.5:c.2310dupT p.(Lys771*) variant immediately introduces a premature termination codon (TAA) due to the presence of a downstream AAA codon (Lys). Both of these LOF variants are located in the proline-rich region of the SHANK2 protein (Figure 2).
Minigene splicing assay
SHANK2 is primarily expressed in the nervous system. However, due to ethical considerations, we were limited to obtaining peripheral blood samples from the patients. Given the low expression of SHANK2 in peripheral blood, we resorted to a minigene assay to uncover the true impact of the NM_012309.5:c.2198-1G>A variant on pre-mRNA splicing. RT-PCR was employed to analyze the splicing products. Upon agarose gel electrophoresis, cells transduced with minigene-WT produced a 240 bp band, whereas cells transduced with minigene-MT generated a 223 bp band. Subsequent Sanger sequencing verified that the minigene-WT product aligned with the reference sequence. Conversely, the minigene-MT product exhibited skipping of the first 17 bp of exon 19, leading to a frameshift and the generation of a premature stop codon (Figure 3).
Transcriptome and cohort analysis
To investigate the potential molecular mechanisms underlying these novel likely pathogenic variants, we performed RNA sequencing. Cohort analysis was conducted on RNA sequencing data from the three patients of Families 1 and 2, comparing them with publicly available databases as controls. A total of 1,196 genes were identified as exhibiting aberrant expression patterns (Figure 4; Supplementary Table S1). Notably, several of these genes are associated with SHANK2 and synapse function. One such gene is glutamate receptor, ionotropic, N-methyl-D-aspartate associated protein 1 (GRINA), which encodes a postsynaptic density protein involved in anchoring glutamate receptors (Schmeisser et al., 2012). Another gene of interest is CTTN, which encodes cortactin, an actin regulatory protein enriched at excitatory synapses (MacGillavry et al., 2016).
Pathway enrichment analysis
To further understand the functional implications of the aberrantly expressed genes, we performed pathway enrichment analysis. The functional annotation of these genes revealed their involvement in various biological pathways. Notably, a significant proportion of the genes were associated with protein binding, indicating their participation in protein-protein interactions and related molecular processes. Additionally, a subset of aberrantly expressed genes was associated with the activation of NMDA receptors and postsynaptic events, further supporting their involvement in synaptic function and neuronal signaling. These findings are consistent with the known role of SHANK2 as a postsynaptic scaffolding protein, highlighting the potential impact of the identified variants on synaptic organization and function (Figure 5; Supplementary Table S2).
Discussion
The SHANK gene family, consisting of SHANK1, SHANK2, and SHANK3, encodes multi-domain master scaffold proteins that play critical roles in the organization and function of the postsynaptic density (PSD) complexes at glutamatergic synapses. SHANK proteins participate in various synaptic functions by interacting with many synaptic proteins (Guilmatre et al., 2014; Monteiro and Feng, 2017). Variants in SHANK genes have been repeatedly reported in individuals with a range of NDDs (Leblond et al., 2014; Doddato et al., 2022).
Among the SHANK gene family members, SHANK2 is the largest gene and is located on chromosome 11q13.3 (Figure 2). Only 14 cases with SHANK2 variants had been documented before. In this study, we present the identification of two novel SHANK2 variants [NM_012309.5:c.2198-1G>A p.Pro734Glyfs*22 and NM_012309.5:c.2310dupT p.(Lys771*)] in two unrelated Chinese families. Both variants are located within the proline-rich region (PRO) of the SHANK2 peptide. Out of the total 17 cases, seven individuals carried microdeletions encompassing the SHANK2 gene, while nine carried variants resulting in premature stop codons. Interestingly, the NM_012309.5:c.2198-1G>A p.Pro734Glyfs*22 variant reported in this study represents the first splicing variant identified in SHANK2, and is also considered a LOF variant (Table 2).
Through cohort analysis of the transcriptomic data from the three patients carrying the identified novel LOF variants in SHANK2, a total of 1,196 genes exhibiting aberrant expression were identified. This dataset, derived from patient samples, is a valuable resource providing insights into the molecular landscape associated with the disorder. GRINA belongs to the NMDA receptor (NMDAR) family. Studies conducted on mice lacking exons 6-7 of Shank2 have demonstrated autistic-like behavioral abnormalities, which have been linked to altered NMDAR function. Furthermore, upregulation of GRINA has been consistently observed in various psychiatric diseases in human subjects (Schmeisser et al., 2012). Cortactin, encoded by the other noteworthy gene, is an actin regulatory protein enriched at excitatory synapses (MacGillavry et al., 2016). Our findings provide further evidence that SHANK2 disruption can lead to molecular changes related to glutamate signaling and cytoskeletal dynamics, which may contribute to the neurodevelopmental phenotypes. However, it is important to acknowledge that expanding the sample size by including more patients would be highly beneficial. This approach would lead to a more comprehensive understanding of the spectrum of gene expression abnormalities related to this disorder, and would also facilitate the identification of additional SHANK2 variants. While NDDs caused by SHANK2 variants exhibit autosomal dominant inheritance, the severity and specific behavioral phenotypes observed in individuals display a high degree of variability, including possible incomplete penetrance. In our clinical cohort, proband 3 (individual II-1 from Family 3) carried a NM_012309.5:c.178C>T p.(Arg60Cys) variant in SHANK2 (Table 1; Supplementary Figure S1). Interestingly, this specific variant in SHANK2 corresponds to the R12C alteration in the SHANK3 SPN domain, which has been previously implicated as a potential pathogenic variant in AD patients (Leblond et al., 2014; Sasaki et al.,
2020).However, it is important to note that this variant we discovered was inherited from his father, who exhibits no clinical phenotype.The combined Annotation Dependent Depletion (CADD) score is 24.4 and multiple in-silico programs consistently predicted the deleterious effect (Table 1).However, its REVEL score is only 0.36, which categorizes it as "Uncertain" (Ioannidis et al., 2016).Therefore, this variant was classified as VUS according to the ACMG criteria (Table 1; Supplementary Figure S1).
This study represents a preliminary investigation into the transcriptional changes associated with SHANK2 variants and NDDs. A clear limitation is that the ideal neural tissues were not examined due to clinical inaccessibility and ethical considerations. Instead, we analyzed the more readily accessible peripheral blood nucleated cells, hoping to uncover valuable insights. Pathway enrichment analysis of the differentially expressed genes did reveal associations
FIGURE 1
FIGURE 1 Pedigree of two families with intellectual disability. Sanger sequencing was performed on the probands (indicated by arrows). Squares and circles indicate males and females, respectively. Filled and empty symbols indicate affected and unaffected individuals, respectively. WT, wild-type; MT, mutant-type.
FIGURE 2
FIGURE 2 Location diagram of SHANK2 variants identified in this study. The SHANK2 gene is located on chromosome 11q13.3. The genomic structure of SHANK2 is outlined in the middle diagram. The bottom cartoon shows the domains of the human SHANK2 peptide. Variants identified in this study are mapped onto the gene and protein domains. Ank, ankyrin repeats; SH3, Src homology 3; PDZ, PSD95/DLG/ZO1; PRO, proline-rich region; SAM, sterile alpha motif.
FIGURE 3
FIGURE 3 Minigene splicing assay for the NM_012309.5:c.2198-1G>A variant in SHANK2. (A) Schematic representation of hybrid minigenes used in the assay. (B) The plasmids used in this assay were verified by Sanger sequencing. (C) Gel electrophoresis of RT-PCR products. (D) Sanger sequencing revealed that the product of minigene-MT exhibited skipping of the first 17 bp of exon 19. E, exon.
FIGURE 4
FIGURE 4 Volcano plot displaying differential gene expression in the cohort analysis. Each data point represents a gene, plotted by its fold change (log2) on the x-axis and the negative logarithm of the adjusted p-value on the y-axis. Genes with an adjusted p-value < 0.05 are considered statistically significant and are represented by red dots. Genes that do not reach statistical significance are shown in gray.
TABLE 1
Variants identified in the SHANK2 gene in probands in this study.
Manipulation of iron status on cerebral blood flow at high altitude in lowlanders and adapted highlanders
Cerebral blood flow (CBF) increases during hypoxia to counteract the reduction in arterial oxygen content. The onset of tissue hypoxemia coincides with the stabilization of hypoxia-inducible factor (HIF) and transcription of downstream HIF-mediated processes. It has yet to be determined whether HIF down- or upregulation can modulate hypoxic vasodilation of the cerebral vasculature. Therefore, we examined: 1) whether CBF would increase with iron depletion (via chelation) and decrease with repletion (via iron infusion) at high altitude, and 2) whether the genotypic advantages of highlanders extend to HIF-mediated regulation of CBF. In a double-blinded and block-randomized design, CBF was assessed in 82 healthy participants (38 lowlanders, 20 Sherpas and 24 Andeans), before and after the infusion of either iron(III)-hydroxide sucrose, desferrioxamine or saline. Across both lowlanders and highlanders, baseline iron levels contributed to the variability in cerebral hypoxic reactivity at high altitude (R2 = 0.174, P < 0.001). At 5,050 m, CBF in lowlanders and Sherpa was unaltered by desferrioxamine or iron. At 4,300 m, iron infusion led to a 4 ± 10% reduction in CBF (main effect of time, P = 0.043) in lowlanders and Andeans. Iron status may provide a novel, albeit subtle, influence on CBF that is potentially dependent on the severity and length of stay at high altitude.
Introduction
The brain is highly oxygen (O2) dependent and, as such, the related cerebral blood flow (CBF) responses to hypobaric hypoxia have been well described.[1-4] For example, upon exposure to hypoxia, CBF increases to compensate for the initial reductions in arterial O2 content and thereby maintain cerebral O2 delivery.5,6 Eventually, the acute rise in CBF is attenuated, coinciding with an increase in arterial O2 content via erythropoiesis, ventilatory acclimatization and compensation of the initial respiratory alkalosis.2,7,8 In hypoxia, the hypoxia-inducible factor (HIF) family, the key cellular O2 sensor,9,10 binds to hypoxia-responsive elements in gene promoters to upregulate expression of >100 genes that coordinate increased O2 supply to hypoxic tissue. While HIF-1α expression within the human brain has not been quantified during hypoxia, data from rodent models show that cortical HIF-1α expression during hypoxia follows a similar trajectory to the CBF responses, i.e., HIF-1α expression peaks within 6-12 h of exposure to hypoxia, is halved by day 7, and normalizes within ~3 weeks.11 The potential for HIF expression to influence cerebrovascular function also stems from murine models, which have shown that within 4 hours of exposure to extreme hypoxia there is an increase in downstream products of HIF-1α [e.g. vascular endothelial growth factor, erythropoietin (EPO)] in the brain, which increase cerebral microvascular density and hematocrit (Hct) within 3 weeks.12 Similarly, inactivation of prolyl-hydroxylase [the site whereby iron acts to influence the stability of HIF]13 leads to HIF expression, neurovascular angiogenesis and pericyte proliferation in mice.14 Finally, cerebral astrocytes (but not cerebral neurons) exposed to extreme hypoxia and desferrioxamine (DFO; an iron chelator) showed an increase in EPO expression via HIF-2α up-regulation.15
In humans, it is reasonable that the increase in CBF that occurs to counteract the reduction in arterial O2 content during hypoxia also coincides with HIF stabilization. However, it has yet to be determined whether acute HIF down- or upregulation can acutely modulate hypoxic vasodilation of the cerebral vasculature.
Iron and iron-chelation are typically utilized to down- and up-regulate HIF expression, owing to iron's constituent role in HIF stabilization via prolyl hydroxylase activity,13 and have repeatedly demonstrated notable implications for pulmonary16-24 and peripheral vascular regulation.25 However, limited data exist with respect to the cerebral vasculature. As far as the authors are aware, the only human studies to date are assessments of intracranial blood velocity (i.e. in the middle cerebral artery; MCA) following iron chelation [via DFO, an up-regulator of HIF expression26] by Sorond and colleagues.27,28 While the authors concluded that DFO infusion elevated MCA velocity compared to saline placebo, the DFO condition appeared mostly comparable to baseline in both studies. For example, MCA velocity increased by ~1 cm·s−1 (level of significance not provided) from pre- to 4 hrs post-DFO in older adults,27 and by 2 cm·s−1 (p > 0.05) and 4 cm·s−1 (p < 0.05) at 3 hrs post-DFO in young and older adults, respectively.28 These extremely small changes in MCA velocity are unlikely to be of physiological significance, especially if there is constriction/dilation of the MCA.29 Studies of volumetric blood flow to the brain during iron manipulation (via iron and chelator infusion) have not been performed during conditions where the HIF pathways are up-regulated (i.e., hypoxia).
Tibetan Sherpa have been reported to present a unique phenotypic adaptation to high altitude characterized by a blunted ventilatory response to hypoxia,30 a more efficient plasma volume-hemoglobin ratio that aids exercise capacity,31 a reduced prevalence of excessive erythrocytosis,32,33 less pronounced pulmonary hypertension, and higher lung diffusing capacity [reviewed in 34]. Cerebral O2 delivery is also lower in Sherpa, potentially suggesting that Sherpa experience less deleterious cerebral consequences of hypoxia compared to lowlanders at high altitude.3,35 Healthy high-altitude-adapted Andeans demonstrate numerous attributes that enhance their hypoxic tolerance compared to lowlanders, including elevated birth weights, increased exhaled nitric oxide (NO) concentrations, larger lungs, improved aerobic capacity and genotypic adaptations,36-40 and display preserved endothelial function at high altitude.41 While both Sherpa and Andean highlanders display adaptive characteristics to hypoxia, both also display positive selection for HIF pathway candidate genes.42,43 It seems plausible, therefore, that iron manipulation may differentially impact cerebrovascular function in healthy Andeans and Sherpas, who have many naturally selected traits for high altitude.
To explore these possibilities, we examined the hypothesis that CBF would increase in response to acute iron depletion (i.e. increasing HIF activity) and decrease with repletion (i.e. decreasing HIF activity). Furthermore, we hypothesized that during exaggerated hypoxia at high altitude, the increase in CBF would be amplified by chelation, and attenuated by iron infusion. Finally, by assessing high altitude populations with ancestral adaptation to hypoxic exposure, we sought to explore whether the genotypic advantages of Andeans and Sherpas, related to iron metabolism, would manifest through the modulation of CBF.
Ethical approval
All experimental procedures were approved by the University of British Columbia Research Ethics Board (H16-01028, H17-02687 and H18-01404), the Nepal Health Research Council (no. 586), the Universidad Peruana Cayetano Heredia Comité de Ética (no. 101686), and conformed to the Declaration of Helsinki, except for registration in a database. All participants received both written and oral information about the study and provided written informed consent. All highlander (Sherpa and Andean) participants read an in-depth translated study information form, had the study explained to them in their local language, and gave written informed consent prior to participating.
Experimental design
Data collected during this study have previously been published as part of an investigation that focused exclusively on the pulmonary and peripheral vascular responses to iron manipulation. 16,26,44 Thus, although the present study adopted an identical experimental design, it constitutes an entirely separate research question complemented by de novo experimental measures constrained to the cerebral vasculature.
Participants
A total of 82 volunteers participated in the study, which was conducted across two high-altitude research expeditions.45,47 Andean participants were free of chronic mountain sickness, as they did not exhibit excessive erythrocytosis (hemoglobin <19 g·dl−1 for females and <21 g·dl−1 for males) and had a Qinghai CMS questionnaire score of 0.5 ± 0.8. Study 1 - Lowlanders and Sherpa at 5,050 m. At 5,050 m in the Nepalese Himalaya (EV-K2-CNR Pyramid Research Laboratory), 7 male lowlanders and 8 male Sherpa received an infusion of DFO (desferrioxamine; 7 mg/kg/hour over 4 hr), and 9 male lowlanders and 12 male Sherpa received an infusion of iron [iron(III)-hydroxide sucrose; 200 mg over 30 min followed by 3.5 hours of slow-drip saline (0.9% NaCl); the total infusion time for iron was 4 hr to mirror the time-course of the DFO infusion]. Lowlanders receiving DFO and iron were tested after 13 ± 3 days and 12 ± 4 days, respectively, at 5,050 m (Figure 1). There were 8 Sherpa who ascended from Kathmandu and were tested following 7 ± 3 days at 5,050 m, and 12 Sherpa who ascended from 3,800-4,200 m and were tested 1-2 days following arrival at 5,050 m. Study 2 - Lowlanders and Andeans at 4,300 m. At 4,300 m in the Peruvian Andes (Cerro de Pasco; resident altitude for the Andeans), 11 lowlanders (2 female) and 12 Andeans (1 female) received an infusion of saline (250 ml of 0.9% NaCl) and 11 lowlanders (4 female) and 12 Andeans (2 female) received an infusion of iron sucrose [iron(III)-hydroxide sucrose; 200 mg in 250 ml 0.9% NaCl] over 30 min. Lowlanders receiving saline and iron were tested after 9 ± 5 and 6 ± 3 days, respectively (Figure 1).
Infusion randomization. For Studies 1 and 2, block-randomization (i.e. randomization performed in stages across the expedition) of participants was performed for three primary purposes: 1) it streamlined data collection; 2) it optimized coordination with other ongoing studies (i.e. avoided any potential confounding effects of manipulated iron levels); and 3) it ensured that lowlanders allocated to iron and saline or DFO conditions were appropriately weighted, in terms of the number of days at altitude, to limit any potentially confounding influence of hypobaric exposure on iron stores. Analysis was conducted on coded data/files.
Changes in PIO2
Measures of CBF (see Experimental measures below), blood pressure, peripheral O2 saturation via pulse oximetry (SpO2) and ventilation (via Wright spirometer) were collected during resting supine breathing of: 1) ambient air (PIO2 of 87 mmHg at 5,050 m and PIO2 of 96 mmHg at 4,300 m), and 2) exaggerated hypoxia (PIO2 of 67 mmHg at 5,050 m and PIO2 of 73 mmHg at 4,300 m; simulating an additional elevation gain of ~2,000 m each), both before and after the infusion (see Participants above). The partial pressure of end-tidal CO2 (PETCO2) was assessed using capnography (EMMA, Masimo); however, during Douglas bag breathing (i.e., exaggerated hypoxia), PETCO2 was not collected due to the increased physiological deadspace associated with our respiratory apparatus. Consequently, PETCO2 during exaggerated hypoxia was calculated based on the change in alveolar ventilation (see Data analysis below).
Figure 1. Summary of the experimental protocol of each study. In Study 1, lowlanders and Sherpa hiked to 5,050 m over 9-10 days. Prior to flying to Lukla, the ascending Sherpa group had already descended and been in Kathmandu for 5-15 days (median: 7 days); lowlanders had been in Kathmandu (1,400 m) for 3-9 days (median: 6 days). Additional Sherpa were recruited at high altitude, typically ascending from 3,800-4,200 m in 1-2 days, and were tested 1-2 days following arrival at 5,050 m. In Study 2, lowlanders were driven over 8 hours from Lima to 4,300 m, Cerro de Pasco (where all Andeans were residents). Experimental PIO2 conditions are included, which were repeated pre- and post-infusion.
Experimental measures
Cerebral blood flow: Internal carotid artery (ICA) and vertebral artery (VA) diameter with synchronous measurements of blood velocity were performed using a 10-MHz multifrequency linear array probe with high-resolution duplex ultrasound (Terason t3200 and Terason uSmart 3300, Teratech). The ICA was measured at least 2 cm from the carotid bifurcation, whilst avoiding turbulent or retrograde flow. The VA was measured approximately at the transverse process of C4 and the subclavian artery. ICA and VA measures were conducted at the same location within each subject. Images of blood vessel diameter and blood velocity were recorded as video files, which were analyzed offline using automated edge-detection software [FMD/BloodFlow Software version 5.1, Reed C, Australia;48]. All data are based on imaging of >15 cardiac cycles, with a stable and repeated angle of insonation. Volumetric blood flow (Q) was quantified as the product of time-averaged mean blood velocity and vessel cross-sectional area:
Q = mean velocity × π(diameter/2)² × 60
Global CBF (gCBF) was estimated as twice the sum of unilateral ICA and VA flows. Cerebrovascular conductance (CVC) was estimated using MAP:
CVC = Q/MAP
gCBF reactivity was estimated as ΔgCBF/ΔSpO2. Automated blood pressure was collected in duplicate using brachial oscillometry. The absolute change in blood flow (ΔQ) from room air to exaggerated hypoxia was used to assess the hypoxic reactivity of a particular vessel (e.g. ΔQVA) and of the bulk flow to the brain (i.e. ΔgCBF).
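The flow calculations above can be sketched in a few lines of code. This is a minimal illustration assuming the conventional duplex-ultrasound relation that Q (ml/min) is mean velocity (cm/s) times cross-sectional area (cm²) times 60 s/min; all numeric inputs below are hypothetical, not values from the study.

```python
import math

def volumetric_flow(mean_velocity_cm_s, diameter_cm):
    """Q (ml/min) = mean velocity (cm/s) x cross-sectional area (cm^2) x 60."""
    area_cm2 = math.pi * (diameter_cm / 2) ** 2
    return mean_velocity_cm_s * area_cm2 * 60

def global_cbf(q_ica, q_va):
    """gCBF estimated as twice the sum of unilateral ICA and VA flows."""
    return 2 * (q_ica + q_va)

def cvc(q_ml_min, map_mmhg):
    """Cerebrovascular conductance: CVC = Q / MAP (ml/min/mmHg)."""
    return q_ml_min / map_mmhg

# Hypothetical values for illustration only
q_ica = volumetric_flow(40.0, 0.5)    # ICA: 40 cm/s, 0.5 cm diameter
q_va = volumetric_flow(25.0, 0.35)    # VA: 25 cm/s, 0.35 cm diameter
gcbf = global_cbf(q_ica, q_va)
print(round(gcbf, 1), round(cvc(gcbf, 90.0), 2))  # 1231.1 13.68
```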
Blood sampling. Venous blood samples were separated by microcentrifugation, with serum samples frozen in liquid nitrogen at À196 C for analysis. Serum iron was analysed according to clinical laboratory standards (Samyak Diagnostic Pvt. Ltd., Kathmandu, Nepal and Medlab Clinical Laboratories, Lima, Peru). Hemoglobin concentration ([Hb]) and hematocrit (Hct) were obtained from whole venous blood sample and analyzed immediately (ABL90 FLEX, Radiometer and microcentrifugation).
Data analysis
PETCO2 during poikilocapnic hypoxia was calculated using a mean slope of PETCO2 [derived from 15 min of poikilocapnic hypoxia in 22 healthy individuals49] per change in VE from room air breathing to hypoxia:
PETCO2,hypoxia = PETCO2,room air + (ΔPETCO2/ΔVE) × (VE,hypoxia − VE,room air)
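A sketch of this correction, assuming it takes the form of the room-air PETCO2 plus a population-mean slope multiplied by the change in ventilation (the slope value below is purely illustrative, not the one derived in reference 49):

```python
def estimate_petco2_hypoxia(petco2_room_air, ve_room_air, ve_hypoxia,
                            slope_mmhg_per_l_min=-0.5):
    """Room-air PETCO2 (mmHg) corrected by a mean slope of PETCO2 per
    unit change in ventilation VE (l/min); slope here is illustrative."""
    return petco2_room_air + slope_mmhg_per_l_min * (ve_hypoxia - ve_room_air)

# Hypothetical: PETCO2 of 30 mmHg at rest, VE rising from 10 to 14 l/min
print(estimate_petco2_hypoxia(30.0, 10.0, 14.0))  # 28.0
```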
Statistical analysis
Data were analyzed using a linear mixed effects model with a compound symmetry repeated-measures covariance structure (SPSS v24, IBM Statistics). The fixed factors for the model were ancestry and time (i.e., pre- to post-infusion), with the latter being a repeated factor. When a significant interaction effect (e.g., ancestry × time) was detected, Bonferroni-adjusted post-hoc tests were utilized to test pairwise comparisons. Pre- and post-infusion iron markers were assessed using paired-samples t-tests. Correlations were assessed using Pearson correlation. A one-way ANOVA was used to assess the absolute change in serum iron across groups. Shapiro-Wilk normality testing confirmed that primary outcome measures (gCBF, serum iron, Hb, Hct, MAP, SpO2) were normally distributed. All results are reported as mean ± SD and significance was set at P < 0.05.
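The paired-samples t-test used for the pre- versus post-infusion iron markers can be computed directly. This sketch uses hypothetical serum iron values, not data from the study:

```python
import math

def paired_t(pre, post):
    """Paired-samples t statistic and degrees of freedom for matched
    pre/post measurements."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    mean_d = sum(diffs) / n
    var_d = sum((d - mean_d) ** 2 for d in diffs) / (n - 1)  # sample variance
    se = math.sqrt(var_d / n)                                # standard error
    return mean_d / se, n - 1

# Hypothetical serum iron (umol/l) before and after chelation
pre = [18.0, 22.0, 15.0, 20.0, 17.0]
post = [9.0, 12.0, 8.0, 11.0, 10.0]
t, df = paired_t(pre, post)
print(round(t, 2), df)  # -14.0 4
```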
Between study infusion comparability
The effectiveness of iron infusion in increasing serum iron was comparable across lowlanders and Andeans at 4,300 m and lowlanders and Sherpa at 5,050 m (P = 0.819; Figure 2(f)). Saline infusion did not alter serum iron in either lowlanders or Andeans (main effect of time, P = 0.759), whereas DFO decreased serum iron by ~60%, to nearly undetectable levels of 2.4 ± 0.9 µmol·l−1 in lowlanders and 2 ± 0 µmol·l−1 in Sherpa (main effect of time, P < 0.001).
Study 1: DFO and iron at 5,050 m
In both lowlanders and Sherpa, VE, PETCO2, SpO2 and MAP were unaltered by either DFO or iron. Ultimately, there was no change in QICA, QVA, gCBF or CVC following either DFO or iron in lowlanders and Sherpa (Table 1). Moreover, and in contrast to our hypotheses, during exaggerated hypoxia (PIO2 = 67 mmHg), DFO did not increase gCBF and iron did not lower gCBF in either lowlanders or Sherpa (Table 2). When normalized to the change in SpO2, the reactivity of gCBF (i.e., ΔgCBF/ΔSpO2) during hypoxia was also not altered by iron or DFO manipulation (Table 2).
Study 2: Saline and iron at 4,300 m
In both lowlanders and Andeans, VE, PETCO2, SpO2 and MAP were unaltered by saline and iron infusions (Table 3). While saline infusion did not alter gCBF, iron infusion led to a 4 ± 10% reduction in gCBF (main effect of time, P = 0.043) and a 7 ± 13% decrease in CVC (main effect of time, P = 0.011) in both lowlanders and Andeans (Table 3). During exaggerated hypoxia (PIO2 = 73 mmHg), however, neither saline nor iron altered the rise in gCBF. The reactivity of gCBF during hypoxia (i.e., ΔgCBF/ΔSpO2) was attenuated following iron infusion (main effect of time, P = 0.019; Table 4). In lowlanders only at 4,300 m, the change in VA blood flow from room air to hypoxia (i.e. ΔQVA) increased from 4 ± 13 ml·min−1 pre-iron to 21 ± 19 ml·min−1 post-iron infusion (P = 0.023).
Discussion
Our primary finding was that iron status subtly influences global CBF, in a manner inversely dependent on the severity and length of stay at high altitude. These data also highlight the influence of iron and [Hb] on CBF responses to hypoxia in both lowlanders and highlanders. A secondary, preliminary observation was that acute elevations in iron (and hence HIF downregulation) led to a preferential elevation in posterior CBF in lowlanders at 4,300 m. The following discussion outlines the implications and experimental considerations that underpin these findings.
Cerebral blood flow, HIF expression, and iron status at high altitude
During initial ascent and arrival at high altitude, it is well established that CBF increases in proportion to the reduction in arterial O2 content, in order to maintain cerebral O2 delivery. Over 1-2 weeks of acclimatization at a given altitude, CBF gradually normalizes to slightly above sea-level values as erythropoiesis increases the O2-carrying capacity of the blood and metabolic compensation occurs to partially correct the initial respiratory alkalosis [reviewed in 50]. As illustrated in Figure 3, it is noteworthy that the kinetics of cellular HIF expression follow a similar trajectory during hypoxic exposure: in rodent models, HIF DNA-binding activity reached 77% of maximal levels within one minute, and HIF activity was detectable by 15 minutes.51 In human endothelial cell culture models, HIF-1α and HIF-2α levels peaked after 4-6 hours and 13 hours of hypoxic exposure, respectively.52 Interestingly, HIF-1α protein expression within the brain is more prevalent [versus HIF-2α53] and HIF-1α sensitivity to hypoxia is greater than in other tissues;54 for example, an FIO2 of 0.18 was sufficient to induce HIF-1α protein expression in the brain, whereas an FIO2 of 0.06 was needed for hepatic and renal tissue cells.54 In mouse cortical tissue, HIF-1α expression peaks at 6-12 hours and normalizes within ~3 weeks.11 Iron, owing to its constituent role in HIF stabilization via prolyl hydroxylase activity,13 can influence the coordination of HIF-mediated responses designed to maintain O2 delivery.55 However, because iron deficiency commonly occurs with sojourn to high altitude,16,17,55,56 the responsiveness with which downstream HIF responses are stimulated might be influenced by the individual's prevailing iron status. This notion is based on our finding that iron infusion attenuated gCBF in lowlanders and Andeans at 4,300 m, but not in lowlanders and Sherpa at 5,050 m.
To reconcile this discrepancy, the impacts of iron manipulation on the cerebral vasculature in Studies 1 and 2 should be interpreted within the context of the degree and duration of altitude exposure, since the iron status of participants in the two studies was not identical. In Study 1 (5,050 m), 65% of lowlanders and 9% of Sherpa had ferritin levels of <15 ng·ml−1, while in Study 2 (4,300 m), only 14% of lowlanders and 4% of Andeans had ferritin levels <15 ng·ml−1.
While we would postulate that a greater prevalence of iron depletion at pre-infusion would enhance the potency of iron infusion in altering cerebral vascular function, our findings show that 1) iron infusion did not attenuate gCBF at 5,050 m, and 2) DFO did not markedly increase gCBF at 5,050 m. Together, these data indicate that, at least for the brain at this altitude, the characteristic rise in arterial O2 content57 likely plays a more potent role in CBF regulation, since HIF expression would be expected to be declining over this timeframe.11
Hemoglobin and serum iron
It is well established that [Hb] inversely dictates cerebral blood flow, via proportional changes in arterial oxygen content.57,58 However, tissue HIF activity also inversely coincides with arterial oxygen content,59 and iron modulates critical cofactors (e.g. prolyl-hydroxylases) important for HIF activity.10,60 At high altitude, the ubiquitous erythropoietic response disrupts tissue iron balance, which is reflected in an overall decline in iron levels. While Hb formation is certainly dependent upon available iron, changes in serum iron are earlier indicators of altering demand/storage signals, and thus our finding of iron's influence on ΔgCBF is notable. Ultimately, circulating iron levels and [Hb] are inextricably linked, but based on our correlational analyses, we are unable to tease out the independent influences of iron and Hb on CBF. Future studies are warranted, potentially in anemic and non-anemic volunteers who remain iron-deplete or -replete, to investigate this observation further.
Figure 3. Summary of global CBF during prolonged stay at high altitude (5,050 m and 4,300 m refer to Studies 1 and 2, respectively). Cortical HIF-1α expression adapted from the literature,11 and CBF during early exposure to 5,050 m in lowlanders adapted from the literature.3 Since most of the lowlanders tested during Study 1 of the current study were also included in the study by Hoiland and colleagues, the dashed line is included to illustrate the change in CBF across time.
Exaggerated vertebral artery blood flow during hypoxia
Since the VA provides blood flow to the posterior region of the brain, including the brainstem (a key site of cardiorespiratory control), some studies have shown that posterior regions of the brain demonstrate a preferential blood flow distribution during hypoxia.61-63 Therefore, our finding of a greater ΔQVA (i.e. the elevation in VA flow from room air to exaggerated hypoxia) with iron in lowlanders at 4,300 m may suggest that HIF activity aids in the regulation of flow to these highly homeostatic regions of the brain.
Role of high altitude ancestry
While high-altitude Sherpa and Andeans display adaptive characteristics to hypoxia, including positive selection for HIF pathway candidate genes,42,43 there appears to be little evidence from the current study that iron manipulation differentially impacts cerebrovascular reactivity to hypoxia between partially acclimatized lowlanders and healthy highlanders. While this contrasts with our hypothesis, we acknowledge that we were only able to assess the acute (i.e., 4-hour) impact of HIF up-/down-regulation (via iron manipulation) on CBF. Since many of the downstream constituents of HIF are only evident after days to weeks (e.g. changes in cerebral microvascular density and hematocrit,12 neurovascular angiogenesis and pericyte proliferation14), the potential for cerebrovascular differences between high-altitude residents and lowlanders to emerge over time warrants further investigation.
Methodological considerations
While physiological assessments were consistent across both expeditions, it must be acknowledged that ascent profiles were not identical (especially in Study 1, where some Sherpa had ascended alongside lowlanders and others had not; see section Study 1 - Lowlanders and Sherpa at 5,050 m). However, because iron status can vary markedly between individuals, and there were no differences between ascending versus non-ascending Sherpa (e.g., pre-infusion serum iron, P = 0.134), we opted to include, rather than exclude, the ascending Sherpa.
Sherpa (and Andeans) are typically smaller in height and weight compared to westerners. Scaling gCBF (in ml/min) to brain mass (in ml/min/100 g tissue) can be estimated using an allometric scaling equation.64 A study by Hoiland and colleagues3 demonstrated the brain mass of Sherpa to be ~2-4% smaller compared to lowlanders. While of interest, as discussed by the authors,3 the differences in global CBF were unlikely to depend solely on this factor, and were instead more dependent upon a down-regulation of metabolic processes. Likewise, in a separate study, brain size (via T1 MRI imaging) was not different between Han Chinese and lowlanders.65 Globally, females typically have lower iron levels compared to males, a feature likely due to a combination of factors ranging from menstrual blood volume losses and reduced dietary intake/absorption to pregnancy.66 Similarly, females also demonstrate low iron levels at high altitude.67 Unfortunately, it was not possible to recruit more female participants in the current study. While the inclusion of females may make the sample population more heterogeneous, we are unaware of any evidence of measured or examined iron-related sex differences in indigenous populations at altitude, including the Sherpa or Andeans. Ultimately, there exists a gap in our understanding of sex differences at altitude as they pertain to iron metabolism, so further investigation, focused explicitly on females and iron, is certainly warranted.
Conclusion
In both healthy lowlanders and highlanders at high altitude, prevailing iron status appears to contribute to the variability in cerebral hypoxic reactivity. However, acute manipulation of iron status only minimally influences cerebral blood flow and function, in a manner potentially dependent on the severity and length of high-altitude exposure. Given the variable and progressive depletion of iron stores at high altitude, broadening the scope of iron metabolic assessments to >24 hours post-infusion may provide new insight into potential relationships between iron stores and cerebral blood flow control during chronic hypoxia.
Reduction of wing area affects estimated stress in the primary flight muscles of chickens
In flying birds, the pectoralis (PECT) and supracoracoideus (SUPRA) generate most of the power required for flight, while the wing feathers create the aerodynamic forces. However, in domestic laying hens, little is known about the architectural properties of these muscles and the forces the wings produce. As housing space increases for commercial laying hens, understanding these properties is important for assuring safe locomotion. We tested the effects of wing area loss on mass, physiological cross-sectional area (PCSA), and estimated muscle stress (EMS) of the PECT and SUPRA in white-feathered laying hens. Treatments included Unclipped (N = 18), Half-Clipped with primaries removed (N = 18) and Fully-Clipped with the primaries and secondaries removed (N = 18). The mass and PCSA of the PECT and SUPRA did not vary significantly with treatment. Thus, laying hen muscle anatomy may be relatively resistant to changes in external wing morphology. We observed significant differences in EMS among treatments, as Unclipped birds exhibited the greatest EMS. This suggests that intact wings provide the greatest stimulus of external force for the primary flight muscles.
Introduction
Maintaining plumage quality is important for both wild and domesticated birds, as feather loss or damage can lead to reduced wing area [1] and impaired/reduced mobility [2]. Adult birds of many different species will generally undergo an annual or biannual moult in the wild to replace worn feathers [1]. During this period, feathers are shed, leaving some birds flightless. Subsequently, the flight muscles can atrophy due to disuse [3,4]. In domesticated chicken flocks kept for egg production (laying hens), loss of feathers can come from feather pecking and abrasions from housing equipment [5]. Feather pecking is present in up to 86% of commercial laying hen flocks, affecting all areas of the body, including the wing and tail feathers [5-7].
As ground birds (Galliformes), domestic chickens (Gallus gallus domesticus) rely heavily on bipedal walking and running and have limited flight abilities. Galliform birds are capable of flapping flight, but their flight is generally explosive, with greater wingbeat frequency and larger power output requirements than similarly sized, non-Galliform flying counterparts (e.g. chukars [8] versus pigeons [9]). Any reduction in wing area, for example through feather damage or loss, will increase wing loading (body mass per wing area) [10] and subsequently increase the power output required for flapping flight [11]. León et al. [12] showed that even fully wing-feathered chickens work at their maximum power output when performing flapping flight. While the flight feathers of the wings convert muscle power into aerodynamic power [13], the two main flight muscles, the pectoralis (PECT) and supracoracoideus (SUPRA), power avian wing movement [9]. The flight muscles can constitute up to 20% of an adult bird's total body mass [13]. The significantly larger PECT makes up approximately 17% of an adult bird's body mass and sits superficial to the smaller SUPRA, which makes up only about 2-4% of body mass [13]. Both muscles originate on the keel and insert on the humerus, with the PECT inserting at the deltopectoral crest and the SUPRA inserting at the dorsal humeral head via a long central tendon that passes through the foramen triosseum [14]. During flapping flight, the PECT powers the lift for weight support and thrust during the downstroke [9]. The SUPRA rotates, supinates and elevates the wing to overcome wing inertia, but contributes much less to overall aerodynamic force than the PECT [9,14,15].
The PECT has a complex anatomy, with some parallel but mostly bipennate fibres and a short tendon of insertion, whereas the SUPRA is bipennate with a long tendon of insertion [13]. Bipennate muscles have fascicles that attach to opposite sides of a central tendon [13]. The maximal force these muscles can produce is proportional to the muscle's physiological cross-sectional area (PCSA) [3]. In addition, for a given volume, bipennate muscles have a greater PCSA and can therefore produce more force than parallel-fibred muscles [3]. Notably, the muscle force exerted at the tendon of insertion is also affected by the pennation angle (α), the angle between the muscle fibre and the central tendon, with greater muscle stress required for a given force output as the angle increases [3,16,17]. The proportionally long tendon of insertion of the SUPRA is thought to store and release elastic energy to assist with the inertial work requirements of the upstroke [9]. However, little is known about the architectural properties of the PECT and SUPRA of domestic chickens.
As a model for laying hens experiencing severe feather loss in commercial farming, we experimentally decreased wing area using controlled clipping of the primary and secondary flight feathers, which effectively reduced the use of elevated resources in aviaries [11]. Hens with all flight feathers left intact had lower descent velocities and descent angles compared with hens with clipped flight feathers, which is vital for slow, safe and controlled landings [12]. Additionally, wing-feather clipping reduced PECT thickness [18]. In this paper, we further explore the impact of wing-feather clipping on muscle mass, average fascicle length, pennation angle (α), PCSA and estimated muscle stress (EMS) in the PECT and SUPRA. First, we describe the architectural properties of the PECT and SUPRA, which are then used to calculate PCSA, and we use simplified models of aerodynamic and inertial forces to calculate EMS (kPa). Second, we use wing-feather clipping to investigate the effects of wing area loss on PECT and SUPRA muscle characteristics. We predicted atrophy of the muscles in the clipped groups, leading to reductions in mass and, potentially, increases in pennation angle that would in turn reduce PCSA and, for a given force requirement, increase EMS as a result of the reduction in elevated resource use and flapping-flight capability [11].
Animals and housing
A total of 60 white Lohmann LSL lite laying hens were housed in six aviary-style pens (183 cm L × 244 cm W × 290 cm H), with 10 hens per pen, as part of a larger study (R. Soc. Open Sci. 10: 230817) assessing the impact of wing-feather clipping on potential keel bone injuries [18], changes in behaviour [11], muscle depth [18] and flight kinematics [12]. The floor was covered in 5 cm of wood shavings, and each pen included two high platforms (122 cm L × 31 cm W) 70 cm above the ground on either side of the pen. In addition, one feeder was placed in the middle of the floor, and a second feeder was secured to one of the two elevated platforms. Similarly, two nest-boxes were provided: one was placed against the back wall on the floor, and the second was secured to the second platform. A high perch (5 cm diameter) was installed near the back wall at a height of 150 cm, spanning the width of the pen. Automatic nipple drinkers were provided. The pens were kept at 21°C with a 14:10 h light:dark cycle and a 30-minute dawn/dusk period.
Wing-feather clipping treatment
Wing-feather clipping treatments were applied as described by Garant et al. [11] at 34 weeks of age, during the fall season. Three birds from each pen were randomly assigned to one of three treatment groups (figure 1): Unclipped, where all flight feathers were left intact (table 1; wing loading 162.4 ± 11.5 kg m⁻²); Half-Clipped, where the 10 primary flight feathers were clipped symmetrically across both wings (32.5% wing area loss; wing loading 249.1 ± 29.1 kg m⁻²); and Fully-Clipped, where all secondary and primary flight feathers were clipped symmetrically across both wings (55.4% wing area loss; wing loading 378.9 ± 53.9 kg m⁻²). The primary and secondary covert feathers were used as a guideline when clipping the feathers with scissors.
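The wing-loading values above follow directly from body mass divided by remaining wing area. A minimal sketch (not the authors' code; body mass and intact wing area are illustrative assumptions) shows how the clipping treatments raise wing loading:

```python
# Sketch of wing loading (body mass per wing area) under wing-area loss.
def wing_loading(mass_kg, wing_area_m2, area_loss_fraction=0.0):
    """Wing loading in kg/m^2 after removing a fraction of wing area."""
    remaining_area = wing_area_m2 * (1.0 - area_loss_fraction)
    return mass_kg / remaining_area

mass = 1.7          # kg, assumed laying-hen body mass (illustrative)
area = 1.7 / 162.4  # m^2, back-calculated so the Unclipped loading is ~162.4 kg/m^2

unclipped = wing_loading(mass, area)          # ~162.4 kg/m^2
half = wing_loading(mass, area, 0.325)        # 32.5% area loss (Half-Clipped)
full = wing_loading(mass, area, 0.554)        # 55.4% area loss (Fully-Clipped)
print(round(unclipped, 1), round(half, 1), round(full, 1))
```

With a single shared body mass and area, the Half- and Fully-Clipped results land near, but not exactly at, the reported means (249.1 and 378.9 kg m⁻²), since individual wing areas and masses varied across birds.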
Dissection and muscle collection
All hens were euthanized using CO₂ by trained personnel at 42 weeks of age (eight weeks after the clipping treatment) and kept frozen at −18°C until further analysis. From this population, a randomly selected subsample of 18 birds (6 hens per clipping treatment) was dissected to assess PCSA and calculate EMS. Dissections took place over three days in the winter, with approximately eight hens dissected per day by two trained researchers (trained simultaneously using the same protocol). The same researcher collected all measurements on a given hen. Hens from each treatment were dissected on each day and were equally divided among the researchers, who were blinded to the treatments. Carcasses were left to thaw for an average of 9.5 h at room temperature. Hens that were not immediately dissected were kept in the fridge at 1°C until dissection. Each dissection lasted, on average, 2 h, and all carcasses were dissected within 6 h of thawing.
Dissections were carried out following the methods of Casey-Trott et al. [19]. In brief, whole carcasses (including feathers and viscera) were weighed immediately prior to dissection by placing the carcass in a reusable bag and using a luggage scale (Maple Leaf Travel Accessories, ACI Brands, Inc., Ontario, Canada). Muscles were removed in the order of left PECT, left SUPRA, right PECT and right SUPRA. Visible fat and connective tissue were removed as far as possible. Immediately after removal, muscles were weighed on an analytical balance (Mettler Toledo AE200 Analytical Balance, Mettler Toledo, Ontario, Canada) and placed on a piece of cardstock in preparation for measuring fascicle length and pennation angle.
Muscle measurements
The fascicle length and pennation angles (figure 2) of each muscle were measured.Ten muscle fascicles were chosen randomly on both the superficial and deep sides of each muscle (total of 20 measurements).
A flexible tape measure was used to follow the curves of each muscle fascicle to measure fascicle length (cm). Ten pennation angles were chosen at random and measured in degrees (°) using an electronic protractor (Beslands 0-200 mm/8 inch Digital Protractor, Beilong Tool, Zhejiang, China) on both the superficial and deep sides of each muscle (total of 20 measurements). Pennation angles were measured by lining up the base/reference line of the protractor with the central tendon (figure 2). For the PECT, one of the 10 pennation angles was assumed to be 0°, as a small portion of the PECT does not insert directly into the central tendon.
Physiological cross-sectional area and estimated muscle stress calculations
Muscle fascicle length was converted from cm to m, and pennation angles were converted from degrees to radians, prior to their use in calculating the PCSA and EMS.
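The PCSA calculation described in the text (muscle mass divided by the product of muscle density and average fascicle length, equation (2.1)) can be sketched as follows; the mass and fascicle length below are illustrative values, not measurements from this study:

```python
MUSCLE_DENSITY = 1060.0  # kg m^-3, density assumed in the paper (ref. [20])

def pcsa_m2(muscle_mass_kg, fascicle_length_m):
    """Physiological cross-sectional area (equation 2.1): PCSA = M / (rho * L)."""
    return muscle_mass_kg / (MUSCLE_DENSITY * fascicle_length_m)

# Illustrative (not measured) values: a 100 g muscle with 6 cm fascicles.
pcsa = pcsa_m2(0.100, 0.06)          # in m^2
print(round(pcsa * 1e4, 2), "cm^2")  # converted to cm^2 as reported in the paper
```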
Estimated muscle stress
We developed simplified models of aerodynamic and inertial forces to estimate muscle stress for the PECT and SUPRA, respectively. We assumed that power for weight support during the downstroke was the primary role of the PECT and that inertial power was the primary role of the SUPRA. Data from León et al. [12] were used for the relevant whole-body and wing kinematics, as these were collected from the same hens and treatments used in the current study. Induced power for vertical weight support dominates the requirements of slow flight [21,22]; therefore, we used a flapping-wing approximation of the Rankine-Froude momentum theory of propellers [23] and reasoned that the average vertical force in a momentum jet must equal body mass multiplied by the vertical acceleration reported in León et al. [12], where the proportion of weight supported was 0.73, 0.53 and 0.79 for Unclipped, Half-Clipped and Fully-Clipped birds, respectively. Each PECT provides force to support half of the body weight; therefore, body weight (N) was halved and multiplied by the proportion of weight supported in slow descending flight. As the downstroke provides most or all of the weight support during slow flight in all birds except hummingbirds [24], we then divided by 0.52, the proportion of the wingbeat cycle consisting of the downstroke, as calculated from data in León et al. [12]. To solve the average force balance for the downstroke, the centre of pressure of the wing was assumed to be located at 62.5% of the wing length [25]. The aerodynamic moment (N m) was calculated as the body weight supported by one wing during the downstroke multiplied by the spanwise distance to the centre of pressure of the wing, assumed from empirical study [25] to be 0.625 multiplied by the wing length (table 1). Muscle force was then found by dividing the aerodynamic moment by the average moment arm of the hen PECT attaching to the deltopectoral crest of the humerus, measured from three birds (0.0176 ± 0.0013 m). We then used this average value to calculate EMS (see equation (2.2), below).
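The downstroke force balance described above can be sketched as follows. The weight-support fraction, downstroke fraction, centre-of-pressure position and PECT moment arm are the values given in the text; body mass and wing length are illustrative assumptions, not values from table 1:

```python
def pect_muscle_force(body_mass_kg, wing_length_m,
                      weight_support=0.73,      # fraction of weight supported (Unclipped, [12])
                      downstroke_fraction=0.52, # proportion of wingbeat cycle in downstroke
                      centre_of_pressure=0.625, # spanwise centre-of-pressure position [25]
                      moment_arm_m=0.0176):     # PECT moment arm at the deltopectoral crest
    """Average PECT force per wing during downstroke, following the
    aerodynamic-moment balance described in the text (a sketch, not the
    authors' code)."""
    g = 9.81
    # Each PECT supports half of the (partially supported) body weight,
    # concentrated in the downstroke phase of the wingbeat cycle.
    vertical_force = 0.5 * body_mass_kg * g * weight_support / downstroke_fraction
    aerodynamic_moment = vertical_force * centre_of_pressure * wing_length_m
    return aerodynamic_moment / moment_arm_m

# Illustrative inputs (assumed, not from table 1): 1.7 kg hen, 25 cm wing.
force = pect_muscle_force(body_mass_kg=1.7, wing_length_m=0.25)
print(round(force, 1), "N")  # ~104 N with these assumed inputs
```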
The upstroke in chickens and other Galliform birds is understood to be largely aerodynamically inactive, such that muscle force is required only to accelerate the wing and overcome wing inertia during the first half of the upstroke [9,21,24,26]. The torque required to accelerate the wing should equal the moment of inertia of the wing (I; kg m²) multiplied by its angular acceleration (θ; rad s⁻²) [27]. We measured I for one hen in each treatment using standard, spanwise strip measurements [27,28]: Unclipped = 6.8 × 10⁻⁴ kg m², Half-Clipped = 4.2 × 10⁻⁴ kg m² and Fully-Clipped = 3.3 × 10⁻⁴ kg m². The wing was stretched to have a straight leading edge, emulating the posture at mid-downstroke. The wing was then cut into spanwise strips 2 cm in width, and the mass of each strip (including all muscle, bone and feathers) was measured using an analytical balance (Analytical Balance ME104E, Mettler Toledo, Ontario, Canada). Our calculation assumed point masses per strip [27]. Chickens flex their wings using a 'tip reversal' upstroke, as in other Galliform birds [8]. Time-resolved three-dimensional kinematics are necessary for estimating instantaneous changes in inertial force requirements. Lacking data with such resolution, we instead measured the span ratio (mid-upstroke wrist span divided by mid-downstroke wrist span) from the kinematics results in León et al. [12], obtaining 0.625. Each individual segment distance from the shoulder was then multiplied by this span ratio to calculate I for the flexed upstroke wing.
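The strip method for the wing's moment of inertia, and the span-ratio rescaling of segment distances for the flexed upstroke wing, can be sketched as below; the strip masses are invented for illustration, not measured values:

```python
def wing_moment_of_inertia(strip_masses_kg, strip_width_m=0.02, span_ratio=1.0):
    """Moment of inertia from spanwise strips, treating each strip as a point
    mass at its mid-span distance from the shoulder (as in the text).
    span_ratio < 1 rescales segment distances to approximate the flexed
    mid-upstroke wing."""
    I = 0.0
    for i, m in enumerate(strip_masses_kg):
        r = (i + 0.5) * strip_width_m * span_ratio  # strip-centre distance from shoulder
        I += m * r * r
    return I

# Illustrative strip masses (kg) for 2 cm strips along a ~24 cm wing (assumed).
strips = [0.060, 0.045, 0.030, 0.020, 0.012, 0.008,
          0.005, 0.004, 0.003, 0.002, 0.002, 0.001]
I_down = wing_moment_of_inertia(strips)                  # extended, mid-downstroke wing
I_up = wing_moment_of_inertia(strips, span_ratio=0.625)  # flexed upstroke estimate
print(I_down, I_up)
```

Because every segment distance is multiplied by the same span ratio, the flexed-wing estimate is simply the extended-wing value scaled by the span ratio squared.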
Angular velocity of the upstroke was measured as wingbeat amplitude (rad) divided by upstroke duration [12]. Treating changes in wing velocity as a sinusoidal function, we calculated the peak angular velocity at mid-upstroke as the average angular velocity during the upstroke divided by 0.64. Angular acceleration (θ) during the first half of the upstroke was then the change in angular velocity from 0 rad s⁻¹ at the start of the upstroke to the peak angular velocity at mid-stroke, divided by the duration of the first half of the upstroke. Dividing Iθ by the moment arm of the SUPRA operating about the shoulder, taken to be 0.0052 ± 0.00008 m (average taken from measurements of three birds), yielded the average SUPRA force applied to a wing during the first half of the upstroke. During the second half of the upstroke, we assumed force was dissipated either as a pulse of thrust [29] or as energy absorbed during active lengthening in the pectoralis [9,26]. As for the PECT, we then calculated EMS using equation (2.2):

EMS = F / (PCSA × cos α),   (2.2)

where F (N) is the muscle force calculated according to the specific muscle (PECT or SUPRA, detailed above), PCSA is the physiological cross-sectional area (m²) from equation (2.1), and α is the average pennation angle (radians).
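Equation (2.2) combines muscle force, PCSA and pennation angle. A minimal sketch with illustrative inputs (assumed, not values from table 2):

```python
import math

def estimated_muscle_stress_kpa(force_n, pcsa_m2, pennation_rad):
    """Equation (2.2): EMS = F / (PCSA * cos(alpha)), returned in kPa.
    For a fixed force at the tendon, stress rises as pennation angle grows."""
    return force_n / (pcsa_m2 * math.cos(pennation_rad)) / 1000.0

# Illustrative values: 100 N on a 10 cm^2 muscle with a 0.35 rad (~20 deg)
# pennation angle (all assumed for demonstration).
ems = estimated_muscle_stress_kpa(100.0, 10e-4, 0.35)
print(round(ems, 1), "kPa")
```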
Statistical methods
For ease of communication herein, we converted units of PCSA from m² to cm² and EMS from Pa to kPa. Left- and right-side muscle data, as well as superficial- and deep-side data, were averaged for each outcome variable, as no significant differences were found between sides using generalized linear mixed model procedures (PROC GLIMMIX) in SAS OnDemand for Academics: SAS Studio v. 9.04 (2021, SAS Institute Inc., Cary, NC, USA) (p > 0.05, data not shown).
PECT and SUPRA data were analysed separately. Three outcome variables were assessed as part of the analysis: muscle mass (g), PCSA (cm²) and EMS (kPa) for each muscle. PROC GLIMMIX was used in SAS OnDemand for Academics: SAS Studio v. 9.04 (2021, SAS Institute Inc., Cary, NC, USA) to perform statistical analysis on all outcome variables to determine significant differences between clipping groups. Wing-feather clipping status (three levels: Unclipped, Half-Clipped, Fully-Clipped) was included as the fixed effect for all three outcome variables. The hen's body weight (kg) was added as a covariate for muscle mass and PCSA, but not for EMS, as body mass is accounted for when calculating EMS.
Studentized residual plots were used to assess normality of the data and to determine the distribution that best fit the data. To meet normality assumptions, the model for PCSA was kept as a Gaussian distribution, while the models for muscle mass and EMS used the lognormal distribution for both PECT and SUPRA data. Descriptive statistics are presented with the standard deviation (s.d.). Results from the statistical analysis are presented as least-squared means (LSM) or as back-transformed LSM with the standard error (s.e.). A Tukey-Kramer p-value adjustment was used for multiple comparisons, and significance was determined at α = 0.05.
Results
Muscle mass, PCSA and EMS for both the PECT and SUPRA followed similar patterns: values were largest in the Unclipped group and smallest in the Half-Clipped group, except for the EMS of the SUPRA, which was smallest in the Fully-Clipped group (figure 3, table 2). Muscle mass and PCSA of the PECT and SUPRA did not vary significantly with treatment.
However, wing area loss induced by wing-feather clipping did significantly affect the EMS of the PECT (F2,15 = 64.90, p < 0.05) and the SUPRA (F2,15 = 98.37, p < 0.05). EMS of the PECT was highest in the Unclipped birds, followed by the Fully-Clipped birds, and lowest in the Half-Clipped birds (figure 3a). Similarly, for the SUPRA, EMS was highest in the Unclipped birds; however, it was intermediate in the Half-Clipped birds and lowest in the Fully-Clipped birds (figure 3b). EMS of the SUPRA was approximately double that of the PECT (figure 3).

Table 2. Morphological data for the pectoralis (PECT) and supracoracoideus (SUPRA) muscles of white-feathered laying hens across the three wing-feather clipping groups (Unclipped, Half-Clipped, Fully-Clipped; n = 6 birds per treatment), including mean ± standard deviation (s.d.), minimum and maximum values of body weight (g), muscle fascicle length (cm), pennation angle (α, radians), muscle mass (g), physiological cross-sectional area (PCSA, cm²) and estimated muscle stress (EMS, kPa).
Discussion
Wing-feather clipping treatments resulting in wing area loss did not significantly affect the muscle mass or PCSA of the PECT or the SUPRA. Our initial hypothesis predicted that the Half-Clipped (32.5% wing area loss) and Fully-Clipped (55.4% wing area loss) groups would have lighter and smaller muscles than the Unclipped group, as wing area loss would hinder flapping flight and discourage use of the PECT and SUPRA. This lack of difference in muscle mass and PCSA is surprising, as the wing-feather clipping treatment drastically reduced wing area by up to 55% per wing [11], resulting in a significant decline in elevated resource usage [11] and an approximately 6% decrease in PECT muscle thickness when ultrasound was used to measure effects in live birds six weeks after clipping [18]. We therefore hypothesized that decreases in muscle mass and PCSA would be reasonably observable eight weeks post-clipping in the present study. In wild waterfowl such as geese and grebes, muscle atrophy is commonly observed within a single moulting period, which leaves them flightless for several weeks (approximately four weeks) [3,30]. Additionally, other studies have shown that migratory birds such as red knots (Calidris canutus) exhibit both increases and subsequent decreases in pectoralis thickness in less than 12 weeks [31]. By contrast, decreases in muscle mass were not seen within eight weeks in the present study, let alone recovery of muscle mass. This suggests that additional time may be needed to detect differences in muscle mass and PCSA in the domestic laying hen, or perhaps that domestic laying hen muscle architecture is resistant to change in comparison with wild waterfowl and migratory species. The present study is also limited by a small sample size, and future studies would benefit from exploring the effects of flight-feather clipping on the PECT and SUPRA in a larger population.
Although decreases in PCSA due to wing area reduction were not seen, wing-feather clipping and its effects on flight capability were still found to significantly affect the EMS of both the PECT and SUPRA. EMS in this study was estimated from the amount of aerodynamic force and inertial torque that the PECT and SUPRA, respectively, would need to match. Therefore, EMS herein integrates differing amounts of wing length, area, wing second moment of area, wing moment of inertia and exhibited flight behaviour (table 1; [12]). Taken together with the fact that PCSA did not differ significantly with treatment, EMS in this study is more a reflection of behaviour in flight (vertical support of body weight) for the PECT, and of largely invariant angular velocities of the wing coupled with large changes in moment of inertia for the SUPRA (table 2). Large EMS may represent normal feedback that elicits flight behaviour in hens, as previous work indicates that when wing area is significantly reduced (up to 55% per wing), birds may opt for other methods of locomotion, such as ground locomotion [11,18]. Garant et al. [11] showed that elevated resource use/aerial locomotion decreased by 42% and that leg muscle thickness increased following wing-feather clipping [11,18]. Furthermore, León et al. [12] demonstrated that laying hens do not adjust their wing kinematics to adapt to experimentally reduced wing area, as in the present study, suggesting that birds with intact wings are already at their maximum capacity for aerodynamic power output. Therefore, unlike wild and more volant species such as pied flycatchers (Ficedula hypoleuca) and rock pigeons (Columba livia), which are able to modulate their flight in response to feather loss by flying faster or increasing flapping frequency to maintain flight performance [32,33], domesticated laying hens may not have the same flexibility to accommodate loss of wing area. This may pose welfare problems in multi-tiered systems, as these birds would have difficulty reaching elevated resources [11].
The PECT of the Half-Clipped birds was estimated to produce significantly less EMS than in both the Unclipped and Fully-Clipped groups. Previous work on flight kinematics using these birds showed that the Half-Clipped birds were not as adept at supporting their own body weight as the Fully-Clipped group, despite the Half-Clipped group having a larger wing area [12]. While the Unclipped and Fully-Clipped groups could support around 76% of their body weight, the Half-Clipped birds could only support 47% of their body weight. Additionally, they had the highest vertical acceleration during descent [12]. We hypothesize that this relatively poor performance was influenced by the shape of the wings left by the Half-Clipped treatment, as wing shape is important for flight stabilization [13]. Although the Fully-Clipped group had the greatest wing area reduction, the shape left by clipping both the primary and secondary flight feathers still mimicked that of a fully intact wing, as roughly the same proportion of area was distributed along the length of the wing. However, as only the primary flight feathers were cut in the Half-Clipped group, these birds were left with a wing shape in which a large area close to the body contrasted with a large area missing from the outer part of the wing. Nonetheless, the effect of wing shape in this context needs further study.
The laying hen is a relatively large bird with primarily fast-twitch glycolytic fibres in its PECT [34]. In related birds, PECT muscle strain increases with increasing body size to maintain flight capability [8]. Thus, it is not surprising that EMS in the PECT and SUPRA of the laying hen was greater (figure 3) than that reported for other species for which in vivo measures of muscle stress and strain exist, such as the PECT of pigeons (50-58 kPa; [9]) and starlings (34 kPa; [35]) and the SUPRA of pigeons (85-125 kPa; [9]). Muscle force (F) was estimated in this study using the moment arm of each muscle and the average aerodynamic force (PECT) or inertial force (SUPRA); our estimates could be improved in future studies by using in vivo measures, specifically electromyography (EMG) to measure neuromuscular activity and sonomicrometry to measure muscle strain (e.g. Tobalske & Biewener [9]). Galliform birds, including chickens, have a deltopectoral crest whose shape precludes in vivo measures of muscle stress [8]. However, new methods using an aerodynamic force platform and time-resolved measures of three-dimensional wing shape could be used to calculate muscle force [26]. In addition, Iyer et al. [36] present two potential methods of measuring muscle contractility and force in vivo and in situ, respectively, by measuring muscle torque around a joint and by measuring muscle tension with a tendon attached to a load cell. Mechanomyography (MMG) is yet another potential method for future studies; it measures the muscle vibration and stiffness produced during contraction, which can be related back to muscle force [37,38].
Conclusion
Wing area loss significantly affected the EMS of the PECT and SUPRA. EMS of the PECT was smallest in the Half-Clipped group, despite the Fully-Clipped group having less than 50% of their wing area left intact, which may be due to changes in wing shape and/or unmeasured alterations in neuromuscular coordination. Loss of wing area did not significantly affect the muscle mass or PCSA of the PECT or SUPRA in domestic laying hens, despite previous studies showing that loss of wing area affects their ability to access desirable elevated resources. It is possible that laying hen muscle physiology is such that mass and PCSA are more resistant to change than in wild and more volant species. Rather than increasing muscle size to compensate for the increased aerodynamic demand caused by reduced wing area, laying hens may instead have adopted alternative methods of terrestrial locomotion.
Figure 1. Diagrammatic representation of the wing-feather clipping treatments: Unclipped, where no flight feathers were clipped; Half-Clipped, where the 10 primary flight feathers on both wings were clipped; and Fully-Clipped, where all primary and secondary flight feathers on both wings were clipped along the coverts.
The PCSA was calculated by dividing the muscle mass (M, in kg) by the product of muscle density (assumed 1060 kg m⁻³; [20]) and the average fascicle length (L, in m) of either the PECT or SUPRA:

PCSA = M / (ρ × L),   (2.1)

where PCSA is in m², M is muscle mass (kg), ρ is muscle density (1060 kg m⁻³; [20]) and L is average fascicle length (m).
Figure 2. The deep side of the pectoralis (PECT) and supracoracoideus (SUPRA). (a) Dashed blue lines are examples of muscle fascicles measured in cm. (b) Solid black lines indicate pennation angles measured in degrees between two intersecting lines (the orientation of the fascicle and the central tendon). (c) The central tendon is indicated by the solid yellow line.
Table 1. Morphometrics of experimentally clipped wings of white layer hens (Gallus gallus domesticus). Values represent means ± s.d. for one wing extended as at mid-downstroke.
12 Studying Human-Human interaction to build the future of Human-Robot interaction
Understanding human-to-human sensorimotor interaction, in a way that can be predicted and controlled, is probably one of the greatest challenges for the future of Human-Computer Confluence (HCC). This would allow, for example, the possibility of optimizing group decision-making or brainstorming efficacy. On the other hand, it would also offer the means to naturally introduce artificial embodied systems into our social landscape. This vision sees robots or software that smoothly interface with our social representations and adapt dynamically to social contexts. The path to such a vision requires at least three components. The first, driven by cognitive neuroscience, has to develop methods to measure the real-time information flow between interacting participants in ecological scenarios. The second, shaped by the Human-Robot Interaction (HRI) field, consists in building the proper information flow between robots and humans. Finally, the third will have to see the convergence of robotics, neuroscience and psychology in order to functionally evaluate the reality of a long-term HCC.
Sensorimotor Communication
Understanding verbal and non-verbal communication in humans is a rather easy task that we master in childhood. At the same time, communication is a rather vague concept: there are too many unknown degrees of freedom, and the same measurable configuration can change in ways we cannot comprehend, predict or control. However, the chemistry of group interaction, e.g. the subjective feeling of being "in sync" with other people, is something we can perceive easily. We do this instinctively because we are innately social creatures that, by definition, send and receive (implicit) socially relevant messages in all our interactions (e.g. hand gestures, facial expressions, etc.). We do this in real time, in an extremely adaptive manner, at no cost.
Such a capability may be supported by the complex sensorimotor properties of the human motor system. In fact, in addition to their clear role in movement planning and execution, some premotor neurons show activity that can be interpreted as complex visual responses. For example, "mirror neurons" discharge both when a monkey executes an action and when it observes another individual performing the same action in front of it (Gallese et al., 1996). This discovery has stimulated basic research on how, why and when we send and receive explicit and implicit sensorimotor messages during social interaction. Along these lines, human research has shown that the cortico-spinal excitability of hand muscles, tested via Transcranial Magnetic Stimulation (TMS), is modulated during the passive observation of hand actions (Fadiga et al., 1995). Similarly, it was shown that passive listening to speech, which requires tongue mobilization, modulated the cortico-bulbar excitability of the tongue (Fadiga et al., 2002). In both cases, the observer/listener shares similar motor programs with the actor/speaker. Therefore, recognition of another's action/speech may exploit knowledge of how to produce that particular stimulus.
Computational Advantages of Sensorimotor Communication
This general computational principle has supported a major shift in cognitive systems research. The human motor system was once believed to be mainly an output system; however, the motor brain may also play a role in perceptual and cognitive functions. This challenges the classical sensory-versus-motor separation (Young, 1970) and opens the door to embodied cognition and robotics research (Clark, Grush 1999). However, automated systems cannot reach human-like performance when dealing with the real coding/decoding of these signals. This simple fact forces us to start exploiting human brain/body solutions. All attempts that do not take this into account are bound to be unreliable in variable environments, to fail to generalize to new examples, and to be unable to scale up to solve more complex problems.
These considerations in HCC are pivotal for humanoid robots, in which the explicit design of morphological and functional body features must be complemented with human-like cognition in order to elicit human-like interaction and communication (Ferscha, 2013). More importantly, this innate human "behavioural-coupling" capability has to interact with situational intervening factors, which can dramatically hinder a successful information flow. For example, in mediated communication, many relevant cues are often filtered out because of the particular constraints of the medium. As a consequence, things are even more complicated when designing communication with artefacts (i.e. robots). Modelling and implementing these automatic systems imposes the additional challenge of adaptability to context. Therefore, we suggest that the natural human-to-human coordination capability is the only guide for the development of future human-to-robot and human-computer interaction in general. In the following sections we describe the most up-to-date efforts to measure sensorimotor human-to-human interaction. Next, we overview the strategies for building robot-to-human interaction, and finally we stress the need to quantify the efficacy of human-to-robot interaction.
Human-to-Human Interaction
Sensorimotor communication forms the basis of unmediated communication in animals and humans alike (Rands et al., 2003; Couzin et al., 2005; Nagy et al., 2010). Complex coordinated behaviour between multiple individuals can arise without any need for verbal communication (Sebanz et al., 2010; Neda et al., 2000). One important aspect of this kind of communication is the absence of any symbolic component, which makes the information flow automatic and implicit in nature. Behavioural research has shown an implicit relationship between the stability of intrapersonal coordination and the emergence of spontaneous interpersonal coordination (Coey et al., 2011). Furthermore, individual differences in synchronization performance play a significant role, suggesting that temporal prediction ability may mediate the interaction of the cognitive, motor, and social processes underlying joint action (Pecenka and Keller, 2011). In general, behavioural coordination results from establishing interpersonal synergies: higher-order control systems formed by coupling the movement degrees of freedom of two (or more) participants. Characteristic features of synergies identified in studies of intrapersonal coordination are also revealed in studies of interpersonal coordination in interactive tasks (Riley et al., 2011).
Interestingly, a growing body of neuroscientific evidence indicates that such interpersonal coordination, or "group mind" phenomena, occurring during interactive tasks is mediated by synchronized cortical activity (Hasson et al., 2012; Loehr et al., 2013; Schippers et al., 2010; Lindenberger et al., 2009). However, classical approaches in social neuroscience, and the field of hyper-scanning, typically search for significant changes in brain activity after specific training that is supposed to augment coordination (Yun et al., 2012), or during engagement in a rather rigid/constrained social interaction (Hasson et al., 2012; Riley et al., 2011). It is therefore important to note that typical behavioural and neuroimaging experiments rarely implement the ecological complexity of natural interaction; rather, for control purposes, they devise forced turn-taking or a constrained communication mode.
Ecological Measurement of Human-to-Human Information Flow
Recent studies, however, have approached the problem from a radically different perspective (D'Ausilio et al., 2012; Badino et al., 2014). These studies measured unobtrusive motion kinematics from "real" groups embedded in "real" social interaction, extracting a continuous information flow between participants. This was made possible by recording the motion kinematics of ensemble musicians and then applying computational methods to extract the information flow between them. Ensemble musicians are experts in non-verbal interaction and behave like processing units embedded within a complex system. Each unit possesses the capability to transmit sensory information non-verbally and to decode others' movements, potentially via the mirror matching system. As these two flows of information occur simultaneously, each unit, and the system as a whole, must rely heavily on predictive models. Thus, the musical ensemble behaves like a complex dynamical system with important constraints that turn into benefits from an experimental perspective.
The quantification of inter-individual information transfer has rarely been attempted in ecological and complex interaction scenarios. To this end, violinists' and conductors' movement kinematics were recorded during the execution of Mozart pieces, searching for causal relationships among musicians using the Granger Causality (GC) method. It was shown that an increase of conductor-to-musicians influence, together with a reduction of musician-to-musician coordination (an index of successful leadership), goes in parallel with the quality of execution, as assessed by expert musicians' judgments. This study shows that the analysis of motor behaviour provides a potentially interesting tool to approach the rather intangible concepts of aesthetic quality of music and visual communication efficacy (D'Ausilio et al., 2012).
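The causal analysis described above can be illustrated with a minimal sketch. This is not the authors' implementation: it computes a simple pairwise Granger-causality index (the log ratio of residual variances of autoregressive models fitted with and without the other signal's past) on synthetic "conductor" and "musician" traces; all data, the model order, and the coupling delay are invented for illustration.

```python
import numpy as np

def granger_index(x, y, order=5):
    """Granger-causality index for x -> y: log ratio of the residual
    variance of an AR model of y using only y's past versus a model
    that also includes x's past."""
    n = len(y)
    Y = y[order:]
    past_y = np.column_stack([y[order - k:n - k] for k in range(1, order + 1)])
    past_x = np.column_stack([x[order - k:n - k] for k in range(1, order + 1)])

    def resid_var(design):
        # Least-squares fit with an intercept; return residual variance.
        design = np.column_stack([np.ones(len(Y)), design])
        beta, *_ = np.linalg.lstsq(design, Y, rcond=None)
        return np.var(Y - design @ beta)

    var_restricted = resid_var(past_y)
    var_full = resid_var(np.column_stack([past_y, past_x]))
    return np.log(var_restricted / var_full)

# Toy kinematic traces: the "conductor" drives the "musician" with a
# 3-sample delay (purely illustrative data, not the study's recordings).
rng = np.random.default_rng(0)
conductor = rng.standard_normal(2000)
musician = 0.8 * np.roll(conductor, 3) + 0.2 * rng.standard_normal(2000)
print(granger_index(conductor, musician))  # clearly positive: conductor drives musician
print(granger_index(musician, conductor))  # near zero: no influence back
```

An asymmetry of this index across a group, as in the quartet studies, is what identifies a leader.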
A subsequent work found a clear positive relationship between the amount of communication and the complexity of the musical score segment. Furthermore, temporal and dynamical changes were applied to the musical score in order to force unidirectional communication between the leader of the quartet and the other participants. Results showed that in these situations unidirectional influence from the leader decreased, implying that effective leadership may require prior sharing of information between participants. In conclusion, it was possible to measure the amount of information flow and sensorimotor group dynamics, suggesting that the fabric of leadership is built not upon exclusive knowledge of information but rather on sharing it (Badino et al., 2014).
These studies suggest that, with minimal invasiveness and during real interaction, we can possibly measure the information flow between two (or more) human participants. The next step we suggest is to build robots that are capable of eliciting a realistic robot-to-human interaction. In this sense the goal is to foster a dynamical pattern of information flow between natural and artificial agents.
Robot-to-Human Interaction
It is usually predicted that the inclusion of robots in our society will progressively become more widespread. However, one of the biggest obstacles to a pervasive use of robots supporting and helping humans in their everyday chores is the absence of intuitive communication between robotic devices and non-expert users. Several attempts have been made at achieving seamless human-robot interaction (e.g., Sisbot and Alami 2012; Dragan, Lee et al. 2013), even with positive outcomes in the context of small manufacturing industries (e.g., the manufacturing robot Baxter). However, the lack of a systematic understanding of what works and why does not allow this success to be generalized to different domains. Therefore, for robotics, and in particular humanoid robotics, to become a common and functional element of our society, a deeper comprehension of the principles of human-human interaction is needed. Only this knowledge will pave the way to the design of robotic platforms easily usable and understandable by everybody.
The investigation of social interaction, however, is a very challenging task. The dynamics of two agents performing a joint action together is much more complex than the sum of the behaviours of the two individuals. The actions, the movements, and even the perceptual strategies each partner chooses are substantially modified and adapted to the cooperation. Traditional research on this topic has been conducted by analysing recordings of interactions a posteriori, with the disadvantage of not being able to intervene or selectively modulate the behaviour of the interacting partners. In more constrained scenarios, a human actor has been used as the stimulus. Although this approach provides an increased level of control, not all aspects of human behaviour can actually be manipulated; in particular, the automatic behaviours that constitute a great part of natural coordination are very difficult to restrain. As a potential solution, video recordings have often been adopted as stimuli for an interaction, especially in the context of action observation and anticipation. This approach guarantees more control and perfect repeatability, but it eliminates some fundamental aspects of real collaborative scenarios, such as the shared space of actions, physical presence, the possibility to interact with the same objects, and even the potential physical contact between the two partners.
A valuable, novel solution to these problems could be represented by robots and in particular by humanoid robots. These are embodied agents, moving in our physical world and therefore sharing the same physical space and being subject to the same physical laws that influence human behaviour. Robots with a humanoid shape have the additional advantage of being able to use the tools and objects that have been designed for human use, making them more adaptable to our common environments. Moreover, the human shape and the way humans move are encoded by the brain differently with respect to any other kind of shape and motion (Fabbri-Destro and Rizzolatti 2008). Consequently, humanoid platforms can probe some of the internal models already developed to interact with people and allow studying exactly those basic mechanisms that make human-human interaction fluid and efficient. Thus, humanoid robots represent an ideal stimulator, i.e. a "physical collaborator" whose behaviour could be controlled in a repeatable way. Not only do they share the partner's action space and afford physical contact, but they can also monitor in real-time the performance of their partner through their on-board sensors and respond appropriately enabling the investigation of longer and more structured forms of interaction.
Two Examples on How to Build Robot-to-Human Information Flow
We suggest that this kind of technology is particularly suited to investigating the very basic mechanisms of interaction: how the motor and sensory models of action and perception change when an action is performed in collaboration rather than alone, or which specific properties of motion are most relevant in allowing immediate comprehension between co-operators (Sciutti, Bisio et al. 2012). In this sense the focus is on implicit communication, one specific aspect of social interaction. A classic example of implicit communication is gaze movement. We unconsciously move our eyes, fixating objects of interest or landmarks to which our actions will be directed (Flanagan and Johansson 2003). Human beings are extremely sensitive to the direction of others' gaze. For instance, we follow someone else's gaze to recognize and share the focus of his attention, and by looking at someone else's eyes we can infer whether he is paying attention to what we are showing him (Lohan, Griffiths et al. 2014). Moreover, we can often anticipate a partner's intention by noticing what he is fixating, or even infer whether he is thinking or paying attention to us by observing the way he moves his eyes.
To clarify the effect of this implicit signal on interaction, a series of experiments manipulated the way the humanoid robot iCub moved its eyes during a turn-taking game. The robot either looked in the direction of the partner after its turn finished or kept its eyes fixed. We assessed whether the way participants played the game was influenced by this difference in gazing behaviour and by a different degree of robot autonomy. Interestingly, even though robot gazing was not relevant for the gameplay, participants modified their playing strategy, apparently attributing more relevance to robot actions when it exhibited an interactive gazing behaviour and more autonomy (Sciutti, Del Prete et al. 2013). Hence, a simple manipulation of the robot's gaze motion has an actual impact on how humans behave in an interaction.
Eyes are not the only carriers of unconscious communication, however. Even our everyday movements provide a quantity of information that is implicitly read by human observers. When we reach to grasp a cup, someone looking at us can often anticipate which cup, among the ones on the table, we are going to take and whether we will drink from it or store it away. Moreover, when we transport the cup, our motion tells the observer whether it is full or empty, or whether it is too hot to be handled. Understanding where all this information is encoded could potentially allow simplifying the design of robot shape and motion while keeping the efficiency of a rich implicit communication.
To this aim, the parameters of a lifting action were evaluated to verify whether they convey enough information to the observer. This information might offer a cue not only to infer the weight that the agent is carrying, but also to prepare to perform an appropriate subsequent action on the same object. On the humanoid robot iCub, the velocity profiles of the actions were varied to assess under which conditions robot observation was as communicative as human observation. Interestingly, even a simplified robotic movement, sharing with human actions only an approximation of the relationship between lifting velocity and object weight, was enough to guarantee both adults (Sciutti, Patanè et al. 2014) and older children (10 years old; Sciutti, Patanè et al. 2014) a correct understanding of the load, with a performance comparable to that measured for the observation of human lifters.
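As an illustration of the kind of simplified velocity-weight relationship mentioned above, the sketch below generates a bell-shaped (minimum-jerk) velocity profile whose duration grows, and peak velocity falls, with the carried load. The scaling constants and the linear duration-weight law are assumptions for illustration only, not the profile actually implemented on iCub.

```python
import numpy as np

def lifting_velocity(distance, weight_kg, n=100):
    """Bell-shaped minimum-jerk velocity profile for a lift of given
    distance (m). Heavier objects are lifted more slowly: duration
    grows with load (hypothetical linear scaling, not a fitted human
    model)."""
    duration = 0.8 + 0.3 * weight_kg          # seconds, assumed scaling
    tau = np.linspace(0.0, 1.0, n)            # normalised time t/T
    # Minimum-jerk speed: v(t) = 30 d/T * (tau^2 - 2 tau^3 + tau^4)
    v = 30.0 * distance / duration * (tau**2 - 2 * tau**3 + tau**4)
    return tau * duration, v

t_light, v_light = lifting_velocity(0.3, weight_kg=0.5)
t_heavy, v_heavy = lifting_velocity(0.3, weight_kg=2.0)
# The heavier lift is longer and its peak velocity lower: this is the
# kinematic cue an observer can exploit to infer the load.
print(v_light.max(), v_heavy.max())
```

Velocity is zero at the start and end of the lift, as in human point-to-point movements, so only the profile's duration and peak carry the weight cue.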
Therefore, even a shape that is not exactly human-like, and a motion that is a simplified version of that adopted by humans, is enough to allow a rich transfer of information through action observation. Hence, a very basic form of social intelligence, the efficient transmission of implicit information through action execution (by gaze or by arm motion), can be achieved on humanoid robots even with a strong simplification of the human-likeness of the robot's shape and motion. We propose, therefore, that humanoid robots, before becoming companions or helpers, might play the fundamental role of interactive probes, i.e., tools to derive in naturalistic contexts which human-like properties are actually relevant to foster a natural and seamless interaction. This process has two important consequences: on the one hand, it sheds some light on the mechanisms of social interaction in humans, a complex topic still under active investigation by neuroscientists and psychologists; on the other hand, it provides design indications, which could result in simpler and cheaper platforms that at the same time exhibit a natural and effective interface to their human users.
Evaluating Human-to-Robot Interaction
Humanoid robots are not humans, but their appearance and functionality sometimes lean towards human-like appearance and behaviour. In fact, here we suggest that replicating human-like features is useful only if these are central to the development of a natural interaction with humans. This functional stance on robotic design must be derived from basic research, with the aim of removing superfluous computational and architectural costs in a principled manner. The principle of replicating only the minimal human shape and motion features necessary to enable a natural interaction also has the additional advantage of lowering the risk of entering the Uncanny Valley of eeriness (Mori 1970) when attempting to mimic the human being as a whole.
Even when following the principle of replicating only the minimal features needed, robots still have limitations. Since this ultimately minimalistic approach descends from a functional perspective, human-likeness has to be judged by users in real, long-term interaction, coping with the robots' limitations as well as their potentialities.
Short term Human-to-Robot Interaction
Current research suggests that humans not only accept robots in social environments better when their behaviour and appearance are human-like, but also appreciate being able to work with a natural interaction interface (like a humanoid robot) at a very efficient level (Sato, Yamaguchi, Harashima, 2007). To create such a natural interaction interface with a robot, appearance is supportive, but the promises given by the appearance must be kept by the robot's capabilities (Weiss et al. 2008).
Keeping the balance between a human-like appearance and the functionality of the robot is a key point. In fact, the primary goal of social robotics should be to build functionally effective and coherent implementations of behaviours on any given robotic platform. Thus, it is important not only to base a robot's behaviour strategy on a human-based model, but also to understand the limitations of the robot's capabilities and the discrepancies that have to be taken into account. In other words, we need to put ourselves in the robot's shoes and look from the robot's perspective.
Looking from the robot's perspective, the implementation of models based on human behaviour needs to be reconsidered. In this second step we evaluate not only the model implemented on the robot, but also the robot's resulting behaviour and its effects on humans. One method to evaluate the functional results of the implemented interaction is to evaluate the loop between human and robot; that is, to evaluate the interplay presented in the interaction of human and robot (Lohan et al. 2012).
Evaluating human-to-robot interaction should therefore be seen as a two-step process from the very beginning. First, a model based on human social behaviour is implemented on an appropriate robotic platform; this model has to be validated against its given benchmarks and the hypotheses to be tested. Second, the resulting interface between human and robot in interaction may give higher-level insight into the robot's capacity to establish a social connection. However, even in a constrained scenario, quantifying the effective establishment of the modelled interaction is not trivial. In this critical second step, measurements of human-human interaction could potentially be used as the ground truth against which to compare the human-to-robot one.
Methods derived from behavioural research have also recently been employed. Among them, the methodology of conversation analysis, which has been used to evaluate this interplay in human-human interaction, is one possibility (Hutchby and Wooffitt, 2008). There are also other useful concepts, such as the measurement of contingency (Csibra and Gergely, 2009; Lohan et al., 2011) or synchronization (Kose-Bagci et al. 2010) between interaction partners, that can hint at the quality of the interaction. These methods take both interaction partners, and therefore both sides, into account. At the same time, different social contextual conditions can change the meaning of an interaction, beyond the interaction partners' characteristics (e.g., when greeting someone close to you in a public place, different social rules apply than when greeting the same person in a private space).
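A minimal way to operationalize such a contingency measure is to estimate the delay at which one partner's behavioural signal best predicts the other's. The sketch below scans lagged Pearson correlations on invented synthetic traces; it is only one simple proxy for the contingency and synchronization measures cited above, not a method from those papers.

```python
import numpy as np

def response_lag(stimulus, response, max_lag=30):
    """Lag (in samples) at which `response` best follows `stimulus`,
    found by scanning the lagged Pearson correlation. A stable positive
    lag with high correlation is a simple signature of contingent
    responding between interaction partners."""
    best_corr, best_lag = -np.inf, 0
    for k in range(max_lag + 1):
        # Correlate stimulus[t - k] with response[t].
        n = len(stimulus) - k
        c = np.corrcoef(stimulus[:n], response[k:k + n])[0, 1]
        if c > best_corr:
            best_corr, best_lag = c, k
    return best_lag, best_corr

# Synthetic behavioural traces: the "partner" echoes a smoothed base
# signal 7 samples later, plus noise (illustrative data only).
rng = np.random.default_rng(1)
base = np.convolve(rng.standard_normal(1200), np.ones(10) / 10, mode="same")
partner = np.roll(base, 7) + 0.1 * rng.standard_normal(1200)
lag, corr = response_lag(base, partner)
print(lag, round(corr, 2))
```

In a real evaluation the same statistic computed on human-human dyads would provide the benchmark against which the human-robot loop is compared.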
Long Term Human-to-Robot Interaction
When moving towards social interaction and long-term, or even lifelong, relationships with a robot, we also need to understand the long-term dynamics behind these interactions. The robot needs to be able to adapt and to act, within its capabilities, alongside its human partner. In human-human interaction, small changes in sensorimotor communication can have a drastic impact on behaviour, so sensitivity to and understanding of these small cues is important for a robot. At the same time, human behaviour shows broad, hierarchical variation in communication complexity. The subtle cues sent by humans are therefore not always central to the communicative interaction, and the robot needs to learn all the possible variations in a given context. It will thus have to select among the saliency of very different kinds of features in order to respond appropriately to the given social situation.
As a matter of fact, the embodiment of a system like a robot in social situations is defined as dependent not only on its own sensorimotor experiences and capabilities but also on the environmental changes caused by social constraints (Dautenhahn, 1999). Thus, when moving robots into social environments, they need to be able to take their surroundings, and the rules given by those surroundings, into account. This is why, looking from a robot's perspective, the interplay between its behaviour and the behaviour of its interaction partner needs to be considered carefully.
When concentrating on a long-term perspective of social interaction, the evaluation of a robot that can build a relationship with a human is a very complex problem that still lacks a credible solution. Looking at methodologies used in developmental psychology, we can see that it is difficult to create a quantitative strategy to evaluate the long-term evolution of interactions. Current state-of-the-art robotics faces exactly these problems in the evaluation of long-term interaction. Models such as social symbiosis or emotional states are therefore being explored as strategies to give a robot the capability of dynamic adaptation (Rosenthal et al., 2010).
Overall, evaluating human-robot interaction has different levels of complexity. When evaluating human-to-robot interaction, we need to take the robot's perspective, and therefore its capabilities, into account and examine the loop created in the interaction. Furthermore, the social rules created by environmental constraints, and therefore the full embodiment of a social interaction, need to be taken into account to evaluate the success of the robotic design appropriately.
Conclusion
In conclusion, we propose that the field of HRI needs to move from a "rigid" social contact between (social) bodies towards a "soft" interaction. By "soft" interaction we mean dynamical compliance and long-term adaptability to human sensorimotor, cognitive, and affective non-physical communication and interaction. Speaking in terms of control and planning, robotics is already moving from avoiding contacts with the environment to exploiting them, i.e., using them in motion control, thanks to new compliant actuators and sensors (Tsagarakis et al., 2010; Ferscha, 2013).
Along these lines, the field of HRI may be stimulated by current attempts to measure the real-time implicit information flow between human agents embedded in a complex and ecological scenario. In fact, the basic research in cognitive neuroscience outlined earlier may serve two critical functions. The first regards the principled building of a functionally effective human-robot interaction. The second, technical, advantage is that the same methods and models used to measure human-human flow can be applied to future HRI implementations as a benchmark to evaluate successful interaction with robots.
This means that the robot is no longer studied in separation from its physical, human-centred environment. Conversely, robot and environment (or other agents) are blended together to plan a new, optimal, and collaborative way to move. Similarly, in social cognition, both for human-human and human-robot interaction, we feel the need for a change: from the study of individuals in isolation to the study of complex systems of two or more people interacting together. Most importantly, the latter must not be treated as a linear sum of single individualities; it again requires a "blending", which is manifested in the subtle mechanisms of implicit communication and modulated by context and long-term interaction.
In general, we propose an integrated approach (as sketched in Figure 12.1) that starts with methods to quantify human-human interaction. The knowledge derived from this basic research is then translated into basic principles to build better robots. Finally, closing the conceptual loop, the very same methods are used to functionally evaluate the efficacy of human-robot interaction.

Figure 12.1: This graphical depiction represents the need to rigorously quantify human-to-human interaction for two main purposes. The first is to derive useful principles to guide the implementation of robot-to-human interaction. The second is that such artificially built interaction needs to be evaluated against its natural benchmark. Closing this conceptual loop is, in our opinion, the only way to establish an effective HCC with an embodied artefact.
Brazilian Red Propolis Induces Apoptosis-Like Cell Death and Decreases Migration Potential in Bladder Cancer Cells
Natural products continue to be an invaluable resource for anticancer drug discovery. Propolis is known for biological activities such as antimicrobial and antitumor effects. This study assessed the effects of Brazilian red propolis (BRP) on apoptosis and migration potential in human bladder cancer cells. The effect of BRP ethanolic extract (25, 50, and 100 μg/mL) on 5637 cells was determined by MTT, LIVE/DEAD, and migration (scratch) assays. Apoptosis induction was investigated through flow cytometry, and the gene expression profile was investigated by qRT-PCR. Results showed cytotoxicity in the MTT and LIVE/DEAD assays, with an IC50 value of 95 μg/mL at 24 h of treatment. Cellular migration of 5637 cells was significantly inhibited at the lower doses of BRP ethanolic extract (25 and 50 μg/mL). Flow cytometry analyses showed that BRP induced cytotoxicity through apoptosis-like mechanisms in 5637 cells, and qRT-PCR revealed increased levels of the Bax/Bcl-2 ratio and of p53, AIF, and antioxidant enzyme genes. These data suggest that BRP may be a potential source of drugs for bladder cancer treatment.
Introduction
Cancer is one of the leading causes of death in both developing and developed countries and is a worldwide concern. A total of 1,660,290 new cancer cases and 580,350 cancer deaths were projected to occur in the United States in 2013 [1], and by 2050, 27 million new cancer cases and 17.5 million cancer deaths are projected to occur worldwide [2]. An analysis of approved anticancer drugs revealed that 47.1% were unmodified natural products, their semisynthetic derivatives, or synthesized molecules based on natural-product pharmacophores [3].
Natural products tend to present more structurally diverse "drug-like" and "biologically friendly" molecular qualities than pure synthetic compounds at random [4] and have been considered as an "unlimited" resource for future drug discovery [5].
Propolis is a resinous mixture of substances collected by honey bees (Apis mellifera) from various plant sources. It has been used in folk medicine for centuries, mostly due to its antimicrobial and anti-inflammatory activities [6]. Notable chemical differences are often found between propolis samples, and Brazil has the widest chemical diversity of propolis types [7]. Brazilian red propolis (BRP) is the newest variety of Brazilian propolis and is a promising source of new bioactive compounds [8] like chalcones, pterocarpans, isoflavonoids, and polyphenols [9].
Since its discovery, BRP has been studied to elucidate its several biological properties. Studies have shown antitumor properties of red propolis against several types of cancer, both in vitro and in vivo [8,10-14]. The mechanisms proposed for the potential anticancer effects of propolis include suppressing the proliferation of cancerous/precancerous cells via its immunomodulatory effect; decreasing cancer stem cell populations; blocking specific oncogene signaling pathways; modulating the tumor microenvironment; and serving as an adjunct or complementary treatment to existing mainstream anticancer therapies [15]. Besides that, BRP has also shown a potent antiangiogenic activity, targeting key steps required for new blood vessel development [16,17], indicating a natural chemopreventive activity.
The aim of this study was to investigate whether Brazilian red propolis ethanolic extracts have cytotoxic effect and study the underlying cell death mechanisms in human bladder cancer cells.
Red Propolis Sample and Extract Preparation.
The red propolis was collected from a geographic region in the northeast of Brazil known as Brejo Grande (S 10°28′25″, W 36°26′12″). The samples of red propolis were collected in September 2011 and frozen at −20°C. For extract preparation, 1 g (dry weight) of raw red propolis was mixed with 10 mL of EtOH-H2O 70% (v/v) and shaken at room temperature for 24 h. After extraction, the mixture was filtered and the solvent was evaporated, producing a fine red powder. This dry extract was kept frozen at −20°C. The BRP final concentrations (25, 50, and 100 μg/mL) were prepared immediately before use with EtOH-H2O 50% (v/v).
Chemical Characterization of Red Propolis Extract (Mass Analysis).
The dried extracts were dissolved in a solution of 50% (v/v) chromatographic-grade acetonitrile (Tedia, Fairfield, OH, USA), 50% (v/v) deionized water, and 0.1% formic acid. The solutions were individually infused directly into the ESI source by means of a syringe pump (Harvard Apparatus) at a flow rate of 50 μL min−1. ESI(+)-MS and tandem ESI(+)-MS/MS spectra were acquired using a hybrid high-resolution and high-accuracy (5 μg/L) microTOF (Q-TOF) mass spectrometer (Bruker Scientific) under the following conditions: capillary and cone voltages were set to +3500 V and +40 V, respectively, with a desolvation temperature of 100°C. For ESI(+)-MS/MS, the energy for the collision-induced dissociations (CID) was optimized for each component. Diagnostic ions in different fractions were identified by comparison of their ESI(+)-MS/MS dissociation patterns with compounds identified in previous studies. For data acquisition and processing, Compass software (Bruker Scientific) was used. The data were collected in the m/z range of 70-800 at a speed of two scans per second, providing a resolution of 50,000 (FWHM) at m/z 200. No important ions were observed below m/z 180 or above m/z 650; therefore ESI(+)-MS data are shown in the m/z 180-650 range.
Antiproliferative Assay. The proliferation of the 5637 cell line after treatment was determined by measuring the reduction of soluble MTT to water-insoluble formazan. Cells were seeded at a density of 2 × 10⁴ cells per well in a volume of 100 μL in 96-well plates and grown at 37°C in a 5% CO2 atmosphere for 24 h before being used in the cell viability assay. Cells were then treated with the red propolis extract in EtOH-H2O 50% (v/v) at concentrations of 25, 50, and 100 μg/mL, or with the EtOH-H2O 50% vehicle alone, for 24 h. Following incubation, 20 μL of MTT was added to each well, and the cells were incubated for an additional 3 hours at 37°C. Differences in total cellular metabolism were detected at a wavelength of 492 nm using a microplate reader. The inhibition (%) of cell proliferation was determined as follows: inhibitory growth = (1 − Abs492 treated cells/Abs492 control cells) × 100% [18]. The IC50 (the concentration in μg/mL that inhibits 50% of cell growth) was also calculated using GraphPad Prism 5.0 software. The normal CHO-K1 cell line was used as a selectivity control in this test. All observations were validated by at least three independent experiments, in triplicate for each experiment.
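The growth-inhibition formula above, and a rough IC50 estimate, can be sketched as follows. The absorbance readings here are hypothetical, and simple linear interpolation stands in for the dose-response curve fitting actually performed in GraphPad Prism.

```python
import numpy as np

def percent_inhibition(abs_treated, abs_control):
    """Inhibitory growth = (1 - Abs492_treated / Abs492_control) x 100%."""
    return (1.0 - abs_treated / abs_control) * 100.0

def ic50(concentrations, inhibitions):
    """Concentration giving 50% inhibition, by linear interpolation
    between the doses bracketing 50% (a crude stand-in for proper
    sigmoidal curve fitting)."""
    return float(np.interp(50.0, inhibitions, concentrations))

# Hypothetical absorbance readings for the vehicle control and the
# three BRP doses used in the study (25, 50, 100 ug/mL).
abs_control = 1.20
doses = np.array([25.0, 50.0, 100.0])
abs_treated = np.array([0.95, 0.78, 0.55])
inhib = percent_inhibition(abs_treated, abs_control)
print(inhib)          # percent inhibition at each dose
print(ic50(doses, inhib))
```

Note that `np.interp` requires the inhibition values to be monotonically increasing with dose, which holds for this toy data.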
2.5. LIVE/DEAD Assay. Cells were treated with red propolis extract in EtOH-H2O 50% (v/v) at concentrations of 25, 50, and 100 μg/mL for 24 h as described above. The LIVE/DEAD cell viability assay (Invitrogen, Carlsbad, CA, USA) was conducted following the manufacturer's instructions. Live cells were able to take up calcein and could be analyzed by green fluorescent light emission (488 nm). Ethidium homodimer diffuses through the now-permeable membrane of dead cells and binds to DNA, which was detected by the red fluorescent signal (546 nm). The LIVE/DEAD assay was analyzed with an Olympus IX71 fluorescence microscope (Olympus Optical Co., Tokyo, Japan) by multicolour imaging. After excitation at 480 nm and emission at 510 nm, the fluorescent images were stored as TIFF files using a digital camera attached to a fluorescence microscope (DP 12; BX 51; Olympus, Tokyo, Japan). The recorded images were analyzed using Cell^F software (Cell-F, New York, USA). The data are expressed as mean ± SEM, and the experiment was run in triplicate.
2.6. Apoptosis Assays. Apoptosis was determined by flow cytometry using an Annexin V-7-AAD apoptosis detection kit (Guava Technologies, Millipore Corporation) and a TUNEL detection kit (Guava Technologies, Millipore Corporation), following the manufacturer's instructions. 5637 cells were exposed to red propolis extract EtOH-H2O 50% (v/v) at concentrations of 25, 50, and 100 µg/mL for 24 h in culture media at 37°C with 5% CO2. A range of 2.0 × 10^4 to 1.0 × 10^5 treated cells (100 µL) was added to 100 µL of Guava Nexin reagent. Cells were incubated in the dark at room temperature for 20 min and samples were acquired on the flow cytometer (Guava Flow Cytometry easyCyte System; Millipore Corporation). In this assay, an Annexin V-negative and 7-AAD-positive result indicated nuclear debris; an Annexin V-positive and 7-AAD-positive result indicated late apoptotic/dead cells; an Annexin V-negative and 7-AAD-negative result indicated live healthy cells; and an Annexin V-positive and 7-AAD-negative result indicated the presence of early apoptotic cells. The results were reported as the percentage of cells in each apoptotic phase (early and late), and the normal CHO-K1 cell line was used as a selectivity control in this test.
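The four-quadrant gating logic described above maps directly onto two boolean markers per event. A minimal sketch with a toy sample of hypothetical events (real analysis happens in the cytometer's software on fluorescence intensities, not booleans):

```python
from collections import Counter

def classify(annexin_pos: bool, aad_pos: bool) -> str:
    """Quadrant assignment for one event, per the Annexin V / 7-AAD scheme
    described in the Methods."""
    if annexin_pos and aad_pos:
        return "late apoptotic/dead"
    if annexin_pos:
        return "early apoptotic"
    if aad_pos:
        return "nuclear debris"
    return "live"

# Toy sample: (annexin_pos, aad_pos) flags for 10 hypothetical events
events = ([(False, False)] * 5 + [(True, False)] * 3
          + [(True, True)] * 1 + [(False, True)] * 1)

counts = Counter(classify(a, d) for a, d in events)
percentages = {k: 100 * v / len(events) for k, v in counts.items()}
print(percentages)
```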
For the TUNEL assay, 5637 cells were fixed with 50 µL of 4% (w/v) paraformaldehyde in PBS for 60 min at 4°C and then with 200 µL of ice-cold 70% (v/v) ethanol at −20°C for at least 18 h. For the staining procedure, 1.5 × 10^4 to 1.0 × 10^5 fixed cells were washed twice and added to 25 µL of DNA Labeling Mix for 60 min at 37°C. At the end of the incubation time, cells were centrifuged and suspended in 50 µL of the Anti-BrdU Staining Mix. Cells were incubated in the dark at room temperature for 30 min and samples were acquired on the flow cytometer (Guava Flow Cytometry easyCyte System; Millipore Corporation). In this assay, terminal deoxynucleotidyl transferase (TdT) catalyzes the incorporation of BrdU residues into the fragmenting nuclear DNA of apoptotic cells at the 3′-hydroxyl ends by nick-end labeling. The TRITC-conjugated anti-BrdU antibody binds to the incorporated BrdU residues, labeling the mid- to late-stage apoptotic cells.
2.7. Quantitative Real-Time PCR (qRT-PCR). The gene expression profiles of apoptotic and oxidative stress-related genes were investigated by qRT-PCR. Cells were added to 6-well flat-bottom plates at a density of 2 × 10^5 per well and grown at 37°C in a humidified atmosphere of 5% CO2, 95% air for 24 h. The cells were then treated with the red propolis extract EtOH-H2O 50% (v/v) at concentrations of 25, 50, and 100 µg/mL for 24 h. Total RNA extraction, cDNA synthesis, and qRT-PCR were conducted as previously described [19]. Briefly, RNA samples were isolated using TRIzol reagent (Invitrogen, USA) and samples were DNase-treated with a DNA-free kit (Ambion, USA) following the manufacturer's protocol. First-strand cDNA synthesis was performed with 700 ng of RNA using the High Capacity cDNA Reverse Transcription Kit (Applied Biosystems, UK) according to the manufacturer's protocol. Real-time PCR reactions were run on a Stratagene Mx3005P Real-Time PCR System (Agilent Technologies, USA) using SYBR Green PCR Master Mix (Applied Biosystems, UK) and the primers described in Table 1.
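The fold changes in mRNA expression reported in the Results are relative to untreated controls. The cited protocol [19] is not reproduced here, but such values are commonly computed with the comparative Ct (2^−ΔΔCt) method; treating that as the method used is an assumption. A minimal sketch with hypothetical Ct values:

```python
def fold_change(ct_target_treated, ct_ref_treated, ct_target_control, ct_ref_control):
    """Relative expression by the 2^-ΔΔCt method (assumed here, not stated
    in the paper). ref = housekeeping/reference gene Ct values."""
    d_ct_treated = ct_target_treated - ct_ref_treated
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_treated - d_ct_control
    return 2 ** (-dd_ct)

# Hypothetical Ct values: target gene amplifies one cycle earlier in the
# treated sample, i.e. a ΔΔCt of -1 and a 2-fold increase in expression.
print(fold_change(24.0, 18.0, 25.0, 18.0))  # -> 2.0
```

Lower Ct means earlier amplification and hence more starting template, which is why the exponent is negated.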
2.8. Migration Assay. The ability of cells to migrate in monolayer cultures was assessed by a scratch-wound assay [20]. Confluent 5637-cell cultures in 6-well flat-bottom plates were scraped with a p200 pipet tip to create a wide cell-free zone with a straight wound edge. Cells were grown in media with 25 and 50 µg/mL of red propolis EtOH-H2O extract for 24 h. The edge of the wound was marked at the bottom of the plate with a fine-gauge hypodermic needle as a migration reference point. The distance and number of cells migrating into the cell-free zone were evaluated over 12 h with a digital camera attached to an inverted microscope (DP 12; BX 51; Olympus, Tokyo, Japan). The recorded images were analysed using Cell^F software (Cell-F, New York, USA). The data were expressed as the mean ± SEM and the experiment was run in triplicate.
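Scratch-wound readouts like those in the Results are typically expressed as percent wound closure relative to the initial gap. A sketch with hypothetical wound widths (the paper measured distances in Cell^F; these numbers are illustrative only):

```python
def wound_closure(width_initial, width_at_t):
    """Percent closure of the scratch relative to the initial wound width."""
    return 100 * (width_initial - width_at_t) / width_initial

# Hypothetical wound widths (µm) at 0, 8, and 24 h for control vs treated
control = {0: 800.0, 8: 480.0, 24: 160.0}
treated = {0: 800.0, 8: 640.0, 24: 400.0}

for t in (8, 24):
    print(f"{t} h: control {wound_closure(control[0], control[t]):.0f}% closed "
          f"vs treated {wound_closure(treated[0], treated[t]):.0f}% closed")
```

The difference between the two closure percentages at each time point is one simple way to quantify the migration inhibition the authors describe.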
2.9. Data Analysis. Data sets were analyzed using one-way or two-way ANOVA followed by a Tukey test for multiple comparisons, except for the Bax/Bcl-2 ratio, which was analyzed using Student's t-test. Significance was considered at P < 0.05 in all analyses. Data were expressed as mean ± SEM.

The composition of propolis extracts may differ. As reported in a previous work, high-resolution direct-infusion mass spectrometry (HR-DIMS) was used for chemical characterization of the red propolis extract [14]. The main components were identified as follows: m/z 257.0764 (liquiritigenin); 269.0769 (formononetin); 271.0921 (medicarpin); 285.0718 (biochanin A); 523.1641 (retusapurpurin B) (Figure 1). Exact mass, fragmentation pathway, and isotopic ratio were used for confirmation.
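The group comparison named in the Data Analysis section can be sketched with the standard library alone. The readings are hypothetical, and only the one-way ANOVA F statistic is computed; p-values and Tukey's post hoc test (which the authors used) would need scipy or statsmodels:

```python
from statistics import mean

def one_way_anova_f(*groups):
    """F statistic for a one-way ANOVA across k groups (stdlib only)."""
    all_vals = [x for g in groups for x in g]
    grand = mean(all_vals)
    k, n = len(groups), len(all_vals)
    # Between-group sum of squares: spread of group means around the grand mean
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    # Within-group sum of squares: spread of values around their own group mean
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical viability readings for a control and two extract doses
control = [98, 97, 99]
dose_50 = [80, 78, 82]
dose_100 = [49, 52, 50]
print(f"F = {one_way_anova_f(control, dose_50, dose_100):.1f}")
```

A large F relative to the critical value for (k−1, n−k) degrees of freedom indicates that at least one group mean differs; the post hoc test then identifies which pairs differ.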
Red Propolis Inhibited Cell Proliferation and Increased 5637-Cell Death.
The results showed that red propolis extract significantly decreased 5637-cell viability in vitro in a dose-dependent manner (Figure 2(a)). Cell growth inhibition following red propolis treatment exceeded 50% from 100 µg/mL within 24 h. The EtOH-H2O vehicle alone showed no cytotoxicity or antiproliferative activity at 24 h of treatment. The in vitro cytotoxic activity of the red propolis extract showed an IC50 value of 95 µg/mL at 24 h of treatment. Brazilian red propolis (BRP) treatment also inhibited proliferation of the CHO-K1 cell line at 24 h of treatment, displaying no selectivity between normal and cancer cells in terms of in vitro growth inhibition (Figure 2(a)). The LIVE/DEAD assay showed an increase in cell death (red fluorescence) after red propolis treatments compared to the control group (Figure 2(b)). Additionally, a reduction in cell number can clearly be observed at the concentration of 100 µg/mL (Figure 2(b)(D)). Treatment with the EtOH-H2O vehicle alone promoted cell death similar to that observed in the control group (data not shown).
Red Propolis Induced Apoptosis on 5637 Cells.
The results indicated that red propolis is capable of inducing early apoptosis at concentrations of 50 and 100 µg/mL (55.8% and 63.9%, resp.) when compared to the control group (P < 0.05) (Figure 3); however, no difference in apoptosis was observed between these two concentrations (P > 0.05). The concentration of 25 µg/mL was not effective (P > 0.05) in inducing early apoptosis, presenting levels of apoptosis similar to untreated control cells (29.9 and 6.7%, resp.). The red propolis extract induced a higher percentage of late apoptosis/death at the 100 µg/mL concentration (31.1%) compared to the control (P < 0.05). At the 25 and 50 µg/mL concentrations the percentage of late apoptotic/dead cells was 5.1% and 14.6%, respectively, similar to that observed in the untreated control group (P > 0.05). Exposure of the 5637 cells to the EtOH-H2O vehicle alone had no effect on apoptosis induction (data not shown). BRP treatment was not able to induce apoptosis in the CHO-K1 cell line at 25 and 50 µg/mL: the percentage of apoptotic cells was 11% and 15.4%, respectively, not different from the untreated control group (P > 0.05) (data not shown). Early and late apoptotic levels together at 100 µg/mL were 47% in CHO-K1 cells. Interestingly, 51% of CHO-K1 normal cells remained alive after 100 µg/mL BRP treatment but only 8% of 5637 tumoral cells remained alive after the same treatment. This result may indicate that BRP displays selectivity between normal and cancer cells in terms of in vitro apoptosis induction (Figure 3(b)).
A TUNEL staining assay was performed to better elucidate the mid- to late-stage apoptosis of 5637 cells induced by red propolis, since 7-AAD staining does not differentiate late-stage apoptosis from other types of cell death. Figure 4 demonstrates that red propolis presented a tendency to increase late apoptosis induction; however, no differences (P > 0.05) in late apoptosis rates were observed between the control group and the red propolis treatments at 25, 50, and 100 µg/mL in bladder cancer cells (Figure 4(b)).
Red Propolis Changes Apoptotic Gene Expression Profile on 5637 Cells.
The expression levels of pro- and antiapoptotic genes (Bax, Bcl-2, AIF, Endo G, caspase-3, caspase-8, caspase-9, and p53) in 5637 cells were evaluated by qRT-PCR. As shown in Figure 5, the expression of Bax was bell-shaped, with the 25 µg/mL treatment showing the highest fold induction (Figure 5(a)). However, Bcl-2 in 5637 cells was also increased after the 25 and 50 µg/mL treatments when compared to the control (P < 0.05), showing 1.5- and 2-fold increases in mRNA expression, respectively (Figure 5(b)). Interestingly, no effect on mRNA expression levels was observed after the 100 µg/mL treatment in 5637 cells when compared to the control (P > 0.05), suggesting that the red propolis extract may trigger a negative feedback. However, the Bax/Bcl-2 ratio increased in 5637 cells after 100 µg/mL BRP treatment (Figure 5(c)) compared to that observed in untreated cells and in 25 and 50 µg/mL treated cells (P < 0.05). Apoptosis-inducing factor (AIF) mRNA expression was found to be significantly upregulated in 25 and 50 µg/mL red propolis treated cells (P < 0.05) compared to untreated control cells and to 100 µg/mL treated cells (Figure 6(a)). No differences were observed between the control and the 25 and 50 µg/mL red propolis treatments in Endo G gene expression (Figure 6(b)). However, both Endo G and AIF mRNA levels were significantly lower (P < 0.05) in the 100 µg/mL treatment compared to the control.
Red Propolis Increased Gene Expression of Antioxidant Enzymes.
The gene expression profiles of the following enzymes were investigated in this study: catalase (CAT), Cu/Zn superoxide dismutase (Cu/Zn-SOD), manganese superoxide dismutase (Mn-SOD), glutathione-S-transferase (GST), and thioredoxin reductase-1 (TRX). Red propolis significantly increased CAT, Cu/Zn-SOD, and Mn-SOD mRNA levels (P < 0.05) in cells exposed to 50 and 100 µg/mL compared to the controls (Figures 8(a), 8(d), and 8(e)); however, there was no difference in CAT mRNA expression between the 50 and 100 µg/mL treatments (Figure 8(d)). No difference between the control and the 25 µg/mL treatment was observed for the expression levels of these genes. Moreover, the GST, TRX, and GLUT mRNA expression patterns were investigated and no differences between treated and untreated cells were observed (Figures 8(b), 8(c), and 8(f)), although treated cells presented a dose-dependent tendency toward increased expression of the TRX and GST genes.
Red Propolis Inhibits Migration of Urothelial Carcinoma.
Migration of 5637 cells was significantly inhibited by 24 h of red propolis treatment. As shown in Figure 9, cellular migration was inhibited in a time-dependent manner by the red propolis ethanolic extract. Scratch-wound closure was inhibited by up to 20% and 30% at 8 and 24 h of incubation, respectively, at both concentrations of red propolis tested (Figures 9(a) and 9(b)). The number of cells migrating into the scratch wound was also smaller in BRP-treated cultures than in untreated ones (Figures 9(a) and 9(c)). Furthermore, inhibition of 5637-cell migration occurred at lower concentrations (25 and 50 µg/mL) than the aforementioned IC50 determined by MTT assay.
Discussion
Natural products continue to be an invaluable resource for anticancer drug discovery [5]. The prospect of using natural products to create more selective and effective cancer treatments is a reality, and propolis and its compounds possess strong antitumor potential [15,21]. In the present study we evaluated for the first time the effect of Brazilian red propolis ethanolic extract on a bladder cancer cellular model. Our in vitro data demonstrated that red propolis treatment above 50 µg/mL resulted in morphological changes, a significant antiproliferative effect, and a cytotoxic effect in a bladder cancer cell line. Interestingly, our results have also shown that lower concentrations of red propolis (25 and 50 µg/mL) are able to significantly decrease bladder cancer cell migration in vitro. These data indicate a strong effectiveness of the BRP extract against a bladder cancer cell line.
Apoptosis induction is one of the mechanisms proposed for the anticancer therapeutic effects of propolis [22,23]. Apoptosis is a well-characterized type of programmed cell death (PCD) and is considered a highly regulated process that allows a cell to self-degrade in order to eliminate an unwanted or dysfunctional cell [24]. Conventional anticancer treatments, such as chemotherapy and radiotherapy, kill tumor cells primarily by the induction of apoptosis or apoptosis-like PCD [24,25]. In this study we demonstrated by flow cytometry that red propolis might be an important apoptosis inductor in bladder cancer cells, showing an increase in both early and late apoptosis stages in vitro. Moreover, the mechanism of apoptosis induced by BRP seems to be dependent on the concentration of the propolis extract.
A single family of proteases, the caspases, has long been considered the pivotal executioner of all programmed cell death [25]. When activated, the caspases cleave a series of substrates, activate DNases, and orchestrate cell death [26]. However, there is evidence that apoptosis can occur independently of caspase activity [27]. Apoptosis-like PCD is a programmed cell death that shows less compact/complete chromatin condensation than apoptosis, and most of the published forms of caspase-independent apoptosis fall into this class of PCD [25]. Moreover, one of the main characteristics of PCD is the fragmentation of nuclear DNA [27]. Apoptosis-inducing factor (AIF) is a flavoprotein that resides in the mitochondrial intermembrane space [28]. Upon induction of apoptosis, AIF is translocated from the mitochondria to the nucleus, where it causes chromatin condensation and large-scale DNA fragmentation without caspase activation [28][29][30]. Herein, the ethanolic extract of red propolis did not significantly induce caspase expression. On the other hand, the AIF gene expression profile in the 5637 cell line increased after BRP treatment, and an increase in DNA fragmentation was observed after 24 h of BRP treatment. The apoptosis gene expression data from our experiments confirmed the results of the cytotoxicity and apoptosis assays, showing that the BRP extract may induce apoptosis or apoptosis-like PCD in 5637 cells and that this may occur by activation of different apoptosis pathways. The positive effect of propolis in anticancer therapy is seen in its ability to initiate apoptosis in cancer cells through both the intrinsic and extrinsic pathways [22,[31][32][33][34][35]. The intrinsic apoptotic pathway is mediated by the mitochondria and is mainly controlled by the balance and interactions between pro- and antiapoptotic members of the Bcl-2 family proteins, which regulate the permeability of the mitochondrial membrane [26].
It has been proposed that the ratio between the Bcl-2 and Bax genes is more important in the regulation of apoptosis than the level of each Bcl-2 family protein alone [36], and that the ratio of death and survival signals sensed by the Bcl-2 family proteins determines whether the cell will live or die [26,37,38]. Although both the Bax and Bcl-2 genes showed an increase in expression after treatment with BRP in this study, the Bax/Bcl-2 ratio in the 5637 cell line strongly increased after 100 µg/mL of BRP treatment, suggesting that Bax and Bcl-2 may be involved in the apoptotic events associated with the cytotoxic effects of BRP. Moreover, our study also showed an increase in p53 gene expression after treatment with the BRP extract. It is well known that p53 contributes to apoptosis induction mostly by its transcription-dependent effects. However, it has been shown that p53 can also induce cell death via direct activation of Bcl-2, Bcl-XL, and Bax [39][40][41]. These data support our speculation that Brazilian red propolis may trigger apoptosis or apoptosis-like PCD induction through p53, Bax, and Bcl-2 activation. The established role of antioxidant enzymes against cancer is in the prevention of oxidative DNA damage and reactive oxygen species (ROS) formation [42,43]. It has been shown that propolis has the ability to scavenge free radicals in rats [44]. Oxidative stress can trigger endoplasmic reticulum (ER) stress [45], and ER stress is able to induce apoptosis without the involvement of caspases [46]. Moreover, the regulation of ER membrane permeability by Bcl-2 proteins could be an important molecular mechanism of ER stress-induced apoptosis [30]. It has been shown that an ethanolic red propolis extract induces MCF-7 cell apoptosis mediated by ER stress-related signaling [13]. As shown here, BRP treatment increased the mRNA levels of the antioxidant enzymes CAT, Cu/Zn-SOD, TRX, GST, and Mn-SOD in a bladder cancer cell line.
We have shown previously that hydroalcoholic extract obtained from red propolis presented high polyphenol content, important DPPH scavenging ability, and SOD-like and CAT-like activities [14]. Although further work needs to be carried out, the increased levels of the antioxidant enzymes observed in the present study might reflect the response of cells towards programmed death mediated by ER stress-related signaling.
In conclusion, our findings indicate that Brazilian red propolis induces cytotoxicity in superficial bladder cancer cells in vitro and that this effect may be due to caspase-independent apoptosis or apoptosis-like PCD. Additionally, these results provide insight into the antitumor effect of BRP, and we speculate that red propolis may represent a source of therapeutic agents for bladder cancer.
Risk factors and biomarkers of life-threatening cancers
There is growing evidence that risk factors for cancer occurrence and for cancer death are not necessarily the same. Knowledge of cancer aggressiveness risk factors (CARFs) may help in identifying subjects at high risk of developing a potentially deadly cancer (and not just any cancer). The availability of CARFs may have positive consequences for health policies, medical practice, and the search for biomarkers. For instance, cancer chemoprevention and cancer screening of subjects with CARFs would probably be more ethical and cost-effective than recommending chemoprevention and screening to entire segments of the population. Also, the harmful consequences of chemoprevention and of screening would be reduced while effectiveness would be optimised. We present examples of CARFs already in use (e.g. mutations of the breast cancer (BRCA) genes), promising avenues for the discovery of biomarkers through the investigation of CARFs (e.g. breast radiological density and systemic inflammation), and commonly used biomarkers that are not real CARFs (e.g. certain mammography images, prostate-specific antigen (PSA) concentration, nevus number).
Introduction
The environmental, lifestyle, and genetic causes of cancer have received considerable attention over the past five decades. Hundreds of studies have unveiled some of the risk factors involved in the occurrence of cancer. By comparison, few studies have examined risk factors for being diagnosed with a life-threatening cancer versus being diagnosed with an indolent cancer that would never be life-threatening. In other words, what are the characteristics of subjects likely to be diagnosed with a 'pussy cat' cancer (i.e. non-life-threatening or easily curable) or with a 'tiger' cancer (i.e. deadly in the absence of an efficient treatment)? Do subjects diagnosed with a metastatic cancer present characteristics that distinguish them from subjects without cancer or from subjects with early-stage cancer, apart from attending screening? These characteristics may be linked to a variety of personal attributes, hereditary traits, lifestyle factors, and exposures to substances or circumstances associated with a greater likelihood of developing a 'tiger' rather than a 'pussy cat' cancer.
Cancers may be life-threatening either because they are diagnosed at a late stage, when metastases have spread into the lymph nodes or in distant organs, or because the cancer has an aggressive phenotype. Indeed, it is well documented that the more aggressive a cancer is, the greater the likelihood it will be diagnosed when at a late stage. However, a sizeable proportion of subjects dying from cancer are diagnosed with a cancer that is apparently still localised. Conversely, a fraction of subjects with advanced cancer survive for a long time even though they received the same amount of attention as other patients with the same stage of cancer.
Recent epidemiological and clinical studies have documented that risk factors for cancer occurrence and for cancer death are not necessarily the same, and that pre-diagnostic, non-tumour-related risk factors could be different for slow-progressing and for aggressive cancers [1,2]. Identification of subjects at higher risk of developing a potentially deadly cancer could thus be based on the recognition of specific personal genetic, lifestyle, or environmental characteristics, or on the measurement of non-tumour biomarkers, all factors that we term 'cancer aggressiveness risk factors' (CARFs) hereafter.
Risk factors for cancer occurrence and for cancer death
Cancers affecting the same organ are no longer considered a single disease entity. The assumption that risk factors for cancer occurrence are the same as those for aggressive cancer or cancer death is not correct. For instance, reproductive factors have a strong influence on the risk of breast cancer but little influence on the risk of breast cancer death [2]. Adiposity is associated with reduced breast cancer risk in premenopausal women; however, the risk of death from breast cancer in premenopausal women increases with adiposity [3]. High fertility is associated with a reduced risk of breast cancer. However, women giving birth in their 40s have become increasingly common, and breast cancer occurring in the first two years after childbirth is known to be more lethal [4]. Another example is smoking in prostate cancer: while smoking is not a risk factor for prostate cancer occurrence, it does seem to be associated with the occurrence of fatal prostate cancer [5].
The search for hereditary, lifestyle, and environmental factors that may be involved in the occurrence of potentially life-threatening cancers only really started after 2000, mainly because of the longstanding false impression that risk factors for cancer occurrence and for cancer death were similar. In 1990, an International Agency for Research on Cancer (IARC, Lyon, France) Scientific Publication on the causes, occurrence, and control of cancer stated that 'the fact of death from cancer is rarely of interest in epidemiology' [6]. There was also the belief, from the pre-screening era, that any lesion labelled as 'cancer' by histological examination would necessarily be life-threatening.
CARF, health policies, and patient management
The availability of CARFs allowing risk stratification would have several implications for health policies and medical practice (Table 1).
Firstly, primary prevention efforts could concentrate on subjects with CARF(s). Personalised counselling based on the presence of a CARF could be more effective than primary prevention campaigns spread over entire populations.

www.ecancer.org ecancer 2015, 9:596

Table 1. Examples of domains of application of cancer aggressiveness risk factors (CARFs).

Primary prevention
- Definition of population subgroups most likely to benefit from primary prevention policies
- Personalised prevention according to the CARFs presented by subjects

Screening
- Definition of population subgroups in which the balance between harms and benefits would be optimal
- Personalised screening (age at start, method(s) of choice, intensity) according to the CARFs presented by subjects
- Identification of subjects with a benign lesion (e.g. breast hyperplasia) most likely to develop a life-threatening invasive cancer
- Identification of subjects with in situ or borderline cancers most likely to develop a life-threatening invasive cancer
- Characterisation of subjects participating in screening most likely to develop an interval cancer

Patient management
- Selection of patients with in situ, borderline, and early-stage cancer for active surveillance (no CARF present) rather than immediate treatment (CARF present)
- More intense follow-up of patients with apparently good-prognosis cancer if early detection of relapse increases survival
- Selection of subjects with apparently good-prognosis cancer for the evaluation of adjuvant therapies

Basic research
- Knowledge of the biological mechanisms involved in CARFs may lead to the discovery of: biomarkers with better sensitivity and specificity for identifying subjects at higher risk of potentially deadly cancer; new prevention, screening, and treatment modalities

Secondly, because the principal goal of cancer screening is the prevention of cancer death through the detection of cancers at an early, curable stage, CARFs could allow the prioritisation of screening efforts towards subjects with CARFs and avoid screening in subjects having a low probability of developing an aggressive cancer. CARFs could boost the cost-effectiveness of screening because it could be directed to subjects at higher risk of cancer death and not just to subjects at higher risk of cancer. Moreover, if an efficient screening method exists, knowledge by subjects that they harbour a CARF might motivate them to participate in screening.
Third, subjects diagnosed with an apparently good-prognosis cancer but who harbour a CARF could be at a higher risk of relapse, which could lead to recommending more intense management of these subjects. Using this logic, the availability of CARFs may help in selecting subjects with early-stage cancer for inclusion in randomised trials testing the efficacy of adjuvant therapy. CARFs may provide useful information for the selection of patients with in situ, borderline, or early-stage cancer for active surveillance (e.g. if no CARF is present) or for immediate treatment (e.g. if a CARF is present). For instance, a history of pregnancy in the two years preceding a diagnosis of breast cancer, or being diabetic when breast cancer is found, are two CARFs associated with poorer prognosis, even when the tumour has been detected at an early stage (e.g. size less than 20 mm and oestrogen receptor-positive) [4,7,8].
Fourth, the discovery of CARF can have an invaluable role in research on biological mechanisms involved in the occurrence of deadly cancers.
When would CARF be most relevant?
The availability of CARF may help to identify subjects with high or low probability of being diagnosed with an aggressive cancer.
Availability of CARFs would be most valuable when the incidence of a cancer is much greater (say, more than two times greater) than the mortality due to that cancer, meaning that the majority of cancers would not be a cause of death. Over the last 30 years, screening and the availability of imaging and biopsy methods allowing the detection of steadily smaller tumours have led to considerable increases in the incidence-to-mortality ratio because of overdiagnosis. Overdiagnosis is the detection of a lesion that has all the features of a cancer under the microscope but does not progress and will thus never be life-threatening. These cancers that do not clinically behave like cancers are pseudocancers whose frequency has been boosted by the advent of screening. Overdiagnosis is common in breast, prostate, and thyroid cancer, and in cutaneous melanoma. Overdiagnosis is known to be an issue for computerised tomography (CT) scan lung cancer screening. Overdiagnosis also encompasses the detection of in situ or of borderline cancers, most of which will never transform into an invasive cancer or be life-threatening. For instance, before the mammography screening era, in situ breast cancers represented less than 5% of all breast cancers. In areas where mammography screening is widespread, 15 to 20% of breast cancers are in situ. Although we still have poor knowledge of the natural evolution of untreated in situ breast cancer [9], these lesions are nearly always treated. Treatment may be aggressive, with mastectomy (sometimes bilateral), radiotherapy, adjuvant chemotherapy, and a search for lymph node metastases, despite the lack of evidence on the most adequate management of these tumours. Because the finding of in situ breast cancer generally entails treatment, the mammography detection of these lesions is usually considered overdiagnosis [10].
Using this logic, focusing screening efforts on subjects with CARF is likely to increase the cost-effectiveness of screening and lower the harms due to screening. CARF may also assist in the prioritisation of referral to diagnosis and specialised care.
The quest for CARF
Generally speaking, studies done so far have not identified many CARFs that could be proposed for the risk stratification of subjects. However, several CARFs are already well documented and some promising avenues for research have been identified. We outline some examples below.
Hereditary factors
Subjects with a germline mutation associated with an increased risk of cancer may also be at increased risk of aggressive cancer. The best-known example is mutation of the BRCA genes, which confers a higher lifetime risk of breast cancer; breast cancers found in women with BRCA mutations tend to be more aggressive, with a greater frequency of the triple-negative phenotype [11].
Screen detectability of cancer
In the 1980s and 1990s, it was thought that screen-detected cancers would display signatures of their aggressiveness. For instance, images suggesting large cancers on mammograms, or high serum concentrations of biomarkers (e.g. PSA), would reflect the presence of more aggressive cancer. However, while screen-detected cancers can indeed be fatal, these cancers are on average less aggressive and have a better prognosis, independently of risk factors known to predict cancer survival. Conversely, cancers that were missed by screening or that developed in the interval between two screening rounds (i.e. interval cancers) are more aggressive and have a worse prognosis than screen-detected cancers. Large suspicious mammography images containing calcium deposits often indicate the presence of in situ cancer that is seldom life-threatening.
In the 1990s, small, single-institution studies suggested that the greater the increase in PSA level between two screening rounds, the more likely the presence of an aggressive prostate cancer (i.e. Gleason score of 7 or more). It turned out that this was not true. For instance, a recent re-analysis of the prostate, lung, colorectal, and ovarian (PLCO) trial showed that the magnitude of changes in PSA level between screening rounds is not a predictor of cancer aggressiveness (Table 2). Hence, for two of the most common cancers, currently available screening methods are not optimal in their ability to detect the most life-threatening cancers early, and future research needs to test the capacity of new screening methods to specifically detect potentially deadly cancers while still at a curable stage.
Radiological density of breasts
The risk of breast cancer is four to five times greater in women with radiographically dense breasts than in women with little or no density in the breast [12,13]. Breast radiological density is an attractive CARF for several reasons. Cancer detection is less sensitive in radiographically dense breasts, and thus, interval cancers tend to be more common in women with dense breasts [14][15][16]. Cancers found in dense breasts have a more aggressive phenotype and are more advanced than when found in fatty, non-dense breasts [12,17,18]. Hence, there is growing support for informing women with dense breasts about their higher risk of breast cancer and greater probability of a false-negative mammographic examination. There is, however, no firm evidence that other breast examinations (e.g. ultrasonography) would improve the ability of screening to detect mammographically silent cancers that could be life-threatening. In any case, for legal and financial reasons, a growing number of states in the USA require that radiologists notify women about the radiological density of their breasts (http://www.diagnosticimaging.com/breast-imaging/breast-density-notification-laws-state-interactive-map).
Obesity, diabetes, and low-grade systemic inflammation
Cancers diagnosed among obese and diabetic subjects have a poorer prognosis, because they tend to be of more aggressive phenotype, are more advanced at diagnosis, and are less sensitive to treatments [3,[19][20][21][22][23][24][25]. More advanced stages would not just be due to a greater difficulty in detecting cancer in obese subjects, or to a lower propensity of these subjects to participate in screening. Mechanisms by which adiposity and glucose metabolism disorders influence cancer phenotype are not well known. The low-grade inflammation that prevails in most obese and diabetic subjects could be the factor underlying the occurrence of more aggressive cancers.
Indirect evidence for a link between systemic inflammation and aggressive cancer phenotype is provided by prospective studies on vitamin D concentration and the risk of subsequent cancer. Subjects with systemic inflammation have lower vitamin D levels [26], and low vitamin D levels are associated with more aggressive and more advanced breast, colorectal, and prostate cancer, and of cutaneous melanoma [27][28][29][30][31].
Studies like those on toll-like receptor 4 [25,32] represent significant strides in the understanding of the link between inflammatory processes and cancer. It is hoped that the vast body of data accumulating on inflammation and cancer will culminate in the discovery of biomarkers allowing the identification of subjects at high risk of aggressive cancer, and in the discovery of new cancer treatment modalities based on the control of inflammatory processes.
Nevus count and cutaneous melanoma
The number and size of skin nevi are the strongest predictors of one's chance of being diagnosed with a melanoma. Skin self-surveillance and skin screening are usually recommended to subjects with numerous nevi. The incidence of melanoma has dramatically increased over the last four decades, in part because of greater exposure to ultraviolet radiation during holidays [33], and in part because of steadily increasing rates of nevus excision [34]. The expected reductions in the risk of melanoma death associated with the screening of subjects with numerous nevi rest on the assumption that these subjects are at higher risk of melanoma death than subjects with few nevi. However, the few available data do not suggest a higher risk of melanoma death associated with an increased number of nevi [35]. A high mitotic rate of melanoma cells is associated with a poorer prognosis. A recent study found no association between nevus count and the mitotic rate [36]. Hence, there is no evidence so far that a high nevus count represents a CARF for melanoma, and preferential screening of subjects with numerous nevi is probably no more effective than screening subjects with few nevi.
Search for CARF in the context of cancer screening
Subjects participating in screening are generally diagnosed with smaller and earlier-stage cancers than subjects not participating in screening. Cancer earliness may be due to the detection of cancers that would have been more advanced and more life-threatening if they had become symptomatic (i.e. lead-time cancers). Cancer earliness may also be due to the detection of pseudocancers that would never have become symptomatic (i.e. length-time cancers). Hence, a greater proportion of early-stage cancer in screened rather than in unscreened subjects may be linked both to screening efficiency (the ability to detect cancer at an early, asymptomatic stage) and to screening-induced overdiagnosis.
In addition, screen-detected cancers are less aggressive, while interval cancers are more aggressive. Finally, subjects not participating in screening are those who, in the absence of screening, would be at highest risk of being diagnosed with an advanced cancer and of dying from it.
Because of the complex relationships between screening and cancer phenotype, the search for CARFs must take into account possible biases introduced by the screening history of subjects.
Ethics
Targeted screening on the basis of CARF would probably be more ethical than recommending screening to entire segments of the population that are just defined by age and sex. The harmful consequences of screening would be reduced, while effectiveness would be optimised.
A key issue for the screening of breast, prostate, and lung cancer is that the subjects whose lives are saved by screening are not necessarily the same subjects who incur the harmful consequences of screening. For instance, it was estimated for the United Kingdom that for every death prevented thanks to mammography screening, there are three women with overdiagnosed cancer [10]. Because mammography screening in the United Kingdom is proposed every three years, the ratio of three overdiagnosed cancers to one life saved must be higher in most other countries, where mammography screening is done every year or every two years. Hence, if a CARF allowed the stratification of women according to their likelihood of having an indolent or an aggressive breast cancer, screening could be concentrated on women at high risk of a potentially deadly breast cancer and probably abandoned in women at low risk of such cancer.
Conclusion
The possibility of identifying subjects at high or at low risk of developing a potentially deadly cancer may represent a new frontier in cancer research, with many implications for screening policies, chemoprevention, and decisions on treatment options for early-stage cancer, which could integrate a better evaluation of the risk of relapse and the aspiration to avoid overtreatment.
Conflicts of interest
The authors declare that there are no conflicts of interest.
www.ecancer.org ecancer 2015, 9:596
The Effect of Error Rate in Artificially Generated Data for Automatic Preposition and Determiner Correction
In this research we investigate the impact of mismatches in the density and type of error between training and test data on a neural system correcting preposition and determiner errors. We use synthetically produced training data to control error density and type, and “real” error data for testing. Our results show it is possible to combine error types, although prepositions and determiners behave differently in terms of how much error should be artificially introduced into the training data in order to get the best results.
Introduction
The field of Grammatical Error Correction (GEC) is currently dominated by neural translation models, specifically sequence-to-sequence translation. However, despite offering substantial improvements on the well-established statistical machine translation approach to GEC, neural networks come with their own challenges.
Firstly, neural models require a large amount of training data; however, the amount of annotated learner English consisting of source (original text) and target (corrected text) is low. Models are at risk of overfitting, simply because the volume of data is not high enough. Secondly, the data that has been used up until now does not generalise very well across different test sets. This means that there has been some success in correcting errors, but only on test sets that are in some sense similar to the training data. Thirdly, it is generally unknown how erroneous the test data is, and if the training data has a different distribution of errors, it is likely that unwanted corrections will be made, or required corrections will be missed.
Currently, there is research into generating artificial data for training neural models, specifically data that resembles learner English (Cahill et al., 2013; Rozovskaya and Roth, 2010; Felice, 2016; Liu and Liu, 2016). The artificial data is generated from monolingual sentences of grammatical English by systematically introducing noise into them. This way, training data consisting of sentences in both "incorrect" and "correct" versions can be generated from monolingual data, which is easily accessible. There is also evidence that artificially generated data can make a GEC system generalise better than simply using manually procured correction data (Cahill et al., 2013).
A third advantage of synthetically introducing noise into a corpus is the ability to control how much noise, and which noise, is introduced. The first main question of our research is how the amount of noise introduced into the corpus affects a neural model's behaviour at test time with respect to mismatches in error density and error type between training and test data. Artificial data lends itself to this kind of research, thanks to the control over the corpus.
Up until now, the effect of the amount of errors in the training corpus has only been explored with prepositions specifically (Cahill et al., 2013). We begin by extending this line of research to determiners. The second research question is then: how do two different types of error interact? It is quite possible that introducing many types of frequent grammatical errors one after the other would not create convincing artificial learner data, because several types of error can affect the same word, and a neural model may not be able to learn to combine them in this way.
Related Work
Currently the best results in GEC have been achieved with neural machine translation. Yuan and Briscoe (2016) obtained the best scores using a 2-layer encoder-decoder system with attention, trained on the Cambridge Learner Corpus (CLC), a large data set of two million corrected learner English sentences. The CLC is not publicly available, which has inspired the use of automatically generated data with neural models. Liu and Liu (2016) have done exactly this with 16 different types of errors. Their success, although small compared to using manually annotated supervised revision data, has inspired our investigation into the particular effects of combining error types in an artificial corpus.
One particularly interesting approach to generating artificial data is that of Cahill et al. (2013), who, focusing on preposition errors, created confusion sets for each preposition using supervised revision data and selected replacements at random from the resulting probability distributions. This approach was developed from Rozovskaya and Roth (2010), who first suggested the idea of probabilistically selecting likely error candidates. Interestingly, the artificial data proved to make models trained on manually annotated data more robust, meaning that they generalised better across different types of test sets, despite the fact that the overall quality of corrections was lowered. This was confirmed by Felice (2016), who also found that this kind of probabilistic error generation increases precision and lowers recall.
One main focus of our research is the effect of the amount of errors in the training corpus on the amount of corrections made at test time. Rozovskaya et al. (2012) identify a useful technique known as error inflation, where more errors are introduced into the training data in order to improve recall. This is further explored in our work.
Data
In our research, errors are systematically introduced into "correct" English data. The correct data comes from the NewsCrawl corpus in WMT-2016. It is open domain, featuring a wide variety of topics and writing styles taken from recent articles. We used 21,789,157 sentences for training, and 5,447,288 held-out sentences from the same source for a development set.
We follow the same methodology as Cahill et al. (2013) to generate noise. Specifically, supervised revision data is used to see how often particular words are corrected into specific prepositions or determiners. The revision data used in our research is the Lang-8 corpus, which is available for academic purposes upon request. The corpus is scraped from the Lang-8 website, where crowd-sourced grammar corrections are posted for non-native speakers of English. It is arguably more reliable than Wikipedia, which contains vandalism; however, it is noticeably smaller.
The process of introducing errors into the WMT data using the Lang-8 corpus is as follows:

1. Extract plain-text versions of the Lang-8 corpus, consisting solely of sentences with corrections.

2. Compare each source sentence with its corrections using an efficient diff algorithm (note that this often involves several steps of revision).

3. Prepare a list of all prepositions/determiners. This is taken from the tags of the WMT data produced by the Stanford tagger.

4. Remove all sentences that do not contain a single revision involving a preposition or a determiner. Using a hand-crafted set of possible prepositions/determiners, determine for each sentence whether it involves a deletion (e.g. "for" → "NULL"), an addition (e.g. "NULL" → "the"), or a replacement (e.g. "on" → "in").

5. Generate confusion sets for each preposition/determiner by listing all the words that are corrected into that word and counting the frequency of each specific revision. From there, generate a probability distribution for each preposition/determiner.

6. Insert the target word itself into the distribution with a frequency relative to the error rate. An 80% error rate, for example, means that 20% of the time the same word is selected, effectively leaving it in its "correct" form.

7. Systematically replace prepositions/determiners in the WMT corpus with one of the options in their respective probability distributions, selected at random by a sampler.
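Steps 5-7 above can be sketched as follows. This is a minimal illustration rather than the authors' actual code: the function names are hypothetical, and the real pipeline additionally restricts candidates to the tagged preposition/determiner lists.

```python
import random
from collections import Counter

def build_confusion_sets(revisions):
    """revisions: (erroneous_word, corrected_word) pairs extracted by
    diffing Lang-8 source sentences against their corrections (steps 2-5).
    'NULL' stands for the empty word, modelling additions/deletions."""
    confusion = {}
    for erroneous, corrected in revisions:
        confusion.setdefault(corrected, Counter())[erroneous] += 1
    return confusion

def corrupt(word, confusion, error_rate, rng=random):
    """With probability error_rate, replace `word` with a candidate drawn
    from its confusion distribution; otherwise keep it 'correct' (steps 6-7).
    Drawing 'NULL' corresponds to deleting the word from the sentence."""
    candidates = confusion.get(word)
    if not candidates or rng.random() >= error_rate:
        return word
    words = list(candidates)
    weights = [candidates[w] for w in words]
    return rng.choices(words, weights=weights, k=1)[0]

# e.g. two observed corrections "on" -> "in" and one "at" -> "in"
sets_ = build_confusion_sets([("on", "in"), ("on", "in"), ("at", "in")])
```

With an 80% error rate, `corrupt` keeps the original preposition roughly 20% of the time, mirroring step 6.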
Experiments
Cahill et al. (2013) have made their revision data extracted from Wikipedia available for download, which is why it is appropriate to compare it to the revision data which is extracted from Lang-8. Both sets of revision data are used to create two separate confusion sets for prepositions. They are then used to create two sets of error corpora in which 20%, 40%, 60% and 80% of prepositions are altered according to the error introduction procedure detailed above.
To compare, revision data extracted from Lang-8 is also used to create error corpora containing the same amounts of prepositional error. It is worth noting that Cahill et al.'s research does not include the empty "NULL" preposition, meaning that errors in which a preposition is missing are not accounted for. By contrast, in our work we include every case in which a preposition is inserted, as well as replaced, although we do not deal with deletions. Deleting prepositions which were inserted in the revision data simply follows the same procedure as replacements, where a preposition is replaced with the null preposition. Inserting prepositions which were deleted in the revision data is much more difficult, as it is not clear where in a sentence each preposition should be inserted. The use of context words before and after a deletion is being explored in more current research, but does not feature in these experiments. This is nevertheless a major contribution, because insertions and deletions make up a significant part of the errors. In Lang-8, for example, there were 10,054 corrections of prepositions, of which 4,274 were insertions and 2,657 were deletions. This means that replacements make up only 31% of the errors.
We also use determiner revision data extracted from Lang-8 to create determiner errors in a similar fashion, with 20%, 40%, 60% and 80% of errors.
A final set of synthetic error data is then generated where both prepositions and determiners are introduced into the same corpus, containing 20%, 40%, 60% and 80% of both kinds of error. This is to investigate whether the GEC system is capable of dealing with two types of error at once.
Evaluation
In order to test the effects of mismatching error density and type between training and test data, each model is tested on specially created test sets with varying amounts of error in them. Cahill et al. (2013) found that the highest scores came from models both trained and tested on similar error rates. Our research aims to build on this finding.
The first test set is made from Lang-8, which is also used to create the confusion sets for the training data. Specifically, only the sentences with prepositions, determiners, and a mix of both in the revisions are used. No other types of error are included. These sentences are mixed with corrected sentences (where the revised sentence is used as both source and target) to varying degrees. In each case, 1000 sentences of erroneous data are mixed with either 4000, 1500, 666, or 250 sentences of "correct" English, also taken from Lang-8. This is in order to create test sets in which 20%, 40%, 60%, and 80% of sentences are erroneous, similar to the training data. Table 1 shows the test sets created out of the Lang-8 corpus.
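The mixing arithmetic follows directly from the target error proportion: with n_err erroneous sentences and target rate r, one needs n_err · (1 − r)/r correct sentences. A minimal sketch (the helper name is hypothetical; exact fractions are used, and the quotient is truncated, which appears to reproduce the 666 figure reported for the 60% set):

```python
from fractions import Fraction

def correct_sentences_needed(n_erroneous, target_rate):
    """How many 'correct' sentences to mix with n_erroneous erroneous ones
    so that target_rate of the final test set is erroneous:
    n_err / (n_err + n_corr) = r  =>  n_corr = n_err * (1 - r) / r.
    int() truncates the quotient, matching 666 for the 60% set."""
    rate = Fraction(target_rate)
    return int(n_erroneous * (1 - rate) / rate)

sizes = [correct_sentences_needed(1000, Fraction(n, 5)) for n in (1, 2, 3, 4)]
# -> [4000, 1500, 666, 250], the Lang-8 test-set mixes in Table 1
```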
The NUCLE corpus (Ng et al., 2014) was used as the training and test set for the CoNLL-2014 Shared Task on GEC (Ng et al., 2014), and has since been commonly used in the field for comparison with previous work. The NUCLE corpus is used in our research to generate test sets from a different domain, despite those test sets being smaller. Again, prepositions, determiners, and a combination of both are extracted and mixed with corrected sentences from the same corpus. Due to the smaller amount of relevant errors, as many sentences containing each error as possible are taken. For prepositions, this amounts to 332 sentences; for determiners, 595 sentences; and for both, 169 sentences. OpenNMT (http://opennmt.net/) was chosen because of its ease of use and its similarity to the architecture used for the current state-of-the-art results reported by Yuan and Briscoe (2016). The selected evaluation metric is the GLEU score, which has been shown to be the most appropriate metric for GEC (Napoles et al., 2015).
Results and Discussion
The first objective of our research is to see the difference between testing on the Lang-8 and NUCLE test sets when trained on data containing varying error densities created using data from Lang-8. For prepositional errors, the GLEU scores of the four different models are given in Table 3, and the results are plotted in Figure 1. When tested on corpora with only 20% error, the GLEU score remains the same on both test sets. However, the higher the error rate in the test set, the better the models perform on the NUCLE set in comparison with the Lang-8 set. This is surprising, seeing as the Lang-8 corpus was used to inform the process of error generation in the training set.
In the tables cited in this paper, it is expected that the highest scores will occur along the diagonal. A test set containing 20% error would be best handled by training data which also contains 20% error; likewise with 40%, 60% and 80%. Conversely, training data containing 80% error would not perform as well on test data containing 40% error as training data which also has 40% error. The data shows, however, that this is not always the case. When testing on 80% error, the models trained on 80% error density obtain, as expected, the highest score, although only slightly. Interestingly, however, the 80% models also perform better on the 40% and 60% test sets, which seems to confirm Rozovskaya et al. (2012)'s "error inflation" idea. This is the idea that putting more errors than needed into the training data helps the model generalise more. One interesting observation from the data is the fact that all the models perform better on the 20% test sets. This is likely because the models are capable of recognising that a sentence need not be corrected, and doing so is simpler than finding a correction for incorrect sentences.
Testing on determiner errors revealed similar results. The results are provided in Table 4 and plotted in Figure 2. In this case, error inflation does not seem to work, as the highest-scoring result for each test set is more or less that of the training set with the matching error density. This indicates that systems that correct determiners have different properties to those which correct prepositions.

Figure 1: Plot of the data in Table 3.
Figure 2: Plot of the data in Table 4.
Figure 3: Plot of the data in Table 5.

The results of training models on data containing a combination of both kinds of error, tested on combined preposition and determiner test data, are shown in Table 5 and Figure 3. The scores are slightly lower in general, suggesting that mixing error types does not yield as high a quality of correction as single error types. Also, the NUCLE test scores in particular suffer in comparison with the singular error models, showing a failure to generalise across domains. Finally, "error inflation" also does not appear to work here.
These results shed doubt on the "error inflation" present in the preposition experiment. If it were dependent on the type of error, and prepositions were the kind which encouraged the use of "error inflation", then it follows that it should at least be present in the combined models. Instead of different error types subtly influencing the behaviour of the combined model in a cumulative way, the behaviour seems more random. In one case, the 20% combined model performs better on the 40% NUCLE test set than the 40% one, which suggests that reducing the amount of introduced error would make an improvement. Table 6 and Figure 4 show how well the combined model performs on test sets with individual error types only. First of all, the scores are lower than the respective values attained by models trained on individual errors on the same test sets, but only slightly. Also, as seen in Tables 3, 4 and 5, the combined model tested on the combined test set returns lower scores than the individual models tested on their respective test sets with just one of the error types. However, the combined models' scores are better than those achieved by the individual models on the combined test sets, as shown in Table 7 and Figure 5. This indicates that the combined model is better suited to tackling both errors at once, and only a little worse at tackling individual errors than the individual error models. This is a predictable outcome, but the reduction in GLEU score suggests that combining errors in an attempt to correct all errors will generate noise, and the more error types that are covered, the less likely it is that they will be correctly revised at test time, which makes the idea of building a generalised corrector for all errors less feasible.
It is also worth mentioning that correcting determiners seems to result in higher scores than correcting prepositions. This could be due to the amount of possible prepositions that need to be considered compared to the determiners. Although many determiners are considered, the vast majority of the cases involve the three articles "a", "an" and "the", as well as the null determiner. This is evidence for the need to consider the variation between different errors types when generating errors.
The final research question is whether the confusion set generated from Wikipedia revisions by Cahill et al. (2013) is much different from the one generated from Lang-8. Table 8 and Figure 6 show the results of preposition models informed by Wikipedia and Lang-8 tested on the Lang-8 test sets. Table 9 and Figure 7 show the results of the same models on the NUCLE test sets. As expected, the errors generated from the confusion set informed by Lang-8 perform better on the Lang-8 test sets than on the NUCLE test sets. What is interesting, however, is that the Wikipedia revisions performed significantly better not only on the NUCLE test sets, but also on the Lang-8 test sets. This is surprising, because the Wikipedia revisions are not necessarily in the same domain, whereas the Lang-8 revisions are from the same dataset. Furthermore, the Wikipedia revisions do not take insertions or deletions into account. It is clear that the amount of revisions considered makes a difference: there were 10,054 Lang-8 revisions and 303,847 Wikipedia revisions, 30 times more. The small amount of Lang-8 revisions could also account for the noise identified in the Lang-8 models, but this noise is also present in the Wikipedia revisions, where "error inflation" appears only sometimes and not always.

Figure 4: Plot of the data in Table 6.
Figure 5: Plot of the data in Table 7.
Figure 6: Plot of the data in Table 8.
Figure 7: Plot of the data in Table 9.

Conclusion

Our research aims to shed light on the issue of choosing how many errors to include in artificially generated erroneous data by tackling two specific error types. The results reveal some predictable outcomes, such as that it is easier to deal with test corpora which have smaller error rates, because leaving correct sentences alone is easier for the model to learn than making a good correction. Also, in most cases, there is a correlation between the error rate of the training data and the test data. However, some of the results revealed unexpected outcomes.
Although it is possible that the data is noisy, the results, particularly for prepositions, support a concept called "error inflation", which suggests that including more errors in the training data will lead to a higher GLEU score. This effect was not observed in the determiner and combined models, suggesting that there might be variation between different error types depending on the distribution of revisions made for that error type. It is possible to combine two error types into one training set and tackle both at once at test time, although the scores are not as high as when solving only individual errors. Also, the confusion set generated from Wikipedia revisions proved to yield better results than that generated from Lang-8, due to the significantly larger number of revisions. Finally, this research supports generating erroneous data as a valid approach to improving neural models for GEC, and informs future researchers about the effects of error rate mismatches in training and test data.

Table 8: GLEU score according to the amount of preposition error in training data informed by Wikipedia (first 4 rows) and Lang-8 (last 4 rows), tested on test sets with varying amounts of preposition error from Lang-8.
Perivascular Inflammation in Pulmonary Arterial Hypertension
Perivascular inflammation is a prominent pathologic feature in most animal models of pulmonary hypertension (PH) as well as in pulmonary arterial hypertension (PAH) patients. Accumulating evidence suggests a functional role of perivascular inflammation in the initiation and/or progression of PAH and pulmonary vascular remodeling. High levels of cytokines, chemokines, and inflammatory mediators can be detected in PAH patients and correlate with clinical outcome. Similarly, multiple immune cells, including neutrophils, macrophages, dendritic cells, mast cells, T lymphocytes, and B lymphocytes characteristically accumulate around pulmonary vessels in PAH. Concomitantly, vascular and parenchymal cells including endothelial cells, smooth muscle cells, and fibroblasts change their phenotype, resulting in altered sensitivity to inflammatory triggers and their enhanced capacity to stage inflammatory responses themselves, as well as the active secretion of cytokines and chemokines. The growing recognition of the interaction between inflammatory cells, vascular cells, and inflammatory mediators may provide important clues for the development of novel, safe, and effective immunotargeted therapies in PAH.
Introduction
Pulmonary hypertension (PH) is a devastating vascular disease characterized by remodeling of the small pulmonary arteries, elevated pulmonary artery pressure, and subsequent development of right heart failure. Pulmonary arterial hypertension (PAH; World Health Organization Group 1) represents a specific subset of this disease that is focused on the lung vasculature, and it is the subject of this review. While debate exists regarding the specific correlations to human disease in rodent models, for simplicity, data derived from animal models in this review will still be referred to as "PAH". Recently accumulated evidence from preclinical and clinical PAH studies has highlighted the role of inflammation in the development of the disease. At first, it was noticed that some inflammatory conditions such as connective tissue diseases are associated with an increased incidence of PAH. Next, in lung biopsies from PAH patients, virtually all lineages of inflammatory cells were detected in proximity to the remodeled pulmonary vasculature, mainly consisting of macrophages, mast cells, dendritic cells, and T and B lymphocytes. Indeed, it was long thought that inflammation occurred as a secondary event during PAH pathogenesis, given that proliferating pulmonary vessel cells can secrete inflammatory mediators. Yet, emerging evidence suggests that inflammation may in fact play a causal role in the development of PAH. However, many fundamental questions still remain unanswered: Is the inflammatory process nonspecific, or rather directed against specific antigens? Where does this response begin: "inside-out", from endothelial cells (ECs) to the media and adventitia, or "outside-in", from the adventitia to the ECs [4]?
In this review, we will address these key issues from three angles: We will discuss (A) inflammatory mediators and their effects on pulmonary vascular remodeling; (B) inflammatory/immune cells and their products in PAH; and (C) phenotypic changes in vascular cells and their feedback into the inflammatory and immune responses. Understanding the role of inflammation and immunity in PAH is not only of academic but more importantly of direct clinical interest, as a greater understanding of this interaction is expected to facilitate the evolution of new targeted therapies for this devastating disease.
IL-1β
Interleukin-1β (IL-1β) is a key cytokine released in response to inflammasome activation and is an important mediator of the inflammatory response. Elevated serum levels of IL-1β have been detected in PAH patients and correlate with worse outcomes [5,6]. IL-1β may in part be released from infiltrating neutrophils and T cells in diseased pulmonary vessels, as evidenced by positive staining for key components of the inflammasome system, namely Nod-like receptor family pyrin domain containing 3 (NLRP3) and apoptosis-associated speck-like protein containing a caspase-recruitment domain (ASC), within these cells in chronic hypoxia-induced PAH mice [7]. Mice deficient in ASC did not show increased IL-1β when exposed to hypoxia, and they also had significantly lower right ventricular systolic pressure (RVSP) compared to wild type [7]. See Table 1 for a brief overview of the rodent models discussed in this review.
Experimental research has shown that inhibiting IL-1β and inflammasome signaling can be an effective therapeutic avenue for PAH. Treatment with anakinra, an IL-1β receptor (IL-1βR) antagonist, attenuated the development of PAH in monocrotaline (MCT)-treated rats [8]. Similarly, knockout of IL-1βR or the molecular adaptor myeloid differentiation primary response protein 88 (MyD88) in mice protected against hypoxia-induced PAH [9]. Thus, in the context of PAH, neutralizing IL-1β, inhibiting IL-1β signaling, or inhibiting the upstream pathways that govern IL-1β release may be effective for mitigating disease progression.
As a potential mechanism of action, IL-1β may directly regulate the vasoconstriction and remodeling of pulmonary arteries. In pulmonary artery smooth muscle cells (PASMC), prostacyclin regulates vasodilation and has an anti-proliferative effect. This vasodilatory effect is mediated via the second messenger cyclic adenosine monophosphate (cAMP). IL-1β attenuates the conversion of ATP to cAMP in PASMC via downregulating adenylyl cyclase [10]. In addition, IL-1β can regulate PASMC growth via the IL-1R1/MyD88 pathway [11]. In line with this view, marked IL-1R1 and MyD88 expression, with predominant smooth muscle cell (SMC) immunostaining, was found in lungs from patients with idiopathic PAH and mice with hypoxia-induced PAH [9].
A pilot study evaluating the safety and feasibility of anakinra for the treatment of PAH was recently completed [12]. Six patients completed the study without any serious adverse events, and there was some improvement in the biomarkers and symptoms of heart failure [12]. These encouraging data will form the basis of a larger trial. Of note, this pilot study excluded patients with connective tissue disease or autoimmune disease, and it is possible that such patients would derive even greater benefit from such a treatment. Indeed, patients with connective tissue disease-associated PAH (CTD-PAH) have been shown to benefit substantially from anti-inflammatory treatments, even in trials where other groups have not. Perhaps owing to the unique pathophysiology of CTD-PAH, which clearly has a basis in dysregulated immunity, both the overall outlook and response to therapy in these patients differ from those with idiopathic disease (reviewed in [13]). Specifically in patients with systemic lupus erythematosus (SLE), anti-inflammatory treatment has yielded impressive results [14]. As such, this population merits intensive study in future trials.

Table 1. Overview of rodent models of pulmonary hypertension. Four of the most commonly employed rodent models are listed along with the general extent of pulmonary inflammation observed. Of note, mouse models in general exhibit less severe disease than rat models, and hypoxic pulmonary hypertension (PH) in mice is entirely reversible on return to normoxia. For a detailed examination of animal models of PH beyond the scope of this review, please see [15,16,17].

Model | Severity | Inflammation and Notes | Refs
Chronic hypoxic mouse | Mild | Early macrophage infiltration; requires eicosanoids; aggravated by IL-6; reversible | [18]
Sugen-hypoxia mouse | Mild-moderate | No significant pulmonary infiltration seen; slower to reverse than hypoxia alone | [19]
Monocrotaline rat | Severe | Severe inflammation of lungs; also significant extrapulmonary inflammation | [20]
Sugen-hypoxia rat | Severe | Closest approximation of human disease in rodents; most immune lineages seen in lung vascular lesions; irreversible, plexiform angiopathy | [21]

IL-6

IL-6 is a pleiotropic cytokine that is known to play a critical role in the progression of PAH. Plasma IL-6 levels are elevated in both patients and animal models of PAH [5,6,22,23]. Circulating IL-6 levels among PAH patients can be a useful prognostic marker, as several studies have shown a strong inverse correlation between serum IL-6 levels and long-term survival outcomes [6,24]. Although serum IL-6 levels may be poor predictors of hemodynamics in patients with PAH, strong correlations between IL-6 levels and RV dysfunction have been shown [23].
Experimental research over the past two decades has provided critical insights regarding the importance of IL-6 in the development of PAH. Fundamentally, the transgenic overexpression of IL-6 in the lungs of mice was sufficient to drive the development of mild PAH [25]. IL-6 overexpression in combination with hypoxia treatment in these mice resulted in a severe increase in RVSP and distal vascular remodeling similar to that seen in patients with severe PAH [25]. Likewise, the administration of recombinant human IL-6 produces a similar effect in mice, whereas IL-6 knockout protects against the development of hypoxia-induced PAH [26,27]. In addition, a more recent study showed that PASMC derived from patients with idiopathic PAH have upregulated membrane-bound IL-6 receptors (IL-6R) [28]. The overexpression of IL-6R promoted an anti-apoptotic phenotype in PASMCs of patients with idiopathic PAH (iPAH), but not in controls. Transgenic mice deficient in IL-6R in vascular smooth muscle are protected against the development of PAH, whereas the administration of an IL-6R-specific antagonist reversed experimental PAH in two rat models [28]. At present, an open-label study of the IL-6R antagonist tocilizumab for the treatment of pulmonary arterial hypertension (TRANSFORM-UK) is ongoing, and results are expected soon [29].
There are likely multiple cellular sources of IL-6 release in PAH. Recent studies have suggested that IL-6 can be produced by the pulmonary vasculature in PAH [28,30,31]. In particular, PASMC may be a source of IL-6, with IL-6 concentrations in conditioned media, as well as IL-6 gene expression, being significantly higher in PASMC than pulmonary artery endothelial cells (PAEC) in PAH cell lines [31]. Moreover, it has been demonstrated that pulmonary mast cells are a critical source of IL-6 production in two rat models of PAH and that mast cell deficiency reduced serum IL-6 [32]. Conversely, in Schistosoma-associated PAH, IL-6 was mainly colocalized with the macrophage marker Mac3, suggesting macrophages as another potential source of IL-6 [33]. Classically, NF-κB activation is upstream of IL-6 secretion, and it has been shown to be upregulated in patients with iPAH [34]. However, elevated IL-6 in the lung was not reduced by treatment with the NF-κB inhibitor, pyrrolidine dithiocarbamate, in Sugen-hypoxia-treated rats [35]. Thus, it is likely that multiple cellular sources contribute to elevated IL-6 levels, although the contribution of each cell type may depend on the type of PAH, the severity of the disease, and individual patient differences, given the heterogeneous nature of PAH. Furthermore, pulmonary arterial microvascular endothelial cells from patients harboring BMPR2 mutations secreted twice as much IL-6 in response to inflammatory stimuli as control endothelial cells, indicating that the endothelium may also be a significant source of IL-6 in disease [36].
In regard to the cellular mechanisms of IL-6-mediated PH progression, classical IL-6 signaling involves soluble IL-6 binding to its membrane-bound receptor (IL-6R), causing the assembly of a complex involving two molecules each of IL-6, IL-6R, and the IL-6 receptor subunit β (gp130) [37]. This complex triggers different signaling pathways, including the JAK-STAT3 pathway, the PI3K/AKT pathway, and the MEK/ERK pathway, leading to the expression of pro-inflammatory and pro-survival molecules in the target cell [37]. Interestingly, it has been shown that the treatment of healthy PASMC with IL-6 leads to STAT3 activation, which can cause the further activation of other downstream effectors, including the transcription factor Krüppel-like factor 5 (KLF5) [38]. KLF5 is elevated in both human lung biopsies and cultured human PASMCs isolated from PAH patients and can promote cell proliferation and prevent apoptosis [38]. IL-6 has also been shown to exert its pro-inflammatory effects through the induction of IL-21 expression in Th17 cells and CD4 + T cells [39]. In addition, IL-6-mediated STAT3 activation has also been shown to induce the expression of a group of microRNAs (miRNA cluster-17/92) that represses bone morphogenetic protein receptor type 2 (BMPR2) expression, further promoting a pro-proliferative phenotype in vascular cells [40]. Hence, it is likely that IL-6 signaling leads to the downstream activation of multiple pathways centered around cellular pro-inflammatory, pro-proliferative, and anti-apoptotic effects. Importantly, as a stimulatory factor inducing B lymphocyte differentiation into antibody-producing plasma cells, IL-6 production has also been linked to increased immunoglobulin secretion and the production of autoantibodies in PAH [32,41].
IL-18
IL-18, closely related to the IL-1 family of cytokines, is similarly produced as a pro-form and processed by caspase-1 [42]. IL-18, largely secreted by macrophages, stimulates a wide variety of pro-inflammatory changes, including the activation of cytotoxic T cells, the stimulation of interferon production, and increased surface expression of adhesion molecules and chemokine production in target cells. IL-18 binding to its receptor complex results in NF-κB activation, and the blockade of IL-18 is presently under intensive investigation as a treatment for inflammatory bowel disease [42]. IL-18 protein is elevated in the plasma of patients with PAH compared with healthy controls [43]. The cellular sources of IL-18 seem to be medial but not intimal SMC of the pulmonary arteries. The IL-18 receptor, IL-18Rα, is expressed in the vascular wall on medial SMC, EC, and infiltrating mononuclear cells [43]. The overexpression of IL-18 in the lungs resulted in mild PAH and RV dilation, but the genetic ablation of IL-18 did not attenuate hypobaric hypoxia-induced PAH and right ventricular hypertrophy [44], suggesting that IL-18 may be a disease modifier but not the causal factor for PAH development. In line with the findings documented above with respect to IL-6, IL-18 interacts differently with cells possessing a BMPR2 mutation than with those without. Specifically, IL-18 increases the adhesion of monocytes to pulmonary arterial microvascular endothelial cells lacking BMPR2 [45]. However, this interaction did not change endothelial barrier function [45].
Chemokines and Their Receptors
The chemokine receptor CCR7 and its ligands have a key role in the homing of T cells and dendritic cells to lymphoid organs [46]. Notably, CCR7 has been found to be downregulated in circulating leukocytes of PAH patients [47]. Substantiating a functional role for this signaling axis, mice lacking CCR7 developed PAH and showed increased perivascular infiltration of leukocytes, consisting mainly of T and B cells [47]. Analogously, the antagonism of CCR7 by CCR7-neutralizing antibodies potentiated PAH, bronchus-associated lymphoid tissue (BALT) formation, and plasma IgG levels in monocrotaline-treated rats [41]. These data suggest that chemokines and their receptors might affect perivascular inflammation by negatively regulating lymphocyte trafficking and BALT formation. As such, CCR7 agonists may bear therapeutic potential in PAH, yet this hypothesis remains to be tested in appropriate model systems.
However, mRNA levels of CCL19 and CCL21, the ligands of CCR7, were not significantly different in lungs of patients with idiopathic PAH as compared to controls [48], although CCL19 is thought to be a sensitive marker for perivascular inflammation in systemic sclerosis. Yet, as CCR ligands are typically expressed in lymphatic vessels and lymphoid organs, they may act locally rather than systemically. Chemokine CXC ligand 13 (CXCL13), which plays an analogous role as CCL19/CCL21-CCR7 for B-cell homing, was elevated in patients with idiopathic PAH and chronic thromboembolic pulmonary hypertension (CTEPH), but there was only a weak association between serum CXCL13 and markers of disease severity and outcome [49]. A possible explanation for these seemingly disparate findings could be that measurements of chemokines in whole lungs or plasma do not reflect their local expression at important sites of disease.
Another important chemokine receptor in the development of PAH is CCR5. CCR5 is expressed in the pulmonary vascular wall and on macrophages, and it has been shown to be upregulated in PAH [50]. In human tissues, CCR5 is found in endothelial cells, smooth muscle, and macrophages in PAH patients, and it is also upregulated following chronic hypoxia in rodent models [50]. Mice deficient in CCR5 were protected from hypoxic PAH and demonstrated decreased proliferation of PASMC [50]. Elegant experiments using bone marrow chimeras delineated the importance of both parenchymal and leukocyte CCR5 in this process. These data are particularly exciting given the availability of a CCR5 inhibitor currently approved for the treatment of HIV infection. This pathway warrants further clinical study in PAH patients. The crosstalk between macrophage and PASMC CCR5 appears to be synergistic with the CCL2-CCR2 pathway, as blockade of both pathways in mouse models results in additional benefit when compared to blocking either CCR2 or CCR5 alone [51].
Leukotriene B4 (LTB4)
Leukotriene B4 (LTB4) was found to be significantly elevated in the bronchoalveolar lavage fluid of PAH animals and in the blood of PAH patients [52]. Macrophages, expressing high levels of leukotriene A4 hydrolase, the biosynthetic enzyme for LTB4, appear to be the main source of LTB4 [52]. Macrophage-produced LTB4 directly induced the apoptosis of PAEC and the proliferation of PASMC via a pathway involving endothelial sphingosine kinase 1 and endothelial nitric oxide synthase [52]. In addition, LTB4 enhanced the proliferation, migration, and differentiation of pulmonary artery adventitial fibroblasts in a dose-dependent manner through its cognate G-protein-coupled receptor [53]. LTB4 activated adventitial fibroblasts by upregulating p38 mitogen-activated protein kinase as well as the Nox4-signaling pathway [53]. Blocking LTB4 formation or antagonizing its receptor reversed MCT-induced PAH and prevented PAH-related death, making this a seemingly promising avenue for translational investigation [54]. However, the reversible protease inhibitor ubenimex (bestatin), which blocks the conversion of LTA4 to LTB4 by leukotriene A4 hydrolase, failed to improve pulmonary vascular resistance or exercise capacity in patients with pulmonary arterial hypertension in the Phase 2 LIBERTY study (NCT02664558) [55].
In spite of this trial failing to reach its primary endpoint, promising preclinical studies still implicate LTB4 in PAH pathogenesis in specific subpopulations. A recent study demonstrated that in BMPR2 haploinsufficient rats, the viral transduction of 5-lipoxygenase (5-LO), the enzyme that produces LTB4, results in the development of severe PAH [56]. Of note, BMPR2 mutations are among the most common human mutations found in hereditary PAH, albeit with low penetrance [57]. The transduction of 5-LO resulted in the development of PAH in these rats with a frequency similar to that of humans with BMPR2 mutations. Additionally, the neointimal cells in these animals developed a spontaneous, endogenous expression of non-viral 5-LO, a finding that was also seen in patient tissue [56]. Together, these data demonstrate a fundamental interplay between leukotrienes, transforming growth factor (TGF)-β/bone morphogenetic protein (BMP) signaling, and the development of PAH. Importantly, these data also suggest that LTB4-based therapies may show the most promising clinical effectiveness in patients with BMPR2 mutations.
Macrophage Migration Inhibitory Factor (MIF)
Macrophage migration inhibitory factor (MIF), originally identified as a T-cell-derived cytokine that inhibited the random migration of macrophages, has likewise been found to be increased in PAH [58]. MIF is now considered an important pro-inflammatory mediator secreted by numerous cells, including T cells, macrophages/monocytes, ECs, and SMCs, that can in turn induce the production of cytokines such as IL-1β, IL-6, and IL-8. In addition, MIF regulates vascular cells through its binding to CD74, which is highly expressed in the endothelium of muscularized pulmonary arterioles and in cultured pulmonary ECs from IPAH patients. Curative treatments with the MIF antagonist ISO-1 or anti-CD74 neutralizing antibodies partially reversed the development of pulmonary hypertension and substantially reduced inflammatory cell infiltration in the rat monocrotaline model of PAH [59]. In addition to its effects on ECs, MIF may act on the proliferation of PASMCs through the activation of the ERK1/2 and JNK pathways in hypoxic pulmonary hypertension [60].
Hypoxia-Induced Mitogenic Factor (HIMF)
Hypoxia-induced mitogenic factor (HIMF) is a well-known marker for alternatively activated (M2) macrophages [61]. HIMF expression in the remodeled pulmonary vasculature positively correlated with increased mean pulmonary arterial pressure [62]. A single systemic injection of recombinant HIMF protein caused early lung inflammation (day 7) and PAH development (day 30) [62]. HIMF stimulates EC activation and apoptosis in the lung via the HIF-1/vascular endothelial growth factor (VEGF)-A/VEGFR2 signaling pathway [63]. Furthermore, these HIMF-stimulated ECs produce growth factors and chemokines that enhance perivascular immune cell recruitment and SMC growth. In addition, HIMF has been shown to induce expression of the pro-inflammatory cytokine IL-6 in primary lung fibroblasts via the IKK-β/NF-κB/HIF-1 pathway [63].
High Mobility Group Box-1 (HMGB1)
Generally considered to be an atypical cytokine, high mobility group box-1 (HMGB1) is a nuclear molecule that contributes to DNA stability by regulating transcription, repair, and recombination [64]. In response to cellular stress or damage (e.g., hypoxia, infection, sterile inflammation), HMGB1 can be released to the extracellular environment, where it functions as a danger-associated molecular pattern (DAMP) [65], binding multiple receptors, including toll-like receptor 4 (TLR4), TLR2, and the receptor for advanced glycation endproducts (RAGE) [65,66]. HMGB1 was recently shown to be a marker of lytic cell death that is released during cellular rupture following inflammasome activation [66]. HMGB1 levels are elevated in the serum and lungs of PAH patients and animal models of PAH [11,67,68]. Circulating levels of HMGB1 have also been found to moderately correlate with mean pulmonary artery pressure (mPAP) [67]. Histological examination of patients with severe PAH revealed strong extra-nuclear HMGB1 staining in the perivascular adventitia and intima, indicating potentially relevant sites of HMGB1 release [68]. Additionally, endothelial cells isolated from the small pulmonary arteries of patients with idiopathic PAH showed elevated basal production of HMGB1 and RAGE [69]. Together, these data point to a potentially important role for HMGB1 in the development of PAH.
The pharmacological inhibition of HMGB1 has been an effective strategy for mitigating PAH in several animal models. Treatment of hypoxic mice and MCT-treated rats with an HMGB1 neutralizing antibody can significantly attenuate increases in RVSP and protect against pulmonary vascular remodeling [67,70]. Chronic inhibition of HMGB1 by glycyrrhizin, a natural anti-inflammatory factor that binds HMGB1 directly, also protects against the development of PAH in MCT-treated rats [11]. Similarly, inhibiting HMGB1 receptors has the potential to mitigate PAH. Mice lacking TLR4, one of the main receptors required for pro-inflammatory HMGB1 signaling, are protected against chronic hypoxia-induced PAH [67]. In comparison, the knockdown of RAGE in these mice did not appear to offer the same benefits, suggesting an important role for HMGB1-TLR4 signaling specifically [67]. However, other studies have also found a contribution downstream of RAGE, suggesting that both RAGE and TLR4 may be important for mediating the pro-inflammatory effects of HMGB1 [69,71]. The octapeptide, P5779, which specifically inhibits the interaction between HMGB1 and TLR4, is capable of reversing established disease in Sugen-hypoxia rats as well as improving RVSP, right ventricular dysfunction, and pulmonary vascular remodeling [68]. P5779 was shown to prevent PASMC migration and proliferation, suggesting a potential direct effect of HMGB1 on the lung vasculature [68]. The latter notion is supported by in vitro studies demonstrating a proliferative effect of physiological HMGB1 concentrations on PASMC and EC, presumably via the activation of p38, ERK, and JNK [72]. This potential of P5779 as a promising therapeutic for PAH will be the focus of a future translational investigation.
Complement
The complement cascade forms a crucial element of innate immunity, and it has been implicated in a variety of pulmonary and vascular disorders, including acute lung injury [73]. Recently, an important role for immune complexes and complement activation in PAH has been elucidated [74]. In both human tissue and animal models, activation of the classical and alternative complement pathways was seen in perivascular lesions [74]. Mice deficient in several elements of the complement cascade were protected from hypoxia-induced perivascular inflammation. Of note, the expression of pro-inflammatory granulocyte-macrophage colony-stimulating factor (GM-CSF) was found to be downstream of complement activation, as were the proliferative responses of the pulmonary vascular tissues [74]. Additionally, the complement factors C3 and C4 have been identified as PAH biomarkers [75,76], and C3 deficiency partially protects mice from chronic hypoxic PAH, with an associated dampening of immune responses [77]. These results bear particularly important translational value, since complement inhibitors are currently approved and under investigation for the treatment of several conditions, including a variety of glomerular kidney diseases [78].
Macrophages
An early and persistent accumulation of macrophages has been observed in perivascular lesions in many patient cohorts and animal models of PAH [2,79,80]. Indeed, a recent study of unbiased computational flow cytometric analysis of human lungs from patients with iPAH and healthy donors demonstrated the profound recruitment of macrophages to isolated pulmonary artery samples [81]. Interventions targeting macrophages have confirmed their role in PAH and pulmonary vascular remodeling. An intratracheal depletion of alveolar macrophages with liposome-encapsulated clodronate attenuated the increase in pulmonary arterial pressure in response to chronic hypoxia in rats [82]. Furthermore, the depletion of macrophages normalized the increase in RVSP seen in BMPR2 knockout mice exposed to chronic hypoxia [83]. To the contrary, an intraperitoneal depletion of macrophages with clodronate liposomes resulted in worsened secondary pulmonary hypertension in a pulmonary fibrosis model; yet, no details about the change of resident alveolar macrophages or circulating macrophages were provided in this study [84]. These data indicate that the subtype and location of macrophages may be a critical factor at play, which is a view that is consistent with the fact that pulmonary macrophage phenotypes change over time in PAH models [85].
Over the past decade, macrophages have emerged as highly heterogeneous cells that can rapidly change their function in response to the local microenvironment. Accordingly, macrophages have been classified into classically (M1) and alternatively (M2) activated phenotypes. While these classifications are fluid and complex, M1 macrophages can be thought to arise during pro-inflammatory states, triggered by Toll-like receptor activation and interferons. On the other hand, M2 macrophages are found in more chronic states such as allergy and can be involved in non-resolving inflammation and tissue repair [86]. The pharmacological inhibition or genetic deletion of CX3CR1, which is elevated in the lungs of mice with chronic hypoxic PAH, protected mice against hypoxic PAH [87]. In parallel, the loss of CX3CR1 favored M1 macrophage polarization, and this shift from M2 to M1 abrogated the ability of macrophage-conditioned medium to induce PASMC proliferation in vitro, suggesting a pathophysiological role of CX3CR1 via M2 macrophage polarization in PAH [87]. The kinin B1 receptor, which is expressed on macrophages, was upregulated in the lung tissue of MCT-challenged pneumonectomized rats. Treatment with a specific kinin B1 receptor antagonist reduced macrophage counts in bronchoalveolar lavage fluid, as well as CD68 + macrophage counts in the perivascular area, suggesting an important role for kinins in monocyte/macrophage recruitment and differentiation [88].
In addition, circulating monocytes can take on an endothelial-like phenotype once adhering to endothelial cells [89,90]. Both macrophages and hyperproliferative endothelial-like cells were observed in plexiform lesions in PAH, and they may indicate the conversion of monocytes to endothelial-like cells that contribute to pulmonary vascular remodeling in PAH. This notion is partially supported by the finding that carboxyfluorescein diacetate-labeled RAW 264.7 macrophages were found retained in the lung vasculature of hypoxic athymic nude mice up to twelve days after injection [91].
Dendritic Cells
Circulating activated myeloid-derived suppressor cells (MDSCs) are significantly increased in PAH patients and correlate with increasing mean pulmonary artery pressure [92]. MDSCs comprise a phenotypically diverse subpopulation of cells, of which dendritic cells (DCs) are important components [93]. Belonging broadly to a class of innate lymphoid cells [94], there are at least four main subsets of DCs identified in both mouse and human, including conventional cDC1 and cDC2, plasmacytoid DCs, and monocyte-derived dendritic cells (MoDCs) [95]. Patients with idiopathic PAH showed a significant decrease in the number of MoDCs, as well as changes in their function [96]. Although the profile of membrane costimulatory molecules of circulating MoDCs in idiopathic PAH was similar to that of control subjects, PAH MoDCs retained higher levels of the T-cell activating molecules CD86 and CD40 after dexamethasone pretreatment. MoDCs from PAH patients induced a stronger activation and proliferation of CD4 + T cells, which was associated with a reduced expression of IL-4 (T helper 2 response) and a higher expression of IL-17 (T helper 17 response) [97]. Further work remains to be done to identify the phenotypes and roles of other circulating DC subsets in PAH.
Upon encountering antigen, an immature DC adopts a mature state; the mature DC can then activate T and/or B lymphocytes to induce an immunological response. Surprisingly, in both human and experimental PAH, immature DCs accumulate in remodeled pulmonary vessels, and their counts increase with the degree of pulmonary arterial remodeling [98]. As the pulmonary arteries, unlike the airways, are not thought to be frequently exposed to exogenous pathogens, with the important exception of some ultra-fine particles, the recruitment of DCs is likely a response to tissue damage and the release of damage-associated molecular patterns. Such sterile inflammation may potentially link to the development of autoimmunity, as discussed below.
Mast Cells
The accumulation and activation of perivascular mast cells in the lung are prominent histopathological features in idiopathic PAH patients and PAH rats [99,100,101]. Mast cell activation induces the release of various potent molecules via degranulation. The main granule molecules are histamine, serotonin, proteases, lipid mediators, cytokines, and chemokines. Notably, the inhibition of mast cell activation, proliferation, or degranulation has proven effective in attenuating PAH and pulmonary vascular remodeling in several animal models [99,102,103,104].
Mediators released by mast cell degranulation can interact directly with vascular cells and promote pulmonary vascular remodeling [105]. Mast cell chymase regulates vasomotor tone indirectly, as it can stimulate the regional production of angiotensin II, the activation of endothelin-1, and the secretion of matrix metalloproteases [106,107]. Mast cell-derived tryptase can induce PASMC proliferation and migration, fibroblast proliferation, as well as an enhanced synthesis of fibronectin and matrix metalloproteinase-1 via PAR-2 [105,108]. Mast cells also secrete several isoforms of VEGF, which may regulate the neoangiogenesis observed in PAH [109]. In idiopathic PAH patients, chymase-positive mast cells were located in close proximity to regions with a prominent expression of big-endothelin-1 in the pulmonary vessels [106]. Intervention with the mast cell inhibitors cromolyn and fexofenadine decreased total tryptase levels, and it was accompanied by a drop in VEGF and circulating proangiogenic CD34 + CD133 + progenitor cells, as well as an increase in exhaled nitric oxide [103]. Together, these data suggest that therapies targeting mast cells may be an important translational strategy for PAH treatment, and that this could perhaps be achieved using drugs that are currently available.
Additionally, mast cells sit at a major crossroads of both innate and adaptive immune responses and could therefore control PAH progression and pulmonary vascular remodeling by regulating multiple arms of the immune response. Mast cell-derived IL-6 has been shown to stimulate B cell-related immune responses in the MCT-induced PAH model in rats [32]. Furthermore, mice reconstituted with a human immune system were shown to develop severe PAH in response to chronic hypoxia, and this response could be blunted by anti-mast cell treatment [110]. This result is significant for showing, first, that a factor (or factors) exists specifically in the human immune system that renders mice, which normally do not develop severe PAH following chronic hypoxia, susceptible to advanced disease. Second, these results demonstrate a difference in mast cell distribution or function between mice and humans that makes mast cell targeting a potential strategy in human disease.
Recently, it has been shown that mast cells play a more complex role in the overall immune response than previously recognized. They can make direct contact with dendritic cells to regulate their antigen-presenting ability and subsequent T cell activation [111]. Increasing evidence demonstrates the interaction between mast cells and T cells in inflammatory models; for instance, mast cells promote the activation, proliferation, and cytokine secretion of CD4 + T cells, induce CD8 + T cell recruitment, and inhibit regulatory T cell (Treg) activity via secreting histamine and IL-6 [107]. The role of the interaction between mast cells and other immune cells in PAH will require further investigation.
Importantly, the contribution of mast cells to the progression of PAH appears to occur early on in disease development. In MCT-treated rats, the pharmacological inhibition of mast cell degranulation and c-kit from the onset of disease (treatment from day 1 to 21 following MCT injection) significantly attenuated RVSP increase and vascular remodeling; whereas, delayed treatment (from day 21 to 35 post-MCT injection) neither improved hemodynamics nor vascular remodeling [100]. Other effective treatments targeting mast cells in various PAH models were also given preventively in support of this observation [99,104].
T Cells
Two types of tertiary lymphoid tissue (tLT) have been reported in the lungs of PAH patients: perivascular tLT and BALT, which is composed of B- and T-cell zones with high endothelial venules and dendritic cells. Lymphocyte survival factors and lymphorganogenic cytokines and chemokines were highly expressed in tLT from idiopathic PAH patients [48]. These tLT raise the fascinating possibility of a local adaptive immune response in PAH lungs.
The perivascular infiltration of inflammatory cells in MCT-treated rats is characterized by CD4 + T cells [112]. Rag1 −/− mice, which are devoid of T and B cells, were protected from the development of pulmonary vascular lesions when exposed to MCT, and the adoptive transfer of T cells from control mice into Rag1 −/− mice restored vascular injury [112]. Athymic nude rats, which specifically lack T cells, given SU5416 alone, without subsequent hypoxia, developed severe PAH and vascular remodeling, which was not observed in euthymic animals [113]. The reconstitution of these animals with immunocompetent splenocytes protected SU5416-treated animals from developing severe PAH, again demonstrating the importance of an intact immune system for the normal progression of PAH [113]. Further reconstitution studies have shown that the specific re-population of T cell-deficient rats with Tregs reduces the exaggerated disease process seen in these animals [114]. This effect may be even more pronounced in females, indicating an important role for Tregs in the sex-specific differences observed in PAH patients [115]. These findings indicate that CD4 + Tregs inhibit PAH progression via negatively regulating T cell immune responses and open up an intriguing avenue for cell-based therapies in PAH.
T cell changes in PAH can also be observed in the peripheral blood, although how they relate to immune cell profiles and vascular remodeling processes in the lung remains unclear. Subclass analysis of peripheral blood from patients with idiopathic PAH showed that CD4 + CD25 high Treg cells increased, while CD8 + cytotoxic T cells decreased relative to controls [116]. However, others have found no difference in circulating CD4 + CD25 + CD127 low Treg numbers or in the overall percentage of CD4 + T cells between PAH patients and controls [117]. Notwithstanding, circulating Tregs in PAH patients appeared to be dysfunctional, with low levels of pSTAT3, which is a major signaling pathway in Tregs [117]. These findings, namely the differences between pulmonary cell distributions and those in the blood, are in line with data from cancer research showing that abundances of Tregs, CD4 + , and CD8 + T cells differ markedly between tumors and the periphery [118].
B Cells and Autoantibodies in PAH
Similar to T cells, B cells are the main components of tLT in PAH lung tissue, and the overall area taken up by tLT correlates with PAH severity and pulmonary vascular remodeling [41,48]. Rats with B cell deficiency are less susceptible to severe PAH and pulmonary vascular remodeling induced by MCT or Sugen-hypoxia [32]. A recent clinical trial of rituximab (the anti-CD20, B-cell targeted therapeutic antibody) for the treatment of PAH associated with systemic sclerosis (SSc-PAH) demonstrated safety and trends toward efficacy [119]. B cell depletion in SSc-PAH has also been shown to alter the profiles of circulating antibodies [120]. Additionally, several other reports have shown positive results in treating patients with systemic lupus erythematosus (SLE)-PAH with B-cell depletion therapy [121].
Given that B cells are fundamental to the humoral immune response, their role in PAH was mainly thought to be in the regulation of auto-antibody production. Increased autoantibody levels are commonly detected in PAH-associated autoimmune diseases [122,123]. Indeed, lymphoid tissues adjacent to vascular lesions contain germinal centers, indicating that they can locally produce antibodies in a self-sustaining fashion [48]. Serum autoantibodies (anti-RNP, anti-Sm, and antiphospholipid antibody) in SLE/SSc patients were found to be risk factors for the development of PAH [124][125][126][127]. In various PAH animal models, an increased titer of autoantibodies to pulmonary vascular cells was seen following the disease development [32,41]. Injection of control wild-type animals with autoantibodies-containing plasma or enriched IgG was sufficient to produce vascular remodeling and an increase in RVSP, indicating that such auto-antibodies are themselves sufficient to trigger PAH development [41]. Consistently, serum IgG from patients with SSc-PAH and idiopathic PAH has been shown to cause the constriction of PASMC in a collagen matrix [128]. Similarly, the transfer of SSc-IgG-containing autoantibodies into healthy C57BL/6J mice led to more abundant vascular α-smooth muscle actin expression and inflammatory pulmonary vasculopathy [129]. Anti-endothelin receptor type A and anti-Ang receptor type-1 auto-antibodies increased endothelial cytosolic Ca 2+ concentrations in isolated perfused rat lungs, demonstrating that circulating auto-antibodies are not mere bystanders in PAH but are actually themselves vasoactive and pro-remodeling [129].
Neutrophils
Little attention has so far been paid to neutrophils in the pathogenesis of PAH. In the few published reports, it could be shown that specific proteins expressed by neutrophils, such as myeloperoxidase (MPO) and neutrophil elastase (NE), are elevated in peripheral plasma from PH patients and correlated with the severity of PAH and clinical outcome [130]. Selective NE inhibitors attenuated or even reversed PAH and pulmonary vascular remodeling in the rat model of monocrotaline-induced PAH, and they are the focus of current clinical investigation [131,132]. Even more exciting from a clinical perspective, late intervention with an MPO inhibitor stopped the progression of experimental chronic obstructive pulmonary disease and partially protected against PAH [132].
Recently, the neutrophil/lymphocyte ratio (NLR) has been proposed as a new inflammatory biomarker and can be used as an indicator of systemic inflammation in many diseases, including PAH. In patients with CTEPH or PAH with sarcoidosis, NLR correlated significantly with pulmonary vascular resistance [133,134]. Similarly, NLR correlated with important prognostic biomarkers in PAH patients [135], and high NLR was associated with high morbidity and mortality in patients undergoing thromboendarterectomy [134]. These findings suggest that neutrophils may be more involved in the pathogenesis of PAH than is currently appreciated.
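Because NLR is derived directly from a routine complete blood count, it is straightforward to compute. The sketch below is purely illustrative (the function name and example counts are hypothetical, not taken from the cited studies):

```python
def neutrophil_lymphocyte_ratio(neutrophils: float, lymphocytes: float) -> float:
    """Compute NLR from absolute counts (e.g., 10^3 cells/uL) taken from a CBC."""
    if lymphocytes <= 0:
        raise ValueError("lymphocyte count must be positive")
    return neutrophils / lymphocytes

# Example: 5.2 x 10^3 neutrophils/uL and 1.3 x 10^3 lymphocytes/uL
nlr = neutrophil_lymphocyte_ratio(5.2, 1.3)
print(round(nlr, 2))  # prints 4.0
```

Note that the ratio is unitless, so absolute counts or percentages from the same differential give the same value; what counts as a "high" NLR is cohort-dependent and must come from the studies themselves.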
Notably, recent data have linked pulmonary vascular remodeling with the formation of neutrophil extracellular traps (NETs), which play an important role in pulmonary inflammation and autoimmune diseases [136]. NETs are composed of decondensed chromatin fibers coated with granular and cytoplasmic proteins from neutrophils, such as MPO, NE, and α-defensins, and they are released into the extracellular space in response to biochemical, pharmacological, or mechanical stimulation. In addition to being expressed on NET fibers, NE and MPO also regulate NET formation [137]. Increased NETosis has been documented in patients with idiopathic PAH as well as CTEPH with NET-forming neutrophils and extensive areas of NETosis in occlusive plexiform lesions and intrapulmonary thrombi [138]. NETs may contribute to vascular remodeling via different pathways in that they have been shown to induce NF-κB-dependent endothelial angiogenesis in vitro and increased vascularization of matrigel plugs in vivo but also stimulate PASMC proliferation in vitro and the release of endothelin-1 in human PAEC [138].
In addition, neutrophils have the ability to regulate BALT formation, which has been linked to PAH progression [41]. Specifically, the increased propensity of BALB/c mice to form BALT as compared to C57BL/6 mice is associated with neutrophil recruitment [139]. In fact, BALB/c mice have a higher number and percentage of neutrophils than C57BL/6, and the depletion of neutrophils with anti-Ly6G antibody attenuated lipopolysaccharide (LPS)-induced BALT formation [139]. However, so far, there have been no reports of a relationship between neutrophils and BALT in PAH models.
Neutrophils, along with pulmonary vascular smooth muscle cells, are an important source of elastase. Neutrophil elastase is critically involved in lung vascular remodeling, and the inhibition of elastase with small molecule agents or overexpression of its natural antagonist, elafin, reverse PAH in a variety of animal models ( Figure 2) [132,140]. In addition to blocking elastase, elafin has been shown to enhance bone morphogenic protein signaling and inhibit NF-κB activity, providing a profound anti-inflammatory effect. A safety and tolerability trial examining elafin as a PAH treatment is currently underway (NCT03522935).
Phenotype Changes of Pulmonary Vascular Cells
Cells residing within the pulmonary vascular wall can serve as both the source of inflammatory mediators as well as their targets. This unique role positions pulmonary vascular cells at a critically important regulatory junction for disease progression and maintenance [141].
Endothelial Cells
The pulmonary artery endothelial cell is thought by most researchers to represent the initial site of disease in PAH, although this is a topic of significant debate [4,142]. Endothelial cells, under various conditions, can themselves serve as non-professional immune cells, synthesizing and secreting inflammatory mediators, as well as being the local targets of inflammation. However, the potential mechanisms of initial endothelial cell activation that may ultimately result in the development of PAH remain unclear. Hypoxia, the cardinal trigger of group III pulmonary hypertension, can decrease endothelial peroxisome proliferator-activated receptor-γ coactivator-1α (PGC-1α), leading to endothelial dysfunction via increased ROS formation, mitochondrial dysfunction, NF-κB activation, and the subsequent secretion of IL-6 and TNF-α [143]. Therefore, upregulating PGC-1α could potentially improve endothelial function and attenuate the inflammatory response in the endothelium [143]. Leptin, which is elevated in idiopathic PAH and SSc-PAH patients and synthesized by dysfunctional pulmonary endothelium, inhibits Treg proliferation while enhancing conventional T cell proliferation via modulating T cell autophagy [117,144]. These are only a few examples of endothelial roles in PAH, which have been reviewed extensively elsewhere [145,146].

Figure 2. Released from neutrophils, as well as smooth muscle cells, neutrophil elastase influences multiple steps in the pathogenesis of PAH, and it is the subject of significant therapeutic interest. Elastase can degrade the extracellular matrix (ECM), liberating active bone morphogenic protein (BMP9) and transforming growth factor (TGF-β). These cytokines induce phenotypic alterations in smooth muscle cells (SMC), fibroblasts, and macrophages, detailed in the text. Additionally, elastase can cleave and activate interleukin-1β (IL-1β), which is a potent inflammatory cytokine that further stimulates macrophage migration and activation to the pulmonary vasculature. Finally, elastase secretion is involved in the formation of neutrophil extracellular traps (NETs), which themselves can induce endothelial apoptosis, a key feature of PAH.
The most common heritable mutations resulting in PAH, as well as a common locus of acquired mutations, lie in bone morphogenetic protein receptor type 2 (BMPR2) [147]. Such mutations may affect the sensitivity of PAEC to inflammatory inputs. Recent studies have shown that BMPR2 deficiency promotes an exaggerated inflammatory response in PAH progression [148]. Challenged with injections of MCT combined with intratracheal instillation of replication-deficient adenovirus expressing 5-lipoxygenase, BMPR2 +/− mice developed a sustained increase in RVSP, which was coupled with marked perivascular inflammation of the remodeled vessels and a significantly higher expression of the chemokine macrophage inflammatory protein (MIP)-1α and fractalkine receptor in the lung [148]. Along similar lines, acute exposure to LPS increased lung and circulating IL-6 levels in BMPR2 +/− mice to a greater degree than in wild-type controls, and chronic LPS administration caused PAH in BMPR2 +/− mice but not in wild-type controls [147]. The recruitment of monocytes by PAEC isolated from human carriers of BMPR2 mutations was higher compared to PAEC from non-carriers and from controls, which was likely related to elevated intercellular adhesion molecule-1 (ICAM-1) expression [36]. Such data show the complex interplay between the inflamed, dysfunctional endothelium, which overexpresses adhesion molecules, and the further recruitment of inflammatory cells to diseased vessels.
In addition, PAEC develop a marked pro-inflammatory phenotype in PAH that is evident as increased expression of ICAM-1, vascular cellular adhesion molecule (VCAM-1), and E-selectin on the endothelium of distal pulmonary arteries in patients with idiopathic PAH [59]. Analogously, freshly isolated human PAEC from idiopathic PAH patients displayed a marked pro-inflammatory transcriptional signature, including elevated expression of IL-1α, IL-6, IL-8, IL-12, MCP-1, E-selectin, ICAM-1, P-selectin, and VCAM-1 [59]. In plasma from idiopathic PAH patients, P-selectin was found to be increased and thrombomodulin decreased [149]. Similarly, idiopathic PAH was associated with increased plasma and lung levels of CCL2, with PAEC from idiopathic PAH patients releasing twice as much CCL2 as did PASMC [150]. These changes resulted in a marked increase in monocyte migration, and notably, CCL2-blocking antibodies reduced endothelial chemotactic activity by 60% [150].
With the progression of PAH, changes in signaling pathways in PASMC may alter their pro-inflammatory capabilities. Notably, smooth muscle cells have been shown to relay inflammatory signals in the lung via their ability to secrete pro-inflammatory cytokines [155]. P-selectin has been found to be persistently upregulated in the PASMC of human PAH and of hypoxia-induced experimental PAH [156]. Furthermore, there is a marked increase in the expression of phosphorylated, inactive PTEN in the pulmonary vasculature of PAH patients as compared to normal lung tissue [157]. The inactivation of PTEN in PASMC has previously been shown to induce PAH and hypersensitivity to hypoxia [157]. PTEN depletion combined with hypoxia resulted in a synergistic increase in macrophage accumulation and sustained IL-6 production, which may imply that interactions between activated PASMC and macrophages promote an inflammatory environment via a mutual feed-forward activation loop [157].
Fibroblasts
Adventitial fibroblasts were originally thought to simply provide mechanical strength to tissues by producing extracellular matrix and providing stromal support. More recently, they were found to act as a "sentinel cell" in the vessel wall, responding to various stimuli, such as vascular distension or hypoxia [158]. In response to such stimuli, pulmonary artery adventitial fibroblasts assume a markedly pro-inflammatory phenotype characterized by an increased production of chemokines, cytokines, and adhesion molecules [158]. In addition, interactions between fibroblasts and leukocytes at sites of chronic inflammation appear to promote sustained leukocyte survival and retention, resulting in delayed or failed resolution of the inflammatory lesion (Figure 3) [159,160].

Figure 3. Activated fibroblasts drive PAH pathogenesis. Pulmonary artery adventitial fibroblasts respond to multiple stimuli and then propagate inflammation in PAH. In response to inflammatory cytokines, mechanical stretch, and hypoxia, fibroblasts adopt an activated, pro-inflammatory state. This phenotype is characterized by the overexpression of surface adhesion molecules and other inflammatory surface receptors, and secretion of cytokines and chemokines, importantly IL-6, IL-1β and CCL2. Together, these molecules stimulate both the adhesion and activation of nearby macrophages to propagate the inflammatory response. Activated fibroblasts also alter extracellular matrix homeostasis, secreting ECM proteins, as well as matrix metalloproteinases (MMP) to break down ECM. Finally, fibroblasts themselves proliferate in response to abnormal stimuli encountered in PAH.
Fibroblasts taken from PAH pulmonary artery adventitia display a fundamentally different phenotype compared with those from normal controls [161]. Pulmonary adventitial fibroblasts from a chronic hypoxia model of PAH expressed a persistently pro-inflammatory phenotype, which is defined by a high expression of IL-1β, IL-6, CCL2, CXCL12, CCR7, CXCR4, CD40, CD40L and VCAM-1 [162].
The exposure of naïve bone marrow-derived macrophage (BMDMs) in vitro to intact whole pulmonary artery explants from hypoxia-induced PAH animals significantly increased macrophage activation [161]. However, removal of the adventitia from the PA explant resulted in a marked decrease in transcriptional signatures of activation in BMDM [161]. Exposing naïve BMDMs to conditioned medium generated by adventitial fibroblasts from human idiopathic PAH and hypoxia-induced PAH animals increased the transcription of Cd163, Cd206, Il4ra and Socs3, indicating BMDM activation. These data suggest that activated fibroblasts in the remodeled PA adventitia of animals and humans with PAH provide soluble factors required for macrophage polarization, lending credence to an "outside-in" hypothesis of how inflammation propagates through the vessel wall in PAH [4,161]. Among the possible soluble factors, paracrine IL-6 release may activate macrophages via STAT3, HIF1 and C/EBPβ signaling, independent of IL-4/IL-13-STAT6 and TLR-MyD88 signaling [161].
Conclusions and Future Directions
In summary, a series of complex interactions between inflammatory cells, vascular cells, and soluble mediators in the lung and periphery promote perivascular inflammation and, presumably, also pulmonary vascular remodeling in PAH (Figure 1). Over the past 15 years, the fundamental role of the immune system in the development of PAH has been increasingly appreciated. The disease is no longer thought to be driven solely by dynamic vasoconstriction; inflammation, metabolic processes [163], and even changes akin to oncogenesis [164] are now recognized as critical paradigms in the pathogenesis of PAH. In spite of this new understanding, the majority of our approved clinical therapies for PAH are still vasodilators [3]. Recognition of the importance of inflammation in driving pulmonary vascular remodeling has now spawned a new enthusiasm for clinical and preclinical trials of therapies that go beyond vasodilation to target the cellular and soluble components of the immune system directly, in hopes of slowing or even reversing lung vascular remodeling in this devastating disease. In spite of the wealth of preclinical and clinical data in support of the causal nature of inflammation in the development of PAH, multiple clinical trials investigating immunomodulatory therapy for PAH (discussed above) have been negative or underwhelming. This gives pause to the enthusiasm surrounding this theory of PAH origins and/or progression. Identifying the right patient population and the timing of therapy at which the largest benefit may be derived represents a critical challenge for the translation of this theory to clinical reality. The extent to which vascular remodeling can be reversed, as well as the reversibility of immune phenomena (such as loss of self-tolerance) seen in PAH, is not clearly known. This leaves open the possibility that patients with severe disease, recruited to clinical trials, will not derive benefit from such therapies.
Importantly, these setbacks do not necessarily refute the promise of targeted immunotherapy for the treatment of PAH: it took approximately 4500 years from the first attempts to cause tumor regression via localized infection by Imhotep in 2600 BC to the award of the Nobel Prize in Physiology or Medicine for the discovery of checkpoint inhibitors in 2018. Notably, the revised definition of PAH, which has lowered the minimum mean pulmonary artery pressure for diagnosis from 25 to 20 mmHg, presents a new opportunity to test immunologically based treatments in patients at an earlier stage of disease [165].
Conflicts of Interest:
The authors declare no conflict of interest.
The Analysis of Acute and Subacute Toxicity of Silver Selenide Nanoparticles Encapsulated in Arabinogalactan Polymer Matrix
The acute and subacute toxicity of newly synthesized silver selenide nanoparticles encapsulated in a natural polymeric matrix of arabinogalactan has been studied. The nanocomposite is a promising material for the design of diagnostic and therapeutic drugs. It can also be used for the preparation of fluorescent labels and in thermal oncotherapy. The employment of binary nanocomposites enables one to unveil the potential hidden in the metals which constitute these composites. The study of acute toxicity, carried out by the oral administration of the nanocomposite at a dose of 2000 mg/kg, has shown that the compound belongs to the low-toxicity substances of the 5th hazard class. With the subacute oral administration of the nanocomposite at a dose of 500 μg/kg, slight changes are observed in the brain tissue and liver of experimental animals, indicating the development of compensatory-adaptive reactions. In the kidneys, the area of the Shumlyansky-Bowman chamber decreases by 40.5% relative to the control group. It is shown that the protective properties of selenium contained in the composite help to reduce the toxicity of silver.
Introduction
Silver selenide is a promising material for the creation of biomedical theranostic preparations. Because silver selenide luminescence lies in the region of electromagnetic radiation that is not absorbed by biological tissues, the application of its quantum dots (QDs) as fluorescent labels appears promising [1,2]. Silver selenide can be employed in oncology to improve the efficiency of photothermal therapy, the process of destroying cancer cells using infrared radiation as a heat source [1,3,4]. Silver nanoparticles are among the strongest natural antiseptics and have pronounced bacteriostatic and bactericidal effects [5,6]. They are also capable of overcoming the blood-brain barrier and accumulating in the brain tissue [7,8]. At the same time, silver nanoparticles are known to exhibit neurotoxic [7][8][9] and hepatotoxic [10] action at high doses. In turn, selenium, in addition to antibacterial properties [11], has cytostatic effects [12] and can suppress tumor growth [12][13][14]. The use of binary nanocomposites (which contain both Ag and Se) enables the unveiling of the potential hidden in the metals which constitute these composites. All of the above effects make the silver selenide nanocomposite a promising material for theranostics.
The improvement of the bioavailability of metal nanoparticles, as well as the reduction of their possible toxicity, represents an urgent challenge. In this regard, the synthesis of metal-containing nanocomposites using high-molecular compounds, for example, arabinogalactan (AG) of Siberian larch [15,16], appears to be a promising direction. The natural polymer AG is a water-soluble white or cream-colored powder without taste and odor, the production technology of which was patented [17]. The AG macromolecule (Figure 1) is represented by the residues of two monosaccharides: galactose and arabinose. The structure of the polysaccharide has a main chain consisting of galactopyranosyl units connected by β-bonds (1 → 3) and side chains representing various combinations of galactopyranosyl and arabinofuranosyl residues connected by β-bonds (1 → 6) [18]. The polymer is isolated from larch wood, where its content reaches 15%. The specified polymer is a biologically active substance with a wide spectrum of activity. It possesses good gastroprotective, immunomodulatory, membranotropic and antioxidant properties [19][20][21]. It has been shown that AG significantly weakens the effect of chemical toxicants on oxidation processes involving free radicals [22]. In addition, AG has already successfully demonstrated itself as an effective stabilizer of a number of different nanoparticles (noble metals, metal oxides, elemental chalcogens, etc.).
The obtained nanocomposites combine good water solubility, high aggregative and kinetic stability, and high biocompatibility, as well as a number of specific physicochemical (optical, luminescent and magnetic) and biological (antimicrobial, antioxidant, adaptogenic, immunomodulatory and antianemic) properties due to the presence of nano-sized inorganic components. Thus, the availability of arabinogalactan, its excellent water solubility and its pronounced stabilizing properties (the ζ-potential of some composites reaches −70 mV) with a rather narrow molecular weight distribution Mw (42.3-45.2 kDa) allow water-soluble stable nanomaterials based on it to be obtained, which combine all the above features of both arabinogalactan and an inorganic nanophase [23][24][25].
The unique characteristics of silver selenide nanoparticles encapsulated in a natural polymer matrix of AG open up wide possibilities for its application, at the same time being possible reasons for adverse effects.
In the present work, the morphofunctional state of the tissue of the sensorimotor zone of the cerebral cortex and the hepato-renal system during the acute and subacute administration of silver selenide nanoparticles encapsulated in a natural AG polymer matrix has been evaluated.
Chemical Synthesis of a Water-Soluble Ag 2 Se-Containing Nanocomposite Based on AG
The nanocomposite was synthesized according to the slightly modified method described in [2]. Namely, an aqueous solution containing 0.178 g AgNO 3 was added to 100 mL of aqueous solution containing 3.4 g of AG under vigorous stirring at room temperature. After 10 min of stirring, 225 µL of reaction medium containing Se 2− anions, previously generated from powdered elemental selenium in the basic-reduction "hydrazine-hydrate-alkali" system [2], was added to the obtained transparent colorless solution (AG + AgNO 3 ). The formation of Ag 2 Se nanoparticles was identified by the appearance of brown staining of the reaction medium and the absorption spectrum characteristic of Ag 2 Se. The synthesis time was 20 min. The nanocomposite was isolated by precipitation from the reaction medium in a 5-fold excess of ethanol.
Animals and Experimental Design
For the study purposes, animals were randomly selected, labeled with individual identifiers, and kept in cages of three to four individuals each for 5 days prior to dosing to allow them to adapt to the laboratory. The absence of external signs of disease and the homogeneity of the groups by body weight (±20%) were considered the criteria for the acceptability of randomization. Prior to dosing, the animals remained fasted for 3-4 h with free access to water. Animals of the experimental and control groups were kept under the same environmental conditions in the chambers.
Experimental animals were born by their own reproduction in the vivarium of Federal State Budgetary Scientific Institution "East Siberian Institute of Medical and Ecological Research" (FSBSI ESIMER) and kept on a standard diet. Maintenance and care of experimental animals were carried out in accordance with the interstate standard GOST 33216-2014 (Russia).
All animal experiments were approved by the ethical committee of FSBSI East-Siberian Institute of Medical and Ecological Research (identification code: E06/21; date of approval: 24 June 2021, amended/approvals every 6 months). All manipulations with experimental animals were performed in accordance with the rules of humane treatment of animals in accordance with the requirements of the International Recommendations for Biomedical Research Using Animals (WHO, Geneva, 1985).
Study of Acute Toxicity
The acute toxicity method [26] is a step-by-step procedure using a minimum number of animals of the same sex at each step. The method for determining the class of acute toxicity is based on biometric assessments with fixed doses, which are distributed over the time of administration so that the substance can be assessed according to its degree of hazard and the results systematized for extrapolation to humans. Acute oral toxicity (method for determining the class of acute toxicity of the Ag 2 Se nanoparticles encapsulated in a natural polymer matrix of arabinogalactan (Ag 2 Se-AG)) was determined by the intragastric route of administration in outbred white male mice weighing 21-32 g (experimental n = 6, control n = 6). The acute toxicity test of the Ag 2 Se-AG nanocomposite was performed using the limiting dose (2000 mg/kg of body weight). The test substance was administered orally to white mice of the experimental group with an atraumatic probe at a dose of 2000 mg/kg in 0.9% NaCl. The volume of the injected solution did not exceed 0.5 mL. Dosing accuracy was achieved by varying the volume of the injected solution at a constant concentration. Animals of the control group were orally administered an equivalent volume of 0.9% NaCl (placebo). The doses were prepared immediately prior to administration. Before the introduction of the test dose, the animals were weighed and the administered dose was calculated at the rate of 2000 mg/kg of body weight. Prior to dosing, mice remained fasted for 3-4 h with free access to water. Food was withheld for another 2-3 h after administration of the substance.
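The protocol above fixes the dose per kilogram of body weight and adjusts the injected volume at a constant stock concentration. A minimal sketch of that arithmetic follows; the 200 mg/mL stock concentration is an assumed illustrative value, not reported in the text.

```python
def injected_volume_ml(dose_mg_per_kg, body_weight_g, stock_mg_per_ml):
    """Volume needed to deliver a fixed per-kg dose at a constant concentration.

    The stock concentration here is an illustrative assumption, not a
    value stated in the protocol.
    """
    dose_mg = dose_mg_per_kg * body_weight_g / 1000.0  # total dose for this animal
    return dose_mg / stock_mg_per_ml

# A 30 g mouse dosed at 2000 mg/kg from an assumed 200 mg/mL stock
# needs 0.3 mL, which stays under the 0.5 mL limit stated in the protocol:
v = injected_volume_ml(2000, 30, 200)
assert v <= 0.5
```

Heavier animals simply receive a proportionally larger volume, which is how dosing accuracy is maintained without re-preparing the solution.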
Animals were observed daily for 14 days, with particular attention paid to the first 4 h after administration of the test dose. The appearance and disappearance of the external signs of poisoning were assessed: changes in the skin and coat, eyes and mucous membranes, respiratory, circulatory, autonomic and central nervous systems, as well as somatomotor activity and behavior, the appearance of tremor, convulsions, salivation, diarrhea, lethargy, sleep and coma [27]. Then the mice were sacrificed by the method of dislocation of the cervical vertebrae and macroscopic analysis of the internal organs was performed.
Study of Subacute Toxicity
The study of the subacute effect of the nanocomposite was carried out with intragastric administration of Ag 2 Se-AG on white male rats weighing 200-220 g (experimental n = 10, control n = 10). In case of subacute exposure, the experimental group of animals was intragastrically injected with the studied nanocomposite at a dose of 500 mg/kg of body weight in 1 mL of distilled water for 10 days. This dose was chosen based on the results of previous investigations and was 1/4 of LD 50 . The choice of dose is due to previous studies of the toxic properties of nanocomposites of other metals (Ag, Bi, Fe, etc.) [28][29][30][31][32][33], indicating the development of clear and persistent signs of pathology with the introduction of this dose. Control animals (n = 10) received 1 mL of distilled water in the same mode. For histological studies of the nervous tissue, the animals were euthanized by decapitation.
Histological Investigation
The brain, liver and kidneys were isolated from each animal under study and fixed in a neutral buffered 10% formalin solution (BioVitrum, St. Petersburg, Russia). The brain was dehydrated with isopropanol (BioVitrum, St. Petersburg, Russia) and placed in HistoMix homogenized paraffin medium for histological studies (BioVitrum, Russia). Using a HM 400 microtome (Microm, Munchen, Germany), serial frontal paraffin sections of the brain were made for subsequent staining with hematoxylin-eosin for survey microscopy. The nervous tissue of the temporo-parietal zone of the sensorimotor cortex of the brain was studied as a nerve center that provides the regulation of the basic physiological functions of the body and complex forms of behavior [9]. After fixation, the liver and kidneys were dehydrated in alcohols of increasing concentrations and embedded in paraffin. Sections 3-5 microns thick were prepared from paraffin blocks and stained with hematoxylin and eosin according to the generally accepted method [34]. In the brain preparations, the number of neurons per unit area, astroglial cells and dead neurons were counted, as was the number of neuronophagy events, using the ImageScope M program (Russia). The number of Kupffer stellate macrophages and the number of polynuclear hepatocytes were counted in the liver tissue. The area of the Shumlyansky-Bowman chamber was evaluated in the kidney tissue. The obtained sections were examined using an Olympus BX 51 light-optical research microscope (Tokyo, Japan) with input of microimages into a computer using an Olympus E420 camera (Tokyo, Japan).
Statistical Analyses
Statistical analysis of the research results was carried out using the Statistica 6.1 software package. To compare the groups, the Mann-Whitney U-test was used. Changes in the studied parameters were considered statistically significant at a significance level of p ≤ 0.05.
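The group comparisons above rely on the Mann-Whitney U-test. As a minimal illustration, the U statistic can be computed in pure Python (in practice a statistics package, such as the Statistica software used here or scipy.stats.mannwhitneyu, also supplies the p-value):

```python
def mann_whitney_u(a, b):
    """Mann-Whitney U statistic for samples a and b; ties count as 0.5.

    U counts, over all pairs, how often a value from `a` exceeds one from `b`.
    """
    u = 0.0
    for x in a:
        for y in b:
            if x > y:
                u += 1.0
            elif x == y:
                u += 0.5
    return u

# The two one-sided statistics always sum to len(a) * len(b):
ua = mann_whitney_u([1.2, 3.4, 5.6], [2.1, 4.3])
ub = mann_whitney_u([2.1, 4.3], [1.2, 3.4, 5.6])
assert ua + ub == 6
```

The smaller of the two statistics is then compared against a critical value (or converted to a p-value) at the chosen significance level, here p ≤ 0.05.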
Characteristics of the Synthesized Nanocomposite Ag 2 Se-AG
The water-soluble Ag 2 Se-containing nanocomposite based on AG with a quantitative content of the inorganic phase of 4%w was synthesized in aqueous medium via the ion-exchange interaction of selenide anions Se 2− (generated from bulk powder samples of elemental selenium in a basic reduction system "hydrazine hydrate-alkali" (Equation (1)) and Ag + ions (Equation (2)) in the presence of AG macromolecules:

2 Se + 4 KOH + N 2 H 4 ·H 2 O = 2 K 2 Se + 5 H 2 O + N 2 ↑ (1)

2 Ag + + Se 2− = Ag 2 Se↓ (2)
The passivation of the energy-saturated surface of Ag 2 Se nanoparticles and the support of their aggregative stability are probably performed by the adsorption of the polysaccharide macromolecules on their surface (steric stabilization) as well as due to the electrostatic stabilization of the Ag 2 Se particle surface by the highly-polar functional AG groups (hydroxyl, terminal carbonyl). A single hybrid stable water-soluble system "nanocore-Ag 2 Se/shell-polysaccharide matrix" is formed.
According to the data of transmission electron microscopy, the Ag 2 Se-AG nanocomposite (4%w Ag 2 Se) is formed as spherical Ag 2 Se nanoparticles dispersed in the polysaccharide matrix of AG. The particle size varies between 4-16 nm with an average value of 9.6 nm (Figure 2a). Using XRD, it was found that the obtained Ag 2 Se-containing nanocomposite has a two-phase amorphous-crystalline structure. Its diffractogram is represented by a halo in the region of 10-24°, corresponding to the amorphous AG phase, and also by a set of reflexes of different intensities in the region of 33.6°, 36.1° and 45.1° (JCPDS Card No. 24-1041), characterizing the presence of silver selenide with a cubic crystal lattice (α-Ag 2 Se) in the composite obtained [35]. In addition, the diffractogram shows a set of low-intensity reflexes in the region of 31-70°, corresponding to the orthorhombic crystal lattice of β-Ag 2 Se (of the Naumannite mineral type) [36] (Figure 2b).
The average size of Ag 2 Se nanocrystallites, calculated by the Scherrer formula, is 11.2 nm, which correlates well with the TEM data (Figure 2c). The experimentally obtained value of the cell parameter a (0.4962 nm) agrees well with that of the reference sample of cubic silver selenide (a = 0.4983 nm). The optical properties of an aqueous solution of the Ag 2 Se nanocomposite (4%w Ag 2 Se) were studied by optical spectroscopy in the visible region of the spectrum at room temperature. It was found that the absorption spectrum is characterized by the absence of well-resolved maxima, probably due to the relatively large size of the Ag 2 Se nanoparticles and their wide size distribution, which is confirmed by TEM data.
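The Scherrer estimate mentioned above, D = Kλ/(β·cos θ), can be reproduced in a few lines. The 0.8° peak width (FWHM) and the Cu Kα wavelength used below are illustrative assumptions, since the text reports only the resulting 11.2 nm size:

```python
import math

def scherrer_size_nm(two_theta_deg, fwhm_deg, wavelength_nm=0.15406, k=0.9):
    """Crystallite size D = K * lambda / (beta * cos(theta)).

    beta is the peak FWHM converted to radians; theta is half the
    diffraction angle 2-theta. Wavelength defaults to Cu K-alpha
    (an assumption; the paper does not state the X-ray source).
    """
    beta = math.radians(fwhm_deg)
    theta = math.radians(two_theta_deg / 2.0)
    return k * wavelength_nm / (beta * math.cos(theta))

# Assumed 0.8 deg FWHM for the reflex near 2-theta = 36.1 deg gives a size
# of the same order as the 11.2 nm reported in the text:
d = scherrer_size_nm(36.1, 0.8)
assert 5.0 < d < 20.0
```

Narrower peaks yield larger crystallite estimates, which is why instrumental broadening is normally subtracted from the measured FWHM first.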
The value of the optical band gap energy (Eg) of the Ag 2 Se nanoparticles in the AG matrix can be calculated by Tauc's plot method. The basis of this method is the possibility of presenting the absorption coefficient α in the form of the equation:

(αhν)^(1/γ) = B(hν − Eg)

where Eg is the band gap energy, h is Planck's constant, ν is the photon frequency and B is a constant. We chose the factor γ, which depends on the nature of the electron transition, as 1/2, assuming the direct character of the transitions [37]. According to the data obtained, the optical band gap energy of the synthesized Ag 2 Se nanoparticles was higher (3.2 eV) than the value of 0.16 eV reported earlier for bulk Ag 2 Se [38]. Presumably, the increase of the gap in Ag 2 Se nanoparticles compared to the value for bulk silver selenide may be due to the decrease of the particle size and the appearance of the quantum confinement effect.
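For a direct transition (γ = 1/2), the Tauc extrapolation reduces to a straight-line fit of (αhν)² against hν, with Eg read off at the x-intercept. A sketch on synthetic data follows; the Eg and B values are chosen arbitrarily for illustration, not taken from the paper:

```python
import math

def tauc_band_gap(hv, alpha):
    """Fit (alpha*hv)^2 = B*(hv - Eg) by least squares; Eg is the x-intercept."""
    y = [(a * e) ** 2 for a, e in zip(alpha, hv)]
    n = len(hv)
    sx, sy = sum(hv), sum(y)
    sxx = sum(x * x for x in hv)
    sxy = sum(x * v for x, v in zip(hv, y))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return -intercept / slope  # the fitted line crosses y = 0 at hv = Eg

# Synthetic direct-gap data with Eg = 3.2 eV and B = 1 (illustrative values):
hv = [3.3 + 0.1 * i for i in range(8)]
alpha = [math.sqrt(e - 3.2) / e for e in hv]
assert abs(tauc_band_gap(hv, alpha) - 3.2) < 1e-6
```

On real spectra only the linear region just above the absorption edge is fitted; points far from the edge deviate from the straight line and are excluded.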
Acute Toxicity Study
The measurement of the body weight of laboratory animals is an integral indicator of the state of the organism. At the initial weighing of the mice before the introduction of the test nanocomposite, the individuals of the studied groups hardly differed from each other in weight, varying within 30-35 g. When weighed one and two weeks after administration, the weight of the animals had either increased or remained at the same level. Thus, oral administration of the nanocomposite did not decrease body weight in any case over the 14 days of observation, p ≥ 0.05 (Table 1). During the entire observation, there was no mortality among animals of the experimental group. Observation of mice from the control group likewise revealed no changes in behavior, coat condition or the consumption of water and food.
A macroscopic investigation of the internal organs of mice euthanized after 14 days of observation showed no differences between the experimental and control groups. Mice had the correct constitution and obtained satisfactory nutrition. The coat had a neat appearance and no foci of baldness were determined. Visible mucous membranes were shiny, smooth and pale in color. Thoracic and abdominal cavities did not contain effusion. The position of the internal organs did not have any disorders. The thyroid gland was of normal size and shape and reddish in color.
The size and shape of the heart did not change. The heart muscle was moderately dense and brownish in color. The lumen of the trachea and large bronchi were uniformly wide. The lungs were easily collapsed when the chest was opened and the surface was of a uniform pale pink color. The tissue of the lungs was airy to the touch.
The stomach had the usual dimensions and its lumen was filled with food contents. The mucous membrane was folded, homogeneous and pinkish in color. Irritation and hyperemia were not observed. The shape and size of the liver did not change. The liver tissue was moderately plethoric. The pancreas was pale pink. The size and shape of the spleen corresponded to those of the control mice. The tissue of the spleen had a moderately dense consistency and a dark cherry color. The kidneys were of normal color with clearly visible cortical substance and medulla. The membranes of the brain were shiny, thin and smooth. No ventricular expansion was observed on the frontal sections.
It should be noted that a macroscopic investigation of the internal organs of animals from the control group also did not show any pathological changes.
Based on the results obtained, since no mortality associated with the studied nanocomposite occurred in the animals dosed at this stage, further dose escalation was considered unnecessary [39].
Thus, according to the data of a macroscopic study, acute oral administration of the Ag 2 Se-AG nanocomposite at the maximum dose to white male mice did not cause visible changes in the internal organs, brain and tracheal and stomach mucosa. The study of acute toxicity showed that the test substance can be attributed to the 5th hazard class. Thus, this drug belongs to low-hazard substances.
Subacute Toxicity Study
The study of the subacute effect of Ag 2 Se-AG on the brain tissue showed that the blood filling of the vessels of the brain substance and the state of the vascular intima were unchanged. The number of normal neurons, astroglial cells and degeneratively altered neurons (darkly stained neurons without a clearly separated nucleus and cytoplasm were considered degeneratively altered) per unit area had no statistically significant differences from the control values. The number of neuronophagy events in animals that received the nanocomposite was statistically significantly higher than in the control group (Table 2, Figure 3).

Table 2. Morphometric parameters of the sensorimotor zone of the cerebral cortex, liver and kidney tissues during subacute administration of the Ag 2 Se-AG nanocomposite to rats at a dose of 500 µg/kg of body weight for 10 days. Me (Q25-Q75).

Thus, exposure to the binary nanocomposite of silver selenide at a dose of 500 µg/kg had an insignificant effect on the number and structure of the population of nerve cells in the sensorimotor cortex of albino rats.

In the liver tissue, the blood filling of the sinusoidal capillaries, central veins and veins of the portal tracts was normal. The portal tracts were not dilated and showed no signs of sclerosis or inflammation. The beam-radial structure of the hepatic lobules was preserved. The number of stellate Kupffer macrophages in the sinusoidal capillaries did not differ from the control group. At the same time, the number of polynuclear hepatocytes was statistically significantly higher than in the control group (Table 2, Figure 4).

In the kidney tissue, the blood filling of the cortical substance and medulla of the organ was unchanged. No violations of blood rheology were observed. The condition of the walls of the renal arteries, arterioles and interstitial space was normal. The structure of the renal glomeruli was preserved. There were no foci of inflammation or necrosis of the renal tissue. The epithelium of the distal and proximal renal tubules was intact. In the cortical substance of the kidney, a statistically significant decrease in the area of the Shumlyansky-Bowman capsule was revealed in comparison with the control group (Table 2, Figure 5).
Discussion
With the acute oral administration of the nanocomposite to white mice at a maximum dose of 2000 mg/kg of body weight, no mortality was noted during the entire observation. This, together with the absence of changes in internal organs during the macroscopic investigation, allows us to classify the studied Ag 2 Se-AG nanocomposite as a low-hazard substance (5th class of hazard).
Many researchers have shown that exposure to silver nanoparticles at low doses leads to the emergence and development of neurotoxic effects [7][8][9]11,[40][41][42]. Skalska J. et al. [40] described the development of pathological changes in neuronal mitochondria (edema, decreased mitochondrial membrane potential) and, as a consequence, the induction of autophagy of the neurons themselves upon exposure to silver nanoparticles at a dose of 0.2 mg/kg. The cytotoxic effect of low doses of silver nanoparticles on the main components of the blood-brain barrier (endothelial cells and astrocytes) was reported, as well as directly on neurons, causing disruption of the cell cytoskeleton and destruction of synaptic connections [7,41]. Our previous studies have shown that a silver nanocomposite encapsulated in an arabinogalactan polymer matrix, despite belonging to hazard class IV (low-hazard substances, with an LD 50 above 5000 mg/kg of animal weight when administered intragastrically), disrupts the structure of the nervous tissue, increases the area of mitochondria, alters the structure of nerve cells and activates the apoptosis process [43]. At the same time, silver nanoparticles encapsulated in the natural biopolymer matrix of arabinogalactan are able to penetrate the blood-brain barrier and, remaining in the nervous tissue of the rat brain for a long time, cause structural disturbances [43,44]. Encapsulation thus does not reduce the cytotoxicity of silver nanoparticles for nervous tissue.
In turn, exposure to individual selenium nanoparticles, also encapsulated in a polymeric matrix of arabinogalactan, at a dose of 500 µg/kg of animal body weight reduced the total number of neurons and astroglial cells per unit area in the sensorimotor zone of the cerebral cortex and increased the number of degeneratively altered neurons and neuronophagy events, which indirectly indicated both the penetration of the nanocomposite through the blood-brain barrier and a pronounced neurotoxic effect of selenium nanoparticles [45]. Exposure to excessive amounts of selenium can lead to the disruption of the functioning of neurotransmitter systems and the development of neurodegenerative and neuropsychiatric processes [42]. Taking into account that the toxic effect of selenium is realized through the suppression of intercellular signal transmission [46], it can be assumed that the previously established reduction in the number of normal neurons and astroglial cells in the nervous tissue disturbs intercellular interaction.
The absence of such effects upon exposure to silver selenide encapsulated in the arabinogalactan polymer matrix is apparently due to the simultaneous presence of nanoparticles in the nanocomposite and, possibly, to their competition for binding to the cell receptors of cerebral cortex neurons.
An increase in the number of polynuclear hepatocytes upon the subacute administration of the Ag 2 Se-AG nanocomposite indicates the activation and development of compensatory repair processes in the liver and the stimulation of cell regeneration mechanisms, while the constant number of Kupffer stellate macrophages indicates the absence of an inflammatory process in the liver tissue. According to the data given in [10], silver nanoparticles at a dose of 300 µg/kg disrupt normal blood rheology in the liver, increase the number of polynuclear hepatocytes and disrupt the metabolic activity of hepatocytes. A similar effect was produced by selenium nanoparticles in an arabinogalactan matrix at a dose of 500 µg/kg [46]. The hepatotoxic effect of selenium at a low dose of 10 µg mixed with lithium was shown in the studies of Pinto-Vidal, F. et al. [47]. At the same time, some studies showed a hepatoprotective effect of selenium [48][49][50]. The inconsistency of the available data is apparently explained by the different routes, forms and doses of selenium administration to biological objects.
In the kidney tissue, the administration of Ag 2 Se-AG at a dose of 500 µg/kg produced a 40.5% decrease in the area of the Shumlyansky-Bowman capsule (a change in the area within 30% is considered the norm [51]), which can reduce the volume of primary urine formed and, in turn, hinder the excretion of metabolic products from the body. There were no other significant structural changes in the kidney tissue exposed to the silver selenide nanocomposite. Perhaps this is due to the nephroprotective properties of selenium [52][53][54]. Meanwhile, it is known that silver nanoparticles have a nephrotoxic effect, causing structural changes in the renal tubules and renal glomeruli [55]. Perhaps this is the reason for the slight changes observed in the kidney tissue when the Ag 2 Se-AG nanocomposite was administered to rats.
The administration of the nanocomposite to rats for 10 days at a dose of 1/4 of the LD 50 produced effects of varying severity depending on the site of application. Of greatest interest was the effect of the nanocomposite on the tissue of the cerebral cortex. The experimental studies revealed no changes in the ratio of the cellular elements of the sensorimotor zone of the cortex. Morphological disturbances of neurons were also not observed in comparison with control animals. In the tissue of the cerebral cortex, attention was drawn to the higher number of neuronophagy events, indicating an increase in the formation of glial nodules, through which damaged or degeneratively altered nerve cells are destroyed and removed with the help of macrophages. Thus, silver selenide nanocomposites, while lacking a pronounced neurotoxic effect, still increase the number of dead neurons. It might be possible to address this issue by studying the effects of silver selenide on brain tissue in the late post-contact period.
The conducted studies revealed that the toxicity of the silver selenide nanocomposite in the arabinogalactan polymer matrix is much less pronounced than that of silver or selenium nanoparticles alone. Apparently, this fact may be due to the competitive relationship of nanoparticles. At the same time, selenium, being a physiologically important trace element that is part of glutathione peroxidase, may be an antagonist for silver particles. The literature describes the antagonistic properties of selenium for such heavy metals as mercury, arsenic, lead and cadmium [56]. Given the great importance of selenium for the functioning of the immune, endocrine and reproductive systems, metabolism, cellular homeostasis and carcinogenesis, it can be assumed that its ability to bind and activate cell receptors is much higher than that of silver. As a result, the biological effectiveness of selenium can be more pronounced. Conversely, the biological activity of silver is suppressed. For the studied compound, it can be assumed that selenium acts as a protector, suppressing the pathological effects of silver nanoparticles and thereby protecting the cellular metabolism.
Conclusions
In conclusion, the study of the acute toxicity of the silver selenide nanocomposite has shown that the substance belongs to the low-hazard class. The evaluation of the subacute toxicity of silver selenide nanoparticles encapsulated in an arabinogalactan polymer matrix in white rats does not reveal any significant changes in the tissue structure of the sensorimotor cortex and liver of the animals, along with minor changes in the kidney tissue. In this connection, the silver selenide nanocomposite encapsulated in a polymer matrix is a promising preparation for further biomedical research.

Informed Consent Statement: Not applicable.
Data Availability Statement:
The data presented in this study are available from the corresponding author upon request.
Optimal Lot Sizing with Scrap and Random Breakdown Occurring in Backorder Replenishing Period
This paper is concerned with the determination of the optimal lot size for an economic production quantity model with scrap and random breakdowns occurring in the backorder replenishing period. In most real-life manufacturing systems, the generation of defective items and random breakdowns of production equipment are inevitable. To deal with stochastic machine failures, production planners practically calculate the mean time between failures (MTBF) and establish a robust plan accordingly, in terms of the optimal lot size that minimizes total production-inventory costs for such an unreliable system. A random scrap rate is considered in this study, and breakdown is assumed to occur in the backorder filling period. Mathematical modeling and analysis are used, and the renewal reward theorem is employed to cope with the variable cycle length. An optimal manufacturing lot size that minimizes the long-run average costs for such an imperfect system is derived. A numerical example is provided to demonstrate its practical usage.
INTRODUCTION
Becoming a low-cost producer is one of the main operations strategies and goals of most manufacturing firms. To accomplish this goal, a company must be able to use its resources effectively and minimize its operating costs. In the field of inventory management, Harris [1] first introduced the economic order quantity (EOQ) model to assist corporations in reducing total inventory costs. The EOQ model uses mathematical techniques to balance setup cost against holding cost, and derives an optimal ordering size that minimizes overall inventory costs. In the manufacturing sector, the economic production quantity (EPQ) model is often utilized for determining the optimal production lot size that minimizes overall production-inventory costs [2][3]. Despite their simplicity, the EOQ and EPQ models are still applied industry-wide today [4][5]. The classic EPQ model implicitly assumes that all items produced are of perfect quality. In real-life production systems, however, the generation of defective items is inevitable for many reasons. Hence, studies have been carried out to enhance the classic EPQ model by addressing the issue of imperfect-quality items.
Boone et al. [12] investigated the impact of imperfect processes on the production run time. They built a model in an attempt to provide managers with guidelines for choosing appropriate production run times to cope with both defective items and stoppages occurring due to machine breakdowns. Lee and Rosenblatt [16] studied an EPQ model with joint determination of production cycle time and inspection schedules, and derived a relationship that can be used to determine the effectiveness of maintenance by inspection. Zhang and Gerchak [18] considered a joint lot-sizing and inspection policy in an EOQ model with random yield. Hayek and Salameh [25] assumed that all of the defective items produced are repairable and derived an optimal operating policy for the EPQ model under the effect of reworking imperfect-quality items. Stock-out situations may also occur due to excess demand. Sometimes these shortages can be backordered and satisfied at a future time, so the overall production-inventory costs can be reduced significantly [19][20][24][25].
Random breakdown of production equipment is another common and inevitable reliability factor that troubles production planners and practitioners. Effectively managing and controlling such disruptions while minimizing overall production costs has become a primary task of most manufacturing firms. It is no wonder that determining the optimal lot size (or production uptime) for systems with machine failures has received attention from researchers in recent decades (see, for instance, [27][28][29][30][31][32][33][34][35][36][37][38]).
Examples of studies that addressed machine breakdown issues are surveyed below. Groenevelt, Pintelon, and Seidmann [27] studied two production control policies to deal with machine failures. The first assumes that production of the interrupted lot is not resumed after a breakdown (the no-resumption (NR) policy). The second assumes that production of the interrupted lot is resumed immediately after the breakdown is fixed, if the current on-hand inventory has fallen below a certain threshold level (the abort/resume (AR) policy). Both of their proposed policies assume that the repair time is negligible, and they studied the effects of machine breakdowns and corrective maintenance on economic lot-sizing decisions. Chiu et al. [30] investigated the optimal run time for an EPQ model with scrap, rework, and random breakdown. They proposed and proved theorems on the conditional convexity of the integrated cost function and on bounds of the production run time; an optimal run time was then located by the bisection method based on the intermediate value theorem. Makis and Fung [33] studied the effects of machine failures on the optimal lot size as well as on the optimal number of inspections. Formulas for the long-run expected average cost per unit time were obtained, and the optimal production/inspection policy that minimizes the expected average costs was derived. Abboud [38] considered an EMQ model with Poisson machine failures and random machine repair time. A simple approximation model was developed to describe the behavior of such systems, and specific formulations were derived for the cases where the repair times are exponential and constant. This study is concerned with the determination of the optimal lot size for an EPQ model with scrap, shortages allowed and backordered, and random breakdown occurring in the backorder-replenishing period. Since little attention has been paid to this area, this paper intends to bridge the gap.
ASSUMPTION AND MATHEMATICAL MODELING
This paper considers a manufacturing process with the following features:

(1) It may randomly produce a portion x of defective items at a rate d.
(2) All imperfect-quality items are assumed to be non-repairable and are treated as scrap.
(3) The production rate P is much larger than the demand rate λ, and the production rate of scrap items d can be expressed as d = Px.
(4) Shortages are allowed and backordered; they are satisfied first when the next replenishment production cycle begins.
(5) According to the mean time between failures (MTBF) data, a single machine breakdown occurs only in the backorder-replenishing period, with random occurrence times (refer to Figure 1).

The abort/resume (AR) inventory control policy is adopted in this study. Under this policy, when a breakdown takes place the machine goes under corrective maintenance immediately, the repair time is assumed to be constant, and the interrupted lot is resumed right after the restoration of the machine. Cost parameters considered in the proposed model include the setup cost K, unit holding cost h, unit production cost C, disposal cost per scrap item CS, unit shortage/backorder cost b, and the cost M for repairing and restoring the machine. Additional notation is listed below:

t = production time before a random breakdown occurs,
tr = time required for repairing and restoring the machine,
tr' = time required for producing sufficient stock to satisfy the demand during the machine repair time tr,
t4 = time required for filling the backorder quantity B (excluding tr and tr'),
t1 = time for piling up stock during the production uptime in each cycle,
t2 = time required for depleting all available perfect-quality on-hand items,
t3 = shortage-permitted time,
T1 = the optimal production uptime to be found for the proposed EPQ model,
H1 = the backorder level when the machine breakdown occurs,
H2 = the backorder level when the machine is repaired and restored,
H3 = the maximum level of on-hand inventory in each production cycle,
Q = production lot size for each cycle,
B = the maximum backorder level allowed for each cycle,
T = the production cycle length,
TC(T1, B) = total production-inventory costs per cycle,
TCU(T1, B) = total production-inventory costs per unit time (e.g., annual),
E[TCU(T1, B)] = the expected total production-inventory costs per unit time.
The production rate P of perfect-quality items must always be greater than or equal to the sum of the demand rate λ and the production rate of defective items d. Hence, the following condition must hold: (P-d-λ)>0, or (1-x-λ/P)>0. Because t denotes the production time before a breakdown takes place in the backorder-replenishing period t4, we have t < t4. Let g be the constant machine repair time, so tr = g. The following derivation procedure is similar to that used in prior studies [20,25].
From Figure 1, one can obtain the following: the backorder level H1 (when the machine breakdown occurs); the backorder level H2 (when the machine is repaired and restored); the maximum level of on-hand inventory H3; the production uptime T1; the cycle length T; tr'; the stock piling-up time t1; the time t2 required for depleting all available on-hand items; t3; the time t4 required for filling the maximum backorder quantity B; and the production lot size Q.
where d = Px.

As depicted in Figure 2, the total number of scrap items produced during the production uptime T1 can be obtained as shown in equation (12).
The production cycle length is not constant, owing to the assumption of a random scrap rate, and a uniformly distributed random breakdown is assumed to occur in the backorder-filling period. Thus, to take the randomness of scrap and breakdown into account, one can use the renewal reward theorem in the inventory cost analysis to cope with the variable cycle length, and the integration of TC1(T1, B) to deal with the random breakdown occurring in period t4. The expected total production-inventory costs per unit time can then be calculated as follows.
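The renewal reward theorem invoked above can be stated compactly: the long-run expected cost per unit time equals the expected cost per cycle divided by the expected cycle length. A schematic form in the notation defined earlier (this is the general theorem, not the paper's full equation (17), which additionally integrates over the random breakdown time t):

```latex
E[\mathrm{TCU}(T_1, B)] = \frac{E[\mathrm{TC}(T_1, B)]}{E[T]}
```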
Convexity of the expected cost function E[TCU(T1, B)]
The optimal inventory operating policy can be obtained by minimizing the expected cost function. To prove the convexity of E[TCU(T1, B)], one can utilize the Hessian matrix [39] and verify the following: E[TCU(T1, B)] is strictly convex only if equation (18) is satisfied, for all T1 and B different from zero. From equations (17) and (18), by computing all the elements of the Hessian matrix, one obtains equation (19), which is positive because all parameters are positive. Hence, E[TCU(T1, B)] is a strictly convex function. It follows that, to obtain the optimal production uptime T1 and the maximal backorder level B, one can differentiate E[TCU(T1, B)] with respect to T1 and with respect to B, and solve the linear system of equations (20) and (21) by setting these partial derivatives equal to zero.
Results and verification
Suppose the breakdown factor is not considered; then the cost and time for repairing the failed machine are M = 0 and g = 0, and equations (26) and (27) reduce to the equations given by Chiu and Chiu [19]. Further, suppose the regular production process produces no defective items, i.e., x = 0; then equations (28) and (29) reduce to the equations of the classic EPQ model with backordering permitted [40].
NUMERICAL EXAMPLE AND DISCUSSION
Suppose the annual demand for a manufactured product is 3,600 units and the production rate of this item is 9,000 units per year. According to the MTBF data from the maintenance department, a uniformly distributed breakdown is assumed to occur in the backorder-filling period. When a breakdown happens, the abort/resume policy is applied. The percentage of scrap items produced, x, follows a uniform distribution over the interval [0, 0.2]. Other parameters are summarized as follows.
CS = $0.3, disposal cost per scrap item;
C = $1 per item;
M = $500, repair cost for each breakdown;
K = $450 per production run;
h = $0.6 per item per unit time;
b = $0.2 per item backordered per unit time;
g = 0.018 years, time needed to repair and restore the machine.
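As a sanity check on this parameter set, the special case with scrap and breakdown switched off (x = 0, M = 0, g = 0) reduces to the classic EPQ model with backordering [40], whose closed-form solution is well known. A minimal sketch of that textbook baseline (not the paper's equations (26) and (27), which also account for scrap and repair costs):

```python
import math

def classic_epq_backorder(K, lam, P, h, b):
    """Classic finite-rate EPQ with planned backorders (textbook formulas).

    Returns the optimal lot size Q* and the maximum backorder level B*.
    """
    rho = 1 - lam / P                                   # net stock-building fraction
    Q = math.sqrt(2 * K * lam / (h * rho) * (h + b) / b)
    B = Q * rho * h / (h + b)                           # optimal planned backorder level
    return Q, B

# Parameters from the numerical example, with breakdown and scrap ignored
Q, B = classic_epq_backorder(K=450, lam=3600, P=9000, h=0.6, b=0.2)
print(round(Q), round(B))  # → 6000 2700
```

These baseline values are consistent with Figure 4's observation that the optimal lot size shrinks once a positive scrap rate is taken into account.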
Sensitivity analyses
Figure 4 shows the behavior of the optimal production lot size Q* with respect to the random percentage of defective items x, where each x-value represents a uniformly distributed random variable over the interval [0, x]. As the random percentage of defective items x increases, the optimal production lot size Q* decreases significantly. If the result of this investigation were not available, one could only use a closely related lot-size solution given by [24] for solving such an unreliable EPQ model, obtaining Q = 5,849 (or T1 = 0.6498) and B = 2,180. Plugging this lot-size solution into Eq. (17) gives E[TCU(T1, B)] = $5,074.99, which is 4.51% more in total setup and holding costs than the optimal production-inventory costs computed from the result of the present study.
CONCLUSION
In most real-life manufacturing systems, the generation of defective items and breakdowns of production equipment are inevitable. One cannot rely on the classical EPQ model to determine the optimal replenishment policy for such a practical system, because it does not consider these imperfect-quality factors. The effects of these reliability situations on the EPQ model must be specifically investigated in order to minimize overall production-inventory costs. Since little attention has been paid to this area, this paper intends to fill the gap. Mathematical modeling is employed in this study. The disposal cost for each scrap item and the cost of repairing the broken-down machine are included in the cost analysis. The renewal reward theorem is utilized to cope with the variable cycle length of the proposed system. An optimal production lot size that minimizes the long-run average costs for such an imperfect-quality EPQ model is derived, where shortages are permitted and backordered. A numerical example is provided in Section 3 to demonstrate its practical usage. For future research, one interesting topic would be to consider reworking of the repairable defective items in the same unreliable systems.
ACKNOWLEDGEMENT
The authors would like to thank the National Science Council of Taiwan for supporting this study under Grant # NSC 97-2221-E-324-024.
Figure 1: On-hand inventory of perfect-quality items in the EPQ model with scrap and breakdown occurring in the backorder-filling period.

Figure 2: On-hand inventory of scrap items in the EPQ model with scrap and breakdown occurring in the backorder-filling period.

Figure 4: Variation of scrap-rate effects on the optimal production lot size Q*.

Figure 5: Variation of scrap-rate effects on the optimal expected cost function E[TCU(T1*, B*)]. The behavior of E[TCU(T1*, B*)] with respect to the random percentage of defective items x is depicted in this figure.
ISCTE-IUL
Buildings play an important role in energy consumption, mainly in the operation phase. Current developments in IoT allow implementing sustainable actions in buildings towards savings, identifying consumption patterns, and relating consumption with space usage. Comfort parameters can be defined, and a set of services can be implemented towards the goals of saving energy and water. This approach can be replicated in most buildings, and considerable savings can be achieved, thus contributing to a more sustainable world without a negative impact on building users' comfort.
Introduction
Electric power grids in Europe, and worldwide, are gaining intelligence and becoming "smart grids". The increase in electricity consumption in developed countries, caused by a larger number of more powerful and diversified power-connected devices, creates consumption peaks, which lead to the need to integrate new ways to produce, distribute, and consume energy more efficiently. Considering also the constant increase in fuel prices, the threats of global warming, and the implications of carbon and other emissions from traditional fuels, there is growing interest in improving energy efficiency. One of the most important elements in ensuring energy efficiency is energy management and monitoring. Energy monitoring is an energy efficiency technique based on the standard management axiom stating that "you cannot improve what you cannot measure". It implies the necessity of measurements and data organisation [1]. But measuring is just the first part of the journey. There is also a need to transform collected data into correlated and usable information using a sustainable, well-designed, and upgradable energy efficiency monitoring system [2]. Effective energy management requires chronological knowledge of both the relevant energy uses and the main influencing factors, such as operational requirements (e.g. production data) and environmental data (e.g. external temperature, humidity, etc.). This activity concerns all types of energy (electricity, gas, steam, chilled water, compressed air, etc.) [2]. Some important questions are which parameters should be monitored, the optimal number and position of meters, and the suitable frequency of data collection (annually, monthly, daily, hourly, or less). It is essential to identify the main, independent factors in order to reduce the number of monitored parameters. Creating a suitable database is essential to properly analyse the energy use of buildings [1].
Sustainability initiatives at the university level fall into three categories: 1) research-based sustainability: there is a proliferation of master's and doctoral courses adopting the environmental angle on traditional disciplines, from environmental economics to climate modelling; 2) operational-based sustainability of the university itself: the focus is the reduction of deleterious environmental effects, cutting carbon and energy bills; less common, but still important, is the role universities have in contributing to their local environment, socially, culturally, economically, and ecologically; 3) "Universities of Sustainability", where the focus is on the education of environmentally and socially responsible citizens and on improving course curricula to ensure that courses include useful contents and develop skills for a world altered by climate change and post-peak oil [3]. These levels are not independent. For example, sustainability research (level 1) will be converted into content for sustainability courses (level 3). Campus energy or water saving efforts (level 2) must involve the population, thus educating them (level 3).
Several universities around the world are working on making their campuses sustainable, and one of the aspects is smart energy management, which is the main focus of this work and is therefore aligned with the second level. Energy waste can be found in various space types, such as teaching auditoriums, working areas (offices, laboratories, computer rooms, etc.), and residential buildings (dormitories) [4]. The energy and environmental impact of universities could be considerably reduced by applying organisational, technological, and energy optimisation measures [5][6]. Actions can be taken aimed at improving the production, distribution, and consumption of energy within the campus, to increase building energy performance, improve energy management, and educate people about efficient and sustainable energy use [7].
Energy Efficiency project
ISCTE-IUL has a global community of ca. 10,000 people, including 9,234 students in undergraduate, master's, PhD, and postgraduate programs. In 2017 it had a budget of 38.5 M€, of which 54% was self-generated. The campus comprises four main buildings: 1) Building Sedas Nunes (also known as Building I); 2) Building II; 3) Ala Autónoma; and 4) Building INDEG. These buildings are 20 to 40 years old and have a total gross built area of 48,500 m2. ISCTE-IUL also has a multi-sports field, a parking lot, and an off-campus student residence. In 2017, ISCTE-IUL started a Strategic Program on Sustainability. A formal sustainability-specific organizational structure is managed by the Director of Sustainability to implement several projects, namely: 1) Campus Operations, such as water, energy, and waste management; 2) Core Activities, like research and education; and 3) Outreach to the community, meaning, in this context, activities to connect to society and share knowledge and expertise.
The objective of the university is to become the most sustainable university in Portugal. Under the Sustainability Program, the work described in this paper is focused on the Energy Efficiency Project, which includes four topics: 1) replacement or improvement of the HVAC systems; 2) upgrade of electric lighting; 3) installation of photovoltaic panels; and 4) refurbishment of the Sedas Nunes building's roof to improve thermal insulation. The university expects an average saving of one third in energy consumption and CO2 emissions. To accomplish this, there is a need to study energy efficiency and monitor energy consumption, the main focus of this work. What makes IoT interesting is the ability to save ISCTE-IUL a considerable amount of financial resources by optimizing processes. This is possible through the installation of sensors and the respective data analysis, which, in turn, allows decisions to be taken on building operations and user behaviour to be influenced.
Related Work
There is a considerable number of theses and research works on IoT related to energy efficiency. In Portugal, there are already several smart grid and consumption control pilot projects. In 2010, the Galp company started the development of an energy management system pilot, SmartGalp, which monitors energy consumption through a platform that interacts with domestic users, covering electricity, natural gas, and fuel. The installation of devices in the houses or cars of end users allows them to follow their effective consumption and establish reduction goals to save on energy bills. Through the monitoring of results, the company verified that this tool enables effective savings, reducing energy consumption by up to 8% [8]. The EDP electricity company launched the InovCity Project in 2011, a program for energy efficiency [9]. Within the scope of this project, Évora became the first Iberian metropolis to test a new way of thinking about electricity production and distribution. The first stage focused on the automation of electric grid management to reduce operating costs. With the smart grid, any citizen can know their energy consumption in real time [10]. The project had a very positive impact on energy efficiency, since 60% savings in electricity costs were attained with the implementation of LED and AI technology. Parque Escolar, a public company in charge of modernising Portuguese public school buildings, implemented a system that allows it to track and control the energy consumption of all installed equipment. This technology is already deployed in several school buildings in Lisbon, controlling air conditioning, lighting, and even IT devices. The pilot project reduced the energy consumed by IT by 25%, including computers, IP phones, wireless access points, and video cameras.
This system is complemented with an easy-to-read information presentation that has become a teaching tool in schools, encouraging "green" individual behaviour [11]. This is an example of what ISCTE-IUL may achieve by integrating institutional strategic goals with researchers' and students' cooperative work. By 2020, the prediction is that IoT will be a trillion-dollar industry in solution sales [12]. Specifically dedicated to the smart campus, there are already several companies creating custom-made solutions for the university campus market, such as Huawei [13] and Cisco [14].
Many universities throughout the world have set in motion projects aimed at achieving a smart campus. Most have also created labs to work specifically on smart environments. An example is the European Commission-funded project aimed at the development of services and applications supported by a data-gathering platform that integrates real-time information systems and intelligent energy management systems that drive a bidirectional learning process: the user learns how to interact with the building, and the building learns how to interact with the user in a more energy-efficient way [15]. This was applied to chosen universities in Lisbon, Helsinki, Luleå, and Milan. The project reached 30% energy savings through the use of ICT, Living Lab methodologies, and gamification to promote user behaviour transformation among public building users [16].
Proposal
We have developed a Sensor Network to create smart environments: temperature sensors control the classroom temperature, and BLE beacons track user movement and emit sustainability-related information. Fig. 1 shows our vision for the problem. The sensors installed on the campus provide data to a central cloud server, where information is manipulated towards the desirable goal. A service-based approach is used to provide flexibility and allow the reuse of algorithms towards knowledge extraction. The proposed architecture is composed of four layers: 1) Data layer, which comprises data collection from installed sensors; 2) Information layer, where data is manipulated towards achieving desirable information, based on data mining algorithms (out of the scope of this paper); 3) Knowledge layer, where this information can be used for campus management, and specific functional roles act on infrastructure and systems to optimise operating conditions; and 4) Services layer, which feeds main applications in a service-based approach, where information can be incorporated in the related service. For example, the info about the number of empty spaces at the parking facilities can be used to increase the number of persons using them.
From the gathered (big) data, patterns can be extracted and analysed. It is thus possible to make predictions about the physical or social phenomena being observed. The task of identifying patterns from big data is related to the application domain and oriented to a specific usage. In a university, the information to be collected and analysed has the main objective of allowing improved operations management that leads to savings and therefore to a more sustainable performance. Nonetheless, in this type of institution, the ability to gather data and detect patterns should also be related to research and teaching goals. In fact, these big data sets are also a major opportunity in the search for models relating several levels of information: external environmental conditions (temperature, relative humidity, solar radiation, wind speed, noise pollution, and air quality); internal environmental conditions (air temperature, radiant temperature, relative humidity, air displacement, noise level, and air quality); time-, date-, and location-related occupancy patterns and rates; resource consumption; and waste and emissions generation.
Data Layer
This layer is mainly composed of a sensor array on a LoRa communication network linked to IBM's cloud platform, Bluemix. The following sensors are installed: 1) electricity measurements: a YHDC SCT013-000 current transformer (100 A:50 mA) with a receiver (Raspberry Pi 3 Model B + LoRa module); 2) temperature and relative humidity measurements based on the Texas Instruments CC2650STK; and 3) beacons: Bluetooth beacons from Estimote, which emit data over Bluetooth that is received by a smartphone app linked to a cloud-based backend that calculates users' positions as they move through the campus. Sensors are installed in classrooms; data is sent through the LoRa network using the MQTT protocol to publish messages to the IBM server.
Sensors are calibrated and used specifically to collect data such as electricity consumption and temperature. A LoRa network was installed at the university, allowing the sensors (when connected to a hardware device with LoRa technology) to transmit the captured data. A gateway receives this data and forwards it to other similar gateways if needed, until the data arrives at a central server which manages the whole network and communicates with the internet [17].
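To illustrate the publishing step, a sensor reading sent over MQTT is typically a small JSON document published on a topic that encodes the device's location. A minimal sketch (the topic layout and field names below are hypothetical, not ISCTE-IUL's actual schema):

```python
import json
import time

def make_reading(building: str, room: str, sensor: str, value: float, unit: str):
    """Build an MQTT (topic, payload) pair for one sensor reading.

    Topic layout and field names are illustrative only.
    """
    topic = f"campus/{building}/{room}/{sensor}"
    payload = json.dumps({
        "sensor": sensor,
        "value": value,
        "unit": unit,
        "ts": int(time.time()),  # epoch seconds
    })
    return topic, payload

topic, payload = make_reading("sedas-nunes", "C402", "temperature", 21.5, "degC")
print(topic)  # campus/sedas-nunes/C402/temperature
```

An MQTT client library (e.g. Eclipse Paho) would then publish `payload` on `topic` to the broker.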
Information and knowledge layers
Data collected from the sensors is stored in a database within the Bluemix platform; then, through IoT services, we can correlate information and create knowledge from the raw data. Stored data can also be interpreted to identify patterns and create reports. We use IBM's Bluemix platform, which provides templates for overviewing collected data. Rules and alerts based on sensor data in the platform enable better monitoring of all the variables. An example for electricity consumption is the identification of residual consumption in empty classrooms, leading to corrective actions towards its elimination; consumption can also be correlated with room occupancy and external temperature.
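The residual-consumption rule described above can be sketched as a simple check over joined occupancy and power readings (the record shape and the 50 W idle threshold are illustrative assumptions):

```python
def residual_consumption_alerts(readings, idle_watts=50.0):
    """Flag rooms that draw power while scheduled as empty.

    `readings` is an iterable of (room, occupancy, watts) tuples;
    the 50 W idle threshold is an illustrative default.
    """
    return [room for room, occupancy, watts in readings
            if occupancy == 0 and watts > idle_watts]

readings = [("C402", 0, 320.0),   # empty but consuming -> alert
            ("C403", 25, 900.0),  # occupied, normal
            ("C404", 0, 12.0)]    # empty, below idle threshold
print(residual_consumption_alerts(readings))  # → ['C402']
```

In practice such a rule would run as a platform alert, triggering the corrective actions mentioned above.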
Service layer
Based on the extracted data and knowledge, a diversity of automatic actions can be implemented in a service approach. Heating and cooling are activated based on sensor temperature data correlated with the presence of users in each space. Light intensity can be controlled based on luminosity information and presence. Water flow in bathrooms can be correlated with human presence. These services perform actions based on sensor input and are easily built with the Node-RED platform. Manual inputs available from mobile devices can complement these actions [18].
We developed services oriented to room comfort temperature control because there is a connection between environmental temperature and cognitive performance [19]. Higher room temperatures can increase the heartbeat to above 100 beats per minute; at a higher cardiac frequency, students end up consuming more calories, diminishing their cognitive performance. Our service takes a pre-defined input temperature and, based on external weather conditions (exterior temperature), adapts the interior conditions to these pre-defined values. In winter the comfort temperature should be around 18°C, while during the summer it should be around 26°C. Several factors need to be considered to actually achieve a comfortable environment. The number of students in the classroom matters: with many students in the room, the temperature will be higher due to greater internal gains, and the heating and cooling system must be adapted accordingly. The insulation of the building also affects the temperature, and since ISCTE-IUL has buildings with different construction materials, there is a possibility of studying systems operations more suitable to each type of external envelope. Because the university's buildings have rooms with a great diversity of spatial orientations, we can also study the impact of a room's solar orientation and adjust the systems' operations accordingly. It is therefore fundamental to regulate temperature and thermal comfort, so that classrooms provide the conditions for students to learn in a comfortable environment.
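The core of the temperature-comfort service can be sketched as a rule that picks the seasonal setpoint stated above (around 18°C in winter, 26°C in summer) and issues a heat/cool/off command; the dead-band width is an assumption for illustration:

```python
def season_setpoint(is_winter: bool) -> float:
    """Comfort setpoints from the text: ~18 degC in winter, ~26 degC in summer."""
    return 18.0 if is_winter else 26.0

def hvac_command(indoor_temp: float, is_winter: bool, deadband: float = 1.0) -> str:
    """Return 'heat', 'cool', or 'off'; the 1 degC dead-band is illustrative."""
    target = season_setpoint(is_winter)
    if indoor_temp < target - deadband:
        return "heat"
    if indoor_temp > target + deadband:
        return "cool"
    return "off"

print(hvac_command(15.0, is_winter=True))   # → heat
print(hvac_command(28.5, is_winter=False))  # → cool
print(hvac_command(18.4, is_winter=True))   # → off
```

A production service would additionally weight the command by occupancy, internal gains, and envelope characteristics, as discussed above.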
Available information regarding external climate conditions can be used to predict near-future thermal comfort constraints. Further correlating this set of data with sensors-collected data provides very useful management information to predict future needs regarding heating and cooling. It is, therefore, possible to better manage the relationship between energy supply and demand taking better advantage of renewable energy produced on-site.
Results
Experience at ISCTE-IUL shows significant saving potential. For the last five years, the learning management system has included online information about room occupancy based on class schedules. By manually analysing this information on a weekly basis, it was possible to prepare custom-made routines to supply to the systems management contractor so that HVAC systems were activated based on actual occupancy predictions. This process led to energy savings of 12%, based on actual consumption determined through energy invoice analysis.
With the implementation of this new sensor-based, automatically processed information collection, we foresee a considerable improvement in how the energy-consuming systems are managed. It will be possible to further improve occupancy-based systems activation at two scales: the space scale, fine-tuning where heating or cooling should be supplied; and the timescale, reducing weekly-based definitions to daily-based information. This is possible by putting together real-time low-frequency data collection with an integrated digital management system. Also, detailed information on the facilities, such as the building geometry; wall, floor, and roof composition; room door and window type and size; and room size, type, identification, functions, etc., is stored in Building Information Model (BIM) models. BIM models are 3D descriptions of buildings that associate information with the geometry of the building and its contents. ISCTE-IUL's facility management office has been developing and maintaining a BIM model which is being used to feed maps, room listings, and locations. This model has been linked to the academic management system to gather and display information such as room capacity, office occupancy, and other parameters. Its visualisation capabilities are used to represent sensor locations and results, provide information on which to base thermo-hygrometric simulations, and display gathered data in a geo-referenced, visually rich environment. This visualisation platform is important when insights on the occupants' and building systems' responses are sought. Space occupancy, building materials, solar orientation, and other factors can easily be understood, supporting data interpretation.
Conclusions
This work describes the ISCTE-IUL approach to building energy efficiency services, in which context information from locally installed sensors is processed to identify consumption patterns and later implement actions, on a service basis, to save energy or water in a building. External information, such as local maps, building materials, external weather conditions and room occupancy, can be used to further improve saving actions. Although this work is a dedicated approach to our campus, the service-based design allows easy deployment to other cases. In the near future, we will also add a gamification approach to this IoT platform to encourage users to save energy and water.
|
v3-fos-license
|
2019-10-24T09:07:18.593Z
|
2019-11-12T00:00:00.000
|
208758633
|
{
"extfieldsofstudy": [
"Chemistry"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://doi.org/10.1088/2053-1591/ab4fd8",
"pdf_hash": "ba4964971371e06af0e666c38b1e8fe19dc5c8f9",
"pdf_src": "IOP",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2298",
"s2fieldsofstudy": [
"Biology"
],
"sha1": "623a6cc6148e1c6fcaddaff0c63cb5fa1eea097f",
"year": 2019
}
|
pes2o/s2orc
|
Maltase and sucrase inhibitory activities and hypoglycemic effects of carbon dots derived from charred Fructus crataegi
Although carbon dots (CDs) have been widely applied in nanobiotechnology and biomedicine, few studies have evaluated their intrinsic self-bioactivities, and no reports have described the disaccharidase inhibitory activities of CDs. In this study, charred Fructus crataegi-derived CDs (CFC-CDs) with diameters of 1.3–5.6 nm are developed, without further modification. Owing to an abundance of surface groups, CFC-CDs show distinct solubility, biocompatibility, and bioactivity. Moreover, CFC-CDs inhibit the catalytic activities of sucrase and maltase in vitro and reduce postprandial blood glucose levels in vivo, possibly by acting as disaccharidase inhibitors. This discovery provides guidance for further research on the bioactivities of CDs and supports their potential applications in the biomedical and healthcare fields.
Introduction
Inhibition of α-glucosidases has recently provided a new approach for the management of hyperglycemia by facilitating the maintenance of normal blood glucose levels [1] via slowing digestion and subsequent absorption of dietary complex carbohydrates [2]. Sucrase and maltase, two essential members in intestinal α-glucosidase families, are considered to be key enzymes involved in the final step of carbohydrate digestion and biosynthesis of glycoproteins [3]. Sucrase is the only intestinal enzyme able to hydrolyze sucrose [4], whereas maltase shows versatile α-hydrolytic activities [5]. Therefore, inhibitors of the two α-glucosidases described above have attracted great interest among researchers [6]. Several currently used therapeutic inhibitors against sucrase and maltase, however, have been shown to cause serious gastrointestinal side effects, and their synthesis involves tedious multistep procedures [7]. Thus, the development of more effective and safe disaccharidase inhibitors is urgently needed for the treatment of hyperglycemia.
Carbon dots (CDs), a new kind of carbon nanomaterial, have attracted great interest in the nanobiotechnology and biomedicine domains [8,9] owing to their superior properties, such as water dispersibility, excellent photoluminescence, and biocompatibility [10–12]. Because of their facile synthesis, environmental friendliness and robust chemical inertness, CDs have a promising future in applications such as photocatalysis, sensing and bioimaging [10,11,13]. For any bio-application, understanding the interactions of CDs with biomolecules such as enzymes, DNA and lipids is of great necessity and importance. Interactions with CDs can potentially change the structures of enzymes and alter catalytic reactions that are essential for organisms, owing to the unique structural characteristics of CDs, such as abundant nitrogen/oxygen-based surface functional groups and their electronic properties [14]. CDs may act as sensitizers for redox enzymes [15] and as peroxidase-mimetic catalysts for molecular detection [16,17]. Existing studies on the effects of CDs on enzyme catalytic activities are still rare and superficial. Only a few reports have shown that carbon dots can switch the catalytic activities of laccase [18,19], Rubisco [20,21] and porcine pancreatic lipase [22]. A recent study also found that preparing maltase/chiral CD hybrids partially inhibited maltase activity [23]. Nevertheless, the effects of CDs on enzymes and bioreactions in humans and animals in general remain unclear. Thus, in vitro and in vivo investigations of the potential influence of CDs on disaccharidase catalytic activities are essential.
Fructus crataegi (FC) is a traditional medicinal plant widely used in many countries that can regulate digestive function and protect the cardiovascular system [24]. The fruit of this plant, which is abundant in carbon, oxygen and nitrogen, can act as an excellent biomass precursor for preparing self-passivated CDs with sufficient surface functional groups. Charred Fructus crataegi (CFC), processed from FC by charcoal processing, has been a safe medicinal food used for the treatment of digestive diseases and obesity since 1347 A.D. in traditional Chinese medicine (TCM) [25]. Supported by clinical evidence and modern medical research, CFC has been shown to have therapeutic effects on glucose metabolism disorders that are closely associated with small-bowel disaccharidase activity [25]. However, the mechanisms underlying the pharmacological activities of CFC remain controversial, and few studies have evaluated the effects of CFC on disaccharidase activities. Additionally, from the perspective of small-molecule active compounds, the material basis of CFC is still poorly defined. Notably, CDs are generated during charcoal processing [26,27], and here we identify CFC-CDs from CFC for the first time.
Accordingly, in this study, we established novel biocompatible CDs derived from CFC (CFC-CDs) synthesized by a simple, eco-friendly method. We then evaluated the effects of CFC-CDs on disaccharidase catalytic activities in vitro and examined the catalytic kinetics and mechanisms through which CFC-CDs exerted enzyme inhibitory activities. Finally, we evaluated the effects of CFC-CDs in mice on disaccharide digestion and postprandial blood glucose levels.
Chemicals
FC was purchased from Beijing Qiancao Herbal Pieces Co., Ltd (Beijing, China). Dialysis membranes (1000 Da) were purchased from Beijing Ruida Henghui Technology Development Co., Ltd (Beijing, China). Other analytical-grade chemical reagents were obtained from Sinopharm Chemical Reagents Beijing (Beijing, China). Sucrase and maltase standards were purchased from Shanghai Yuanye Biotechnology Co., Ltd (Shanghai, China). All experiments were performed using deionized water.
Preparation of CFC-CDs
First, FC (240 g) was placed into a crucible and calcined in a muffle furnace (TL0612; Beijing Zhong Ke Aobo Technology Co., Ltd, Beijing, China) at 350°C for 1 h, yielding CFC. The CFC (85 g) was then boiled twice in distilled water (1 L) at 100°C for 1.5 h each time. The resulting yellowish-brown solution was prefiltered through a 0.22-μm cellulose acetate membrane and concentrated by evaporation. To purify the CFC-CDs, the solution was dialyzed against deionized water through a 1000-Da dialysis membrane for 7 days, and the solution inside the dialysis membrane was collected. This CFC-CD solution was then centrifuged at 11 000 rpm for 30 min to remove agglomerated particles, yielding a clear CFC-CD solution. The preparation process is shown in figure 1. Finally, the as-prepared CFC-CD solution was dried to obtain CFC-CD solid powder, which was weighed and dissolved in double-distilled water at concentrations of 1.0, 0.25 or 0.0625 mg ml−1.
Characterization of CFC-CDs
The size, morphology, and microstructure of the CFC-CDs were characterized using transmission electron microscopy (TEM; Tecnai G2 20; FEI Company, Hillsboro, OR, USA). The structural details of the CFC-CDs were examined using high-resolution TEM (JEN-1230; Japan Electron Optics Laboratory, Tokyo, Japan). The ultraviolet-visible (UV-vis) absorption spectrum of the CFC-CDs was recorded by spectroscopy (CECIL, Cambridge, UK), and photoluminescence (PL) spectra were determined in a standard quartz cuvette using a molecular fluorescence spectrometer (F-4500; Tokyo, Japan). The structure and crystallinity of the CFC-CDs were examined by x-ray diffraction (XRD; Bruker AXS, Karlsruhe, Germany) with Cu K-alpha radiation (λ=1.5418 Å). Raman spectra were obtained on a LabRAM HR800 Raman spectrometer (Jobin-Yvon, HORIBA Group, France) with 514 nm incident laser light. In addition, the chemical composition and structure of the CFC-CDs were characterized by Fourier transform infrared (FT-IR) spectroscopy (Thermo Fisher, Fremont, CA, USA) and x-ray photoelectron spectroscopy (XPS; ESCALAB 250Xi, Thermo Fisher Scientific, Fremont, CA, USA). Circular dichroism (CD) spectra were obtained using a JASCO J-815 spectropolarimeter.
Quantum yield (QY) of CFC-CDs
The fluorescence QY of the CFC-CDs was measured according to an established procedure using quinine sulfate (QY: 54% in 0.1 M sulfuric acid solution) as a standard sample and calculated using the following equation:

Q_CDs = Q_R × (I_CDs / I_R) × (A_R / A_CDs) × (η_CDs / η_R)^2

where Q represents the QY, I is the integrated area under the emission spectrum, A is the absorbance at the 340 nm excitation wavelength, and η is the refractive index of the solvent. The subscripts CDs and R refer to CFC-CDs and the standard, respectively. To minimize the reabsorption effect, A_R and A_CDs were kept below 0.05.
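As an illustration, the relative QY calculation above can be sketched in a few lines of Python. All the readings below are hypothetical stand-ins, not the measured values from this study:

```python
def relative_quantum_yield(q_ref, i_sample, i_ref, a_sample, a_ref,
                           n_sample=1.33, n_ref=1.33):
    """Relative QY: Q = Q_ref * (I/I_ref) * (A_ref/A) * (n/n_ref)**2.

    I: integrated emission area, A: absorbance at the excitation
    wavelength, n: solvent refractive index (water assumed for both).
    """
    return q_ref * (i_sample / i_ref) * (a_ref / a_sample) * (n_sample / n_ref) ** 2

# Hypothetical readings; both absorbances are kept below 0.05,
# matching the reabsorption-minimizing protocol described above.
qy = relative_quantum_yield(q_ref=0.54, i_sample=1.1e5, i_ref=1.0e6,
                            a_sample=0.045, a_ref=0.041)
print(f"QY = {qy:.2%}")
```

With identical intensities and absorbances the sample's QY reduces to the reference value, which is a quick sanity check on the formula.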
Fingerprint analysis of FC and CFC-CDs by high-performance liquid chromatography
Sample preparation: To evaluate the component changes in the CFC-CDs, an aqueous solution of the CFC-CDs and a methanol extract of FC were first prepared, and the solutions were filtered through a 0.22-μm microporous membrane before injection of the final solution (10 μl) into the HPLC instrument for analysis.
HPLC conditions: A comparative analysis of FC and the CFC-CDs was performed using a reported method with some modifications [28,29] on an Agilent 1260 series liquid chromatograph (Agilent Technologies, Palo Alto, CA, USA) fitted with a Phenomenex Luna C18(2) 100 Å column (5 μm, 250 mm × 4.60 mm; Phenomenex, USA). The components were separated by gradient elution with water containing 0.1% phosphoric acid (solvent A) and acetonitrile (solvent B) at a constant flow rate of 1.0 ml min−1. The gradient started at 94% solvent A, followed by a linear decrease to 76% solvent A over 60 min. Each sample was injected and monitored at 210 nm. The column was held at 30°C.
Cell viability was calculated as:

Cell viability (% of control) = Abs_sample / Abs_control × 100

where Abs_sample and Abs_control represent the A450 of the experimental and control groups, respectively. The experiments were performed in triplicate, independently.
Animals and acute toxicity evaluations in vivo
This study was performed in accordance with the Guide for the Care and Use of Laboratory Animals and was approved by the Committee of Ethics of Animal Experimentation of the Beijing University of Chinese Medicine (IRB Code 2017BZHYLL00106). Male and female Kunming mice (weighing 30.0±2.0 g) were purchased from the Laboratory Animal Center, Si Beifu with a Laboratory Animal Certificate of Conformity. The animals were maintained under the following conditions: temperature, 24.0±1.0°C; relative humidity, 55%-65%; and a 12 h light/dark cycle, with ad libitum access to food and water.
Kunming mice (30.0±3.0 g) were divided into three groups of 12 (6 female and 6 male). Two groups of mice were exposed to CFC-CDs (20.85 mg kg−1, intraperitoneal injection [i.p.]) and were sacrificed 3 and 7 days after administration, respectively. Untreated healthy mice were used as the control. The major organs of the mice were harvested, fixed in 4% neutral buffered formalin, processed routinely into paraffin, sectioned into 4-μm-thick slices, and stained with hematoxylin and eosin (HE). Morphological changes were compared among the three groups.
Sucrase and maltase inhibitory activities of CFC-CDs in vitro
Preparation of mouse intestinal sucrase and maltase fractions
Mice were killed by exsanguination under isoflurane anesthesia, and their small intestines were then excised and washed with physiological saline (0.9%). The small intestine tissues were homogenized (3 min at 5000 rpm) in 10 volumes of phosphate buffer (pH 7.0) using a glass Teflon homogenizer (Ultra Turrax IKA T10 Basic; Germany). The homogenate was centrifuged at 3000×g for 30 min to remove debris, and the supernatant fluid was collected for the assay.
Sucrase and maltase inhibitory activities of CFC-CDs in mouse intestinal fractions
The maltase and sucrase inhibitory activities of CFC-CDs were assayed as previously reported [30], with some modifications. Briefly, mouse intestine solution (300 μL) and CFC-CD solution (300 μL; 1.0, 0.25 or 0.0625 mg ml−1 for the high, medium, or low dose, respectively) were added to a test tube. The control sample was prepared by adding phosphate buffer (pH 7.0) instead of CFC-CDs. After incubation at 37°C for 10 min, sucrose (300 μL; 0.58 M) for sucrase inhibitory activity assays or maltose (300 μL; 0.56 M) for maltase inhibitory activity assays was added to the reaction mixture. After incubation for 60 min at 37°C, the reaction was stopped by adding Na2CO3 (1000 μL, 1.0 M). Maltase and sucrase inhibitory activities were estimated from the difference in liberated glucose with and without CFC-CDs. Glucose was determined using an Accu-Chek glucometer (Johnson & Johnson GmbH, New Brunswick, NJ, USA) based on the glucose oxidase method. The percentage inhibition was calculated as follows:

Inhibitory ratio (%) = (glucose_control − glucose_sample) / glucose_control × 100

The inhibitory activity of each sample was determined three times, and the resulting data were calculated as described above.
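The inhibition calculation is a simple relative difference in liberated glucose. A minimal sketch, with hypothetical glucometer readings rather than the study's data:

```python
def inhibition_ratio(glucose_control, glucose_sample):
    """Percent inhibition from glucose liberated without vs. with inhibitor."""
    return (glucose_control - glucose_sample) / glucose_control * 100.0

# Hypothetical readings: 10.0 mM glucose liberated in the control tube,
# 4.1 mM in the presence of CFC-CDs.
print(round(inhibition_ratio(10.0, 4.1), 1))  # → 59.0
```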
Kinetics of maltase and sucrase inhibitors
Kinetic and Lineweaver-Burk plot analyses for sucrase inhibition by CFC-CDs were performed as previously reported [31], with some modifications. First, in order to select the appropriate concentration of CFC-CDs, different concentrations of CFC-CDs were used while keeping the final concentration of the enzyme (0.38 mg ml −1 sucrase dissolved in 0.1 M PBS solution) and substrate (0.27 M sucrose solution) constant. Sucrase solution was incubated with CFC-CDs at 37°C for 10 min. The substrates of sucrose at different concentrations were then added to start the reaction at 37°C, and after incubation for 1 h, the production of glucose was evaluated using an Accu-Chek glucometer. The concentration of inhibitor required to inhibit 50% of sucrase activity under the assay condition was defined as the IC 50sucrase value. The type of inhibition was determined by Lineweaver-Burk plots, as described above [31]. The concentration of CFC-CDs (0.12 mg ml −1 ) was kept constant while changing the concentration of the substrate. The data were calculated according to Michaelis-Menten kinetics. All samples were measured repeatedly three times, and the values were expressed as the means±standard deviations (SD; n=3). The linear equation and correlation efficiency of the CFC-CDs as a sucrase inhibitor were estimated using Origin 8.0 software.
For determination of maltase inhibitor kinetics, the appropriate concentration of CFC-CDs was likewise assayed using maltose (0.14 M) and the enzyme (maltase at 3.3 mg ml−1 dissolved in 0.1 M PBS). The concentration of inhibitor required to inhibit 50% of maltase activity was defined as the IC50maltase value. Lineweaver-Burk plots were also used to evaluate maltase inhibition by CFC-CDs, using the same method as for the sucrase kinetics test above. The concentration of maltase was 3.3 mg ml−1 in 0.1 M PBS, and the CFC-CD inhibitor concentration was 0.041 mg ml−1.
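A Lineweaver-Burk analysis of this kind fits 1/v against 1/[S] and reads Km and Vmax off the slope and intercept. The sketch below uses synthetic, noiseless Michaelis-Menten data with hypothetical Km and Vmax values, not the enzymes or concentrations from this study:

```python
def lineweaver_burk_fit(s, v):
    """Least-squares fit of 1/v = (Km/Vmax)*(1/s) + 1/Vmax; returns (Km, Vmax)."""
    x = [1.0 / si for si in s]
    y = [1.0 / vi for vi in v]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    intercept = my - slope * mx
    vmax = 1.0 / intercept
    return slope * vmax, vmax  # Km = slope * Vmax

# Synthetic data generated from v = Vmax*[S]/(Km+[S]) with Km=0.2, Vmax=5.0;
# with noiseless data the fit recovers both constants exactly.
s = [0.05, 0.1, 0.2, 0.4, 0.8]
v = [5.0 * si / (0.2 + si) for si in s]
km, vmax = lineweaver_burk_fit(s, v)
```

In practice the double-reciprocal transform amplifies noise at low substrate concentrations, which is why nonlinear regression on the raw Michaelis-Menten curve is often preferred for real data.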
Postprandial blood glucose reducing effects of CFC-CDs in vivo
A hyperglycemia model was established according to a previous protocol with some modifications [32,33]. Briefly, mice (30.0±3.0 g) were randomly divided into five groups and fasted for 18 h. Fasting blood glucose was measured in blood taken from the tail vein with an Accu-Chek glucometer, according to the manufacturer's recommendations. Mice receiving intragastric gavage of maltose or sucrose (2 mg g−1) plus normal saline (NS) constituted the model groups. Mice receiving maltose or sucrose (2 mg g−1) together with oral acarbose (5 mg kg−1) were the positive control groups, while mice receiving only NS were the negative control group. The CFC-CD treatment groups received intragastric gavage of maltose (2 mg g−1) or sucrose (2 mg g−1) together with CFC-CDs (4.17 mg kg−1). Blood glucose was measured at 15, 30, 60, 90, 120, 150, 180, and 210 min after induction of hyperglycemia.
Statistical analysis
Statistical analysis was performed using the Statistical Package for the Social Sciences (SPSS, version 20.0). The normally distributed data and homogeneous variances were expressed as means±SDs. Multiple comparisons were performed using one-way analysis of variance (ANOVA) followed by the least significant difference test. Results with p values of less than 0.05 were considered significant.
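For reference, the F statistic underlying the one-way ANOVA step can be sketched in pure Python. This is only the test statistic, not the SPSS pipeline or the follow-up least significant difference test, and the glucose values are hypothetical:

```python
def one_way_anova_f(groups):
    """One-way ANOVA F statistic: between-group over within-group mean square."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand_mean) ** 2 for g, m in zip(groups, means))
    ss_within = sum(sum((x - m) ** 2 for x in g) for g, m in zip(groups, means))
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical glucose measurements (mM) for three treatment groups:
f_stat = one_way_anova_f([[5.1, 5.4, 5.2], [7.9, 8.3, 8.1], [6.0, 6.2, 6.4]])
```

The F statistic is then compared against the F distribution with (k−1, n−k) degrees of freedom to obtain the p value.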
Characterization of CFC-CDs
First, we characterized CFC-CDs extracted from the FC solution. As shown in figure 2(A), TEM images of CFC-CDs revealed that the CDs were nearly spherical and well separated from each other. The size distribution of the CDs was in the range of 1.3-5.6 nm ( figure 2(B)). Furthermore, high-resolution TEM (figures 2(C), (D)) showed that the CDs had a lattice spacing of 0.327 nm, corresponding to the (002) spacing value of graphitic carbon [34].
UV-vis absorption was also evaluated to determine the optical properties of the CFC-CDs. The UV-vis spectrum in figure 3(A) showed broad absorption with a weak peak at 273 nm, which can be ascribed to the π-π* transition of C=C. A broad emission centered at 468 nm was observed in the emission spectrum, with the strongest excitation at 323 nm. Two peaks at 275 and 323 nm appeared in the excitation spectrum, revealing that the emission may be related to two types of transitions. The fluorescence quantum yield (QY) of the CFC-CDs was measured to be 5.95%. The XRD profile of the CFC-CDs (figure 3(B)) showed a diffraction peak located at around 21.5°, consistent with previously reported values [35]. The Raman spectrum of the CFC-CDs (figure 3(C)) showed a disordered (D) band at 1358 cm−1 and a crystalline (G) band at 1587 cm−1. The I_D/I_G value of 0.36 indicated that the as-prepared CFC-CDs are highly crystalline. To gain better insight into the organic functional groups on the surface of the CFC-CDs, we further analyzed them by FT-IR; the purified CFC-CD spectrum (figure 3(D)) showed characteristic peaks at 3426, 2923, 1628, 1433, and 1047 cm−1. The sharp peak at 1628 cm−1 (C=O stretch) and the peak centered at 3426 cm−1 (O-H stretch) revealed the existence of carboxylate and hydroxyl groups on the surfaces of the C-dots. The peak at 1047 cm−1 corresponded to the symmetric stretching vibration of C-O-C, and the peak at 2923 cm−1 may be due to the -CH2- stretch. The absorption at 1433 cm−1 was assigned to the C-N stretch. The FT-IR spectra demonstrated that many hydrophilic functional groups exist on the surface of the CFC-CDs, leading to excellent aqueous dispersibility.
Figure 4(A) shows the full-scan XPS spectrum of the CFC-CDs, in which four peaks located at 284.8, 400.1, 532.1, and 168.2 eV correspond to C 1s, N 1s, O 1s, and S 2p, respectively. In addition, according to the XPS results, the CFC-CDs contained 73.48 at% carbon, 21.42 at% oxygen, 3.75 at% nitrogen, and 0.5 at% sulfur. High-resolution XPS spectra of C, N, and O were collected to illustrate the detailed bonding formed during the preparation process, and the results are presented in figures 4(B)-(D). The high-resolution C 1s spectrum could be divided into peaks centered at 284.85 (sp2 C), 286.2 (C-O/C-N), and 288.8 eV (C=O). The high-resolution O 1s spectrum could be divided into two peaks located at 531.9 and 533.3 eV, assigned to C-O and C=O, respectively. Two peaks at 400.05 and 401.8 eV were observed in the high-resolution N 1s spectrum, revealing the existence of C-N and N-H bonding. The XPS analysis suggested that N and S were successfully doped into the C and O of the carbon core.
Fingerprint analysis of FC and CFC-CDs by high-performance liquid chromatography
The HPLC results of this study showed that no active small molecule compounds of the FC were detected in the prepared CFC-CDs, as shown in figure 5.
Cell viability assay
The CFC-CDs did not affect RAW 264.7 cell growth at concentrations up to approximately 1000 μg ml −1 ( figure 6(A)). Cell viability gradually decreased as the CDs concentration increased from 2000 to 8000 μg ml −1 , revealing the low toxicity of CFC-CDs in vitro.
Animals and acute toxicity evaluations in vivo
We collected the main organs, including the livers, spleens, kidneys, and hearts, of the mice from the control and treated groups and compared histopathological changes in these organs ( figure 6(B)). Overall, no apparent histopathological abnormalities or lesions were observed in the treated groups at our injected CFC-CD doses. Our results collectively suggested that CFC-CDs were highly biocompatible at the dose used in this study.
Sucrase and maltase inhibitory activities of CFC-CDs in mouse intestinal fractions
In order to evaluate the catalytic activities of sucrase and maltase in mouse intestinal fractions under proper assay conditions, a time course was performed using different concentrations of substrate (sucrose solution: 1.0, 1.2, and 1.4 mg ml−1; maltose solution: 0.8, 1.0, and 1.2 mg ml−1) to determine glucose production. When the substrate was sucrose (figure 7(A)), the glucose concentration increased linearly before 25 min; from approximately 25 to 60 min, the increase became nonlinear and reached its highest level. From these results, an assay time of 20 min was selected to evaluate sucrase inhibitory activity. When the substrate was maltose (figure 7(B)), the glucose concentration increased linearly before 45 min; from approximately 45 to 120 min, the growth rate slowed, reaching a plateau. Thus, an assay time of 40 min was selected to evaluate maltase inhibitory activity. The glucose concentration in the L group (18.88±3.89 mM) did not differ significantly from that in the PBS control group (20±0.657 mM).
From the above experiments, the enzyme inhibition rates were calculated. As shown in figure 7(E), CFC-CDs had strong sucrase and maltase inhibitory effects at the H (1.0 mg ml−1) and M (0.25 mg ml−1) doses, with sucrase inhibition rates of 58.83%±7.28% and 37.28%±8.71%, respectively, and maltase inhibition rates of 56.26%±9.30% and 21.02%±6.85%, respectively. In contrast, a low inhibition rate was observed in the presence of low-dose (0.0625 mg ml−1) CFC-CDs. Thus, CFC-CDs significantly affected the sucrase and maltase enzymatic reactions when sucrose and maltose were used as the substrates.
Kinetics and mechanisms of sucrase and maltase inhibitory activities of CFC-CDs
To evaluate the kinetics of sucrase and maltase inhibition, we used sucrase and maltase standards to facilitate stable quantification. These enzymes have been widely used in enzymatic assays to screen new α-glucosidase inhibitors owing to their availability and ease of handling [36]. The sucrase and maltase inhibitory activities were evaluated at different concentrations of CFC-CDs. In sucrase inhibition tests (figure 7(F)), the IC50sucrase was approximately 0.73 mg ml−1, and the sucrase inhibition rate reached up to 73.3%. In maltase inhibition assays (figure 7(I)), the curve showed a rapid growth phase when the CFC-CD concentration was less than 0.48 mg ml−1 (IC50maltase: 0.26 mg ml−1), and the maltase inhibition rate reached almost 91%.
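An IC50 of the kind reported here can be estimated from a dose-response series by interpolating between the two concentrations that bracket 50% inhibition. A minimal sketch with hypothetical data points, not the study's measurements:

```python
def ic50_linear_interp(conc, inhibition):
    """IC50 by linear interpolation; conc ascending, inhibition in percent."""
    pairs = list(zip(conc, inhibition))
    for (c1, i1), (c2, i2) in zip(pairs, pairs[1:]):
        if i1 <= 50.0 <= i2:
            return c1 + (50.0 - i1) * (c2 - c1) / (i2 - i1)
    raise ValueError("50% inhibition is not bracketed by the data")

# Hypothetical dose-response points (mg/ml, % inhibition):
conc = [0.0625, 0.25, 0.5, 1.0]
inhibition = [10.0, 35.0, 55.0, 75.0]
ic50 = ic50_linear_interp(conc, inhibition)  # falls between 0.25 and 0.5 mg/ml
```

Fitting a four-parameter logistic curve would be the more rigorous alternative; linear interpolation is shown only because it is transparent and dependency-free.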
Next, to clarify the inhibition mode of CFC-CDs, we used Lineweaver-Burk plots (figures 7(G) and (J)). For sucrase (figure 7(G)), when CFC-CDs were added to the reaction system, hydrolysis of sucrose was clearly inhibited. In both figures 7(G) and (J), the intersection of the double-reciprocal plots was located above the +1/[S] axis, indicating that CFC-CDs may act as partially noncompetitive inhibitors of sucrase and maltase. The circular dichroism spectra of the enzymes are shown in figures 7(H) and (K). Compared with free sucrase (figure 7(H)), the intensities of the negative peaks at 208 and 221 nm (α-helix) and 216 nm (β-sheet) for sucrase/CFC-CDs decreased, suggesting unfolding of sucrase upon interaction with CFC-CDs and thus decreased catalytic activity [37]. Compared with free maltase (figure 7(K)), the intensities of the positive peak at 197 nm (α-helix) and the negative peaks at 208 and 221 nm (α-helix) and 216 nm (β-sheet) for maltase/CFC-CDs increased, indicating the formation of a more compact structure owing to increased α-helix and β-sheet contents when maltase combined with CFC-CDs; fewer active sites were then accessible to the substrate, leading to decreased catalytic activity [18]. A schematic representation of this enzymatic hydrolysis process is illustrated in figure 8.
Postprandial blood glucose reducing effects of CFC-CDs in vivo
To confirm the in vivo relevance of our in vitro findings that CFC-CDs exhibit sucrase and maltase inhibitory activities, blood glucose levels in mice were measured in sucrose and maltose loading tests. As shown in figure 9(A), the glucose levels of the postprandial hyperglycemic model group (sucrose) increased between 0 and 30 min and then decreased until 150 min. In contrast, glucose levels in the CFC-CD group peaked at 15 min and then decreased steadily, while those in the acarbose group peaked at 30 min. Throughout the test, the glucose levels in the CFC-CD group were always lower than those in the sucrose group, and between 30 and 90 min they were also slightly lower than those in the acarbose group. The glucose level in the blank group remained stable. As shown in the inset in figure 9(A), the areas under the curve (AUCs) in the CFC-CD group (740.49975 mmol min l−1), acarbose group (803.25 mmol min l−1) and blank group (462.15 mmol min l−1) were significantly lower than that in the sucrose group (1196.55 mmol min l−1). Additionally, blood glucose levels in the sucrose and CFC-CD groups were significantly higher at 15, 30, 60, and 90 min (p<0.05) than in the blank group (figure 9(B); unpaired Student's t-tests). Blood glucose levels in the CFC-CD and acarbose groups were significantly lower (p<0.05) than those in the sucrose model group at 15, 30, 60, and 90 min. In maltose loading tests (figure 9(C)), the glucose levels in the postprandial hyperglycemic model group (maltose), the CFC-CD group and the acarbose group increased between 0 and 15 min and then decreased steadily. The AUCs (figure 9(C), inset) in the CFC-CD group (1213.575 mmol min l−1), acarbose group (1128.15 mmol min l−1) and blank group (596.475 mmol min l−1) differed significantly from that in the maltose group (1703.425 mmol min l−1). Blood glucose levels in the acarbose group were slightly lower than those in the CFC-CD group.
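AUCs like those above are conventionally computed with the trapezoidal rule over the sampling times. A minimal sketch, with a hypothetical glucose trace rather than the measured data:

```python
def auc_trapezoid(times, values):
    """Area under the curve by the trapezoidal rule (value units x time units)."""
    points = list(zip(times, values))
    return sum((t2 - t1) * (v1 + v2) / 2.0
               for (t1, v1), (t2, v2) in zip(points, points[1:]))

# Hypothetical glucose trace (mmol/l) at the study's sampling times (min):
times = [0, 15, 30, 60, 90, 120, 150, 180, 210]
glucose = [5.0, 12.0, 10.5, 8.0, 7.0, 6.2, 5.8, 5.4, 5.2]
auc = auc_trapezoid(times, glucose)  # mmol min / l
```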
As shown in figure 9(D), blood glucose levels in the maltose and CFC-CD groups were significantly higher (p<0.05) than that in the blank group. Blood glucose levels in the CFC-CD groups were significantly lower (p<0.05) than those in the maltose groups.
Discussion
In this study, we developed novel eco-friendly CDs derived from CFC. These CFC-CDs had diameters of 1.3–5.6 nm. FC naturally contains carbon, oxygen, and nitrogen, which are necessary for the preparation of CFC-CDs, so it could serve both as the carbon source and as the passivation agent for the CDs. Hence, no further modification or external surface passivation agents were required. Moreover, the prepared CFC-CDs were safe and biocompatible for biological use, as demonstrated by the results of in vitro CCK-8 assays and in vivo acute toxicity evaluations. Through an adequate and repeated purification process including dialysis, centrifugation and filtration, we obtained a pure CFC-CD solution. The purity of the solution was confirmed by high-performance liquid chromatography, in which no small active molecules were detected.
Nowadays, CDs are being studied owing to their various self-bioactivities, including anticancer [38], antihyperuricemic [39], and hemostatic [26,27] effects. In our previous study, we observed hypoglycemic effects of CDs from a charcoal-processed TCM, Jiaosanxian [40]. Thus, we aimed to further investigate the influence of CDs on carbohydrate digestion and on the disaccharidase catalytic activities related to blood glucose levels.
In our experiments, we found significant inhibitory effects of CFC-CDs on sucrase and maltase in the small intestinal fractions of mice. The reaction was dose dependent, and the kinetic conditions of the assay were optimized. The IC50sucrase value was 0.73 mg ml−1 and the IC50maltase was 0.26 mg ml−1, reflecting that very small doses of CFC-CDs could be effective and that inhibition of maltase was stronger. Furthermore, Lineweaver-Burk plots suggested that the inhibitory mode of CFC-CDs against sucrase and maltase may be partially noncompetitive. Noncompetitive inhibitors bind to the enzyme-substrate [ES] complex and affect the breakdown of the [ES] complex into product. Partial noncompetitive inhibitors are thought to be released from the enzyme when the [ES] complex is broken down into product [41,42]. This type of inhibition often occurs where there are multiple inhibitors [43], as would be the case for CFC-CDs, nanoparticles with diameters of 1.3–5.6 nm that carry many surface functional groups, including amino, carboxyl, and hydroxyl groups, contributing to their diverse interactions with enzymes. Circular dichroism measurements showed that CFC-CDs influenced the secondary structure of the enzymes, owing to the electron-accepting or electron-donating properties of CFC-CDs, and reduced degradation of the substrate [22].
To confirm the in vivo relevance of our in vitro findings, we performed disaccharide loading tests in mice. The results showed that CFC-CDs significantly decreased postprandial blood glucose levels by reducing peak blood glucose levels and the AUCs of blood glucose in mice. Thus, CFC-CDs could act as sucrase and maltase inhibitors to reduce postprandial blood glucose within the complex blood glucose regulatory mechanism. This beneficial effect of CFC-CDs (i.e., postprandial blood glucose reduction) could support the application of CFC as a TCM for the treatment of diseases associated with sugar digestion, uptake, and metabolism, such as diabetes and obesity. Currently, clinical treatment of hyperglycemic disease relies on sugar moieties, such as acarbose, miglitol, and voglibose; however, these agents are unfavorable for long-term use owing to their severe adverse side effects, including abdominal discomfort, diarrhea, and hepatotoxicity [7]. In contrast to currently available antihyperglycemic small-molecule drugs, CFC-CDs as novel α-glucosidase inhibitors are biocompatible nanoparticles derived from CFC, which has long been used clinically to promote digestion and treat abdominal discomfort. FC extracts have been reported to have antidiabetic effects [44], but excessive and improper consumption of FC can cause stomach stones and discomfort owing to its rich content of pectin, organic acids, and tannins [45]. In this study, however, we prepared a purified CFC-CD sample free of small-molecule components, as confirmed by the HPLC results, and its disaccharidase inhibitory activities and hypoglycemic effects were clear and reliable in our experiments. Compared with FC, CFC-CDs retained the disaccharidase inhibitory activities with a lower risk of gastric damage and good solubility. CFC-CDs may thus have potential applications as complementary and alternative therapeutic agents for blood glucose control.
Overall, our findings provide evidence and guidance for further studies of the intrinsic bioactivities of CDs themselves.
Conclusion
In this study, novel CDs derived from CFC were developed and found to be effective inhibitors of sucrase and maltase catalytic activities, suggesting potential applications as complementary and alternative therapeutic agents for postprandial blood glucose control. Our findings establish a basis for future drug discovery and provide insights into expanding the potential applications of CDs in the nanomedical and healthcare fields.
Specification of the endocrine primordia controlling insect moulting and metamorphosis by the JAK/STAT signalling pathway
The corpora allata and the prothoracic glands control moulting and metamorphosis in insects. These endocrine glands are specified in the maxillary and labial segments at positions homologous to those forming the trachea in more posterior segments. Glands and trachea can be homeotically transformed into each other suggesting that all three evolved from a metamerically repeated organ that diverged to form glands in the head and respiratory organs in the trunk. While much is known about tracheal specification, there is limited information about corpora allata and prothoracic gland specification. Here we show that the expression of a key regulator of early gland development, the snail gene, is controlled by the Dfd and Scr Hox genes and by the Hedgehog and Wnt signalling pathways that induce localised transcription of upd, the ligand of the JAK/STAT signalling pathway, which lies at the heart of gland specification. Our results show that the same upstream regulators are required for the early gland and tracheal primordia specification, reinforcing the hypothesis that they originated from a segmentally repeated organ present in an ancient arthropod.
Introduction
Arthropods are characterised by the presence of an external skeleton that protects them from injury but also constrains their growth during development. This problem is solved by a dedicated endocrine system controlling the periodic moulting of the exoskeleton. Two glands control the process of larval moulting and metamorphosis in insects: the corpora allata (CA), which secrete Juvenile Hormone; and the prothoracic glands (PG), which secrete Ecdysone [1]. In holometabolous insects, secretion of both of these hormones into the haemolymph induces the larva to moult into a larger larva, while secretion of Ecdysone alone induces metamorphosis [2]. Similar endocrine glands secreting hormones related to those produced by the CA and the PG have been identified in crustaceans, indicating that this system has an ancient evolutionary origin [1,3,4].
Analysis of development in Drosophila melanogaster showed that the CA and the PG primordia are specified in the lateral ectodermal cells of the maxillary and the labial segment respectively, at homologous locations to those giving rise in more posterior segments to the fly's respiratory organs [5]. During early development, the CA and the PG primordia exhibit a similar behaviour to that of the tracheal primordia, with the epithelium invaginating to form small sacs of cells resembling tracheal pits. However, while the tracheal primordia maintain an epithelial organization throughout development, the gland cells soon experience an Epithelial to Mesenchymal Transition (EMT) induced by snail (sna) gene expression [5]. Following Snail activation, the CA and the PG coalesce into a single primordium that migrates across four segments until it reaches the dorsal part of the first abdominal segment (A1). This migration is guided by several intermediate landmarks that serve as "stepping stones" during their long-range migration (Fig 1A) [6]. Once in A1, the CA/PG primordium fuses ventrally to the corpora cardiaca, an independent endocrine organ of mesodermal origin [7,8], and dorsally to the contralateral primordium, giving rise to a ring structure encircling the anterior aorta. Therefore, the mature ring gland is a composite endocrine organ formed by three different glands, two of ectodermal origin (the CA and the PG) and one of mesodermal origin, the corpora cardiaca [1,6].
Despite their different morphology and function, the CA and the PG have several characteristics in common with the trachea. First, the CA and the PG are specified in the cephalic lateral ectoderm at homologous positions to those forming the tracheal primordia in the trunk segments. Second, all three organs express the gene encoding the transcription factor Ventral veinless (Vvl) activated through the same enhancer (vvl1+2). Third, ectopic expression of the Deformed (Dfd) or the Sex combs reduced (Scr) Hox genes can transform tracheal primordia cells into gland cells and, conversely, the ectopic activation of trunk Hox genes can transform the gland primordia into trachea. These observations led to the proposal that the CA, the PG and the trachea arose from a metamerically repeated ancient structure that evolved divergently in each segment giving rise to three completely different organs [5]. This hypothesis has been reinforced by functional studies performed in the hemipteran insect Oncopeltus [9].
In comparison to the extensive knowledge we have of the mechanisms specifying the Drosophila tracheae [10-20], little is known about CA and PG specification. The first signs of CA and PG specification are noticeable when these primordia start expressing the sna gene [5]. Snail is a zinc-finger transcription factor conserved in vertebrates where its function has also been associated to the induction of EMT [21-23]. Apart from its function in the endocrine primordia, Snail is also required for the formation of the mesoderm [24,25]. The sna-rg-GFP reporter gene, made with a 1.9 kb sna cis-regulatory element, is the earliest known specific marker for the CA and the PG primordia [5]. sna-rg-GFP expression is first activated at the beginning of organogenesis (st11), after the two gland primordia have just invaginated in the maxillary and labial segments, and its expression is maintained throughout embryonic gland development (Fig 1G and 1H). Thus, sna expression is a CA- and PG-specific marker comparable to what trh expression is for the trachea. Both genes encode transcription factors labelling the respective primordia at the earliest stages of development, and both genes are required for the development of the organs where they are activated. Therefore, finding the upstream regulators of sna-rg expression should help uncover the mechanisms required for gland specification. Moreover, comparison of the gene network activating sna expression in the gland with that activating trh expression in the trachea will allow us to confirm whether both organs share similar upstream regulators, as would be expected if they shared a common evolutionary origin.
To find out which mechanisms induce CA and PG specification, we analysed how snail expression is activated in the primordia of these organs. We show that the Wnt and Hh pathways determine the antero-posterior segmental location where the sna-rg enhancer is activated. This is achieved indirectly through the localised transcriptional activation of the upd gene, which encodes a ligand activating the JAK/STAT signalling pathway. We show that STAT directly activates sna expression in the glands and propose that the Hox input required for activating sna expression is mediated indirectly.
sna expression in the CA and PG primordia is activated by a single cis-regulatory region
Expression of the snail gene in the corpora allata (CA) and the prothoracic gland (PG) primordia is key for their specification and development [5]. To test if the sna-rg cis-regulatory region previously described is the only element activating snail expression in the CA and the PG primordia, we created sna ΔrgR2, a deletion generated with the CRISPR-Cas9 system using specific single guide RNAs (Figs 1B and S1 and Materials and Methods). RNA in situ hybridization reveals that sna ΔrgR2 embryos lack sna expression in the CA and PG primordia while maintaining it in other organs (Fig 1C-1F).
Embryos homozygous for sna ΔrgR2 or heterozygous for this deletion over the sna 1 null allele are not viable. These embryos develop a normal mesoderm, with the only obvious phenotypic defect being the apoptotic degeneration of the CA and PG primordia shortly after their specification (Fig 1I-1J'), a defect fully rescued by a sna-BAC construct (Fig 1K-1M).

Fig 1. (A) The final position of the gland primordia in the ring gland is represented by arrows starting from their approximate location at st11. (B) sna locus indicating the position of the transcription unit (black), the two mesoderm enhancers (brown), the ring gland enhancer (blue) and the sna ΔrgR2 deletion. (C-F) sna RNA expression in wild type embryos at st11 (C) and st13 (D), or sna ΔrgR2 embryos at st11 (E) and st13 (F). Arrows point to the CA and PG primordia; asterisks mark the absence of sna transcription. (G-H) sna-rg-GFP reporter in a st11 wild type embryo (G) before CA and PG coalescence, and at st13 (H) showing the coalesced CA/PG migrating towards the dorsal midline. (I-J) sna-rg-GFP sna ΔrgR2 homozygous embryos showing the CA and PG primordia at st11 (I) and at st13 (J), when degeneration is noticeable (asterisks). (G'-J') DCP-1 co-expression (white) in the same embryos to reveal apoptosis. In control embryos (G'-H') DCP-1 activation is restricted to ectodermal cells. In sna-rg-GFP sna ΔrgR2 homozygous embryos, gland cells show high levels of DCP-1 at st13 (J') (yellow arrows). At st11 (I'), just after gland specification, DCP-1 starts being detectable before overt gland degeneration. (K-M) sna ΔrgR2 homozygous embryos carrying the sna-rg-mCherry reporter (green) and a sna-BAC rescue construct. The Sna protein in the BAC is tagged with GFP (red), revealing its expression in the gland primordia before coalescence (K, yellow because of overlap with sna-rg expression), after coalescence (L, arrow), and after integrating into the ring gland (M). Apart from the gland primordia, the Snail BAC protein reveals other sites of expression: the oenocytes and the wing and haltere primordia (M, asterisks). Note that cell viability and migratory behaviour of the CA and PG are fully rescued by the BAC. All figures show lateral views with anterior left and dorsal up.
These results prove that the sna ΔrgR2 deletion inactivates the only regulatory region driving sna expression in the CA and PG gland primordia, allowing us to use sna-rg-GFP reporter expression as a proxy to discover the upstream trans regulatory elements involved in sna transcription and CA and PG specification.
Requirement of the Wnt signalling pathway for gland specification
The vvl and sna genes are co-expressed in the CA and the PG, but the expression of sna in the gland primordia does not depend on Vvl function [5], suggesting that both genes may respond to similar upstream regulatory cues in the gland region. As tracheal vvl expression expands in wingless (wg) mutants [20], we tested if sna-rg spatial activation is also restricted through the Wnt signalling pathway. In wg CX4 or in wg en11 homozygous mutant embryos, sna-rg-GFP expression in the maxilla and the labium appears duplicated at st11 (Fig 2A and 2B). The duplicated primordia form in cells normally expressing Wg and are located at the same dorsoventral position where the endogenous primordium of that segment forms. The ectopic and the normal sna-rg expressing cells become migratory, coalescing into a single larger gland primordium, suggesting the ectopic cells form functional gland primordia, although this expanded primordium cannot reach the embryo's dorsal side due to the general defects in wg mutants.
Ectopic UAS-wg expression driven in the maxilla and labium with the sal-Gal4 driver eliminates sna-rg reporter expression (Fig 2C). This repression is mediated through the Wnt canonical pathway, as sna-rg-GFP expression is also eliminated by ectopic expression of an activated form of Armadillo (UAS-ArmS10, Fig 2D) [27]. Surprisingly, we found that while sna-rg expression is normal in embryos homozygous for the pan 2 zygotic null allele of dTCF [a.k.a. Pangolin [28,29]], the DNA binding protein downstream of the Wg signalling pathway (Fig 2E), double mutant wg CX4, pan 2 embryos lack the ectopic gland primordia but not the endogenous ones (Fig 2F). These results suggest that Arm-dTCF can prevent sna-rg expression in Wg-expressing cells but does not affect the formation of the endogenous gland primordia, which are out of Wg signalling range.
Requirement of the Hedgehog (Hh) signalling pathway for gland specification
It has been reported that vvl expression in the tracheal primordia is strongly reduced in hh mutants [12]. The Hh and Wnt signalling pathways cross-regulate in the trunk epidermal cells where Hedgehog signalling is required for maintenance of wg expression in the adjacent ectodermal cells of the anterior compartment, and Wg signalling is required for the maintenance of hh and engrailed (en) expression in the posterior compartment [30]. As a result of this cross-regulation, wg, en and hh mutant embryos have similar phenotypes in the trunk ventral ectodermal segments [31]. However, in the cephalic region, where the glands are specified, such cross regulation does not occur, with Engrailed expression being maintained in the posterior segments of the maxilla and the labium in the absence of wg function [32]. To study the effect of Hh signalling on gland development, we analysed hh AC and en E homozygous mutant embryos and found an almost complete absence of sna-rg expression (Fig 3B and 3C). Engrailed activates hh expression in the posterior compartment, from where secreted Hh induces the pathway in neighbouring cells. The final target is the Cubitus interruptus (Ci) protein that can act either as a transcriptional activator or as a repressor depending on the pathway's activation state. In the absence of Hh, Ci is cleaved giving rise to a protein repressing the transcription of its direct targets [33]. Conversely, in the presence of Hh, the pathway's activation prevents Ci's cleavage, giving rise to a transcriptional activator [34].
We find that in ci 94 null embryos sna-rg-GFP is expressed in its normal pattern (Fig 3D), indicating Ci is not a necessary activator of sna expression in the glands. We also found that in double en E, ci 94 mutant embryos the sna-rg-GFP expression is recovered compared to en E embryos (Fig 3, compare panel C with E), indicating that the Ci repressor form prevents sna-rg activation. To confirm this, we expressed UAS-Ci76, the repressor isoform of Ci [33], with the sal-Gal4 line and found this causes an almost complete absence of sna-rg activity (Fig 3F). Although the above results indicate Ci is not absolutely required for sna-rg expression, we observed that overexpression of Ci PKA, the active form of Ci, causes a non-fully penetrant expansion of sna-rg expression (Fig 3G), suggesting the possibility that sna-rg may be responsive to Ci and to a second activator. We also analysed double wg, hh (or wg, en) mutants and found that these embryos do not activate sna-rg, a phenotype similar to that of hh mutants (Fig 3H). These results indicate that Ci repression is epistatic over the derepression caused in wg mutants. The above data fit a model where sna-rg expression is under negative regulation, either direct or indirect, mediated by the Wnt and Hh signalling pathways (Fig 3I). Although Ci repression of sna-rg activity should be relieved by Hh signalling anteriorly and posteriorly to the En-expressing cells, the parallel repressive function of Wnt prevents sna-rg activation in Wg-expressing cells, restricting the formation of the CA and PG primordia to the most anterior cells of the maxillary and labial segments.
Regulation of Upd ligand expression by the Wg and Hh pathways
Previously we showed that JAK/STAT signalling is required for sna-rg expression [5]. To find out if the Wg and Hh signalling pathways regulate sna indirectly via JAK/STAT signalling, we reanalysed the spatio-temporal activation of upd in wild type and mutant embryos, paying special attention to the maxillary and labial segments where the gland primordia are specified. In st9 wild type embryos, upd is expressed in segmental stripes immediately posterior to the Engrailed expressing cells (S2 Fig). This pattern of transcription evolves to form a transient antero-posterior lateral stripe that rapidly resolves at early stage 11 into two patches of expression in the maxilla and labium corresponding to the sites where the CA and PG glands form (Figs 4C and S2E). Expression analysis of 10xSTAT-GFP, a reporter that is universally activated in cells where the JAK/STAT pathway is active [35,36] confirms JAK/STAT signalling activation at st10 and 11 in the CA and PG primordia (Fig 4A-4B). Although upd is transcribed in both primordia, we noticed that expression of both upd RNA and the 10XSTAT-GFP reporter is more transient in the CA than in the PG primordium (Fig 4A-4D).
We next analysed if the Wnt and the Hh pathways affect upd transcription in the gland primordia. In hh AC null embryos, we find that the transient upd expression in the CA and PG primordia disappears (Fig 4E), while in wg CX4 mutants upd RNA expression expands (Fig 4F). We also found that ectopic expression of the activator Ci protein results in a non-fully penetrant expansion of upd expression in stage 10 embryos (Fig 4H-4I). These results suggest that the effects on sna-rg expression caused by mutations affecting the Wnt and Hh signalling pathways are mediated indirectly through the JAK/STAT signalling pathway.
Possible cross-regulation between Hox, wg, hh and upd in the maxillary and labial segments
Development of the CA and PG and normal expression of the sna-rg reporter in the maxilla and the labium require Dfd and Scr function [5]; therefore, we studied whether there are any cross-regulatory interactions among the genes involved in gland primordia specification.
We first analysed wg and en mutant embryos and found that the expression of Dfd and Scr is not significantly affected (S3A-S3F Fig). Similarly, neither En nor Wg expression is affected in Dfd Scr mutant embryos (S3G-S3J Fig), ruling out a possible interaction between Dfd and Scr and the Wnt/Hh signalling pathways. In contrast, we found that the transient upd transcription in the CA and PG primordia almost disappears in Dfd Scr mutant embryos (Fig 4G), indicating that the Hox proteins can regulate JAK/STAT signalling, as previously shown for Abd-B [37]. These results indicate that the Hox proteins and the Wnt/Hh pathways regulate the sna-rg enhancer indirectly through their modulation of upd expression and JAK/STAT signalling activation.
The sna-rg reporter is not expressed in Df(1)os1A embryos (Fig 5A and 5B). To test if generalised Upd expression in the maxilla and labium can activate sna-rg independently of other upstream positive or negative inputs, we induced UAS-upd with either the sal-Gal4 or the arm-Gal4 lines. We observe that these embryos have expanded sna-rg expression along the antero-posterior axis in the maxillary and labial segments (Fig 5C). Analysis of Sal expression, which labels the PG primordium [5], shows that Upd ectopic expression induces a moderate expansion of the CA primordium while resulting in a much larger increase of the PG primordium (Fig 5D and 5E). This expansion occurs mostly in the antero-posterior axis from cells where the Hh and the Wnt pathways normally block sna-rg expression, while expansion is less noticeable in the dorso-ventral axis. This indicates that most of the antero-posterior intrasegmental inputs provided by the segment polarity genes converge on Upd transcription but that the dorso-ventral information is registered downstream of Upd.
We finally tested if activation of UAS-upd with the sal-Gal4 driver line can rescue sna-rg activation in Dfd Scr mutant embryos. We found that the residual levels of GFP observed in sna-rg Dfd Scr mutant embryos are not increased in sal-Gal4 UAS-upd sna-rg Dfd Scr embryos (Fig 5F and 5G), indicating that besides regulating upd expression, the Hox input has further requirements for gland formation.
Therefore, localised Upd expression defines the antero-posterior intrasegmental localisation of the CA and PG primordia, but other signals besides STAT must be controlling the dorso-ventral and the cephalic sna activation either directly at the sna-rg enhancer level or through unknown intermediate regulators.
Analysis of the direct regulation of sna-rg enhancer by STAT
To find out if the Hox and STAT inputs regulate sna expression directly, we searched for putative binding sites in the cis-regulatory region of sna-rg. To facilitate the bioinformatic analysis we dissected the 1.9 kb sna-rg regulatory element down to a 681bp fragment we call R2P2 (S1A-S1E Fig). The sna-rg-R2P2-GFP reporter construct drives high levels of expression in the CA and PG and its expression is even more specific as it lacks the low levels of GFP expression observed in the haemocytes and neurons of the larger sna-rg-GFP reporter.
Computational JASPAR analysis [38] of the 681bp R2P2 sequence identified three putative Hox-Exd-Hth and three putative STAT binding sites (Fig 6A). Further subdivision of sna-rg-R2P2 in two halves shows that neither the A1 nor the A2 half drives embryonic expression (Fig 6C and 6D). Reporters containing A1 fused to either the proximal part of A2 (the sna-rg A1+A2prox-GFP reporter, containing a single STAT site) or to the distal part of the A2 element (the sna-rg A1+A2dist-GFP reporter, containing two STAT sites) recovered ring gland expression (Fig 6E and 6F). The recovery of expression when A1 is fused to either fragment, both containing STAT binding sites, made us wonder if the lack of expression of the A1 fragment is due to the absence of STAT binding sites. To test this hypothesis, we added to A1 a 20bp fragment containing a single functional STAT site taken from an unrelated gene [the vvl1+2 enhancer of the ventral veinless gene [20]], creating the sna-rg A1+STAT reporter. We find that A1+STAT drives expression in both the maxilla and the labium and that this depends on JAK/STAT signalling, as mutation of the STAT-binding site abolishes expression in the sna-rg A1+STATmut reporter (Fig 6G and 6H). Taken together, these experiments show that the presence of functional STAT binding sites is required for sna activation in the CA and PG primordia and that the 300bp A1 fragment can interpret the segmental cephalic positional information, suggesting that the Hox-Exd-Hth site located in A1 could mediate the Dfd and Scr input to the enhancer. To confirm the requirement for the STAT binding sites, we mutated all three putative sites in the R2P2 fragment, generating the sna-rg-R2P2 STATmut construct, which simultaneously expresses the LifeActin-GFP and nuclear Histone-RFP reporter markers (Materials and Methods).
Comparing its expression to that of sna-rg-R2P2-eGFP-PH, we find that although mutating the three STAT binding sites completely abolishes the reporter's expression in the PG, surprisingly, it does not eliminate its expression from the CA, where its activation is only slightly delayed (Fig 6I and 6J). The CA expression of the sna-rg-R2P2 STATmut construct still depends on upd activity, as it disappears in Df(1)os1A embryos lacking all Upd ligands (Fig 6K). These results indicate that either there is a cryptic STAT site in sna-rg-R2P2 we did not mutate, or that in the CA the sna-rg-R2P2 enhancer can be activated both directly and indirectly by STAT through a site present in the A2 fragment (see discussion).
Analysis of the regulation of sna-rg enhancer by Hox proteins
To test genetically the requirement of the Hox proteins and their cofactors for sna activation, we studied sna-rg expression in mutants for Dfd Scr and for the Hox cofactor hth [39]. In Dfd Scr mutant embryos few cells activate sna-rg-GFP expression at st11 (S4A Fig), and those that do soon acquire an apoptotic aspect (Fig 5F), confirming the Hox requirement for gland development. Similarly, in hth P2 mutants, sna-rg expression almost disappears (S4B Fig). As described above, there are three JASPAR-predicted putative Hox-Exd-Hth binding sites in the R2P2 fragment. We first mutated the sites located in sna-R2P2 closer to the STAT binding sites in the A2 region and found that the expression of the mutated construct was almost identical to that of the wild type fragment. The dispensability of these two Hox-Exd-Hth sites for sna activation in the maxilla and labium is further confirmed by the strong expression driven by the snaA1+A2prox-GFP reporter, which lacks these two sites (Fig 6E). Although the snaA1+A2prox reporter construct is slightly derepressed in the cephalic region, it is still active in the CA and PG primordia, indicating that, if there is any direct requirement for Hox activation, this would be mediated by the Hox-Exd-Hth site located in fragment A1. This site has a class 2 sequence (TGACAAAT) that has been shown by SELEX-seq analysis to bind preferentially the Dfd and Scr proteins in complex with the Exd-Hth cofactors [40; S4C Fig]. We mutated this class 2 site in snaA1+A2prox to TGATCAAT, which is not detected in vitro by any Hox-Exd protein complex, and found that the embryos maintain robust expression in the CA and the PG, suggesting the enhancer is not a direct Hox target (S4D' Fig). To confirm this, we also mutated the putative class 2 site in snaA1+A2prox, changing its affinity to class 1 or class 3 Hox proteins.
Such changes have been shown to affect the spatial expression of vvl1+55, a reporter construct directly regulated by the Dfd and Scr proteins and as a result only active in the maxilla and the labium. Mutating in vvl1+55 the class 2 site towards a class 3 site, conferring affinity for the Antp, Ubx, Abd-A and Abd-B proteins in complex with Exd, strongly activates the enhancer in the trunk, and mutating the sequence towards a class 1 site that confers affinity for the Lab protein activates the enhancer in the intercalary segment where Labial is expressed [19]. In contrast, equivalent mutations of the class 2 site in the A1+A2prox fragment did not modify significantly the spatial expression of the reporter, which remains expressed mostly in the maxilla and labium (S4E-S4G Fig), further supporting that sna is not directly activated by the Hox-Exd-Hth complex in the endocrine glands.

Fig 5. Panels (A') and (A") show each channel separately to appreciate the co-expression of both markers in the gland primordia. (B) Df(1)os1A embryos show an almost complete downregulation of sna-rg and vvl1+2 expression from the CA and PG (asterisks). (C) Ectopic Upd expression driven with sal-Gal4 induces ectopic sna-rg and vvl1+2 expression in the gnathal segments, which for sna-rg is more pronounced in the labium than in the maxilla. Note that in the maxillary segment Upd can induce ectopic dorsal vvl1+2 but not sna-rg expression; this is expected as Dfd only induces sna-rg ventrally in the maxilla. (D-E) sna-rg-GFP embryos stained with anti-GFP (green) and anti-Sal (red). In control embryos (D) Sal labels the PG primordium but not the CA. In arm-Gal4 embryos ectopically expressing Upd (E), the PG is more expanded than the CA as shown by the number of cells co-expressing Sal and GFP. (F-G) sna-rg-GFP expression (green) in st13 Dfd Scr mutant embryos (F), or Dfd Scr mutant embryos after ectopic Upd expression driven with the sal-Gal4 line (G), showing that Upd activation is not sufficient to rescue gland formation in Dfd Scr mutants. In Dfd Scr mutant embryos (F), although the gland primordia become apoptotic, residual GFP expression indicates that there must exist Hox-independent inputs activating the sna-rg enhancer. Embryos in (F-G) are also stained with anti-Scr to recognise the homozygous mutants. Scale bars 50 μm. https://doi.org/10.1371/journal.pgen.1010427.g005

Fig 6. Direct regulation of sna-rg by STAT. (A) Representation of the minimal sna-rg R2P2 subfragments indicating the location of the putative STAT (pink crosses) and Hox-Exd-Hth DNA binding sites (black boxes). Mutated STAT binding sites are represented with a red X over the pink cross.
Intrasegmental specification of the CA and the PG
We have found that the Wnt, the Hh and the JAK/STAT signalling pathways contribute to the specification of the CA and the PG in the maxillary and labial segments. Our results indicate that the Hh and the Wnt pathways act indirectly by negatively regulating the spatial activation of the Upd ligand (Fig 7). Engrailed activation of hh transcription in the posterior compartment of the cephalic segments leads to Hh diffusion to the neighbouring cells in the maxilla and labium. Hh pathway activation prevents the formation of the Ci repressor protein allowing the activation of upd transcription at both sides of the posterior compartment. However, anterior to the engrailed stripe, Wnt pathway activation prevents upd transcription. As a result of the combined Hh and Wnt inputs, upd can be briefly transcribed at stage 11 in two localised ectodermal patches from where it induces JAK/STAT signalling, which activates sna transcription in the CA and the PG primordia through the sna-rg enhancer.
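The intrasegmental logic just described can be caricatured as a boolean sketch. This is an illustrative simplification of the model, not part of the study, and the function names are ours:

```python
# Toy boolean sketch of the proposed model (Fig 7): upd is transcribed
# where Hh signalling blocks formation of the Ci repressor and Wg
# signalling is absent; JAK/STAT then activates sna-rg, with an
# additional Dfd/Scr Hox input required for gland formation.
def upd_on(hh_signal: bool, wg_signal: bool) -> bool:
    ci_repressor = not hh_signal  # without Hh, Ci is cleaved into a repressor
    return not ci_repressor and not wg_signal

def sna_rg_on(hh_signal: bool, wg_signal: bool, hox_input: bool = True) -> bool:
    # Dfd/Scr input is required beyond upd activation (Fig 5F-5G)
    return upd_on(hh_signal, wg_signal) and hox_input

assert sna_rg_on(hh_signal=True, wg_signal=False)       # endogenous primordia form
assert not sna_rg_on(hh_signal=True, wg_signal=True)    # Wg-range cells repressed
assert not sna_rg_on(hh_signal=False, wg_signal=False)  # hh mutant: Ci repressor on
assert not sna_rg_on(True, False, hox_input=False)      # Dfd Scr mutant
```

Note that the last three cases reproduce, respectively, the derepression seen in wg mutants (wg_signal is False everywhere, so extra cells qualify), the loss of expression in hh mutants, and the failure of ectopic Upd to rescue Dfd Scr mutants.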
sna transcription in the endocrine primordia is mediated through a 681bp non-redundant cis-regulatory element located 5,439 bp upstream of the sna transcription unit. The sna-rg-R2P2 enhancer contains one STAT 3n and two STAT 4n binding sites conforming to the canonical sequence TTCNNN(N)GAA [41,42]. Deletion from the R2P2 enhancer of a 278bp fragment containing all three putative STAT binding sites results in a complete loss of expression that can be regained by adding a single STAT 4n binding site from an unrelated gene, demonstrating STAT's direct involvement in sna regulation. It is interesting to note that the position where we inserted the new STAT site is at the opposite end to where the endogenous STAT sites are located, indicating there is flexibility in STAT protein localisation with respect to other transcriptional regulators binding to the enhancer, something also noticed for the vvl1+2 STAT-regulated cis-regulatory element [20].
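The canonical STAT consensus quoted above, TTCNNN(N)GAA, is simple to scan for computationally; a minimal sketch (the example sequences are invented):

```python
import re

# Canonical STAT binding consensus TTCNNN(N)GAA: three (3n) or four (4n)
# spacer nucleotides between the TTC and GAA half-sites.
STAT_SITE = re.compile(r"TTC[ACGT]{3,4}GAA")

def stat_sites(seq: str):
    """Return (position, matched site) for non-overlapping consensus hits."""
    return [(m.start(), m.group()) for m in STAT_SITE.finditer(seq.upper())]

assert stat_sites("aaTTCGGGGAAtt") == [(2, "TTCGGGGAA")]  # 3n site
assert stat_sites("TTCTTTTGAA") == [(0, "TTCTTTTGAA")]    # 4n site
assert stat_sites("TTCGGGAAA") == []                      # no GAA half-site
```

A real search, like the JASPAR analysis used in the paper, would score position weight matrices on both strands rather than match a strict consensus; the regex only illustrates the spacing rule.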
The activation of Upd in the gland primordium at st11 is very transient while sna transcription is maintained at least until st16 of embryogenesis, indicating that JAK/STAT signalling is only required for sna's initial activation in the CA and PG primordia but not for its maintenance. Maintenance of the sna-rg-R2P2 enhancer must be achieved by other elements of the gland gene-network induced by STAT or the Hox proteins. The existence of such a maintenance mechanism could explain why in wild type embryos, after mutating all three canonical STAT sites in the sna-rg-R2P2 STATmut reporter, the expression is still maintained in the CA: in these wild type embryos the endogenous gland gene-network is still functioning, activating the proposed ring gland maintenance mechanism that would be able to act on the sna-rg-R2P2 STATmut reporter even if it lacked the early STAT input.

Fig 6 (continued). (B) sna-rg R2P2-GFP expression. (C-D) No GFP expression is observed in the A1 (C) nor the A2 (D) constructs. Gland expression is observed when A1 is joined to the A2 proximal half (E) or to the A2 distal half (F). (G) A1 fused to a 20bp fragment from the vvl1+2 enhancer containing a functional STAT binding site. (H) A1 fused to the same 20bp fragment where the STAT binding site has been mutated. (I-K) Embryos carrying both the sna-rg-R2P2-GFP-PH (green) and the sna-rg-R2P2-STATmut Histone2B-RFP-GFP-PH (red and green) constructs. Red nuclear expression is not observed at early stage 11 (I) but can be detected later exclusively in the CA (J). Df(1)os1A embryos lacking all Upd ligands (K) do not express sna-rg-R2P2-STATmut mCherry-GFP-PH. Black arrows in (E-G) point to the CA/PG gland primordia, red arrows to ectopic expression outside the glands. Scale bars 50 μm. https://doi.org/10.1371/journal.pgen.1010427.g006
Although we cannot completely rule out that the expression from the sna-rg-R2P2 STATmut reporter could be caused by the presence of cryptic STAT-binding sites not mutated in the sna-rg-R2P2 STATmut-GFP construct, two reasons favour the maintenance hypothesis. First, STAT activation in the gland primordia is very brief. Second, in embryos carrying both reporter constructs, the sna-rg-R2P2 enhancer is activated earlier than the sna-rg-R2P2 STATmut enhancer (Fig 6I and 6J), indicating that these DNA binding site mutations can prevent the early STAT activation but not the later maintenance input. This CA maintenance input is most likely mediated by the 278 bp A2 region as, in contrast to the sna-rg-R2P2 STATmut reporter, the A1+STATmut reporter does not retain any CA expression. In Df(1)os1A embryos lacking all JAK/STAT ligands, the endogenous gland gene-network is not activated and the sna-rg-R2P2 STATmut enhancer is completely silent, as it lacks both the initiation and the maintenance inputs (Fig 6K). A similar gene-network feedback loop acting on a STAT-regulated enhancer during organogenesis has already been reported [37].
Regulatory similarities between CA, PG and tracheal specification
The CA, the PG and the tracheae have been proposed to originate by the divergence of an ancient serially repeated organ present in an arthropod ancestor [5]. If this were the case, specification of all three organs would be expected to be under similar upstream regulation. Using the early activation of sna transcription in the gland primordia through the sna-rg enhancer as a proxy for their specification, we found that the CA and PG primordia require the same signalling pathways that control the specification of the tracheal primordia. JAK/STAT pathway activity is key for the activation of vvl and trh in the trachea and also for sna in the glands. Moreover, direct STAT binding to a tracheal specific early enhancer is required for vvl activation, and here we show that direct STAT binding is also required for sna specific expression in the glands. Similarly, the Wnt pathway that is required for restricting vvl and trh expression to the tracheal primordium also restricts the spatial expression of sna in the cephalic segments, although the sna-rg ectopic activation observed in wg mutants is less pronounced than that observed for vvl and trh in the trachea, which in some instances results in a continuous tracheal pit stretching from T2 to A8 due to the fusion of the primordia in neighbouring segments [12,19,20].
Previous work has reported the requirement of Hh for vvl expression in the trachea and for tracheal branch specification [12,43]. Here we also find that Hh and En are required in the gland primordia, although we find this requirement to be more pronounced in the glands than in the trachea judging from the almost complete disappearance of sna-rg expression. We also found that in hh mutants, upd expression disappears in gland primordia but not in tracheal cells, supporting the idea that Hh requirement is stronger in glands than in trachea.
Ectopic trunk Hox protein expression can activate both vvl and trh tracheal expression in the head. Hox requirement for vvl expression is fundamental both in the glands and in the tracheal primordia being controlled via direct DNA binding sites [19]. Although anterior Antp, Ubx, Abd-A or Abd-B Hox expression can also induce ectopic trh activation, it is unclear if this is mediated through direct Hox binding to the DNA regulatory sites [5]. Similarly, the Dfd and Scr Hox proteins are required for gland formation and their ectopic expression can induce ectopic sna-rg activation in the tracheal cells [5]. Our results indicate that Hox requirement for sna activation may be indirect, as mutating the putative Hox binding sites in the enhancer does not affect its expression. The CA and the PG primordia co-express sna-rg-GFP and Dfd and Scr briefly during st11 at the very early specification stage, with Hox expression becoming undetectable in the glands when they initiate migration [5]. Our observation that the transient upd expression in the gland primordia is affected in hh, wg and Dfd Scr mutant embryos, suggests that these pathways regulate sna expression in the ring gland indirectly through the activation of JAK/STAT signalling.
Another interesting similarity between glands and trachea is that, although ectopic Hox gene expression can induce sna-rg and trh outside their normal domains, the lack of Hox expression does not completely abolish their endogenous expression, indicating that in both cases a second positive input can compensate for the absence of Hox mediated activation. Our results suggest that, at least in the glands, this redundant input could be provided by the activating Ci form (Figs 3G and 4I), but further analysis to confirm this possibility and to rule out alternative sna-rg activators should be performed.
Differences between CA and PG specification
Development of the CA and the PG requires Sna activation, which in both primordia is regulated by the same enhancer. Also, both primordia are specified in the lateral ectoderm of the maxilla and the labium in cell clusters expressing vvl [5]. Despite these similarities, the position occupied by each primordium with respect to the vvl patch of expression is different. The CA is specified in the most ventral cells of the vvl maxillary patch and the PG is specified in the most dorsal cells of the vvl labial patch (Fig 5A). This suggests that, despite their sharing of Hh, Wnt and JAK/STAT pathway regulation, the expression in each primordium must also have differential regulation. We have been unable to separate a CA enhancer from a PG enhancer by dissecting the sna-R2P2 cis-regulatory module into smaller fragments, suggesting that any gland specific binding sites in the enhancer are probably interspersed with the shared ones. The only case in which we were able to affect expression in the PG without affecting the CA was after mutating the three STAT binding sites. Our results indicate that sna expression in the CA is under direct STAT regulation (Fig 6C, 6G and 6H), as it is in the PG. The persistence of expression in the CA of the sna-rg-R2P2 STATmut reporter gene can be explained by the existence of differing expression maintenance factors in the CA and in the PG. Gland specific transcription factors expressed in the early gland primordia when they start their migration, such as Seven-up (Svp) in the CA and Spalt (Sal) in the PG, have been described [5,44,45]. Future studies will help to discover if these or other gland specific factors are responsible for controlling the maintenance of the sna-rg enhancer as well as the slightly different dorso-ventral positions where each gland is specified.
Our analysis of snail activation in the CA and PG shows that these glands and the trachea share similar upstream regulators, reinforcing the hypothesis that both diverged from an ancient segmentally repeated organ. In Drosophila melanogaster the CA and the PG primordia undergo a very active migration after which they fuse to the corpora cardiaca forming the ring gland [6]. This differs from more basal insects where the CA fuses to the corpora cardiaca but not to the PG, and from the Crustacea where the three equivalent glands are independent of each other [2][3][4]46]. As the mechanisms we describe here relate to the early specification of the glandular primordia in Drosophila, it will be interesting to investigate if the equivalent genes are also involved in the endocrine gland specification of more distant arthropods.
Generation of sna ΔrgR2 deletion
The snail-rg R2 enhancer was removed by CRISPR-Cas9 site-directed deletion. Two flanking sna-rg sgRNAs (S1 Table) were cloned into the directed insertion vector pCDF4 [47] and the constructs injected into the 25C (#B25709) or the 68A (#B25710) landing sites using the phiC31 standard method at the Drosophila Consolider-Ingenio 2007 Transformation platform, CBMSO/Universidad Autónoma de Madrid. Germ line deletions were induced by combining the above sna-rg sgRNA transgenes with nos-Cas9 [47]. Putative heterozygous mutant/If males were individually crossed to sna 1 /CyO females to identify mutations lethal over the null sna 1 allele. Lethal alleles were tested by PCR to detect the generation of a deletion and the exact nature of the deletion was confirmed by sequencing using the FwdSeq snaCRISPR and RvsSeq snaCRISPR primers (S1 Table and
In situ hybridization
155BS-upd from Doug Harrison and RE35237 from BDGP cDNAs were used to generate upd and sna RNA probes using the DIG RNA Labeling Kit (Roche). Secondary biotinylated antibody against mouse (1:200, Jackson ImmunoResearch) was used for double detection of RNA and protein.
The sna-rg-R2P2 STATmut enhancer was made by performing PCR mutagenesis in two steps: a first PCR round with the Fwd sna-rg + R2P1 Rvs OP 92E mut primers and with the Fwd SD 92E OP mut + Rvs sna-rg R2P2 92E mut primers (S1 Table). The two resulting PCR amplicons, containing overlapping sequences, were mixed and used as a template to be amplified with the external primers Fwd sna-rg BamHI + Rvs sna-rg R2P2 92E mut BamHI, generating the final snail-R2P2 STATmut enhancer fragment, which was subcloned into the BamHI site of a modified pCaSpeR version that simultaneously expresses H2B-mRFP-P2A-LifeActinGFP.
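The two-step protocol above is an overlap-extension PCR: the two first-round amplicons share the mutated overlap, anneal there, and are then re-amplified with the external primers. The fusion step can be sketched in silico as below; the sequences are hypothetical toys, not the actual primers or enhancer.

```python
def fuse_amplicons(left, right, min_overlap=10):
    """Join two PCR fragments at their longest shared terminal overlap,
    mimicking the annealing step of overlap-extension PCR."""
    for k in range(min(len(left), len(right)), min_overlap - 1, -1):
        if left[-k:] == right[:k]:
            return left + right[k:]
    raise ValueError("no terminal overlap of sufficient length")

# Hypothetical fragments sharing a mutated 12 bp overlap "TTCGGGTGAAAC"
left = "ATGCATGCAT" + "TTCGGGTGAAAC"
right = "TTCGGGTGAAAC" + "GGATCCGGATCC"
fused = fuse_amplicons(left, right, min_overlap=12)
print(fused)  # ATGCATGCATTTCGGGTGAAACGGATCCGGATCC
```

Scanning from the longest possible overlap downwards mirrors the biochemical reality that the products anneal over the full shared region introduced by the mutagenic primers.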
Hox and STAT binding site mutagenesis
The putative class II (TGACAAAT) Hox binding site located at position 222-229 of A1+A2proximal was mutated either to erase the Hox site or to modify its affinity towards another Hox binding site class as described in [40]. Mutations were induced according to [48] using the following oligos: sna mut1 for and sna mut1 rev; sna mut2 for and sna mut2 rev; sna mut3 for and sna mut3 rev; or sna mutNull for and sna mutNull rev.
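This kind of coordinate-based site mutagenesis can be illustrated in silico. The sketch below uses a hypothetical padded sequence and a hypothetical null substitution (the real oligos are those listed in S1 Table); it replaces the 8 bp site at 1-based inclusive positions 222-229 after verifying the expected motif is there.

```python
def mutate_site(seq, start, end, replacement, expected="TGACAAAT"):
    """Replace the 1-based inclusive interval [start, end] of seq,
    checking first that the expected site is really there."""
    i, j = start - 1, end  # convert to a 0-based half-open slice
    if j - i != len(replacement):
        raise ValueError("replacement must have the same length as the site")
    if seq[i:j] != expected:
        raise ValueError(f"expected {expected} at {start}-{end}, found {seq[i:j]}")
    return seq[:i] + replacement + seq[j:]

# Toy enhancer padded so the class II Hox site occupies positions 222-229
enhancer = "A" * 221 + "TGACAAAT" + "C" * 50
null_mut = mutate_site(enhancer, 222, 229, "GGGGGGGG")  # hypothetical null allele
print(null_mut[215:235])  # AAAAAAGGGGGGGGCCCCCC
```

Keeping the replacement the same length as the site preserves the spacing between the Hox site and neighbouring binding sites, which matters when changing site class rather than deleting it.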
|
v3-fos-license
|
2021-09-17T06:17:24.063Z
|
2021-09-15T00:00:00.000
|
237536516
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.nature.com/articles/s41598-021-97955-4.pdf",
"pdf_hash": "7c60d4ba54a29eed6bc58c0f2761b5f600e0a0ca",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2303",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"sha1": "3d12170827a6f370bc865f5e888af8f0bfe1f13e",
"year": 2021
}
|
pes2o/s2orc
|
Individuals with bipolar disorder have a higher level of uric acid than major depressive disorder: a case–control study
At present, no well-established biomarkers have been found to distinguish unipolar depression (UD) and bipolar disorder (BD). This study aimed to provide a clearer comparison of uric acid (UA) levels between BD and major depressive disorder. Peripheral UA of 119 patients with BD in acute stage (AS) and 77 in remission stage (RS), and of 95 patients with UD in AS and 61 in RS, was measured, as was that of 180 healthy controls (HC). UA levels in the BD group were higher than in the UD and HC groups regardless of AS or RS, while differences in UA levels between the UD group and the HC group were not significant. UA levels of the BD-M (bipolar mania/hypomania) subgroup were higher than those of the BD-D (bipolar depression) subgroup, and UA levels of both the BD-M and BD-D subgroups were higher than those of the UD and HC groups. The comparison of the number of participants with hyperuricemia among groups confirmed the above results. There were no significant differences in UA levels between drug-use and drug-free/naïve subgroups. UA could distinguish BD and UD significantly in both acute and remission stage. The study suggests that patients with BD have a higher level of UA than patients with UD, especially in manic episodes. UA may be a potential biomarker to distinguish BD from UD.
Bipolar disorder (BD) is a serious mental disorder with a low diagnosis rate, because the onset of BD is often characterized by a depressive episode, which is similar in presentation to unipolar depression (UD) 1 . Due to misdiagnosis, inappropriate treatment with antidepressants without concomitant mood stabilizers results in switching to mania or hypomania and repeated attacks of depression 2 . A recent study showed that a family history of BD, early age at onset of the first depressive episode (< 25 years), postpartum depressive episodes, rapid onset of depressive episodes, worse response to antidepressants and the presence of psychotic symptoms or atypical depressive symptoms might be the most consistent clinical predictors of BD 3 . However, no laboratory or imaging markers have been identified to allow for a diagnosis of BD or to distinguish between BD and UD.
The purinergic system is a critical neurotransmitter system whose end product is uric acid (UA), and it is involved in the occurrence and development of mental illness 4 . It has been shown that increased levels of UA are associated with accelerated purinergic turnover 5 . UA acts presynaptically and postsynaptically on neurons and on specific receptors in the glial cell membrane, and can affect the activities of other neurotransmitters involved in the pathophysiological process of mood disorders, including dopamine, gamma-aminobutyric acid, glutamate and serotonin 6 .
In the late nineteenth century, researchers found that some patients with gout and hyperuricemia suffered from mood disorders and were relieved after receiving lithium treatment. Since then, the relation between UA and mood disorders has raised the hypothesis of purinergic system dysfunction 7 . Recent studies showed that the highest UA levels were observed in patients with BD compared with other mental disorders and healthy controls (HC) [8][9][10][11] , and that elevated UA levels were associated with impulsivity, excitatory behavior, irritability, hyperthymic temperament and severe manic symptoms 6,12 . The lowest UA levels were observed in patients with UD, suggesting that UA may be a potential biomarker for distinguishing between BD and UD. Besides, patients with BD have an increased risk of gout 13 , while allopurinol, an inhibitor of xanthine oxidase used to treat and prevent gout, can be used as an add-on therapy for patients with BD to reduce manic symptoms 14 . Some studies also implied that, compared with bipolar depression and remission, the highest UA levels were observed in the manic episode, indicating that UA may be a state marker of manic episodes rather than a trait marker [15][16][17] . However, similar results were not detected in all studies. Studies by Salvadore et al. and Gültekin et al. showed that UA levels were higher in patients with BD than in healthy controls but not associated with the severity of mania. Furthermore, some studies showed there were no statistically significant differences in UA levels between BD and UD, nor relative to healthy controls [18][19][20] . Previous studies on UA in patients with BD and UD are limited and conflicting. The present study aimed to conduct a clearer comparison of UA levels between BD and UD.
Materials and methods
Subjects and participants. The study protocol was approved by the Clinical Research Ethics Committee of Shandong Mental Health Center and is compliant with the Code of Ethics of the World Medical Association (Declaration of Helsinki). Informed written consent was obtained from all participants or their legal guardians after a complete and extensive description.
We conducted this study at the Shandong Mental Health Center from May 2018 to May 2019. Inpatients and outpatients aged 18 to 60 years with a Diagnostic and Statistical Manual of Mental Disorders, fifth edition (DSM-5) diagnosis of BD or UD were recruited. Furthermore, healthy individuals with no family history of psychiatric disorders were enrolled in the study as the control group.
Inclusion criteria for patients: (1) meet the bipolar disorder or major depressive disorder criteria based on DSM-5; (2) aged 18-60 years, Han Chinese; (3) understand the research content and provide written informed consent.
The exclusion criteria for all participants were as follows: (1) Combined with organic brain diseases or brain trauma. (2) Hypertension, diabetes, gout or liver, kidney, biliary, and other physical diseases or abnormal renal and liver function. (3) Combined with other mental disorders. (4) Positive in urine pregnancy test or lactating females. (5) Modified electroconvulsive therapy treatment within 4 weeks, or long-acting antipsychotics treatment within 6 months; (6) Taking antioxidants or neurotrophic drugs within 12 weeks before and during enrollment.
All participants were interviewed by a psychiatric postgraduate (Zhe Lu), and the diagnosis was confirmed by at least two experienced psychiatrists based on DSM-5.
Evaluation instruments and measurement. Demographic and clinical information of participants was collected using a self-designed case report form, including age, sex, history of smoking, family history of psychiatric disorders, number of mood episodes, duration of disease, and presence of psychotic symptoms.
Serum UA levels and lipid indices (total cholesterol, CHOL; triglyceride, TG; high-density lipoprotein, HDL; low-density lipoprotein, LDL) were tested as part of routine blood checks during inpatient stays and the regular return visits of outpatients, while serum UA levels and lipid indices of healthy individuals in this study were tested after enrollment. The assay was performed as follows: 5 mL of fasting venous blood was drawn from all participants. According to the manufacturer's instructions, serum levels of UA were detected by a Roche Cobas C702 automatic biochemical analyzer (Swiss Roche Diagnostics Co., Ltd.). In Shandong Mental Health Center, the normal range of serum UA values has been standardized as 208-428 µmol/L in males and 155-357 µmol/L in females.
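Using the sex-specific reference intervals above (208-428 µmol/L for males, 155-357 µmol/L for females), classifying a serum UA value can be sketched as follows. The function name and return labels are illustrative, not taken from the study.

```python
# Sex-specific serum UA reference ranges (µmol/L), as standardized
# at Shandong Mental Health Center in the study
UA_RANGES = {"male": (208, 428), "female": (155, 357)}

def classify_ua(ua_umol_l, sex):
    """Classify a serum uric acid value against the sex-specific range."""
    low, high = UA_RANGES[sex]
    if ua_umol_l > high:
        return "hyperuricemia"
    if ua_umol_l < low:
        return "below reference range"
    return "normal"

print(classify_ua(450, "male"))    # hyperuricemia
print(classify_ua(300, "female"))  # normal
```

Applying such a cutoff per participant is how counts of participants with hyperuricemia, as compared across groups in the Results, would be derived.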
Statistical analysis. All data were analyzed using IBM SPSS Statistics for Windows, Version 26 (IBM Corp., Armonk, NY, USA). All measurement data were inspected for normality by the Kolmogorov-Smirnov test. Kruskal-Wallis one-way analysis of variance (ANOVA) was performed to compare the differences in age, onset age, number of mood episodes, duration of disease, LDL, HDL and TG among the 3 groups. One-way ANOVA was used to compare CHOL among the 3 groups. The Chi-square test or Fisher's exact test was conducted to analyze sex, history of smoking, positive family history and presence of psychotic symptoms. Differences in UA were tested by analysis of covariance (ANCOVA), with age, sex, age of onset, number of mood episodes, duration of disease, presence of psychotic symptoms and lipid indices as covariates to control confounding factors between the BD and UD groups, and with age, sex and lipid indices as covariates among the 3 groups. The Bonferroni test was used as the post-hoc multiple comparison to identify differences among the 3 groups. Receiver operating characteristic (ROC) analysis was applied to assess the potential of UA as a biomarker.
Results
Demographic and clinical data. The study included 119 BD patients in acute stage (AS) and 77 in remission stage (RS), 95 UD patients in AS and 61 in RS, as well as 180 subjects in the HC group. Differences in sex among the 3 groups were not significant in either AS or RS. Age in the BD group was lower than in the UD (P < 0.001) and HC (P < 0.001) groups in acute stage, while the difference between the UD and HC groups was not significant; in remission stage, there were no significant differences between the BD and UD groups, or between the HC and UD groups, while the age of the BD group was lower than that of the HC group (P = 0.001). Duration of illness and number of mood episodes in the BD group were higher than in the UD group in both AS and RS. The differences in smoking history and family history between the BD group and the UD group were not significant in either AS or RS. More patients in the BD group had psychotic symptoms than in the UD group. HDL in the BD group was lower than in the HC group (P = 0.009, after Bonferroni test) in acute stage, while the differences between the UD and HC groups were not significant; in remission stage, HDL of the UD group was higher than that of the BD and HC groups. LDL and CHOL of the BD and UD groups were lower than those of the HC group, while the differences between the UD and BD groups were not significant. There were no significant differences in TG among the 3 groups in acute stage, while TG of the HC group was lower than that of the BD and UD groups in remission stage (Table 1).
Differences in UA levels among BD, UD, and HC groups in acute stage. There were significant differences in UA levels and in the number of participants with hyperuricemia among the three groups. Post-hoc analysis (Bonferroni-adjusted) showed that UA levels and the number of participants with hyperuricemia in the BD group were higher than in the UD and HC groups, while the differences between the UD and HC groups were not significant (Table 2). Afterward, the BD group was divided into a bipolar mania/hypomania (BD-M, n = 64) subgroup and a bipolar depression (BD-D, n = 55) subgroup to be compared with the UD group. There were significant differences among the 3 groups. The post-hoc test showed that UA levels and the number of participants with hyperuricemia in the BD-M subgroup were higher than in the BD-D subgroup (P = 0.002) and the UD group (P < 0.001), and UA levels of the BD-D subgroup were higher than those of the UD group (P = 0.034), while the difference in the number of participants with hyperuricemia between the BD-D subgroup and the UD group was not significant (Table 2, Fig. 1).
Differences in UA levels among BD, UD and HC groups in remission stage. Significant differences in UA levels were detected among the three groups. The post-hoc test showed that UA levels and the number of participants with hyperuricemia in the BD group were higher than in the UD and HC groups, while the differences between the UD and HC groups were not significant (Table 3).
Table 2. UA levels of participants in AS (mean ± SD, μmol/L). UA, uric acid; HPUA, hyperuricemia; BD-M, mania/hypomania; BD-D, bipolar depression; UD, unipolar depression; HC, healthy control; SD, standard deviation. 1 Age, sex, history of smoking, family history, age of onset, number of mood episodes, duration of disease, presence of psychotic symptoms and lipid indices as covariates between BD and UD groups.

Effects of treatment on UA levels. Drug-use subgroup vs. drug-naïve/free subgroup. Patients in acute stage were divided into a drug-use subgroup and a drug-naïve/free subgroup (unmedicated first-episode mania or depression, or no treatment used within eight weeks). There were no significant differences in UA between the drug-use and drug-free/naïve subgroups in either the BD group or the UD group (Table 4).
BD-M vs. BD-D vs. UD in the drug-use subgroup.
In the drug-use subgroup, the differences among the 3 groups were significant (F = 8.570, P < 0.001). The post-hoc test showed that there were no significant differences in UA between the BD-M and BD-D subgroups (P = 0.227), or between the BD-D and UD groups (P = 0.080), while UA levels of the BD-M group were higher than those of the UD group (P < 0.001).
BD-M vs. BD-D vs. UD in drug-naïve/free subgroup.
In the drug-naïve/free subgroup, the differences among the 3 groups were significant (F = 10.267, P < 0.001); there were no significant differences in UA levels between the BD-D and UD groups (P = 0.217), but UA levels of the BD-M group were higher than those of the UD (P < 0.001) and BD-D groups (P = 0.027).

Figure 1. UA levels of participants in acute stage. UA of the BD group was higher than that of the UD and HC groups, while the difference between the UD group and the HC group was not significant. UA of the BD-M subgroup was higher than that of the BD-D subgroup and the UD group, and UA of the BD-D subgroup was higher than that of the UD group. BD, bipolar disorder; BD-M, mania/hypomania; BD-D, bipolar depression; UD, unipolar depression; HC, healthy control; UA, uric acid. *P < 0.050; **P < 0.010; ***P < 0.001.

Table 4. UA levels of drug-use and drug-naïve/free subgroups (mean ± SD, μmol/L). Age, sex, history of smoking, family history, age of onset, number of mood episodes, duration of disease, presence of psychotic symptoms and lipid indices as covariates. UA, uric acid; BD-M, mania/hypomania; BD-D, bipolar depression; UD, unipolar depression; SD, standard deviation.

ROC analysis of UA as a biomarker to distinguish BD and UD.

UA level could significantly distinguish the BD group and the UD group (area under the curve: all subjects, 0.731; male subjects, 0.752; female subjects, 0.753) in acute stage, as well as the BD-D group and the UD group (area under the curve: all subjects, 0.691; male subjects, 0.660; female subjects, 0.703). In remission stage, UA could also significantly distinguish the BD group and the UD group (area under the curve: all subjects, 0.705; male subjects, 0.675; female subjects, 0.811) (Fig. 2).
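The reported areas under the curve have a direct probabilistic reading: AUC is the probability that a randomly chosen BD patient has a higher UA level than a randomly chosen UD patient (ties counting half). A minimal sketch with toy UA values, not the study data, computes this from pairwise comparisons:

```python
def auc_from_groups(positives, negatives):
    """AUC = P(positive > negative) + 0.5 * P(tie), computed over all pairs.
    Equivalent to the Mann-Whitney U statistic divided by n_pos * n_neg."""
    wins = sum((p > n) + 0.5 * (p == n) for p in positives for n in negatives)
    return wins / (len(positives) * len(negatives))

# Toy serum UA values (µmol/L): BD as "positives", UD as "negatives"
bd_ua = [420, 385, 450, 310]
ud_ua = [300, 355, 290]
print(round(auc_from_groups(bd_ua, ud_ua), 3))  # 0.917
```

An AUC around 0.7-0.75, as reported here, therefore means that in roughly 7 to 7.5 of every 10 random BD/UD pairs the BD patient has the higher UA value.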
Discussion
In this study, UA levels in the BD group were higher than in the UD and HC groups, whether in acute or remission stage. Nevertheless, a recent study indicated that UA levels in UD were lower than in HC; a possible reason is the heterogeneity of subjects in the UD group, because diagnosis is currently based only on clinical symptoms and some patients with BD initially present with depression. This is further supported by a recent study showing that higher UA levels might be a predictor of BD 21 . A previous study showed that sex is an important factor that can affect UA levels 19 , but when we analyzed the sexes separately we obtained similar results.
The purinergic system is involved in neurodevelopment and the pathophysiological processes of psychotic disorders, such as neurocyte genesis and differentiation and neuroglial inflammation [22][23][24][25]. Purinergic receptors can be divided into P1 and P2 receptors according to their biochemical and pharmacological properties 26 . P1 receptors can regulate synaptic plasticity and the release of neurotransmitters 24,25,27,28 , while P2 receptors are closely related to embryonic neural development 29 . Dysfunction of the purinergic system from any cause may lead to psychotic disorders. UA, as the end product of the purinergic system, is connected with several physiological functions, including sleep, motor function, cognitive function, appetite and social activities, as well as with the pathophysiology of mood disorders 6,12 . Additionally, UA is also related to specific traits, including drive and disinhibition, which are very common in BD. It has also been noted that peripheral UA levels are consistent with those in the central nervous system 30,31 .
Beyond that, UA is also a selective antioxidant whose level is considered a marker of oxidative stress, and the results of this study indicate that patients with BD might have a higher oxidative stress level. Moreover, in this study, we divided the acute patients with BD into BD-M and BD-D subgroups, with results showing that UA levels of both subgroups were higher than those of the UD group, and UA levels of the BD-M subgroup were higher than those of the BD-D subgroup. However, there were no significant differences between the BD-D and UD groups in the number of patients with hyperuricemia. This suggests that patients in a manic episode might have a higher level of oxidative stress. To detect the effects of treatment on UA levels, we divided the acute patients into drug-use and drug-naïve/free subgroups. The differences in UA levels between the 2 subgroups were not significant, which suggests that UA might be a stable biomarker to distinguish BD and UD.
As shown by the comparison of demographic data in Table 1, the difference in age among the three groups was significant; moreover, we performed a partial correlation analysis controlling for diagnosis and sex, which showed that the association between age and UA was significantly negative. A previous study also showed that age was negatively correlated with UA level 32 . To eliminate the influence of confounding factors, we set age as a covariate when conducting the comparisons.
We drew a figure showing the distribution of the number of patients across UA intervals (every 50 μmol/L of UA), which clearly showed that the UD group included a higher percentage of patients with high UA levels than the BD-D subgroup (Fig. 3). To assess the possibility of using UA as a biomarker clinically, we conducted a ROC analysis; the results showed that UA could distinguish BD and UD significantly in both acute and remission stage, indicating that UA might be a potential biomarker to distinguish BD from UD.
There are some limitations to this study. Firstly, diet is a factor affecting UA levels, but this study did not strictly control diet. Secondly, mediation analysis has indicated that metabolic syndrome, triglycerides and abdominal perimeter can affect UA levels, although they cannot fully explain the correlation between UA and BD 8 ; we collected the lipid indices and controlled for these confounders, but biochemical indicators such as hepatorenal function and indexes of glycometabolism were not collected, which may affect UA. Thirdly, we did not evaluate the severity of the disease because we aimed to compare differences among mood states, and it was difficult to add disease severity as a covariate in the comparison. A previous study showed that UA levels were positively correlated with the severity of mania 9 , but recent studies indicated no significant correlation between UA and the severity of mania 18,33 , calling for more strictly designed prospective studies to explore the relation between UA and disease severity. Finally, although we divided acute patients into drug-use and drug-naïve/free subgroups, the effects of different kinds of mood stabilizers on UA levels are diverse; for example, lithium 34 and carbamazepine may decrease UA levels of BD patients, while valproates seemingly have the opposite effect 35 , and the effects of antidepressants, physiotherapy and psychotherapy on UA levels were not discussed.
In conclusion, this study observed that UA levels in BD were higher than in UD and HC, especially in manic episodes, which provides further evidence for the relation between the purinergic system and the pathogenesis of BD. Moreover, UA may be a potential biomarker to distinguish BD from UD. In the future, a strictly designed, larger-sample prospective study is required to confirm this conclusion.
Data availability
The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.
|
v3-fos-license
|
2020-04-23T09:07:58.111Z
|
2020-04-23T00:00:00.000
|
216076740
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s11239-020-02077-9.pdf",
"pdf_hash": "70e9266961155251959cf804b4e6c7c75fce4507",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2306",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "4cfc55efaba242c9bf5583ce4e7e5871e0ca9bf0",
"year": 2020
}
|
pes2o/s2orc
|
Associations between model-predicted rivaroxaban exposure and patient characteristics and efficacy and safety outcomes in patients with non-valvular atrial fibrillation
Rivaroxaban exposure and patient characteristics may affect the rivaroxaban benefit–risk balance. This study aimed to quantify associations between model-predicted rivaroxaban exposure and patient characteristics and efficacy and safety outcomes in patients with non-valvular atrial fibrillation (NVAF), using data from the phase 3 ROCKET AF trial (NCT00403767). In ROCKET AF, 14,264 patients with NVAF were randomized to rivaroxaban (20 mg once daily [OD], or 15 mg OD if creatinine clearance was 30–49 mL/min) or dose-adjusted warfarin (median follow-up: 707 days); rivaroxaban plasma concentration was measured in a subset of 161 patients. In this post hoc exposure–response analysis, a multivariate Cox model was used to correlate individual predicted rivaroxaban exposures and patient characteristics with time-to-event efficacy and safety outcomes in 7061 and 7111 patients, respectively. There was no significant association between model-predicted rivaroxaban trough plasma concentration (Ctrough) and efficacy outcomes. Creatinine clearance and history of stroke were significantly associated with efficacy outcomes. Ctrough was significantly associated with the composite of major or non-major clinically relevant (NMCR) bleeding (hazard ratio [95th percentile vs. median]: 1.26 [95% confidence interval 1.13–1.40]) but not with major bleeding alone. The exposure–response relationship for major or NMCR bleeding was shallow with no clear threshold for an acceleration in risk. History of gastrointestinal bleeding had a greater influence on safety outcomes than Ctrough. These results support fixed rivaroxaban 15 mg and 20 mg OD dosages in NVAF. Therapeutic drug monitoring is unlikely to offer clinical benefits in this indication beyond evaluation of patient characteristics. Electronic supplementary material The online version of this article (10.1007/s11239-020-02077-9) contains supplementary material, which is available to authorized users.
Introduction
Rivaroxaban, an oral direct factor Xa inhibitor, is approved for the prevention of stroke and systemic embolism (SE) in adults with non-valvular atrial fibrillation (NVAF) with one or more risk factors (e.g., prior stroke) [1], based on the phase 3, randomized, controlled trial ROCKET AF (NCT00403767) [2]. In ROCKET AF, rivaroxaban (20 mg once daily [OD], or 15 mg OD if creatinine clearance [CrCl] was 30-49 mL/min) was non-inferior to dose-adjusted warfarin for the prevention of stroke or SE, and similar with respect to the risk of major bleeding or a composite of major or non-major clinically relevant (NMCR) bleeding.
Advanced age and impaired renal function are associated with increased rivaroxaban exposure [1] and are also independent risk factors for NVAF-related thromboembolism and for major bleeding events in anticoagulant-treated patients [3][4][5][6]. It has been proposed that therapeutic drug monitoring (i.e., plasma concentration-based dose adjustment) may help guide anticoagulant dosing for individual patients. This post hoc exposure-response analysis aimed to explore this possibility and to quantify the associations between predicted rivaroxaban exposures, patient characteristics and clinical outcomes in patients with NVAF using data from ROCKET AF.
Study design
Full details of the methodology and ethical conduct of the ROCKET AF study have been reported previously [2,7]. Briefly, 14,264 patients with NVAF were randomized to receive rivaroxaban (20 mg OD, or 15 mg OD in patients with a CrCl of 30-49 mL/min) or dose-adjusted warfarin (median follow-up: 707 days; median duration of treatment: 590 days) (Table 1) [2,7].
The efficacy outcomes evaluated in this exposure-response analysis were a composite of ischemic stroke or non-central nervous system (non-CNS) SE, and a composite of ischemic stroke, non-CNS SE or all-cause death. Major bleeding events and the composite endpoint of major or NMCR bleeding events were evaluated as safety outcomes (Table 1).
Patient characteristics
Patient characteristics for potential inclusion in the exposure-response evaluation were identified a priori based on a review of the literature [8][9][10][11] and experiences in ROCKET AF [2,12,13]. The variables were categorical in nature or grouped categorically to aid clinical interpretation.
Rivaroxaban exposure predictions
An integrated population pharmacokinetics (popPK) model was developed as previously described [14]. The model used pooled rivaroxaban pharmacokinetic data from a subset of 161 patients for whom rivaroxaban exposure was measured in ROCKET AF, and from patients in six phase 2 trials of rivaroxaban in which a wide range of rivaroxaban doses were evaluated [14]. Individual steady-state rivaroxaban exposure metrics (including area under the plasma concentration-time curve from time 0 to 24 h [AUC 0-24 ], maximum plasma concentration [C max ] and trough plasma concentration [C trough ]) for each patient were predicted based on individual patient characteristics (age, weight, renal function measured as rate of CrCl, and sex) and rivaroxaban dose.
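The popPK model itself is not reproduced in the paper, but the mechanics of deriving steady-state exposure metrics (AUC0-24, Cmax, Ctrough) from dose and patient-dependent clearance can be sketched with a standard one-compartment model. All parameter values and the covariate function below are hypothetical illustrations, not those of the published model:

```python
import math

def steady_state_metrics(dose_mg, cl_l_h, v_l, tau_h=24.0, f=1.0):
    """Steady-state exposure metrics for a one-compartment bolus model
    with once-daily dosing (illustrative only, not the published popPK model)."""
    k = cl_l_h / v_l                        # elimination rate constant (1/h)
    auc = f * dose_mg / cl_l_h              # AUC over one dosing interval (mg*h/L)
    cmax = (f * dose_mg / v_l) / (1.0 - math.exp(-k * tau_h))
    ctrough = cmax * math.exp(-k * tau_h)   # concentration just before the next dose
    return auc, cmax, ctrough

def clearance(crcl_ml_min, age_yr, cl_ref=6.0):
    """Hypothetical covariate model: clearance scales with renal function and age."""
    return cl_ref * (crcl_ml_min / 90.0) ** 0.5 * (age_yr / 65.0) ** -0.2

# Patient with CrCl 40 mL/min and age 78 years on the reduced 15 mg OD dose
cl = clearance(40.0, 78.0)
auc, cmax, ctrough = steady_state_metrics(15.0, cl, v_l=50.0)
```

With these made-up covariate exponents, the impaired-renal-function patient has lower clearance than the reference patient and therefore higher predicted exposure, which is the qualitative behavior the dose-reduction rule targets.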
Using patient characteristics alone to predict individual exposure might not appropriately reflect the variability expected. Therefore, prothrombin time (PT) measurements, collected from ROCKET AF participants at weeks 12 and 24, were used to derive rivaroxaban AUC 0-24 , C max and C trough , based on the linear relationship between plasma concentration and PT determined using a thromboplastin reagent sensitive to the anticoagulant effects of rivaroxaban [15]. This adjustment enhanced precision in the exposure predictions and was applied to 5681 patients in ROCKET AF, including the 161 patients with available rivaroxaban exposure measurements [15].
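The back-calculation of exposure from PT can be illustrated by fitting a straight line to calibration data and inverting it. The calibration points below are made up for illustration (the paper's reagent calibration is not reproduced here):

```python
# Hypothetical calibration: PT (s) measured at known rivaroxaban concentrations
calib_conc = [0.0, 50.0, 100.0, 200.0, 400.0]   # µg/L (made up)
calib_pt   = [12.0, 14.5, 17.0, 22.0, 32.0]     # seconds (made up, exactly linear)

# Ordinary least-squares fit of PT = intercept + slope * concentration
n = len(calib_conc)
mx, my = sum(calib_conc) / n, sum(calib_pt) / n
slope = (sum((cx - mx) * (pt - my) for cx, pt in zip(calib_conc, calib_pt))
         / sum((cx - mx) ** 2 for cx in calib_conc))
intercept = my - slope * mx

def conc_from_pt(pt_seconds):
    """Invert the fitted line to estimate plasma concentration from PT."""
    return (pt_seconds - intercept) / slope

estimated_ctrough = conc_from_pt(15.0)   # → 60.0 µg/L with this calibration
```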
Exposure-efficacy analyses included patients who received at least one dose of rivaroxaban, were followed for events while receiving rivaroxaban or within 2 days after discontinuation, and had available efficacy outcome data. Exposure-safety analyses included patients who received at least one dose of rivaroxaban and were followed for events while receiving rivaroxaban or within 2 days after discontinuation. Measures of exposure in these analyses were predicted based on the popPK model, patient characteristics and dose, with or without PT adjustment for over 7000 patients.
Regression analyses
Relationships between rivaroxaban exposure metrics, patient characteristics and the efficacy and safety outcomes were assessed using Cox proportional regression analysis, as described in the supplemental material. The hazard ratios (HRs) generated for the variables using the final models for each outcome were displayed in forest plots. The reference category was the category most commonly observed for the variable, except for geographic region for which Western Europe was set as the reference. The final models were used to simulate the probability of efficacy or safety events at 1 year versus predicted exposure in a typical patient population (i.e., with individual patient characteristics set to reference values).
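As a concrete illustration of the Cox machinery, the minimal sketch below estimates the log hazard ratio for a single binary covariate by directly minimizing the Breslow negative log partial likelihood on toy data. The published analysis used a full multivariate model; this is only the one-covariate core of the method:

```python
import math

def cox_nll(beta, times, events, x):
    """Breslow negative log partial likelihood for a single covariate."""
    nll = 0.0
    for i in range(len(times)):
        if not events[i]:
            continue
        # Risk set: everyone still under observation at this event time
        denom = sum(math.exp(beta * x[j])
                    for j in range(len(times)) if times[j] >= times[i])
        nll -= beta * x[i] - math.log(denom)
    return nll

def fit_cox(times, events, x, lo=-4.0, hi=4.0, iters=100):
    """Ternary search on the convex negative log partial likelihood."""
    for _ in range(iters):
        m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
        if cox_nll(m1, times, events, x) < cox_nll(m2, times, events, x):
            hi = m2
        else:
            lo = m1
    return (lo + hi) / 2

# Toy data: exposed subjects (x = 1) tend to have earlier events
times  = [1, 2, 3, 4, 5, 6, 7, 8]
events = [1, 1, 1, 1, 1, 1, 1, 1]
x      = [1, 0, 1, 0, 1, 0, 1, 0]
beta = fit_cox(times, events, x)
hazard_ratio = math.exp(beta)   # > 1: exposure increases the event hazard
```

In practice one would use an established survival package rather than this hand-rolled optimizer; the point is that the HRs in the forest plots are exponentiated regression coefficients of exactly this kind.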
Patient characteristics
Supplemental Table 1 shows the characteristics of patients selected for evaluation in the efficacy (n = 7061) and safety (n = 7111) populations. Approximately 38% of patients were > 75 years of age, 40% were female, 81% had persistent atrial fibrillation (AF) and 43% had a CHADS2 score of 3. Baseline antiplatelet and non-steroidal anti-inflammatory drug (NSAID) use and prior vitamin K antagonist use were reported in 40%, 4% and 62% of patients, respectively. Histories of stroke, transient ischemic attack and SE were present in 34%, 22% and 4% of patients, respectively. Baseline CrCl was < 50 mL/min in 21% of patients.

[Table 1. Description of ROCKET AF and outcomes and event rates for the exposure-response analyses. Abbreviations: CNS, central nervous system; CrCl, creatinine clearance; ER, exposure-response; INR, international normalized ratio; NMCR, non-major clinically relevant; NVAF, non-valvular atrial fibrillation; OD, once daily; SE, systemic embolism. (a) Major bleeding was defined, in accordance with International Society on Thrombosis and Haemostasis criteria, as the following: overt bleeding associated with a decrease in hemoglobin level of ≥ 2 g/dL or leading to a transfusion of ≥ 2 units of packed red blood cells or whole blood; bleeding in a critical site; or bleeding contributing to death [24]. (b) NMCR bleeding was defined as overt bleeding that did not meet the criteria for major bleeding but that was associated with medical intervention, unscheduled contact with a physician, interruption or discontinuation of study drug, or discomfort or impairment of activities of daily life [2]. Table body not recovered in extraction.]

Rivaroxaban exposure predictions and event rates
Predicted C trough showed larger between-patient variability than predicted AUC 0-24 or C max (Supplemental Table 2). The exposure predictions were all highly correlated (> 0.85) within a given individual. The observed event rates for efficacy and safety outcomes are summarized in Table 1. C trough was the exposure metric most strongly associated with the likelihood of both efficacy and safety events, as evident from the lowest Akaike information criterion (AIC) value, and was selected for investigation for both analyses, as described in the supplemental material.
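Selecting an exposure metric by AIC reduces to comparing 2k - 2 ln L across candidate models and keeping the minimum. A sketch with hypothetical log-likelihood values (the actual fitted likelihoods are not reported in the text):

```python
def aic(log_likelihood, n_params):
    """Akaike information criterion: 2k - 2 ln L, lower is better."""
    return 2 * n_params - 2 * log_likelihood

# Hypothetical maximized log-likelihoods for three candidate exposure metrics
fits = {"AUC0-24": -1204.3, "Cmax": -1203.9, "Ctrough": -1201.1}
aics = {name: aic(ll, n_params=5) for name, ll in fits.items()}
best_metric = min(aics, key=aics.get)   # → "Ctrough"
```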
Regression analyses
The results of the final exposure-response models are shown in Table 2 and Supplemental Table 3.
Exposure-efficacy analysis
There was no apparent trend between C trough quartiles and the composite efficacy outcomes (Fig. 1a, b).
There was also no significant association between C trough and the outcome in the final model for ischemic stroke or non-CNS SE; the HRs associated with C trough in the 5th and 95th percentiles versus the median were 1.02 (95% confidence interval [CI] 0.89-1.18) and 0.94 (95% CI 0.65-1.35), respectively (Fig. 2a). Of the variables included in the model, CrCl and history of stroke showed a significant association with the outcome; there was no significant association with age (Fig. 2a, Supplemental Table 3).
In the final model for ischemic stroke, non-CNS SE or all-cause death, there were no significant associations between either C trough or age and the outcome; significant associations were evident for CrCl, geographic region and histories of stroke, myocardial infarction (MI) and heart failure (Fig. 2b, Supplemental Table 3). Histories of stroke and MI had an impact similar to or greater than CrCl, with HRs of 1.56 (95% CI 1.25-1.94) and 1.84 (95% CI 1.44-2.35), respectively.
There was a small decrease in expected HR for ischemic stroke or non-CNS SE with increasing predicted C trough values (Fig. 3a). The association between the HR for ischemic stroke, non-CNS SE or all-cause death and predicted C trough values was relatively flat (Fig. 3a).
Exposure-safety analysis
The cumulative event rates for major bleeding (Fig. 1c) and for the composite of major or NMCR bleeding (Fig. 1d) increased with increasing rivaroxaban C trough .
In the final model for major bleeding, the HRs associated with C trough in the 5th and 95th percentiles (vs. the median) were 0.92 (95% CI 0.85-0.99) and 1.25 (95% CI 1.03-1.51), respectively; the association between C trough and major bleeding risk was not statistically significant (Fig. 2c). Age (> 75 years vs. 65-75 years) was significantly associated with major bleeding (Fig. 2c, Supplemental Table 3). Patients in North America versus Western Europe had a higher risk of major bleeding, and the risk of major bleeding was higher in patients with versus without baseline use of NSAIDs or aspirin, a history of gastrointestinal (GI) bleeding, and low baseline hemoglobin. CrCl had no significant impact on major bleeding risk.
For major or NMCR bleeding, the HRs associated with C trough in the 5th and 95th percentiles (vs. the median) in the final model were statistically significant; the HR for the 95th percentile versus the median was 1.26 (95% CI 1.13-1.40) (Supplemental Table 3). Overall, history of GI bleeding had the greatest impact on this outcome. Patients aged > 75 years were more likely to experience major or NMCR bleeding than those aged 65-75 years, as were patients with versus without low baseline hemoglobin, antiplatelet therapy or a history of vascular disease. The magnitude of the impact of these covariates on the risk of major or NMCR bleeding was similar to or greater than that of rivaroxaban C trough. For major bleeding, there was a small increase in HR with increasing C trough values, which appeared to plateau at ~ 115 µg/L (Fig. 3b). For major or NMCR bleeding, there was a small increase in HR over the range of C trough values (Fig. 3b).
Expected probability of efficacy or safety events at 1 year of treatment with rivaroxaban
An increase in C trough from the median to the 95th percentile was predicted to increase the probability of having a major bleeding event from ~ 2.1 to ~ 2.8% (p = 0.0211) and the probability of having a major or NMCR bleeding event from ~ 12.2 to ~ 15.5% (p = 0.00002) (Supplemental Fig. 1). Having a history of GI bleeding shifted the entire exposure-response curve for major bleeding upwards and appeared to have a greater impact on the probability of major bleeding at 1 year of treatment than any of the predicted changes in rivaroxaban exposure. An increase in C trough from the median to the 95th percentile was predicted to increase the probability of having a major bleeding event from ~ 2.1 to ~ 2.8% in patients without a history of GI bleeding, and from ~ 5.1 to ~ 6.9% in patients with a history of GI bleeding.
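Under proportional hazards, the shift in the 1-year event probability implied by a given HR can be approximated from the baseline probability alone, since S(t|x) = S0(t)^HR. Using the ~2.1% baseline and the major-bleeding HR of 1.25 quoted in the text:

```python
def event_probability(p_baseline, hazard_ratio):
    """1-year event probability under proportional hazards:
    S(t|x) = S0(t) ** HR, so p = 1 - (1 - p0) ** HR."""
    return 1.0 - (1.0 - p_baseline) ** hazard_ratio

p_median = 0.021   # ~2.1% major bleeding at 1 year at median Ctrough (from the text)
hr_95th = 1.25     # HR at the 95th percentile of Ctrough vs. the median (from the text)
p_95th = event_probability(p_median, hr_95th)
```

This back-of-envelope calculation gives roughly 2.6%, close to but below the reported ~2.8%, because the published estimate comes from the full covariate model rather than this two-number approximation.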
Discussion
This analysis evaluated rivaroxaban exposure-response relationships in over 7000 patients with NVAF to assess the potential of monitoring drug levels and evaluating patient characteristics in optimizing the benefit-risk profile of treatment.

[Figure legend: Red lines represent means and shaded areas represent 95% confidence intervals. Black squares represent median C trough and horizontal error bars represent the range between the 5th and 95th percentiles of C trough. Vertical dashed lines label the 5th and 95th percentiles of C trough. Abbreviations: CNS, central nervous system; C trough, trough plasma concentration; HR, hazard ratio; NMCR, non-major clinically relevant; SE, systemic embolism.]
Warfarin, which requires monitoring, has a clear delineation between international normalized ratio values that are associated with maximum efficacy and those that are associated with increased bleeding risk (i.e. a narrow therapeutic window) [16].
In this analysis, rivaroxaban showed no clear lower limit of exposure that resulted in loss of efficacy, indicating a wide therapeutic window for efficacy in the NVAF indication. Several patient characteristics were significantly associated with the composite efficacy outcomes, but the CHADS 2 score showed no significant association. A likely explanation is that history of stroke (which showed significant associations with both composite efficacy outcomes) was included as an independent risk factor in the model. Impaired renal function (CrCl < 50 mL/min) showed significant associations with both composite efficacy outcomes.
Increasing predicted rivaroxaban C trough from the median to the 95th percentile was associated with a significant increase in the risk of major or NMCR bleeding, with a HR of 1.26. The HR for major bleeding was similar (1.25) but the association between C trough and the risk of major bleeding was not statistically significant. This may reflect the smaller number of major bleeding events compared with the composite of major or NMCR bleeding events (395 vs. 1475). Thus, the significance of the association between rivaroxaban exposure and major bleeding and the extent to which this contributes to the association between exposure and the composite of major or NMCR bleeding remains uncertain. However, the present analysis does show that the exposure-response relationships for both major bleeding and the composite of major or NMCR bleeding were shallow, with a gradual increase in bleeding risk across a wide range of predicted exposures and no clear threshold of exposure above which the increase in bleeding risk accelerated. The expected increase in the HR of the composite of major or NMCR bleeding, and possibly major bleeding, is therefore small relative to the change in rivaroxaban plasma concentration, which means that any potential gain from measuring rivaroxaban levels and forcing a change in dose would be limited. The CIs around the 1-year estimates of bleeding event rates were wide for any given rivaroxaban concentration and overlapped within the 5th and 95th percentiles of exposure. Taken together, these results suggest that therapeutic drug monitoring would be of limited benefit in patients with NVAF receiving rivaroxaban under the prescribed regimen.
Our analysis identified age, NSAID or aspirin use, history of GI bleeding and low baseline hemoglobin, all components of the HAS-BLED and other bleeding scores [4,8,11], as statistically significant risk factors for major bleeding. These patient characteristics therefore appeared to be more important determinants of risk than rivaroxaban exposure. The increased risk of major bleeding in North American patients compared with those from Western Europe observed in this analysis may be due to ascertainment bias or other confounding factors, such as comorbidities [12]. For major or NMCR bleeding, patient characteristics such as history of GI bleeding and age were statistically significant risk factors, with an impact similar to or greater than that of rivaroxaban exposure. For example, increasing rivaroxaban C trough from the median to the 95th percentile (from 52.55 to 124.13 µg/L) increased the risk of the composite of major or NMCR bleeding by 26%, whereas having a history of GI bleeding increased this risk by 47%.
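If the exposure effect is treated as log-linear in Ctrough (an assumption; the published model's fitted shape is a shallow, possibly nonlinear curve), the two numbers quoted above pin down a per-unit slope that can be reused for other concentration shifts:

```python
import math

c_median, c_95th = 52.55, 124.13   # µg/L, quoted above
hr_quoted = 1.26                    # major or NMCR bleeding, 95th percentile vs. median

# Implied per-unit slope on the log-hazard scale (log-linear assumption)
beta_per_ug_l = math.log(hr_quoted) / (c_95th - c_median)

# Reuse it for another shift, e.g. a 20 µg/L increase in Ctrough
hr_plus_20 = math.exp(beta_per_ug_l * 20.0)   # ≈ 1.07
```

The small slope (about 0.0032 per µg/L) makes the paper's point numerically: large swings in concentration translate into modest changes in bleeding hazard.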
Similar findings regarding the effects of exposure and patient characteristics on bleeding risk have been reported for edoxaban, another direct factor Xa inhibitor. In separate analyses of phase 2 and phase 3 trial data, there were significant increases in bleeding risk with increasing edoxaban exposure in patients with NVAF [17,18]. In contrast to the present results for rivaroxaban, the relationship between edoxaban exposure and bleeding risk was steep over the exposure range [18]. However, edoxaban dose reductions based on patient characteristics in the phase 3 trial were associated with preservation of efficacy and further reductions in the incidence of major bleeding compared with warfarin (dose reduction vs. no dose reduction; p interaction ≤ 0.023), leading the authors to conclude that the data validate the strategy of tailoring the dose based on clinical factors alone and that such a strategy obviates the need for drug monitoring [19]. The significant variability in exposures in both the edoxaban and dabigatran trials, and thus the potential difficulty of selecting threshold drug concentrations to guide dose changes, was also highlighted [18,19].
The dosage of rivaroxaban in ROCKET AF was tailored based on renal function (20 mg OD reduced to 15 mg OD in patients with a CrCl of 30-49 mL/min) and these dosages were subsequently approved for the NVAF indication [2,20]. Renal function is also a key consideration in decision-making regarding peri-procedural management of rivaroxaban therapy [21]. While monitoring of coagulation and plasma drug concentrations has been proposed in some patients for guiding pre-and peri-procedural management of direct oral anticoagulants [22], expert consensus from the American College of Cardiology (ACC) focuses on the importance of patient and procedural risk factors. The ACC recommends that patient risk factors for bleeding followed by bleeding risk of the procedure be considered for the decision on whether or not to interrupt therapy, and that the specific drug and level of renal function then be used to guide the timing and duration of interruption to therapy [21]. Results from the present analysis support the central role of patient characteristics in decision-making processes regarding bleeding risk with rivaroxaban and the limited likely value of adding drug monitoring into management pathways.
Limitations of this analysis include the paucity of direct rivaroxaban plasma concentration measurements in ROCKET AF, although this was partially offset by the PT adjustment in some patients [14,15]. The predicted C trough values showed moderate between-patient variability (coefficient of variation: 54%) and were consistent with the previously published ROCKET AF popPK model [23]. In addition, because ROCKET AF was not designed to evaluate exposure-response relationships, the current analysis may have been underpowered to detect statistically significant differences for some outcomes. Finally, the exposure-response analysis included baseline use of antiplatelet agents and NSAIDs but did not evaluate the impact of their continued use during follow-up.
Conclusions
These results support fixed rivaroxaban 15 mg and 20 mg OD dosages in patients with NVAF and suggest therapeutic drug monitoring is unlikely to offer clinical benefits in this indication beyond evaluation of patient characteristics.
|
v3-fos-license
|
2021-10-21T15:19:33.557Z
|
2021-09-10T00:00:00.000
|
239080857
|
{
"extfieldsofstudy": [
"Computer Science",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "http://www.cell.com/article/S2666389921002038/pdf",
"pdf_hash": "72e0eb3e273e872183dcb82dd19f19d4d868af24",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2307",
"s2fieldsofstudy": [
"Computer Science",
"Psychology"
],
"sha1": "c83160c476b5c0134abcf464230cc633e5708b8e",
"year": 2021
}
|
pes2o/s2orc
|
Grounding deep neural network predictions of human categorization behavior in understandable functional features: The case of face identity
Summary
Deep neural networks (DNNs) can resolve real-world categorization tasks with apparent human-level performance. However, true equivalence of behavioral performance between humans and their DNN models requires that their internal mechanisms process equivalent features of the stimulus. To develop such feature equivalence, our methodology leveraged an interpretable and experimentally controlled generative model of the stimuli (realistic three-dimensional textured faces). Humans rated the similarity of randomly generated faces to four familiar identities. We predicted these similarity ratings from the activations of five DNNs trained with different optimization objectives. Using information theoretic redundancy, reverse correlation, and the testing of generalization gradients, we show that DNN predictions of human behavior improve because their shape and texture features overlap with those that subsume human behavior. Thus, we must equate the functional features that subsume the behavioral performances of the brain and its models before comparing where, when, and how these features are processed.
In brief
With a set of five DNNs, Daube et al. model the behavior of n = 14 human participants and n = 12 human validators who identified familiar faces using a generative model of faces that controls the face features. They demonstrate that the better DNNs can predict human behavior because they use similar face shape features. They visualize the shape features that humans and DNNs jointly use and test whether these features enable generalization to new viewing angles, older age, and opposite sex conditions.
INTRODUCTION
THE BIGGER PICTURE
Deep neural networks (DNNs) are often presented as ''the best model'' of human perception, achieving or even exceeding ''human-level performance.'' However, it remains difficult to describe what information these DNNs process from their inputs to produce their decisions. In naturalistic images, multiple cues can lead to the same decision. For example, a DNN can identify Peter's face from his darker eyebrows or high cheekbones. However, a human knowing Peter could identify his same face with similar accuracy, but using different features (e.g. his chin or hairstyle). Decision accuracy thus only tells the visible part of the story. The hidden part is the specific information processed to decide. To address this, we compared DNNs that predicted human face identity decisions to varying faces generated with a computer graphics program. With such controlled stimuli, we revealed the hidden part of the specific face information that caused the same behavioral decisions in humans and DNNs.

Visual categorization is the pervasive process that transforms retinal input into a representation that is used for higher-level cognition, such as for memory, language, reasoning, and decision. For example, to guide adaptive behaviors we routinely categorize faces as being relatively happy, aged, or familiar, using different visual features. A long-standing challenge in the field of cognitive science is therefore to understand the categorization function, which selectively uses stimulus features to enable flexible behavior. [1][2][3] From a computational standpoint, this challenge is often framed as understanding the encoding function 4 that maps high-dimensional, highly variable input images to the lower-dimensional representational space of features that serve behavior. Deep neural networks (DNNs) have recently become the model of choice to implement this encoding function. Two key properties justify the popularity of DNNs: first, they can solve complex, end-to-end (e.g., image-to-behavior) tasks by gradually compressing real-world images over their hierarchical layers into highly informative lower-dimensional representations. Second, evidence suggests that the activations of DNN models share certain similarities with the sensory hierarchies in the brain, strengthening their plausibility. [5][6][7][8][9][10] Such findings underlie the surge of renewed research at the intersection between computational models, neuroscience, and cognitive science. 11

However, there is ample and mounting evidence that DNNs do not yet categorize like humans. Arguably, the most striking evidence comes from adversarial examples, whereby a change in the stimulus imperceptible to humans can counter-intuitively change its categorization in a DNN 12 and vice versa. 13 Even deceptively simple visual discrimination tasks reveal clear inconsistencies in the comparison between humans and state-of-the-art models. 14 Furthermore, when tested with photos of everyday objects taken from unusual perspectives, DNNs trained on common databases of naturalistic images decrease in test-set performance in ways humans do not. 15 In sum, although DNNs can achieve human-like performance on some defined tasks, they often do so via different mechanisms that process stimulus features different from those of humans. 16,17 These results suggest that successful predictions of human behavioral (or neural) responses with DNN models are not sufficient to fully evaluate their similarity, a classic argument on the shortcomings of similarity in cognitive science. 18,19 In fact, we already know that similar behaviors in a task can originate from two human participants processing different features. 20

Generalizing to the comparison of a human and their DNN model, consider the example whereby both categorize a given picture as a horse. Should we conclude that they processed the same features? Not if the DNN learned to use the incidental horse-specific watermarks from the image database. 21 This simple example illustrates both the general importance of attributing behavior to the processing of specific features, and the long-standing challenge of doing so, especially given the dense and unknown correlative structure of real-world stimuli. 22 From an information-processing standpoint, we should know what stimulus information (i.e., features) the brain and its DNN models process, before comparing where, when, and how they do so. 23,24 Otherwise, we risk studying the processing of different features without being aware of the problem (cf. the watermark example above). Thus, to realize the potential of DNNs as information-processing models of human cognition, 25 we need to first take a step back and demonstrate that similar behavior in a task is grounded in the same stimulus features, i.e., more specifically, in similar functional features: those stimulus features that influence the behavioral output of the considered system. 1 When such functional feature equivalence is established, we can meaningfully compare where, when, and how the processing of these same functional features is reduced with equivalent (or different) algorithmic-implementation-level mechanisms in humans and their models.
To develop such equivalence of functional features, we explicitly modeled stimulus information with an interpretable generative model of faces (GMF). 26 The GMF allows parametric experimental control over complex realistic face stimuli in terms of their three-dimensional (3D) shape and two-dimensional (2D) RGB texture. As illustrated in Figure 1, a candidate DNN model is typically evaluated on how it predicts human responses, by computing the bivariate relationship between human responses and DNN predictions. Here, we further constrained this evaluation by relating human behavioral responses and their DNN predictions to the same set of experimentally controlled GMF features. Conceptually, this is represented as the triple intersection in Figure 1, where the pairwise intersections <GMF features; human> and <GMF features; DNN predictions> comprise the functional face features that subsume human responses and their DNN models. The triple intersection further tests whether the same responses in the two systems arise from the same face features, on the same trials. We then compared how each candidate DNN model represents these face features to predict human behavior and reconstructed the internal face representations of humans and their DNN models with reverse correlation. 27 Lastly, and importantly, we used our generative model to compare the generalization gradients of humans and DNNs to typical out-of-distribution stimuli (i.e., generalizations to changes of face pose, age, and sex to create siblings with family resemblance). With this approach, we ranked models not only according to their surface similarity of predicted human behavior but also according to the deeper similarity of the underlying functional features that subsume behavioral performance.
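The overlap of information pictured as intersections in Figure 1 is quantified in the paper with information theoretic redundancy. A minimal plug-in sketch on discrete toy data is given below; it uses a Williams-Beer-style minimum of mutual informations, which is a simplification and not necessarily the exact estimator the authors used:

```python
from collections import Counter
import math

def mutual_information(xs, ys):
    """Plug-in mutual information (in bits) between two discrete sequences."""
    n = len(xs)
    pxy, px, py = Counter(zip(xs, ys)), Counter(xs), Counter(ys)
    # sum_{a,b} p(a,b) * log2( p(a,b) / (p(a) p(b)) )
    return sum((c / n) * math.log2(c * n / (px[a] * py[b]))
               for (a, b), c in pxy.items())

def redundancy(x1, x2, y):
    """Williams-Beer-style minimum: a simple stand-in for the information
    that both feature sets share about the behavioral response y."""
    return min(mutual_information(x1, y), mutual_information(x2, y))

# Toy example: two "feature" sequences that are (noisy) copies of behavior y
y  = [0, 0, 1, 1, 0, 1, 0, 1] * 25
x1 = y[:]                                          # perfectly informative
x2 = [v ^ (i % 8 == 0) for i, v in enumerate(y)]   # occasionally flipped copy
```

Here the redundancy is bounded above by the entropy of the binary behavior (1 bit) and is reduced by the corruption in the second feature sequence.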
RESULTS
We used a generative model that parameterizes faces in terms of their 3D shape and 2D RGB texture (GMF; see ''generative model of 3D faces'' in experimental procedures) to control the synthesis of ~3 million 2D face images that varied in identity, sex, age, ethnicity, emotion, lighting, and viewing angles (see Figure S1 for a demonstration; see ''networks, training set'' in experimental procedures). We used these images to train five DNNs that shared a common ResNet 31 encoder architecture but differed in their optimization objectives. The five DNNs were as follows (see Figure 2 for their schematic architectures and performances): (1) a triplet loss network 32 that learned to place images of the same (versus different) identity at short (versus long) Euclidean distances on its final layer; (2) a classification network 33 that learned to classify 2,004 identities (2,000 random faces, plus four faces familiar to our participants as work colleagues, ''ClassID''); (3) another classification network that learned to classify 2,004 identities plus six other factors of variation of the generative model (''ClassMulti''); (4) an autoencoder (AE) 34 that learned to reconstruct all input images; and (5) a view-invariant autoencoder (viAE) 35 that learned to reconstruct the frontal face image of each identity irrespective of the pose of the input.
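The triplet objective in (1) can be stated in a few lines: same-identity pairs are pushed closer than different-identity pairs by a margin. The margin value and the 2-D embeddings below are illustrative, not those of the trained network:

```python
import math

def euclidean(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Hinge loss on embedding distances: zero once the same-identity pair
    is closer than the different-identity pair by at least `margin`."""
    return max(0.0, euclidean(anchor, positive) - euclidean(anchor, negative) + margin)

# Illustrative 2-D embeddings: two images of one identity, one of another
a, p, n = [0.0, 0.1], [0.1, 0.0], [1.0, 1.0]
loss = triplet_loss(a, p, n)   # 0.0: this triplet is already well separated
```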
We used these five DNNs to model the behavior of each of n = 14 individual human participants who resolved a face familiarity experiment (see ''participants'' in experimental procedures and Zhan et al. 26 ) In this experiment, participants were asked to rate, from memory, the similarity of random face stimuli generated by the GMF (Figure 2A) to four familiar identities (see ''experiments'' in experimental procedures and Zhan et al. 26 ) On each of 1,800 trials, each participant was presented six random faces. They were asked to first choose the face most similar to a target identity and then rate this similarity on a 6-point scale. Importantly for our modeling, we propagated these 2D images through the five DNNs and then used the activations of their respective layer of maximum compression (i.e., the ''embedding layer'') for the subsequent analyses detailed below.
To assess functional feature equivalence between human participants and the DNN models, we proceeded in four stages (see Figure 2 for an overview of our pipeline). First, we used the representations of the experimental stimuli on the DNNs' embedding layers to predict the corresponding behavior of humans in the experiment (Figures 2C and 2D). We did so using linear models to restrict the assessment to explicit representations. 4 We call this first stage of seeking to equate human and DNN behavior ''forward modeling.'' In a second stage, we analyzed the face features represented on the DNN embedding layers that predict human behavior. In a third stage ( Figures 2E and 2F), we used reverse correlation to reconstruct and compare these categorization features between humans and their DNN models. Lastly, in a fourth stage ( Figure 2G), we compared the generalization performances of humans and DNNs under new testing conditions of face viewing angles, sex, or age that did not appear in the data used to fit the forward models.
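The first stage, linear prediction of ratings from embedding activations with nested cross-validation, can be sketched as follows. The synthetic data, the ridge penalty grid, and the fold counts are assumptions for illustration, not the paper's settings:

```python
import numpy as np

def ridge_fit(X, y, lam):
    """Closed-form ridge weights (intercept column assumed to be in X)."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def nested_cv_r(X, y, lams=(0.1, 1.0, 10.0), k_outer=5, k_inner=4, seed=0):
    """Nested cross-validation: inner folds choose the penalty, outer folds
    give an out-of-sample estimate of predictive correlation."""
    rng = np.random.default_rng(seed)
    outer = np.array_split(rng.permutation(len(y)), k_outer)
    preds, truth = [], []
    for f in range(k_outer):
        test = outer[f]
        train = np.concatenate([outer[g] for g in range(k_outer) if g != f])
        inner = np.array_split(train, k_inner)
        errs = []
        for lam in lams:  # inner loop: validation error per penalty
            e = 0.0
            for g in range(k_inner):
                val = inner[g]
                tr = np.concatenate([inner[h] for h in range(k_inner) if h != g])
                w = ridge_fit(X[tr], y[tr], lam)
                e += np.mean((X[val] @ w - y[val]) ** 2)
            errs.append(e)
        w = ridge_fit(X[train], y[train], lams[int(np.argmin(errs))])
        preds.append(X[test] @ w)
        truth.append(y[test])
    preds, truth = np.concatenate(preds), np.concatenate(truth)
    return np.corrcoef(preds, truth)[0, 1]

# Synthetic "embedding activations" (1,800 trials x 64 units) and ratings
rng = np.random.default_rng(1)
X = np.hstack([np.ones((1800, 1)), rng.normal(size=(1800, 64))])
y = X @ rng.normal(size=65) + rng.normal(scale=2.0, size=1800)
r = nested_cv_r(X, y)
```

The nesting matters: the penalty is never chosen on the data used to score the model, so the reported correlation is not inflated by hyperparameter selection.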
On previewing the results of the DNN models tested, the viAE afforded the best predictions of human behavior. These could be attributed to the shape features of the GMF, which also subsumed human behavior. That is, the surface similarity of behavioral performance was grounded in a deeper similarity of functional face features. Of the DNN models tested, the viAE model was therefore the most functionally similar to humans.
Forward modeling of human behavior using DNN activations
To evaluate how accurately the compressed stimulus representations on the DNNs' embedding layers predicted the face similarity ratings (on a 6-point rating scale, see Figure S2) of human participants, we activated their embedding layers with the 1,800 2D face stimuli rated in terms of similarity to each target identity in the human experiment. We then used these activations to linearly predict the corresponding human ratings in a nested crossvalidation 37 (see ''forward models'' in experimental procedures). We compared DNN performances with three additional benchmark models that also linearly predicted human behavior. The first model used on each trial the objective 3D shape parameters of the GMF that define the identity of each face stimulus (rather than the face image); the second one used instead the GMF texture parameters (cf. Figures 1 and 2, and 3D shape and RGB texture). Finally, the third model was a simpler architecture that linearly predicted human behavior from the first 512 components of a principal components analysis (PCA) of all stimulus images (''pixelPCA'').
[Figure 1 caption: In general, complex visual inputs are processed in an unknown way in the brain and its DNN models to produce behavior. DNNs (schematized as layers of neurons) can predict human behavior and can in principle be used to facilitate our understanding of the inaccessible information-processing mechanisms of the brain. However, nonlinear transformations of information in DNNs complicate our understanding, in turn limiting our understanding of the mechanistic causes of DNN predictions (and human behavior). To address this issue of interpretability, we used a generative model of realistic faces (GMF) to control the high-level stimulus information (3D shape and RGB texture). The Venn diagram illustrates the logic of our approach. Human behavior and its DNN model predictions are both referred to the same stimulus model: (1) the GMF features that underlie human behavior; (2) the GMF features that underlie DNN predictions of human behavior. The question then becomes: are these GMF features equivalent? That is, do the two intersections intersect? We quantify GMF feature overlap with information theoretic redundancy, 29,30 i.e., as the information that GMF features and the activations of the embedding layers of DNN models both provide about human behavior. In doing so, we assess the functional feature equivalence of individual human participants and their DNN models in relation to a specific model of the stimulus and behavioral task. See Figure 2 for a detailed overview of the analysis pipeline. Our results develop why such feature equivalence enhances our understanding of the information-processing mechanisms underlying behavior in the human brain and its DNN models.]
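As an illustration, the nested cross-validated linear prediction could be sketched with scikit-learn as below; the ridge penalty grid, fold counts, and all names are illustrative assumptions, not the exact settings used in the study.

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import KFold

def nested_cv_predict(activations, ratings, n_outer=5):
    """Linearly predict similarity ratings from embedding-layer activations.
    Outer folds yield held-out predictions; the inner cross-validation
    (inside RidgeCV) selects the penalty on the training folds only."""
    alphas = np.logspace(-2, 4, 13)           # assumed penalty grid
    preds = np.empty(len(ratings), dtype=float)
    outer = KFold(n_splits=n_outer, shuffle=True, random_state=0)
    for train, test in outer.split(activations):
        model = RidgeCV(alphas=alphas, cv=5)  # inner cross-validation
        model.fit(activations[train], ratings[train])
        preds[test] = model.predict(activations[test])
    return preds
```

The held-out predictions can then be compared against the observed ratings with any evaluation metric, such as the information theoretic quantities described next.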
For each model, we evaluated predictions of human behavior with two information theoretic quantities ( Figures 3A and 3B). With mutual information (MI), we quantified the strength of the relationship between the observed human and DNN-predicted similarity ratings ( Figures 3A and 3B, y axes). Importantly, we also used redundancy (from partial information decomposition) 29 to evaluate the triple set intersection of Figure 1, which quantifies the overlap between predictions from DNN models and predictions from GMF shape parameter models ( Figure 3B, x axes). This overlap indicates the extent to which the DNN embedding layers and the GMF shape parameters both predict the same human behaviors on the same trials. With Bayesian linear models, 38 we then statistically compared the bivariate relationships (i.e., MI) and overlaps (i.e., redundancy) of different GMF parameters and DNN embedding layers with each other.
[Figure 2 caption: We seek to establish the GMF feature equivalence between humans and their DNN models. (A) We used the GMF to synthesize random faces (3D shape and RGB texture). (B) We asked humans to rate the similarity of these synthesized faces to the faces of four familiar colleagues (symbolized by purple, light-blue, gray, and olive dots). (C) Linear multivariate forward models predicted human responses (denoted by the multiplication with linear weights B) from GMF shape and texture features and DNN activations (DNN architectures are schematized with white circles symbolizing neurons, embedding layers are colored; scatterplots for the Triplet network show two-dimensional t-stochastic neighborhood embeddings 36 of the embedding layer when activated with 81 different combinations of viewing and lighting angles per colleague). As a baseline model, we also included the first 512 components of a principal components analysis on the pixel images (''pixelPCA,'' not shown here). (D) We then evaluated shared information between human behavior, DNN predictions from embedded activations, and GMF features using partial information decomposition. Here, the Venn diagram shows the mutual information (MI) between human responses and their predictions based on the GMF shape features (blue circle) or based on the Triplet model (yellow circle). The overlapping region denotes redundancy (R). (E-G) We performed reverse correlation (E) to reconstruct internal templates (F) of the familiar colleague faces from human and model-predicted behavior. Lastly, we amplified either the task-relevant or task-irrelevant features of the four colleagues (identified in E) and rendered these faces in five different generalization conditions (G) that humans and DNNs had to identify. See also Figure S1.]
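A minimal sketch of these two quantities on histogram-discretized ratings follows. The redundancy here uses the Williams-Beer I_min definition, which is one common choice from partial information decomposition and may differ in detail from the estimator used in the study; all names are illustrative.

```python
import numpy as np

def mutual_info(x, y, bins=6):
    """MI (in bits) between two rating vectors, via a joint histogram."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px * py)[nz])).sum())

def _spec_info(pxy, j):
    """Specific information I(X; Y=y_j) for one value of Y."""
    py = pxy.sum(axis=0)
    px = pxy.sum(axis=1)
    p_x_given_y = pxy[:, j] / py[j]
    with np.errstate(divide="ignore", invalid="ignore"):
        p_y_given_x = pxy[:, j] / px
    nz = p_x_given_y > 0          # masks the px == 0 cells as well
    return float((p_x_given_y[nz] * np.log2(p_y_given_x[nz] / py[j])).sum())

def redundancy(y, x1, x2, bins=6):
    """Williams-Beer I_min: information about y shared by x1 and x2 (bits)."""
    p1, _, _ = np.histogram2d(x1, y, bins=bins); p1 /= p1.sum()
    p2, _, _ = np.histogram2d(x2, y, bins=bins); p2 /= p2.sum()
    py = p1.sum(axis=0)
    return sum(py[j] * min(_spec_info(p1, j), _spec_info(p2, j))
               for j in range(bins) if py[j] > 0)
```

With two perfectly overlapping predictors, redundancy equals the MI each predictor carries; with an independent predictor, it drops to zero.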
Of all models, the viAE best predicted human behavior (see Figure 3B), closely followed by the AE, with a performance level similar to that of the GMF shape parameters (fraction of samples of posterior in favor of viAE over shape: f h1 = 0.7536; AE > shape: f h1 = 0.6457; f h1 = 0 for all other networks versus shape). Surprisingly, the simple pixelPCA came close to the complex AEs (with the AE only narrowly beating pixelPCA, f h1 = 0.8582, Figure 3B). Critically, as model predictions increased in accuracy, they also increased in overlap (i.e., redundancy) with the GMF shape parameters ( Figure 3B), implying that single-trial behavior across systems (i.e., humans, viAE, and pixelPCA) could be attributed to the same specific parameters of 3D face shape; under these conditions, the systems used the same functional face features to achieve the same behaviors.
[Figure 3 caption, in part: (B) y axis: MI between human behavior and test-set DNN predictions; x axis: redundant information about human behavior that is shared between DNN predictions and GMF shape feature predictions. These plots show that DNN prediction performance of human behavior increases on the y axis when the DNN embedding layers represent the same shape features as humans. Each data point in (A) and (B) represents the combination of one test set, one participant, and one familiar identity. Overlaid lines reflect the 95% (bold) and 50% (light) highest posterior density intervals (HPDIs) of the corresponding main effects of predictor spaces from Bayesian linear models fitted to the MI and redundancy values.]
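The reported f h1 values are fractions of posterior samples favoring one hypothesis over another. Given paired MCMC draws of two models' effect coefficients (names assumed here for illustration), this reduces to:

```python
import numpy as np

def fraction_in_favor(samples_a, samples_b):
    """f_h1: fraction of paired posterior samples in which model A's
    effect exceeds model B's (values near 1 favor A, near 0 favor B)."""
    return float(np.mean(np.asarray(samples_a) > np.asarray(samples_b)))
```

For example, f h1 = 0.7536 for viAE over shape would mean 75.36% of the paired posterior draws placed the viAE main effect above the shape main effect.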
Furthermore, we validated this overlap in shape parameters by showing that a model using jointly (vi)AE activations and GMF shape parameters (versus (vi)AE activations on their own) did not improve prediction of human behavior (see Figures S4 and S8 for additional candidate models, including combinations of the predictor spaces reported here, weighted and unweighted Euclidean distances, variational AEs, and decision neuron activities; see Figure S5 for the same comparison using Kendall's tau as an evaluation metric; see Figures S6 and S7 for a model comparison on the across-participant average). Note that the performances of these models could not be reached when predicting the behavior of participants with the behavior of other participants (see Figures S3-S5). This means that participants behaved in systematically idiosyncratic ways.
In sum, in our first stage to assess functional equivalence between humans and their DNN models, we built forward models that predicted human behavior from the DNNs' embedding layers. The embedding layer of the (vi)AE won. We further showed that better predictions of human behavior from the embedding layers of DNNs were caused by their increased representation of the 3D face features that predict human behavior. However, a simple PCA of the pixel images performed competitively. At this stage, we know that better predictions of human behavior are caused by better representations of the 3D shape features that humans use for behavior. Next, we characterized what these 3D features are.
Embedded face-shape features that predict human behavior
The viAE learned to represent on its embedding layer, from 2D images, the face-shape features that provide the best per-trial prediction of human behavior. Here, we establish: (1) how the DNNs represent these face-shape features on their embedding layers; and (2) how each feature impacts behavioral prediction in the forward models discussed in stage 1 above. We did not analyze the GMF texture features further because they could not predict human behavior (see Figure 3).
Face-shape features represented on the embedding layers of DNNs
To reveal these face-shape features, we built linear decoding models. These used the embedding layer activations to predict the positions of individual 3D vertices (see ''decoding of shape information from embedding layers'' in experimental procedures). We then evaluated the fidelity of their reconstructions with the Euclidean distance between the linearly predicted and the objective 3D face vertex positions. Fidelity increased from the Triplet to the two classifier networks, to the (vi)AE (which had the lowest error, see Figure 4C). The pixelPCA achieved a similarly low error, and all models shared a common type of reconstruction error ( Figure 4D), which misrepresented the depth of the peripheral and nasal face regions.
Patterns of face-shape features that predict behavior in the DNN forward models
To better understand the shape features that the aforementioned forward models used to predict human behavior, we examined their linear weights (see ''forward models'' in experimental procedures). The forward GMF shape model weights directly relate a 3D shape space to human behavior. Thus, these weights form an interpretable face-space pattern that modulates behavior, i.e., a ''shape receptive field'' (SRF), see Figure 4H (rightmost column). In contrast, the forward models based on the DNNs relate (i.e., linearly weigh) DNN activations, not GMF shape parameters, to human behavior.
Thus, we used an indirect approach to interpret these weights. We built auxiliary forward models that simulated (i.e., linearly re-predicted, Figure 4E) the DNN predictions of human behavior, but this time using the GMF shape parameters instead of the embedding layers. This produced interpretable SRFs ( Figure 4H) with which we could therefore understand which shape features are (or are not) represented on the DNN embedding layers to predict human behavior. Specifically, we reasoned that DNN activations and GMF features would similarly predict behavior if: (1) both shared the same SRF; and (2) predictions from DNN activations were similar to their simulations based on GMF features. Our analyses revealed that the (vi)AE best satisfied these two conditions (Figures 4F and 4G). PixelPCA features were again close to the performance of the best DNN models ( Figure 4F).
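The re-prediction procedure can be sketched as three least-squares fits; summarizing SRF agreement by a single weight-vector correlation is a simplification we introduce here for illustration, and all names are assumptions.

```python
import numpy as np

def srf_agreement(shape, dnn_act, human):
    """Estimate B_S (shape -> human), B_N (DNN activations -> human), and
    B_SN (shape -> the DNN model's prediction of human behavior); return
    the Pearson correlation between the direct and re-predicted SRFs."""
    fit = lambda X, y: np.linalg.lstsq(X, y, rcond=None)[0]
    B_S = fit(shape, human)
    B_N = fit(dnn_act, human)
    B_SN = fit(shape, dnn_act @ B_N)   # SRF of the simulated predictions
    return float(np.corrcoef(B_S, B_SN)[0, 1])
```

When the DNN activations carry the same shape information that drives human behavior, B_SN converges on B_S and the correlation approaches 1.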
[Figure 4 caption, in part: (D) Correlation matrix of error patterns across DNNs. Colored dots on x and y axes represent each DNN model (see F for a legend). Correlating the MAE patterns from (C) across models reveals a high similarity of errors across models: vertices that are difficult to decode from Triplet activity are also difficult to decode from viAE activity. (E) Simulating DNN predictions of observed human behavior with GMF shape features using re-predictions. First, we estimate B S , the shape receptive fields (SRFs) that predict human behavior from GMF shape features. Second, we estimate B N , the weights that predict human behavior from DNN activations. Third, we estimate B SN , the SRFs that predict DNN predictions of human behavior from GMF shape features.]
In this second stage to assess functional feature equivalence, we identified, at the level of individual 3D face vertices, the shape features that DNNs represent to predict (cf. ''forward modeling of human behavior using DNN activations'') human behavior. Of all five DNNs, we found that the (vi)AE represents face-shape vertices most faithfully, leading to the most accurate predictions of human behavior. However, the simpler pixelPCA used apparently very similar features.
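The linear vertex decoding and its Euclidean-distance evaluation described above can be sketched as follows; the train/test split and all names are illustrative assumptions.

```python
import numpy as np

def vertex_decoding_error(emb, vertices, n_train):
    """Fit a linear decoder from embeddings to flattened [x1 y1 z1 x2 ...]
    vertex coordinates on the first n_train faces; return the mean
    Euclidean error per vertex on the held-out faces."""
    X = np.hstack([emb, np.ones((len(emb), 1))])   # intercept column
    W, *_ = np.linalg.lstsq(X[:n_train], vertices[:n_train], rcond=None)
    residual = X[n_train:] @ W - vertices[n_train:]
    n_vertices = vertices.shape[1] // 3
    per_vertex = residual.reshape(-1, n_vertices, 3)
    return np.linalg.norm(per_vertex, axis=2).mean(axis=0)
```

The resulting per-vertex error map is what panels like Figure 4C/4D summarize: lower values mean the embedding layer retains that vertex's 3D position more faithfully.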
Decoding the shape features with reverse correlation So far, we have assessed the functional equivalence between human behavior and DNN-based forward models in two stages: we have quantified to what degree the DNN model predictions of human behavior are attributable to GMF face-shape parameters (in stage 1), and we have characterized how the DNN models used specific patterns of face-shape parameters to predict behavior (in stage 2). In this third stage, we use the behavior observed in humans and predicted by DNN models to reconstruct, visualize, and compare the actual 3D shape features of the target faces represented in both humans and their DNN models.
To run the human experiments 26 with the DNN models, we proceeded in three steps (see ''reverse correlation'' in experimental procedures). First, we used the forward models described in stage 1 to predict human behavior in response to all face stimuli of the human experiment (6 × 1,800 = 10,800 face stimuli per familiar target face). 26 On each trial, the forward models ''chose'' the face stimulus with the highest predicted rating from an array of 6 (see Figure S3). This resulted in 1,800 chosen faces and their corresponding similarity rating predictions. Second, for each model and participant, we regressed (mass univariately) the GMF parameters of the chosen faces on the corresponding ratings to derive a slope and intercept per GMF shape and texture parameter. Third, we multiplied these slopes by individual ''amplification values'' that maximized the behavioral responses ( Figure 4B). The results were faces whose functional features elicited a high similarity rating in the DNN models ( Figure 4C), analogous to faces that elicited high similarity ratings in each human participant, as in the original study. 26 We then compared the functional face features of human participants and their DNN models ( Figure 5D, left). We also computed how veridical these human and DNN features were with respect to the ground truth faces of familiar colleagues ( Figure 5D, right).
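Steps two and three above (mass-univariate regression, then amplification) can be sketched as below; the vectorized per-parameter regression and all names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def reverse_correlate(gmf_params, ratings):
    """Regress each GMF parameter on the ratings, mass univariately:
    param_k = intercept_k + slope_k * rating.
    gmf_params: [n_trials, n_params]; ratings: [n_trials].
    Returns per-parameter (slopes, intercepts)."""
    r = ratings - ratings.mean()
    slopes = (gmf_params - gmf_params.mean(axis=0)).T @ r / (r @ r)
    intercepts = gmf_params.mean(axis=0) - slopes * ratings.mean()
    return slopes, intercepts

def amplified_template(slopes, intercepts, amplification):
    """Template features at a chosen amplification along the rating axis."""
    return intercepts + slopes * amplification
```

Sweeping `amplification` over a grid and reading out the model's predicted rating at each level yields amplification tuning curves of the kind compared across humans and DNN models.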
How human-like are DNN features?
The viAE had the most human-like features, with the lowest mean absolute error (MAE, Figure 5D, left, y axis; comparison with second best DNN model, AE > viAE: f h1 = 0.9943) and a correlation with human features similar to that of the AE ( Figure 5D, left, x axis; viAE > AE: f h1 = 0.8489). All DNN models had a lower MAE than the simple pixelPCA model (all DNNs < pixelPCA: f h1 > 0.9492), but only the (vi)AE had a better correlation with human features (AE and viAE > pixelPCA: both f h1 > 0.9729).
How veridical are DNN and human features?
viAE features were closest to the veridicality of human features relative to the ground truth 3D faces, with the lowest MAE ( Figure 5D, right, y axis; second best DNN model AE > viAE: f h1 = 0.9558; viAE > human: f h1 = 0.9996) and a correlation comparable with that of the AE. All DNN models had a lower MAE than the simple pixelPCA model (all DNNs < pixelPCA: all f h1 > 0.9732), but only the (vi)AE had a better correlation with the ground truth face identity features (AE and viAE > pixelPCA: both f h1 > 0.8842).
In sum, this analysis compared the internal representations of the target faces in human participants and their DNN models, and all with the ground truth 3D shapes of the target identities. These comparisons, supported by intuitive visualizations, revealed that the viAE had internal feature representations that best matched the internal representations of humans.
Generalization testing
A crucial test of models of human behavior is their generalization to conditions that differ from the distribution of the training data. We performed such out-of-distribution testing in five different tasks, 26 using the GMF to change the viewing angle, the age (to 80 years), and the sex (to the opposite sex) of the target familiar face ( Figure 6C). Importantly, we did so while also selectively amplifying functional face features that were expected ( Figure 6A) or not expected ( Figure 6B) to cause the identification of each familiar face (based on reverse correlation, see ''experiments-generalization testing'' in experimental procedures; Zhan et al. 26 ). Using these new stimuli, we compared the generalization performance of a new group of n = 12 human validators and the DNN models. On each trial, validators responded by selecting the familiar identity that was most similar to the face stimulus (or used a fifth option when the stimulus was not similar to any familiar face). For each face stimulus, we predicted the human similarity ratings using the forward models fitted to each of the 14 participants and four familiar faces, as described in stage 1 above, and chose the faces that yielded the highest predicted rating. We then compared the absolute error of the model choice accuracies with the human choice accuracies.
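That accuracy comparison can be sketched as below, assuming the forward model ''chooses'' the highest-rated option on each trial; the array layouts and function name are hypothetical.

```python
import numpy as np

def choice_accuracy_gap(pred_ratings, human_choices, truth):
    """pred_ratings: [n_trials, n_options] forward-model ratings; the model
    'chooses' the highest-rated option per trial. Returns the absolute
    difference between model and human choice accuracies."""
    model_choices = pred_ratings.argmax(axis=1)
    model_acc = (model_choices == np.asarray(truth)).mean()
    human_acc = (np.asarray(human_choices) == np.asarray(truth)).mean()
    return float(abs(model_acc - human_acc))
```

A small gap across all five generalization conditions, not just the training distribution, is what distinguishes the viAE from the pixelPCA baseline in the results below.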
The viAE best matched human identification performance, which both increased when the functional features were amplified in the stimulus ( Figures 6D-6F). The viAE had only a slightly smaller error compared with the AE for the frontal view (viAE < AE: f h1 = 0.8958), but a better view invariance, with a clearly smaller error for the −30° (viAE < AE: f h1 = 0.9995) and +30° views (viAE < AE: f h1 = 0.9696). Only the GMF shape feature model came close to the (vi)AE (and was better than both AEs at −30°, both f h1 = 1, and +30°, both f h1 > 0.7656). However, recall that the GMF is a non-image-computable ''ground truth'' 3D model whose input is not affected by 2D image projection. Critically, the simple pixelPCA model did not generalize well to viewpoint changes (viAE and AE < pixelPCA: f h1 = 1) except in the age generalization task, where it had a slightly lower error than the second best viAE (pixelPCA < viAE: f h1 = 0.9940). In the opposite sex task, the viAE again had the lowest error (viAE < second best AE: f h1 = 1).
Whereas previous analyses suggested that a model as simple as the pixelPCA could explain human responses, more comprehensive tests of the typical generalization gradients of face identity demonstrated that such a conclusion was unwarranted. Thus, rigorous comparative tests of typical generalization gradients are required to properly assess human visual categorization in relation to their DNN models.
DISCUSSION
In this study, we sought to address the long-standing problem of interpreting the information processing performed by DNN models so as to ground their predictions of human behavior in interpretable functional stimulus features. Key to achieving this was our use of a generative model to control stimulus information (3D face shape and RGB texture). We trained five DNN models with different objectives, following which we activated the DNNs' embedding layers with the face stimuli of a human experiment (in which participants were asked, based on their memory, to assess the similarity of random faces to the faces of four familiar colleagues). We then used these activations to fit forward models that predicted human behavior. Of the tested models, (vi)AE embeddings best predicted human behavior, because these embeddings represented the human-relevant 3D shape of familiar faces with the highest fidelity. Next, we reconstructed the face features represented in the embeddings that impact the behavioral predictions. The 3D reconstructions demonstrated that the viAE models and humans used the most similar functional features for behavior. Lastly, we found that the viAE best matched human generalization performance in a range of five different out-of-distribution changes of the stimuli (testing several viewing angles, older age, and opposite sex versions of the four colleagues).
Together, our approach (cf. Figure 1) and analyses suggest a more stringent test of functional feature equivalence between human responses and their DNN models, beyond the simple equivalence of responses to uncontrolled naturalistic stimuli. Such deeper functional feature equivalence enables mechanistic interpretations of the processing of these same features across the layers of the human brain and its DNN models. However, as shown in psychophysics, exhaustively testing the generalization gradients of human visual categorization is difficult because it requires not only modeling behavioral (or neuronal) responses but also the real-world (and artificial) dimensions of variation of the stimulus categories under consideration.
[Figure 5 caption, in part: With mass-univariate regression, we predicted each individual GMF feature from human behavior and its DNN predictions (3.). (B) Amplification tuning curves. We presented the reverse correlated templates amplified at different levels to each model. Solid lines denote pooled median across participants and colleagues; shaded regions denote 95% (frequentist) confidence intervals. Black lines at the top denote 95% (bold) and 50% (light) highest density estimates of human amplification values. The linear GMF shape and texture forward models predicted monotonically increasing responses for higher amplification levels. Other models peaked at a given amplification level. See Figure S9 for amplification tuning responses of a broader range of models. (C) Comparison of rendered faces. Panels show the ground truth face of one exemplary target familiar colleague captured with a face scanner (top left) and reconstructions of the face features from human behavior and its DNN predictions for one typical participant (i.e., closest to the pooled group medians shown in D). See also Figure S14.]
Why focus on functional equivalence?
A key finding that motivates usage of DNNs as models of the human brain is that their activations predict behavioral and neural responses to novel real-world stimuli better than any other model. However, it remains unclear whether these surface similarities between humans and DNNs imply a deeper similarity of the underlying information-processing mechanisms. 39 Real-world stimuli comprise multiple unknown correlations of undefined features. It is generally unknown which of these features DNNs use, leading to unpredictable out-of-distribution generalizations. Consequently, it is difficult to assess the featural competence of the model that predicts the behavioral or neural responses. Surprisingly simple feature alternatives (''feature fallacy'') 40,41 could explain such surface similarities. 21 Relatedly, extensive testing of the generalization gradients of humans and DNNs is required to reveal algorithmic intricacies that would otherwise remain hidden, leading to failure with out-of-distribution exemplars.
Marr's framework offers a solution to these problems: 23 we should constrain the similarity of complex information-processing mechanisms at the abstract computational level of their functional goals of seeking specific information to resolve a task. Our methodology sought to assess whether the human participants and their DNN models processed similar functional face features in a face identity task where features are defined within a generative model of the stimulus. Once functional equivalence is established, we can turn to the algorithmic-implementation levels of Marr's analysis. That is, we can seek to understand where, when, and how detailed mechanisms of the occipitoventral hierarchy, and suitably constrained DNN architectures (e.g., with two communicating hemispheres, properties of contralateral followed by bilateral representations, and so forth) process the same functional features of face identity, using a model of the stimulus. 42 Such algorithmic-implementation-level explorations could then consider estimates of the algorithmic complexity of the task 43 to regularize explanations of model predictions to be as simple as possible. 16,[44][45][46] We see the deeper functional equivalence of the information processed as a necessary prerequisite to surface comparisons of network activations or behaviors in a task.
Hypothesis-driven research using generative models
The idea of using generative models in psychophysics and vision research is not new. [47][48][49][50] It arose from the recognition-by-synthesis framework, 51,52 itself an offspring of Chomsky's generative grammars. Explicit experimental hypotheses are directly tied to the parameterization of stimuli by generative models and vice versa. For example, we explicitly tested that a parameterization of faces in terms of their 3D shape and RGB texture could mediate human and DNN behavior in the task. 26,53 Our study thereby contributes to the debate about the degree to which convolutional DNNs can make use of shape information in images. 33,[54][55][56][57][58][59] In this context, the exact structure of the information represented in the human brain remains an empirical question. The veridical representation implied by computer graphics models 53,60,61 is one hypothesis. Other specific ideas about face, object, and scene representations must and will be tested with different designs of generative models, including DNNs (e.g., VanRullen and Reddy, 62 Bashivan et al., 63 Ponce et al. 64 ). The ideal generative model for the encoding function of visual categorization would ''simply'' be the inverse of the function implemented by the biological networks of the brain. Such an inverse would provide the control to experiment with each stage of the brain's algorithm of stimulus processing for visual categorizations. In the absence of such an ideal, we must develop alternative generative models to test alternative hypotheses of the brain's encoding function for categorization. Modern systems such as generative adversarial networks 65 and derivatives of the classical variational autoencoders (VAEs), such as vector-quantized VAEs 66,67 and nouveau VAEs, 68 which can be trained on large, naturalistic face databases, can synthesize tantalizingly realistic faces, complete with hair, opening up an interesting avenue for future research and applications. [69][70][71][72][73] However, understanding and disentangling their latent spaces remains challenging. 74,75
viAE wins
Among the tested DNNs and across the multiple tests, the viAE provided the best face-shape representations to predict human behavior. With the notable exception of the generalization testing, the simple pixelPCA model came close to this performance. This speaks to a model of human familiar face perception whereby the goal of feedforward processing is a view-invariant but holistic representation of the visual input. Interestingly, the Triplet, ClassID, and ClassMulti models built up to this performance level (cf. Figures 3, 4, and 5). This suggests that the latent space learned to reconstruct an entire image of the input ((vi)AE) is approximated by the latent space learned when performing multiple stimulus categorizations (recall that ClassMulti learned all the categorical factors of the GMF), whereas simpler cost functions (Triplet and ClassID) yielded less informative latent spaces. Their discriminative goals can be solved with shortcuts 16 relying on a few isolated features, which are not sufficient to generalize as humans do. 76 This aligns with previous findings that multi-task learning [77][78][79] and generative models 80 enhance robustness against adversarial attacks and best predict behavior under severe testing. 17 In relation to faces as a broad category, future research could systematically study the number and combinatorics of categorizations (e.g., identity, sex, age, ethnicity, facial expressions) and rendering factors (e.g., illumination, pose, occlusions) that would be required to enhance the latent spaces to match (or surpass) the predictiveness of behavior of the latent space of the viAE, also across varying levels of familiarity. 81 Note that our specific viAE model remained imperfect in its prediction of human behavior and functional similarity of features.
Its architecture did not incorporate many well-known characteristics of the human visual hierarchy, including temporal, recurrent 9 processing (e.g., with multiple fixations 82 due to foveated and para-foveated image resolution), 83 contralateral, hemispherically split representations of the input, transfer of visual features across hemispheres, 84 and integration in the ventral pathway, 85 among others. An algorithmic-implementation-level explanation of the functional features learned by the viAE should be part of future research.
Constraints on the comparison of models with human behavior
Our modeling explicitly fitted regressions of multivariate features on unidimensional behavior. 4 Our attempts to directly (parameter-free) extract one-dimensional predictions of human behavior from DNNs failed ( Figure S4). Whereas models might exist to solve this problem more efficiently, 17,80 an obstacle remains in that the human task is subjective: we do not expect the behavior of a given participant to perfectly predict that of another (see Figures S3 and S4, although representations tend to converge across participants). 26,86 Participants can have their own internal representations of each target colleague, 1,86 which are impossible to predict without considering data from individual participants. From that perspective, learning an abstracted feature representation that still allows prediction of individual behavior is an attractive compromise. We implemented such a weighting, either directly as a linear combination of GMF features and DNN activations, or as a linear combination of feature- or activation-wise distances of stimuli to model representations of the target identities. For the image-computable models, these approaches did not lead to strong differences. Arbitrating between such computational accounts of human categorization behavior thus remains a question for future research. [87][88][89]
The interpretability of DNNs is now an important research topic. Sophisticated methods exist to visualize the stimulus features that cause the activation of a network node, such as deconvolution, 90 class activation maps, 91 activation maximization, 92-96 locally linear receptive fields, 97 or layer-wise relevance propagation. 21,98,99 These methods usually rely on the noise-free accessibility of the activations, which is not possible with humans, making them unsuitable for comparing humans with their DNN models.
This is a significant hindrance to developing a human-like artificial intelligence, which requires resolving the challenge of designing experiments and analyses that enable inferences about the hidden representations of both humans and models. 100,101
Conclusion
We have developed an example of how to extend mechanistic interpretations of DNN predictions of human responses, progressing beyond surface predictions to a functional equivalence of the features that affect behavior. We did so by controlling complex stimulus features via an interpretable generative model. The limits of what we can predict about human behavior may be defined by the limits of current computer vision models. However, within these limits, the proportion that we can meaningfully understand is defined by the ever increasing capacities of interpretable generative models of stimulus material. 102 Databases of natural images will only take us so far. Hence, we believe that future research attention should be distributed across the gamut between discriminative models that do the tasks and generative models of the stimulus that help us understand what these models do.
Materials availability
This study did not generate new unique reagents.

Data and code availability
Data are available in the following repository: https://osf.io/7yx28/. Code can be found in the following GitHub repository: https://github.com/cdaube/sharedFunctionalFeatures.
Generative model of 3D faces
The generative model of 3D faces decomposes the shape and texture components of a database of 357 faces, captured with a 3D face-capture system, 103 to enable their controlled recombination. For this study, two variations of the database were created: one excluding the faces of two female target colleagues and another excluding the faces of two male target colleagues. Each of the two database subsets then consists of a [355 × 4,735 · 3] (N × vertices · XYZ) shape matrix S and five [355 × 800/2^i · 600/2^i · 3] (N × X/2^band · Y/2^band · RGB) texture matrices T_i for bands i = 0, …, 4 of a Gaussian pyramid model.
For each of the two database subsets, the modeling is achieved in two steps. In the first step, two separate general linear models are used to estimate the linear parameters of a constant term as well as sex, age, ethnicity (coded using two dummy variables), and their two- and three-way interactions. This is done with a [355 × 12] design matrix X describing the predictor values, a [12 × 4,735 · 3] matrix A_S describing the shape coefficients, and [12 × 800/2^i · 600/2^i · 3] matrices A_Ti describing the texture coefficients:

S = X A_S + E_S
T_i = X A_Ti + E_Ti

Here, E_S [355 × 4,735 · 3] and E_Ti [355 × 800 · 600 · 3] are the model residuals for shape and texture, respectively. A_S and A_Ti are estimated using least-squares linear regression.
In the second step, the residual components E_S and E_Ti are isolated by removing the linear effects of ethnicity, sex, and age as well as their interactions from S and T_i:

E_S = S − X A_S
E_Ti = T_i − X A_Ti

Next, singular value decomposition (SVD, using MATLAB's economy-sized decomposition) is performed to orthogonally decompose the shape and texture residuals:

E_S^T = U_S Σ_S V_S^T
E_Ti^T = U_Ti Σ_Ti V_Ti^T

The matrices U_S [4,735 · 3 × 355] and U_Ti [800/2^i · 600/2^i · 3 × 355, for each of the i = 0, …, 4 spatial frequency bands] can thus be used to project randomly sampled shape or texture identity vectors into vertex or pixel space, respectively.
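The two-step decomposition (regress out demographic effects, then orthogonally decompose the residuals) can be sketched in a few lines of numpy. All dimensions and values below are toy stand-ins for the 355-face database, not the fitted GMF:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the 355 faces x (4,735 vertices * XYZ) shape matrix.
n_faces, n_shape = 40, 90
X = rng.normal(size=(n_faces, 12))   # design: constant, sex, age, ethnicity, interactions
X[:, 0] = 1.0                        # constant term
S = rng.normal(size=(n_faces, n_shape))

# Step 1: least-squares estimate of the demographic coefficients A_S.
A_S, *_ = np.linalg.lstsq(X, S, rcond=None)

# Step 2: isolate identity residuals and decompose them orthogonally.
E_S = S - X @ A_S
U_S, sing, Vt = np.linalg.svd(E_S.T, full_matrices=False)  # economy-sized SVD

# U_S projects a sampled identity vector back into vertex space.
identity = rng.normal(size=U_S.shape[1])
vertices = U_S @ identity
```

As in the paper's layout, the columns of `U_S` form an orthonormal basis of identity variation in vertex space, so any identity vector maps linearly to a shape deviation.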
Any single face can then be considered as a linear combination of two parts: a basic ''prototype face'' defined by its factors of sex, age, and ethnicity and a specific individual variation on that prototype defined by its unique component weights. Once we know these two parts of the individual face, e.g., by random sampling, we are free to change one or the other, producing for example the same individual at a variety of different ages. This can then be rendered to an observable image with a desired viewing and lighting angle.
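The prototype-plus-variation logic can be illustrated with a minimal sketch (all matrices here are random stand-ins, not the fitted GMF): changing the demographic predictors while holding the identity weights fixed alters only the prototype part of the face.

```python
import numpy as np

rng = np.random.default_rng(1)
n_shape, n_comp, n_pred = 90, 30, 12

A_S = rng.normal(size=(n_pred, n_shape))                   # toy demographic coefficients
U_S = np.linalg.qr(rng.normal(size=(n_shape, n_comp)))[0]  # orthonormal identity basis
w = rng.normal(size=n_comp)                                # one individual's component weights

def make_face(x_demo, w_identity):
    """A face = demographic prototype plus individual variation."""
    return x_demo @ A_S + U_S @ w_identity

x_young = rng.normal(size=n_pred)
x_old = x_young.copy()
x_old[2] += 1.0                            # shift a hypothetical age predictor
older_same_person = make_face(x_old, w)    # identity part untouched
```

The difference between the two renderings lies entirely in the prototype term, which is the "same individual at different ages" manipulation described above.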
Ratings of random faces
To obtain behavioral data from humans, we recruited seven male and seven female white Caucasian participants aged 25.86 ± 2.26 years (mean ± SD).

Generalization testing
For a second validation experiment, 12 separate participants (7 white Caucasian females and 1 East Asian female, 5 white Caucasian males, aged 28.25 ± 4.11 years [mean ± SD]) were recruited.
In both experiments, all participants had been working at the Institute of Neuroscience and Psychology at the University of Glasgow for at least 6 months and were thus familiar with the target faces. All participants had normal or corrected-to-normal vision, without a self-reported history or symptoms of synesthesia, and/or any psychological, psychiatric, or neurological condition that affects face processing (e.g., depression, autism spectrum disorder, or prosopagnosia). They gave written informed consent and received UK£6 per hour for their participation. The University of Glasgow College of Science and Engineering Ethics Committee provided ethical approval for both experiments.
Experiments
Ratings of random faces
Four sets of 10,800 random faces were generated, one for each of the four target colleagues. Two sets of random faces were created using the GMF that was built with the database that excluded the two female target colleagues. The other two sets of random faces were created using the GMF built with the database that excluded the two male target colleagues. The demographic variables (sex, age, and ethnicity) were fixed to those of the target colleagues. The resulting faces were rendered at frontal viewing and lighting angles. For each participant and target colleague, the generated faces were randomly gathered into 1,800 groups of 2 × 3 arrays, which were superimposed on a black background. In a given trial, these face arrays were shown on a computer screen in a dimly lit room while the participant's head was placed on a chin rest at a 76 cm viewing distance from the image, such that each face subtended an average of 9.5° × 6.4° of visual angle. Participants were instructed to choose the face of the array that most resembled that of the target colleague by pressing the corresponding button on a keyboard. The screen then changed to display the instruction to rank the chosen face with respect to its similarity to the target colleague on a 6-point rating scale, ranging from 1 (''not similar'') to 6 (''highly similar'').
These trials were split into four sets of 90 blocks of 20 trials each, resulting in a total of 7,200 trials that all participants completed over several days.

Generalization testing
For each target colleague, 50 new 3D face stimuli were generated. These comprised the combinations of two levels of diagnosticity at five levels of amplification, which were each rendered in five different generalization conditions. Each of these factors will be explained in the following.
In the original analysis, 26 the mass-univariate reconstructions from observed human behavior (see ''reverse correlation'' below) had been referenced to reconstructions from 1,000 permuted versions of the responses (using the same amplification values). For each vertex, the Euclidean distance of the chance reconstruction to the categorical average had been signed according to whether it was located inside or outside of the categorical average and averaged across permutations (''chance distance''). This was repeated using the ground truth target colleague shape (''ground truth distance'') as well as the human-reconstructed shape (''human-reconstructed distance''). If the absolute difference of the chance distance and the ground truth distance was larger than the absolute difference of the human-reconstructed distance and the ground truth distance, the vertex was classified as ''faithful.'' This had resulted in a [4,735 × 14 · 4] binary matrix which had then been decomposed into matrices W [4,735 × 8] and H [8 × 56] (each column corresponding to a combination of a participant and a target colleague) using non-negative matrix factorization. Any of the eight component columns in W had been classified as contributing to a group representation of the target colleagues if the median of the loadings H across participants surpassed a threshold value of 0.1. The ''diagnostic component'' C_D of each target colleague had then been defined as the maximum value on that vertex across components considered to load on the respective target colleague representation. After construction, C_D had then been normalized by its maximum value. Its ''non-diagnostic'' complement C_N was then defined as C_N = 1 − C_D. Taken together, the vectors C_D and C_N could now be interpreted as reflecting to what degree each vertex contributed to the faithful representation of each target colleague across the group of participants.
These diagnostic and non-diagnostic components could then be used to construct 3D faces containing varying levels of either diagnostic (F_D) or non-diagnostic (F_N) shape information:

F_D = X A_S + a · C_D ∘ (G − X A_S)
F_N = X A_S + a · C_N ∘ (G − X A_S)

Here, G reflects the ground truth representation of the respective colleague recorded with the 3D camera array, ∘ denotes the vertex-wise product, a reflects an amplification value that was set to one of five levels (0.33, 0.67, 1, 1.33, 1.67), and X describes the sex, ethnicity, age, and interaction values that describe the respective colleague, such that X A_S represents the categorical average (see ''generative model of 3D faces''). Each of these ten faces per target colleague was rendered at the viewing angles −30°, 0°, and +30° as well as with the age factor set to 80 years and a swapped sex factor.
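Assuming the construction blends the ground truth toward the categorical average with a vertex-wise diagnosticity weight scaled by the amplification (a reading of the description above, not the authors' exact code), the rule can be sketched as:

```python
import numpy as np

rng = np.random.default_rng(2)
n_vert = 100

avg = rng.normal(size=n_vert)          # categorical average, X @ A_S
G = avg + rng.normal(size=n_vert)      # ground-truth colleague shape
C_D = rng.random(size=n_vert)          # per-vertex diagnostic weights in [0, 1]
C_N = 1.0 - C_D                        # non-diagnostic complement

def blend(component, a):
    # Amplification a scales the colleague-specific deviation from the
    # categorical average, weighted vertex-wise by the component.
    return avg + a * component * (G - avg)

F_D = blend(C_D, 1.0)
F_N = blend(C_N, 1.0)
```

At amplification 0 the face collapses to the categorical average, and at amplification 1 the diagnostic and non-diagnostic deviations together recover the ground truth.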
The 12 validation participants completed three sessions (3 viewpoints, age, and sex) in a random order, with one session per day. On a given trial, the validators saw a central fixation cross for 1 s, followed by a face stimulus on a black background for 500 ms. They were then asked to classify the seen face, as accurately and quickly as possible, as showing one of the four target colleagues (or their siblings or parents in the age and sex conditions), or ''other'' if they could not identify the face. Between trials, a fixation cross was shown for a duration of 2 s. Each stimulus was shown five times in a randomized order. In the viewpoint session, validators completed 15 blocks of 41 trials; in the age and sex sessions, validators completed 5 blocks of 44 trials. This yielded accuracies of either 0, 0.2, 0.4, 0.6, 0.8, or 1 for each of the 10 stimuli per target colleague.
Networks
Training and testing of the networks was performed in Python 3.6.8 using keras 2.2.4 104 with a tensorflow 1.14.0 backend. 105 All networks shared the same training and testing sets and were constructed using the same encoder module. All models were trained using three data augmentation methods (random shifts in width and range by 5% as well as random zooms with a range of 10%).
Training and testing sets
The networks were trained on observable images generated by the GMF. We created 500 random identity residuals and combined them with the four combinations of two sexes (male and female) and two ethnicities (Western Caucasian and East Asian). To these, we added the four target colleagues, resulting in a total of 2,004 identities. We rendered these at three different ages (25, 45, and 65 years), seven different emotions (happy, surprise, fear, disgust, anger, sadness, neutral), and three different horizontal and vertical viewing and lighting angles (−30°, 0°, and 30°), resulting in 3,408,804 images at a resolution of 224 × 224 RGB pixels. The four colleagues were rendered with two versions of the GMF built on face database subsets that excluded the two target colleagues of the same sex. Fifty percent of the 2,000 random identities were rendered with one of these two GMFs. This dataset had originally been generated for experiments that did not include the data from the human experiment. The version of the GMF that had been used to generate the stimuli for the human experiment had slight differences (rescaling of the data from the face database and a different range of random coefficients). To allow for effortless generalization to the slightly different statistics of the stimuli that had been generated for the human experiment, we rendered all 3,408,804 images twice, once with each of the two versions, effectively resulting in a further data augmentation. For the purpose of training, development, and testing, the dataset of 6,817,608 images was split into a training set containing 80% of the images, and a development set and a test set each containing 10% of the images.
Encoder module
We used a ResNet architecture to encode the pixel-space images into a low-dimensional feature space. 31 The 224 × 224 RGB images were first padded with three rows and columns of zeros, then convolved with 64 7 × 7 filters with a stride of 2, batch normalized, subjected to a rectified linear unit (ReLU) nonlinearity, max-pooled in groups of 3 × 3, and propagated through four blocks with skip connections, across which an increasing number of 3 × 3 filters was used (64, 128, 256, and 512), with a default stride of 1 in the first block and a stride of 2 in the remaining three blocks. In each skip block, the input was first convolved with the corresponding filters and default stride, then batch normalized and subjected to a ReLU function, then convolved with filters corresponding to the current block, however with a stride of 1, batch normalized, and then added to a branch of the input that was only convolved with a 1 × 1 filter with default stride and batch normalized. The resulting activation was again subjected to a ReLU nonlinearity. After four of these blocks, an average pooling on groups of 7 × 7 was applied.

Triplet
We used SymTriplet loss, 106,107 a version of the triplet loss function (''FaceNet''). 32 To do so, we connected the encoder module to a dense mapping from the encoder output to a layer of 64 neurons. We then fed triplets of images to this encoder, consisting of an ''anchor,'' a ''positive,'' and a ''negative,'' where the anchor and positive were random images constrained to be of the same identity while the negative was an image constrained to be of a different identity. The loss function then relates these three images in the 64-dimensional embedding space such that large Euclidean distances between anchor and positive are penalized, as are short distances between anchor and negative and between positive and negative images.
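A minimal numpy sketch of such a symmetric triplet objective follows; the hinge margin and the example vectors are toy values, and the actual SymTriplet formulation may differ in detail:

```python
import numpy as np

def sym_triplet_loss(anchor, positive, negative, margin=0.2):
    """Hinged squared-Euclidean triplet loss with a symmetric
    positive-negative term (margin value here is a toy choice)."""
    d_ap = np.sum((anchor - positive) ** 2)
    d_an = np.sum((anchor - negative) ** 2)
    d_pn = np.sum((positive - negative) ** 2)
    # Penalize large anchor-positive distances relative to the
    # anchor-negative and positive-negative distances.
    return max(0.0, d_ap - d_an + margin) + max(0.0, d_ap - d_pn + margin)

a = np.array([0.0, 0.0])
p = np.array([0.1, 0.0])
n = np.array([1.0, 1.0])
loss_separated = sym_triplet_loss(a, p, n)  # well-separated triplet
loss_violated = sym_triplet_loss(a, n, p)   # "positive" is actually far away
```

Minimizing this objective over many triplets pulls same-identity samples together and pushes different identities apart, which is the property the embedding layer is trained for.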
Training the parameters of this network thus yields a function that places samples of the same identity close to each other in the embedding space. The triplet loss network was trained with stochastic gradient descent with an initial learning rate of 10⁻³ until no more improvements were observed, and fine-tuned with a learning rate of 10⁻⁵ until no more improvements were observed.

ClassID
Here, we connected the encoder module to a flattening operation and performed a dense mapping to 2,004 identity classes. We applied a softmax activation and a cross-entropy loss to train this classifier. 33 We trained the ClassID network with a cyclical learning rate 108 that cycled between a learning rate of 10⁻⁶ and 0.3.
ClassMulti
This network was the same as the ClassID network; however, it classified not only the 2,004 identity classes but also all other factors of variation that were part of the generation: the 500 identity residuals, the two sexes, the two ethnicities, the three ages, and the seven emotional expressions, as well as the three vertical and horizontal viewing and lighting angles. For each of these extra classification tasks, a separate dense mapping from the shared flattened encoder output was added to the architecture. 33 We trained the ClassMulti network with a cyclical learning rate 108 that cycled between a learning rate of 10⁻⁶ and 0.3.

Autoencoder
For this architecture, we connected the encoder module to two branches, each consisting of a convolution with 512 1 × 1 filters and a global average pooling operation. This was then connected to a decoder module, which upsampled the 512-D vector back into the original 224 × 224 RGB image space. To do so, we used an existing decoder (''Darknet'' decoder). 109 In brief, this decoder upsamples the spatial dimension gradually from a size of 1 to 7 and then in five steps that each double the spatial resolution to reach the resolution of the final image. Between these upsampling steps, the sample is fed through sets of blocks of convolution, batch normalization, and ReLU, with the number of filters alternating between 1,024 and 512 in the first set of five blocks, between 256 and 512 in the second set of five blocks, between 256 and 128 in the third set of three blocks, between 128 and 64 in the fourth set of three blocks, staying at 64 in the fifth set of one block, and alternating between 32 and 64 in the last set of two blocks. The filter size in all of these blocks alternated between 3 × 3 and 1 × 1. Finally, the 224 × 224 × 64 tensor was convolved with three filters of size 1 × 1 and passed through a tanh nonlinearity.
The loss function used to optimize the parameters of this network is the classic reconstruction loss of an AE, operationalized as the mean absolute error (MAE) between the input image and the reconstruction in pixel space. We trained the AE using the Adam optimizer 110 with an initial learning rate of 10⁻³ until no further improvements were observed.
View-invariant autoencoder
This network shared its architecture and training regime with the AE; however, we changed the input-output pairing during training. Instead of optimizing the parameters to reconstruct the unchanged input, the goal of the viAE was to reconstruct a frontalized view, independent of the pose of the input, while keeping all other factors of variation constant. This resulted in a more view-invariant representation in the bottleneck layer compared with the AE. 35

Variational autoencoder
For this architecture, 111 we connected the encoder module to two branches, each consisting of a convolution with 512 1 × 1 filters and a global average pooling operation. These were fed into a sampling layer as mean and variance inputs, transforming an input into a sample from a 512-D Gaussian with the specified mean and diagonal covariance matrix.
This sample was then fed into the same decoder module as described for the AE and viAE above.
The loss function used to optimize the parameters of this network is the sum of two parts. The first is the reconstruction loss of a classic autoencoder, for which we used the MAE between the reconstruction and the original image. The second part is the Kullback-Leibler divergence measured between the multivariate normal distribution characterized by the mean and variance vectors passed into the sampling layer and the prior, a centered, uncorrelated, and isotropic multivariate normal distribution. The second part can be seen as a regularization that effectively leads to a continuous latent space. As it has been reported that weighting the second part of the loss function more strongly than the first can improve the disentanglement of the resulting latent space (''beta-VAE''), 112 we also repeated the training with several values of the regularization parameter beta. However, this did not substantially change the latent space that we obtained.
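The two-part objective can be sketched as follows. The MAE reconstruction term and the closed-form KL divergence of a diagonal Gaussian to a standard normal prior are standard results; the beta weighting follows the beta-VAE idea mentioned above, and all inputs here are toy values:

```python
import numpy as np

def kl_diag_gaussian(mu, log_var):
    """KL( N(mu, diag(exp(log_var))) || N(0, I) ), summed over latent dimensions."""
    return 0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var)

def beta_vae_loss(x, x_hat, mu, log_var, beta=1.0):
    """MAE reconstruction term plus beta-weighted KL regularizer."""
    recon = np.mean(np.abs(x - x_hat))
    return recon + beta * kl_diag_gaussian(mu, log_var)

x = np.linspace(0, 1, 10)
mu = np.zeros(4)
log_var = np.zeros(4)
perfect = beta_vae_loss(x, x, mu, log_var)  # prior-matched latent, perfect reconstruction
```

The loss is zero only when the reconstruction is exact and the latent distribution matches the prior; increasing beta rescales only the KL part.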
We also trained two additional identity classifiers that used the frozen weights of the (beta = 1)-VAE. The first directly connected the VAE encoder to a dense linear mapping to 2,004 identity classes. The second first passed through two blocks of fully connected layers of 512 neurons that were batch normalized and passed through an ReLU nonlinearity before the dense linear mapping to identity. In both cases, a softmax activation function was applied and the resulting networks were trained with a cross-entropy loss function. All models shared the training regime of the AE and viAE models as described above.
Forward models
We were interested in comparing the degree to which various sets or ''spaces'' of predictors describing the rated stimuli were linearly relatable to the human behavioral responses. To do so in a way that guards against merely quantifying overfitting, we linearly regressed the ratings on a range of different descriptors extracted from the random faces presented on each trial in a cross-validation framework.
The predictor spaces we used for this (each consisting of multiple predictor channels) were the texture and shape components of the single trials, as provided by the GMF, as well as the activations of the networks on their ''embedding layers,'' as obtained from forward passes of the stimuli through the networks. Specifically, we used the 512-dimensional pre-decision layers of the classifiers (ClassID and ClassMulti), the 64-dimensional final layer of the triplet loss network, and the 512-dimensional bottleneck layer of the AE, viAE, and VAE. We then also propagated images of the four target colleagues, as recorded with the 3D capture system, fit by the GMF, and rendered with frontal viewing and lighting angles, through the four networks, and computed the Euclidean distances on the embedding layers between the random faces of each trial and these ground truth images. We extended this by computing the channel-wise distances of each feature space and using them as an input to the regression described below to obtain weighted Euclidean distances. Additionally, we extracted the pre-softmax activity (''logits'') of the decision neurons trained to provide the logits for the four target colleagues in the final layer of the classifier networks (ClassID and ClassMulti, as well as the linear and nonlinear VAE classifiers). Since we were interested in assessing to what degree the GMF shape and texture features and various embedding layer activations provided the same or different information about the behavioral responses, we also considered models with joint predictor spaces consisting of the two subspaces of shape features and AE, viAE, or VAE activations as well as the three subspaces of shape features, texture features, and AE, viAE, or VAE activations. Lastly, to assess the extent to which a simple linear PCA could extract useful predictors from the images, we performed an SVD on the nonzero channels of a subset of the training images used for the DNNs.
Performing SVD on the entire set of training images used for the DNNs would have been computationally infeasible. The subset we used consisted of 18,000 RGB images of all 2,000 identities rendered at nine different viewing angles, limiting emotion expression to the neutral condition and lighting angles to frontal angles. The first 512 dimensions could account for 99.5976% of variance in the training set. We projected the experimental stimuli onto these for further analyses.
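The PCA-by-SVD baseline amounts to the following sketch, with toy dimensions standing in for the 18,000 training images, the pixel count, and the 512 retained components:

```python
import numpy as np

rng = np.random.default_rng(3)
n_train, n_pix, k = 500, 200, 32   # stand-ins for 18,000 images, pixel count, 512 dims

train = rng.normal(size=(n_train, n_pix))
mean = train.mean(axis=0)
U, s, Vt = np.linalg.svd(train - mean, full_matrices=False)

components = Vt[:k]                               # first k principal directions
explained = np.sum(s[:k] ** 2) / np.sum(s ** 2)   # fraction of training variance kept

stimuli = rng.normal(size=(10, n_pix))
projected = (stimuli - mean) @ components.T       # low-dimensional predictors
```

Projecting the experimental stimuli onto the retained right-singular vectors yields the low-dimensional predictors used in the regressions below.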
We performed the regression separately for each participant and target colleague in a nested cross-validation procedure. 37 This allowed us to continuously tune the amount of L2 regularization necessary to account for correlated predictor channels and avoid excessive overfitting, using Bayesian adaptive direct search (BADS), 113 a black-box optimization tool (see Daube et al. 41 for a comparable approach). Specifically, we divided the 1,800 trials per participant into folds of 200 consecutive trials each and, in each of nine outer folds, assigned one of the resulting blocks to the testing set and eight to the development set. Then, within each of the nine outer folds, we performed eight inner folds, where one of the eight blocks of the development set was assigned to be the validation set and seven were assigned to the training set. In each of the eight inner folds, we fitted an L2-regularized linear regression (''ridge regression'') using the closed-form solution

B = (X^T X + R)^(−1) X^T y,

where B denotes the weights, y denotes the n × 1 vector of corresponding human responses, R describes a regularization matrix, and X denotes the n × M matrix of trials by predictors, where

X = [X_1, X_2, …, X_o] and M = Σ_{s=1}^{o} m_s,

such that o denotes the number of combined predictor subspaces and m_s describes the number of predictor channels in the s-th subspace. In the cases where the features were combinations of multiple feature subspaces, i.e., where o > 1, we used a dedicated amount of L2 regularization for each subspace. This avoids using a common regularization term for all subspaces, which can result in solutions that compromise between the need for high and low regularization in different subspaces and thus fail to optimally extract the predictive power of the joint space. The regularization matrix R can then be described as

R = diag(λ_{1,1}, …, λ_{m_1,1}, λ_{1,2}, …, λ_{m_2,2}, …, λ_{1,o}, …, λ_{m_o,o}),   (Equation 9)

where λ_{c,s} describes the amount of L2 regularization for channel c of predictor subspace s, which is constant for all c within one s.
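The closed-form ridge solution with subspace-specific regularization can be sketched as follows; the lambda values are arbitrary stand-ins for the BADS-tuned hyperparameters, and the data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(4)
n, m1, m2 = 300, 20, 15

# Two predictor subspaces concatenated into one design matrix.
X = np.hstack([rng.normal(size=(n, m1)), rng.normal(size=(n, m2))])
y = rng.normal(size=n)

# One lambda per subspace, repeated across that subspace's channels
# (2**5 and 2**-3 are placeholder values, not tuned hyperparameters).
lam = np.concatenate([np.full(m1, 2.0 ** 5), np.full(m2, 2.0 ** -3)])
R = np.diag(lam)

# Closed-form ridge solution: B = (X'X + R)^(-1) X'y
B = np.linalg.solve(X.T @ X + R, X.T @ y)
y_hat = X @ B
```

Giving each subspace its own lambda lets a poorly predictive subspace be shrunk heavily without over-penalizing a predictive one.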
For each predictor subspace, λ_{c,s} thus was one hyperparameter that we initialized at a value of 2¹⁷ and optimized in BADS with a maximum of 200 iterations, where the search space was constrained to the interval [2⁻³⁰, 2³⁰]. The objective function that this optimization maximized was Kendall's tau, as measured between predicted and observed responses of the inner-fold validation set. We used the median of the optimal λ_{c,s} across all inner folds and retrained a model on the entire development set to then evaluate it on the unseen outer fold. This yielded sets of 200 predicted responses for each test set of the nine outer folds. We evaluated them using two information-theoretic measures: MI and redundancy, both computed using binning with three equipopulated bins. 114 We computed bivariate MI with Miller-Madow bias correction between the predictions of each forward model and the observed human responses. We also computed redundancy, using a recent implementation of partial information decomposition (PID), I_ccs. 29 When there are two source variables and one target variable, PID aims to disentangle the amount of information the two sources share about the target (redundancy), the amount of information each source has on its own (unique information), and the amount of information that is only available when considering both sources. In our case, we were interested in quantifying how much information the predictions derived from DNN-based forward models shared with the predictions derived from GMF shape features about observed human behavior. To assess whether the amount of MI and redundancy exceeded chance level, we repeated the nested cross-validation procedure 100 times for each combination of participant and target colleague, each time shuffling the trials.
From these surrogate data, we estimated null distributions of MI and redundancy and defined a noise threshold within each participant and target colleague condition as the 95th percentile of MI and redundancy measured in these surrogate data. We counted the number of test folds of all participants and colleagues that exceeded this noise threshold and report this as a fraction relative to all data points.
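The information-theoretic evaluation can be sketched as follows. This plug-in estimator uses three equipopulated bins as described, but omits the Miller-Madow bias correction and the PID redundancy computation; all data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(5)

def equipop_bin(x, n_bins=3):
    """Assign values to approximately equally populated bins."""
    edges = np.quantile(x, np.linspace(0, 1, n_bins + 1)[1:-1])
    return np.digitize(x, edges)

def mutual_info(x, y, n_bins=3):
    """Plug-in MI in bits between binned variables (no Miller-Madow correction)."""
    bx, by = equipop_bin(x, n_bins), equipop_bin(y, n_bins)
    joint = np.histogram2d(bx, by, bins=n_bins)[0] / len(x)
    px, py = joint.sum(axis=1), joint.sum(axis=0)
    nz = joint > 0
    return np.sum(joint[nz] * np.log2(joint[nz] / np.outer(px, py)[nz]))

pred = rng.normal(size=1800)
resp = pred + rng.normal(size=1800)   # model predictions correlated with "behavior"
mi = mutual_info(pred, resp)

# Permutation null: 95th percentile of MI under shuffled responses.
null = [mutual_info(pred, rng.permutation(resp)) for _ in range(100)]
threshold = np.percentile(null, 95)
```

Predictions count as above chance when their MI with the observed responses exceeds the permutation-based threshold.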
To then assess whether different predictor spaces gave rise to different levels of MI and redundancy in the presence of high between-subject variance, we employed Bayesian linear models as implemented in the brms package, 38 which provides a user-friendly interface for R 115 to such models using Stan. 116 Specifically, we had performances (MI and redundancy) for each of the nine outer folds b for each combination of target colleague j, participant i, and all predictor spaces f_1 to f_q. The factor of interest was the predictor space f. We used Hamiltonian Monte Carlo sampling with four chains of 4,000 iterations each, 1,000 of which were used for warm-up. The priors for standard deviation parameters were not changed from their default values, i.e., half-Student-t distributions with three degrees of freedom, while we used weakly informative normal priors with a mean of 0 and a variance of 10 for the effects of individual predictor spaces. Specifically, we modeled the log-transformed and thus roughly normally distributed MI and redundancy performances k with the following model:

k_n ~ N(μ_n, σ²)
σ ~ |t(3, 0, 10)|
μ_n = β_{i:f[n]} + β_{i:b[n]} + β_{i:j[n]} + β_{f_1}[n] + … + β_{f_q}[n]
σ²_βint ~ |t(3, 0, 10)|
β_{f_1}, …, β_{f_q} ~ N(0, 10)

To compare the resulting posterior distributions of the parameters of interest, we evaluated the corresponding hypotheses using the brms package (β_{f_a} − β_{f_b} > 0 for all possible pairwise combinations of predictor spaces) and obtained the proportion of samples of the posterior distributions of differences that were in favor of the corresponding hypotheses.
As well as the predictions, the forward models also produced weights that linearly related predictors to predicted responses. We were interested in examining these weights to learn how individual shape features were used in the forward models. For the forward models predicting responses from shape features, this was directly possible: the weights B_S mapped GMF shape features to responses and could thus be interpreted as the ''shape receptive field.'' However, to be able to compare these weights on the vertex level, we used a differently scaled version of the shape features. This was obtained by multiplying the Z-scored 4,735 · 3-dimensional vertex-level shape features with the pseudoinverse of the matrix of left-singular vectors U_S from the SVD performed on the identity residuals of the 3D vertex features of the face database (see ''generative model of 3D faces''). This 355-dimensional representation of the shape features performed virtually identically to the unscaled version in the forward modeling. For visualization, we could then project the weights B_S from the 355-dimensional PCA component space into the 4,735 · 3-dimensional vertex space, where the absolute values could be coded in RGB space. This resulted in a map that indicated how the random faces at each vertex affected the response predictions in the three spatial dimensions.
The weight maps B_N that form the forward models relating DNN activations to responses were less simple to study in this shape space, since they mapped the less interpretable network activations, not GMF shape features, to behavioral responses. To interpret these models in vertex space, we re-predicted (''simulated'') the response predictions derived from DNN features using the GMF shape features, obtaining simulated response predictions as well as simulated shape weights B_SN. We reasoned that the response predictions of an ideal DNN model should be perfectly predictable by the shape features, and that the corresponding simulated shape weights B_SN should in this case be identical to the original shape weights B_S. We thus correlated the simulated response predictions with the DNN response predictions, as well as the simulated shape weights with the original shape weights, for each test fold in each participant for each target colleague condition.
Decoding shape information from embedding layers
To understand what shape information is available on the embedding layers of the networks, independently of human behavior, we trained linear models that decoded GMF shape PCA components from embedding layer activations in response to images of faces. We used a cross-validation framework on the full set of stimuli, consisting of 43,200 RGB images and their corresponding GMF shape PCA components, using a random set of 80% of the images for training, a further 10% for tuning, and the remaining 10% for testing. Specifically, we trained mass-multivariate L2 regularized regressions, separately predicting each GMF shape component from all neurons of the DNN embedding layers. Similar to the approach taken for the forward models, we tuned the L2 regularization using BADS to maximize the prediction performance on the tuning set. We then projected all predicted GMF shape PCA components into vertex space and, at each vertex, assessed the Euclidean distance between the original GMF shape model and the predictions from the DNN embedding layers.
Reverse correlation
To reconstruct internal templates of the target colleagues' faces under the GMF, we performed a mass-univariate linear mapping from the observed behavior of the human participants to each GMF shape and texture feature.
We repeated this with the choice behavior and rating behavior predicted by the forward models to compare these forward models, human observed behavior, and the ground truth shape information of the target colleagues as captured by our 3D camera array.
We performed the linear regressions of variation in the shape vertices and texture pixels of the random stimuli on the ratings of the images chosen by the human participants and their forward models based on GMF features, as well as DNN and PCA activations. This was done separately for each vertex and spatial dimension, as well as for each pixel and RGB dimension. In principle, this is equivalent to inverting the weights of the forward model. 117,118 However, to match the procedure in Zhan et al., 26 we re-estimated these parameters per vertex and pixel using the MATLAB function ''robustfit.'' Each of the v = 1, …, 4,735 · 3 shape vertex positions s was thus modeled as

s_v = b_0 + b_1 r,

and each of the p = 1, …, 800 · 600 · 3 texture pixel RGB values t was modeled as

t_p = b_0 + b_1 r.

Here, r are the vectors of observed or predicted responses, b_0 is an intercept term, and b_1 is a slope term.
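The mass-univariate mapping can be sketched with ordinary least squares standing in for MATLAB's robustfit (all data here are synthetic, with a known per-vertex slope to recover):

```python
import numpy as np

rng = np.random.default_rng(6)
n_trials, n_vert = 1800, 50

r = rng.normal(size=n_trials)            # observed (or model-predicted) ratings
true_slope = rng.normal(size=n_vert)     # toy per-vertex effect of the rating
S = 0.5 + np.outer(r, true_slope) + 0.1 * rng.normal(size=(n_trials, n_vert))

# Per-vertex regression s_v = b0 + b1 * r, fitted for all vertices at once.
X = np.column_stack([np.ones(n_trials), r])
coeffs, *_ = np.linalg.lstsq(X, S, rcond=None)
b0, b1 = coeffs                          # intercepts and slopes per vertex
```

The recovered slope map b1 is the reverse-correlation template: it shows how each vertex coordinate covaries with the behavioral response.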
In the original experiment, new faces were then generated by multiplying the slopes obtained from the regressions with different ''amplification values.'' The resulting faces had then been presented to the participants to titrate the ''amplification'' of the weights that would result in the highest perceptual similarity of the reconstructed face for each participant. An amplification of 0 here corresponds to the shape or texture feature being reconstructed as a function of the intercept term only. This corresponds to the shape or texture feature resulting from the average of the faces chosen from the array of six faces in the first stage of each trial.
We repeated this for the forward models by storing the shape and texture components and by rendering observable images of faces corresponding to amplification values ranging from 0 to 50 (the same range used to titrate the human reconstructions) in steps of 0.5. We then computed forward model predictions from GMF shape and texture features, and propagated the observable images through encoding models based on DNNs. This resulted in responses of all systems across the range of amplification values. We chose the peak of each curve and reconstructed the internal templates corresponding to the shape and texture components at these peaks.
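The amplification sweep above can be sketched as reconstructing intercept + amplification × slope for each value in the 0 to 50 range and taking the peak of the resulting response curve (the response function and feature values here are hypothetical stand-ins for the rendered-face pipeline):

```python
def reconstruct(b0, b1, amp):
    """Amplified template: intercept plus amplification times the slope,
    applied elementwise over the shape/texture features."""
    return [i + amp * s for i, s in zip(b0, b1)]

def peak_amplification(b0, b1, respond, amps):
    """Score each amplified template with a response function and return
    the amplification value at the peak of the resulting curve."""
    return max(amps, key=lambda a: respond(reconstruct(b0, b1, a)))

# Toy setup: a "response" that peaks when the first feature equals 10
b0, b1 = [0.0, 1.0], [1.0, 0.0]
respond = lambda f: -(f[0] - 10.0) ** 2
amps = [a * 0.5 for a in range(101)]  # 0 to 50 in steps of 0.5, as in the text
print(peak_amplification(b0, b1, respond, amps))  # → 10.0
```

An amplification of 0 collapses the template onto the intercept term, matching the interpretation given in the preceding paragraph.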
We rendered the corresponding internal templates as intuitively visualizable faces. We also considered the explicit descriptions in vertex space to compare templates from humans and templates from forward models among each other, and with the ground truth face shape from the target colleagues. To evaluate the ''humanness'' of the forward models, we computed the Euclidean distances and correlations from the internal templates of the forward models with the internal templates of the humans. To also evaluate the ''veridicality,'' we computed the Euclidean distances and correlations from the ground truth target colleagues with the internal templates from the forward models and the human participants.
This resulted in Euclidean distances and correlations for each target colleague condition j and human participant i (observed and predicted by different predictor spaces f). We then log-transformed the Euclidean distances and Fisher z-transformed the correlations to obtain evaluation measures e, and modeled them with Bayesian hierarchical models similar to the ones used to model the prediction performances of the forward models. To compare the resulting posterior distributions of the parameters of interest, we evaluated the corresponding hypotheses using the brms package (β_fa − β_fb > 0 for all possible pairwise combinations of predictor spaces) and obtained the proportion of samples of the posterior distributions of differences that were in favor of the corresponding hypotheses. Prior to visualization, we back-transformed the posterior distributions of the log Euclidean distances with an exponential and the posterior distributions of correlations with the inverse Fisher z-transformation.
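The log and Fisher z transforms used for the evaluation measures, and their back-transforms (exponential and tanh), can be expressed directly with the standard library:

```python
import math

def log_euclidean(u, v):
    """Log-transformed Euclidean distance between two vertex vectors."""
    return math.log(math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v))))

def fisher_z(rho):
    """Fisher z-transform of a correlation (atanh); the inverse is tanh."""
    return math.atanh(rho)

d = log_euclidean([0.0, 0.0], [3.0, 4.0])
z = fisher_z(0.5)
# Back-transforms recover the raw distance and correlation
print(round(math.exp(d), 3), round(math.tanh(z), 3))  # → 5.0 0.5
```

Both transforms map bounded or skewed quantities onto scales better suited to Gaussian-likelihood hierarchical models, which is why they are applied before model fitting and inverted before visualization.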
Generalization testing
The models of human behavior had been trained and tested under the same conditions. To also test how they would perform on data from a different distribution, we re-used data from a validation experiment originally conducted by Zhan and colleagues.26
We propagated the 50 stimulus images per target colleague (combinations of two levels of diagnosticity at five levels of amplification, which were each rendered in five different generalization conditions, see ''experiments-generalization testing'') through each of the model systems under consideration and extracted the rating predictions for each of the 14 participants of the first experiment for each of the four colleagues from each of the four correspondingly fitted forward models. Next, we normalized the predictions to values between 0 and 1 within target colleagues to eliminate possible biases from participants rating the random stimuli of the first experiment higher for one target colleague than for others. We then used the maximum predicted rating across all target colleagues for a given stimulus as the choice of the respective system. The predictions for each of the 14 participants of the first experiment were compared with the behavior of each of the 12 additional participants of the second experiment.
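The within-colleague normalization and maximum-rating choice rule can be sketched as follows (the colleague labels and rating values are toy data):

```python
def normalize_within(preds):
    """Min-max normalize a list of predicted ratings to [0, 1],
    removing per-colleague rating biases."""
    lo, hi = min(preds), max(preds)
    return [(p - lo) / (hi - lo) for p in preds]

def choose_target(preds_by_colleague, stimulus):
    """Pick the colleague whose normalized prediction is maximal
    for the given stimulus index."""
    normed = {c: normalize_within(p) for c, p in preds_by_colleague.items()}
    return max(normed, key=lambda c: normed[c][stimulus])

# Toy predicted ratings for 3 stimuli under two colleague models
preds = {"colleague_A": [2.0, 8.0, 5.0], "colleague_B": [1.0, 2.0, 3.0]}
print(choose_target(preds, 1))  # → colleague_A
```

Without the normalization step, a model that rates all stimuli of one colleague higher overall would dominate every choice, which is exactly the bias the text describes eliminating.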
Since all systems were deterministic, the resulting accuracy values for the systems were binary (this was different for the human responses, since each stimulus had been shown to the validators five times; see ''experiments-generalization testing'').
We analyzed the data by first computing the absolute difference of human and model accuracies and then subjecting the resulting absolute errors to a Bayesian linear model. Since the model accuracies could only take one of six different values (from 0 to 1 in steps of 0.2), we used an ordinal model. To do so, we used a cumulative model assuming a normally distributed latent variable as implemented in brms.119 Concretely, we modeled the probability of a model accuracy a of model type f predicting behavior in task g of participant i for target colleague j and validated by validator k to fall into category t, given the linear predictor η, as

Pr(a = t | η) = F(τ_t − η) − F(τ_{t−1} − η),  (Equation 14)

where F is a cumulative distribution function, τ_t is one of T = 5 different thresholds that partition the standard Gaussian continuous latent variable ã into T + 1 categories, and η describes ã as a linear function of the predictors. To compare the resulting posterior distributions of the parameters of interest, we evaluated the corresponding hypotheses using the brms package (β_{fa:g} − β_{fb:g} > 0 for all possible pairwise combinations of model types within each task), and obtained the proportion of samples of the posterior distributions of differences that were in favor of the corresponding hypotheses.
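Equation 14's cumulative ordinal probabilities can be computed directly once F is taken as the standard normal CDF; a sketch with illustrative threshold values (not the fitted ones):

```python
import math

def phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def category_probs(eta, thresholds):
    """Pr(a = t | eta) = F(tau_t - eta) - F(tau_{t-1} - eta), with the
    implicit boundary thresholds tau = -inf and tau = +inf (Equation 14)."""
    cuts = [-math.inf] + list(thresholds) + [math.inf]
    return [phi(cuts[t + 1] - eta) - phi(cuts[t] - eta)
            for t in range(len(cuts) - 1)]

# T = 5 thresholds partition the latent variable into 6 accuracy categories
thresholds = [-1.5, -0.5, 0.0, 0.5, 1.5]
p = category_probs(0.3, thresholds)
print(round(sum(p), 6))  # → 1.0 (probabilities over the 6 categories)
```

Shifting η moves probability mass toward higher categories, which is how the linear predictor encodes effects of model type, task, participant, colleague, and validator on the latent accuracy.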
ACKNOWLEDGMENTS
This work has been funded by the Wellcome Trust grant (Senior Investigator Award, UK; 107802) and the Multidisciplinary University Research Initiative/ Engineering and Physical Sciences Research Council grant (USA, UK; 172046-01) awarded to P.G.S. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Surgical choice of non-small cell lung cancer with unexpected pleural dissemination intraoperatively
Background Whether patients with non-small cell lung cancer (NSCLC) with unexpected pleural dissemination (UPD) could get survival benefit from tumor resection remained controversial. Methods Totally, 169 patients with NSCLC with UPD were included between 2012 and 2016. Patients were divided into the tumor resection and open-close group. Progression-free survival (PFS) and overall survival (OS) were compared with a log-rank test. The multivariable Cox analysis was applied to identify prognostic factors. Results Sixty-five patients received open-close surgery and 104 patients underwent main tumor and visible pleural nodule resection. Tumor resection significantly prolonged OS (hazard ratio [HR]: 0.408, P < 0.001), local PFS (HR: 0.283, P < 0.001), regional PFS (HR: 0.506, P = 0.005), and distant metastasis (HR: 0.595, P = 0.032). Multivariable Cox analysis confirmed that surgical method was an independent prognostic factor for OS, local PFS and regional PFS, except distant metastasis. Subgroup analyses indicated that tumor resection could not improve OS in the patients who received targeted therapy (HR: 0.649, P = 0.382), however, tumor resection was beneficial for the patients who received adjuvant chemotherapy alone (HR: 0.322, P < 0.001). In the tumor resection group, lobectomy (HR: 0.960, P = 0.917) and systematic lymphadenectomy (HR: 1.512, P = 0.259) did not show survival benefit for OS. Conclusions Main tumor and visible pleural nodule resection could improve prognosis in patients with UPD who could not receive adjuvant targeted therapy. Sublobar resection without systematic lymphadenectomy may be the optimal procedure. Supplementary Information The online version contains supplementary material available at 10.1186/s12885-021-08180-1.
Introduction
Lung cancer ranked first in terms of incidence and mortality among malignant tumors [1], and non-small cell lung cancer (NSCLC) represented approximately 85% of lung cancer cases [2]. Curative surgical resection was the first-line choice for early-stage NSCLC, while systemic therapy was the standard of care for advanced NSCLC [3]. Pretreatment evaluation for tumor resectability and metastasis should be conducted before the operation, and methods for evaluation included bronchoscopy, endobronchial ultrasound, positron emission tomography/computed tomography (PET/CT), etc. [3]. Although patients were assessed as clinical stage M0 after these evaluations, unexpected pleural dissemination (UPD) was occasionally detected by thoracic surgeons during the operation. The choice of tumor resection or open-close surgery remained controversial.
Recently, several studies revealed that tumor resection could bring survival benefit in patients with UPD [4][5][6][7]. However, the sample sizes of these studies were small. In addition, these studies did not perform subgroup analyses regarding adjuvant therapy. Given that targeted therapy had a greater survival benefit for advanced NSCLC than conventional chemotherapy [8][9][10], the survival benefit of tumor resection for the patients who received targeted therapy was unclear. Thus, our aims were to validate the benefit of tumor resection in patients with UPD and to explore its benefit in subgroups of different adjuvant therapeutic regimens.
Study design
This was a retrospective cohort study approved by the Ethics Committee of Shanghai Pulmonary Hospital (approval number: K20-283). This analysis was performed in accordance with the Strengthening the Reporting of Cohort Studies in Surgery (STROCSS) criteria [11].
Patients
We retrospectively reviewed the medical records of consecutive patients who received thoracic surgery between January 2012 and December 2016 in the Department of Thoracic Surgery, Shanghai Pulmonary Hospital. The inclusion criteria were: (1) primary pathologic stage IV-M1a NSCLC according to the 8th edition of the TNM staging system [12], (2) clinical stage M0 before the operation, (3) malignant pleural dissemination. The patients were excluded if they met any of the following criteria: (1) benign disease, (2) small cell lung cancer, (3) metastatic tumor of other cancer, (4) stage I-III NSCLC, (5) stage IV-M1b or IV-M1c, (6) lost to follow-up (< 3 months).
Preoperative evaluation
All patients underwent preoperative evaluation for both tumor resectability and metastasis. Bronchoscopy and chest enhanced CT scan were requested for all lung cancer candidates. Distant metastasis was assessed routinely by using brain CT scan or magnetic resonance imaging (MRI), abdominal CT/MRI or sonography, and bone scintigraphy. If the patients received a PET-CT scan, the examinations above (except bronchoscopy) were not requested. Ultrasonic probing for thoracentesis was performed routinely in patients with preoperatively noted pleural effusion, and the drainage liquid was sent for cytology.
Operations
Video-assisted thoracic surgery (VATS) or standard posterolateral thoracotomy was performed according to the tumor characteristics, and VATS was generally the first choice. Initial exploration was performed, and a frozen section of a pleural biopsy was taken if pleural metastasis was suspected. After pathological confirmation of the pleural malignancies, the choice of tumor resection or pleural biopsy alone, and the extent of resection, depended on the surgeons' (17 chief or deputy chief surgeons) experiences and preferences. If the surgeons did not choose to resect the primary tumor, thorax closure was performed immediately. In the patients who underwent tumor resection, as many visible pleural lesions as possible were resected (large lesions) or cauterized with the electrotome (small lesions).
Adjuvant therapy
Driver gene mutation detection was recommended for all patients. For patients harboring an epidermal growth factor receptor (EGFR) mutation or anaplastic lymphoma kinase (ALK) rearrangement, the corresponding targeted drug was recommended for first-line treatment. If the driver gene mutations were negative or the patients did not choose targeted therapy due to cost, allergy, adverse effects or other factors, platinum-based chemotherapy was recommended.
Follow-up
The patients were scheduled for a first re-visit at 4 weeks after operations, and follow-up visits were scheduled every 3 - 6 months. Tumor progression events were detected by radiological evaluation (as listed above). Following the definitions of previous studies [4,7], local progression was defined as primary lesion enlargement or lesion recurrence at the resection site. Regional progression was defined as increasing pleural effusion / pleural nodules / lung lesions, or ipsilateral lymph node recurrence / enlargement. Distant metastasis was defined as new lesions in the contralateral lung or any other organ (brain, bone, etc.).
Statistical analysis
Categorical variables were analyzed by the Pearson chi-square test or Fisher's exact test. Continuous variables were analyzed by the Student's t test or Wilcoxon rank-sum test. Progression-free survival (PFS) was defined as the time from surgery to any disease progression or the last follow-up. Overall survival (OS) was defined as the time from surgery to death or the last follow-up. The Kaplan-Meier method was used to obtain the PFS and OS curves, and a log-rank test was used to compare the curves. Univariable Cox proportional hazard regression was used to identify prognostic factors. Multivariable analysis was performed on the factors with a P value < 0.10 to identify independent prognostic factors. All analyses were conducted using R software (version 3.6.3), and a two-sided P value < 0.05 was considered statistically significant.
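The analyses above were run in R; as an illustration of the product-limit estimate underlying the Kaplan-Meier curves, a minimal pure-Python sketch (the toy follow-up times and event indicators are hypothetical, not study data):

```python
def kaplan_meier(times, events):
    """Product-limit (Kaplan-Meier) survival estimate.
    times: follow-up in months; events: 1 = event (death/progression), 0 = censored.
    Returns (time, survival probability) pairs at each event time."""
    data = sorted(zip(times, events))
    at_risk = len(data)
    surv, curve = 1.0, []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = sum(e for tt, e in data if tt == t)
        if deaths:
            surv *= 1.0 - deaths / at_risk   # survival drops only at event times
            curve.append((t, surv))
        # remove everyone (events and censored) with this time from the risk set
        while i < len(data) and data[i][0] == t:
            at_risk -= 1
            i += 1
    return curve

# Toy cohort of 6 patients
times = [6, 7, 10, 15, 19, 25]
events = [1, 0, 1, 1, 0, 1]
for t, s in kaplan_meier(times, events):
    print(t, round(s, 3))
```

Censored patients (event = 0) leave the risk set without lowering the survival estimate, which is the defining feature of the product-limit method compared with a naive event fraction.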
Clinicopathological characteristics
Totally, 169 patients who fulfilled the selection criteria were included in the study (Fig. 1). Of the 169 patients, 65 patients received open-close surgery and 104 patients underwent main tumor and visible pleural nodule resection. Table 1 presented the clinical and pathological characteristics in the two groups, and no significant difference was observed. The open-close group included 54 (83.1%) adenocarcinoma, 4 squamous cell carcinoma (SCC), 7 other NSCLC (2 adenosquamous carcinoma, 2 large cell carcinoma, 2 carcinosarcoma and 1 poorly differentiated carcinoma). The tumor resection group included 92 adenocarcinoma, 7 SCC and 5 other NSCLC (3 adenosquamous carcinoma, 1 large cell carcinoma and 1 lymphoepithelioma-like carcinoma). EGFR mutation was detected in 22 and 57 patients in the two groups, respectively. Two cases of ALK rearrangement were detected in the tumor resection group, and one case in the open-close group.
Perioperative outcomes
More patients in the tumor resection group received thoracotomy (35.6% vs 10.8%, P = 0.001), and the open-close group had a higher proportion of pleural effusion (75.4% vs 23.1%, P < 0.001) (Table 2). In the tumor resection group, 67 patients received lobectomy, 33 patients received sublobar resection (including 2 cases of segmentectomy), and 4 patients received pneumonectomy. Forty-four (42.3%) patients in the tumor resection group underwent systematic lymphadenectomy, while no patients in the open-close group underwent systematic lymphadenectomy. The tumor resection group had a significantly longer post-operative hospital stay (6 days vs 4 days, P < 0.002). Although the incidence of postoperative complication (19.2%) in the tumor resection group was higher than in the open-close group (9.2%), the difference was not significant (P = 0.080). There was one case of death in the tumor resection group. The patient was a 57-year-old man who suffered a massive pulmonary embolism on the first post-operative day and died 4 days later.
Neoadjuvant and adjuvant therapy
Eight patients (2 in the open-close group and 6 in the tumor resection group) received neoadjuvant chemotherapy (Table 1) and 1 patient underwent targeted therapy followed by chemotherapy. In total, 126 patients received first-line platinum-based chemotherapy. Of the 65 patients who received targeted therapy, 22 patients received first-line targeted therapy alone, 23 patients received targeted maintenance therapy after chemotherapy, and 20 patients received second-line targeted therapy (Fig. 2).

In univariable analysis, besides surgical method, sex, smoking status, clinical T stage, pleural effusion, chemotherapy and targeted therapy were prognostic factors for OS (Table 3). Multivariable Cox analysis confirmed that surgical method was an independent prognostic factor (HR: 0.521, 95% CI: 0.288 - 0.943, P = 0.031), and clinical T stage, adjuvant chemotherapy and adjuvant targeted therapy were also independent prognostic factors (Table 3).
Subgroup analysis regarding adjuvant therapy
The clinicopathological characteristics of the 65 patients who received targeted therapy were shown in Supplementary Table 4, and they were comparable except for a higher incidence of pleural effusion in the open-close group (71.4% vs 11.4%, P < 0.001), which was consistent with the overall analyses. Kaplan-Meier survival analysis with a log-rank test indicated that tumor resection could not improve OS (HR: 0.649, 95% CI: 0.246 - 1.710, P = 0.382) (Fig. 3a). No significant difference was observed among first-line, maintenance therapy after chemotherapy and second-line therapy, and we also observed that administration of third-generation TKIs after tumor progression did not significantly improve the OS (Table 4), probably due to the small sample size in the subgroups. In multivariable Cox analysis, after adjustment for clinical T stage, N stage, timepoint of TKIs and third-generation TKI administration, surgical method was still not a risk factor (Table 4).
In the 78 patients (Supplementary Table 4) who received adjuvant chemotherapy alone, tumor resection could significantly prolong OS (HR: 0.322, 95% CI: 0.165 - 0.628, P < 0.001) (Fig. 3b), and it remained positive in multivariable analysis adjusted for clinical T stage, N stage and pleural effusion (Table 5). We noticed that more patients in the lobectomy group underwent systematic lymphadenectomy (80.6% vs 6.1%, P < 0.001). The Kaplan-Meier survival analysis with a log-rank test showed that there was no significant difference between lobectomy and sublobar resection for OS (HR: 0.960, 95% CI: 0.452 - 2.040, P = 0.917) (Fig. 3c), and it was also negative in multivariable analysis (Supplementary Table 6). We also observed that systematic lymphadenectomy could not improve OS in the Kaplan-Meier plot (HR: 1.512, 95% CI: 0.738 - 3.099, P = 0.259, Fig. 3d) or Cox analysis (Supplementary Table 6), with similar clinical N stage (Supplementary Table 5). Given the potential interaction between resection extent and systematic lymphadenectomy, we also analyzed systematic lymphadenectomy in the patients who underwent lobectomy, and it remained negative (Supplementary Figure 1).
Discussion
NSCLC with pleural or pericardial dissemination was categorized as stage IV-M1a in the 7th and 8th TNM staging systems [12,13], and was generally not recommended for surgery according to the National Comprehensive Cancer Network (NCCN) guidelines [3]. However, UPD was occasionally encountered intraoperatively, and surgeons chose the surgical method according to their experiences and preferences.
In the study, a significantly higher rate of pleural effusion in the open-close group was observed, which indicated that surgeons may prefer open-close surgery in the presence of pleural effusion. However, the multivariable analysis showed that pleural effusion was not an independent prognostic factor for OS. Li et al [4] found that clinical T stage was higher in the open-close group than in the tumor resection group, which may be associated with a surgeon's tendency to select open-close procedures.
In this retrospective study, we observed that tumor resection had better OS than open-close surgery in 169 patients with UPD, which was in concordance with previous studies [4][5][6][7]. Ren et al [5] reported 83 cases in our center from 2005 to 2013, and they found that primary tumor resection had significantly better OS compared with biopsy in patients with UPD (3-year OS, 45.8% vs 11.8%, P = 0.001). They also analyzed the survival data of patients with ipsilateral pleural effusion (stage M1a) from the Surveillance, Epidemiology, and End Results database, and observed a similar result (HR: 2.58, 95% CI: 1.84 - 3.61, P < 0.001) [6]. Li et al [4] analyzed 43 patients with lung adenocarcinoma with intraoperatively diagnosed pleural seeding, and a significantly higher 3-year OS was observed in the tumor resection group than in the open-close group (82.9% vs 38.5%, P = 0.013). The results from Yun and colleagues' study in 78 patients with localized pleural seeding demonstrated that tumor resection could increase the 3-year survival rate (66.7% vs 41.1%, P = 0.012). A meta-analysis including 9 studies also concluded that tumor resection had a significant survival benefit (HR: 0.443, 95% CI: 0.344 - 0.571, P < 0.001) [14]. NSCLC with pleural or pericardial dissemination was generally not recommended for surgery [3], and the consensus favored open-close surgery followed by chemotherapy or targeted therapy for stage IV disease [15]. However, these studies indicated that tumor resection could be an option in multimodality treatment. Besides advanced disease, surgery-associated complication was another concern for tumor resection. In our study, we did not observe a significantly higher incidence of post-operative complications in the tumor resection group, and Ren et al [5] and Yun et al [7] reported the same result. We observed better local and regional PFS in the tumor resection group, which was consistent with previous studies reported by Li et al [4] and Yun et al [7].
Tumor resection significantly reduced tumor volume, and a larger volume was associated with poor local control [16]. Miura et al [17] claimed that pleural seeding originates from direct or local extension of the tumor via the subpleural lymphatic system. In terms of distant metastasis, the positive result in the Kaplan-Meier survival curves was not confirmed by multivariable Cox analysis. Li et al [4] and Yun et al [7] also found that tumor resection could not improve distant metastasis-free survival.
Targeted therapy recommended by the NCCN guidelines was the first-line therapy for advanced NSCLC harboring EGFR mutation or ALK rearrangement [3]. The greater response rate and survival benefit of targeted therapy over chemotherapy had been validated by several large phase 3 clinical trials [18][19][20][21][22][23]. Thus, we reasoned that targeted therapy might affect the benefit of tumor resection. In subgroup analysis, we found that patients could not get survival benefit from tumor resection if they received targeted therapy, while tumor resection could improve OS in the patients who received chemotherapy alone. The result was in accordance with the recent study reported by Li et al [24]. These findings indicated that tumor resection may only be beneficial for a subgroup of patients with UPD who did not have a driver gene mutation or could not receive targeted therapy due to cost, allergy, adverse events or other factors. However, the result of driver gene detection should be available for thoracic surgeons when making this decision.
The surgical extent had been analyzed in previous studies [4,7,25,26], and they concluded that compared with sublobar resection, lobectomy could not improve prognosis for stage M1a NSCLC. In our study, we also got the same result in subgroup analysis. In addition, we also analyzed the effect of systematic lymphadenectomy in the tumor resection group, and the result demonstrated that systematic lymphadenectomy could not bring survival benefit. These results were not surprising, because tumor resection was a debulking surgery rather than curative surgery for the patients with stage M1a NSCLC.
There were several limitations in our study. First, some biases were inevitable because of the retrospective and single-center nature of this study. Selection bias probably existed in the choice of surgical method, and higher incidence of pleural effusion was observed in the open-close group, which may be associated with a surgeon's tendency to select open-close surgery. Second, the sample size was not big enough, especially for subgroup analyses, although it was the largest one among the recent studies.
Conclusions
This study indicated that main tumor and visible pleural nodule resection could improve OS and PFS for the patients with UPD, especially for the patients who could not receive adjuvant targeted therapy. For the patients harboring driver gene mutations, tumor resection may not be beneficial for prognosis due to the great benefit of targeted therapy. Sublobar resection without systematic lymphadenectomy may be the optimal procedure, because extensive resection and systematic lymphadenectomy could not improve prognosis. Large-scale, prospective studies are warranted to validate the benefit of tumor resection for stage M1a NSCLC.
Additional file 1: Table 1. Prognostic factors for local progression-free survival by using the Cox proportional hazard model. Table 2. Prognostic factors for regional progression-free survival by using the Cox proportional hazard model. Table 3. Prognostic factors for distant metastasis-free survival by using the Cox proportional hazard model. Table 4. Clinicopathological characteristics of patients in different therapy subgroups. Table 5. Clinicopathological characteristics of patients who underwent tumor resection. Table 6. Prognostic factors for overall survival of the patients who underwent tumor resection by using the Cox proportional hazard model. Figure 1. Subgroup analysis in the lobectomy group regarding systematic lymphadenectomy.
Declarations
Consent to publication Not applicable.
Ethics approval and consent to participate This study was approved by the Ethics Committee of Shanghai Pulmonary Hospital (approval number: K20-283), and the study was conducted in compliance with the principles of the Declaration of Helsinki of 1964 and its later versions. Written informed consent was obtained from all patients or their family members.
Patient-specific hepatocyte-like cells derived from induced pluripotent stem cells model pazopanib-mediated hepatotoxicity
Idiosyncratic drug-induced hepatotoxicity is a major cause of liver damage and drug pipeline failure, and is difficult to study as patient-specific features are not readily incorporated in traditional hepatotoxicity testing approaches using population pooled cell sources. Here we demonstrate the use of patient-specific hepatocyte-like cells (HLCs) derived from induced pluripotent stem cells for modeling idiosyncratic hepatotoxicity to pazopanib (PZ), a tyrosine kinase inhibitor drug associated with significant hepatotoxicity of unknown mechanistic basis. In vitro cytotoxicity assays confirmed that HLCs from patients with clinically identified hepatotoxicity were more sensitive to PZ-induced toxicity than other individuals, while a prototype hepatotoxin acetaminophen was similarly toxic to all HLCs studied. Transcriptional analyses showed that PZ induces oxidative stress (OS) in HLCs in general, but in HLCs from susceptible individuals, PZ causes relative disruption of iron metabolism and higher burden of OS. Our study establishes the first patient-specific HLC-based platform for idiosyncratic hepatotoxicity testing, incorporating multiple potential causative factors and permitting the correlation of transcriptomic and cellular responses to clinical phenotypes. Establishment of patient-specific HLCs with clinical phenotypes representing population variations will be valuable for pharmaceutical drug testing.
ignore patient features, including genetic polymorphisms that are major susceptibility factors to a drug, whose metabolic and physicochemical properties primarily determine the basis for such toxicity 4 .
In an effort to incorporate patient-specific features in the study of idiosyncratic hepatotoxicity, the use of induced pluripotent stem cells (iPSCs), which retain host features including predisposing genetic risk factors, has been proposed 5,6. The ability to differentiate iPSCs to hepatocyte-like cells (HLCs) with appropriate drug-metabolizing capacities [7][8][9] has made general toxicity assessments on these liver-derived cells feasible [10][11][12][13][14]. HLCs have been extensively characterized 15, which has led to their improved functionality as drug-metabolizing cells 16,17. Still, the demonstration of iPSC-derived HLCs as models for idiosyncratic hepatotoxicity has thus far lacked clear clinical relevance and has been limited to metabolic variations attributable to single well-characterized cytochrome P450 variants 18 or to the observation of donor-dependent differences in drug toxicity screens 19. In a recent study, toxicity to valproic acid was modeled in iPSC-derived HLCs from two individuals with Alpers syndrome, characterized by mutations in POLG and increased sensitivity to valproic acid 20. In another study, HLCs derived from alpha-1 antitrypsin (AAT)-deficient patient-specific iPSCs exhibited mutant AAT protein accumulation and autophagic flux reminiscent of the clinical disease 21. These studies highlighted a known relationship between a specific rare genetic mutation and phenotype as reflected in patient-specific HLCs. However, outside such rare disorders, mechanisms of drug-induced hepatotoxicity are poorly understood in the full patient context 22, especially as single common risk variants have been found to have limited association with drug-induced hepatotoxicity 23.
iPSCs and their cellular derivatives are genetic matches to their donors, and for donors with clinically identified hepatotoxicity provide unprecedented opportunity for retrospective mechanistic investigation that comprehensively encompasses the multitude of potential risk factors.
As new classes of drugs are introduced, the inevitability of idiosyncratic adverse drug reactions becomes apparent 24. Therefore, towards fulfilling an unmet clinical need and harnessing the potential utility of patient-specific HLCs, we demonstrate as proof-of-concept the modeling of idiosyncratic hepatotoxicity to drugs with uncharacterized basis and mechanisms of toxicity. In this study, we have attempted a retrospective reconstruction of adverse hepatic reactions to pazopanib (PZ, Votrient®, GlaxoSmithKline), a clinically efficacious drug commonly prescribed for the treatment of advanced renal cell carcinoma 25,26, but with a high incidence of hepatotoxicity 27,28. Grade 3-4 elevation of serum alanine aminotransferase (ALT) and/or aspartate aminotransferase (AST) in 12-17%, and isolated hyperbilirubinemia in 36% of patients, is observed at the administered dose of 800 mg daily. PZ belongs to the class of tyrosine kinase inhibitor drugs, several among which are known to be metabolized to reactive intermediates partially accounting for their toxicity profiles 29. However, for PZ no reactive metabolites have been identified in patients 30, and pharmacogenetic analyses have yielded weak associations with two genetic markers HFE 31 and UGT1A1 32 for elevated ALT and bilirubin levels, respectively. The high incidence of hepatotoxicity, the obscure mechanism underlying this and the paucity of biomarkers for predicting hepatotoxicity prompted the choice of PZ as the drug to demonstrate the utility of patient-specific HLCs for modeling idiosyncratic hepatotoxicity.
We generated iPSCs from blood-isolated lymphocytes of patients who suffered hepatotoxic side effects from PZ treatment clinically, and differentiated them into functional HLCs (HT-HLCs). Equivalent HLCs were derived from patients who also received PZ but did not have any hepatotoxic side effects, to serve as real-world controls (NHT-HLCs). Comparison of the in vitro effects of PZ on the two groups of HLCs confirmed greater sensitivity of HT-HLCs to PZ toxicity compared to NHT-HLCs. Affirming the drug-specific nature of this observation, similar high-level toxicity was seen for all HLCs with the paradigm hepatotoxic drug, acetaminophen. Given the ability to expand patient-specific iPSCs indefinitely and derive HLCs from them, we went on to perform transcriptomics analysis and show that oxidative stress is a potential mechanism by which PZ induces damage in these HLCs. Further comparative transcriptional analysis provided evidence of differential iron metabolism and more extensive oxidative damage in susceptible HLCs to account for the inter-individual variability in PZ-induced hepatotoxicity. This is the first demonstration that patient-matched HLCs, even in the absence of explicit knowledge of genetic variations, can recapitulate cellular phenotypes corresponding to variation in the adverse effects of a drug. Our results strengthen the case for iPSC-derived HLCs as a platform for modeling idiosyncratic hepatotoxicity that allows the interrogation of toxicity mechanisms in the appropriate background of patient-specific features, with multiple causative factors at play. Of considerable pharmaceutical interest, the establishment of patient-specific HLCs with known clinical phenotypes, as parts of cell banks with varied genetic backgrounds and a range of drug sensitivities, can also aid in population-level drug testing.
Results
Generation and characterization of hepatocyte-like cells (HLCs) from patients with differential clinical hepatotoxicity to pazopanib (PZ). Five patients who received standard dosing (800 mg daily) of PZ treatment for metastatic renal cell cancer (RCC) were selected and consented to this study (Supplementary Table 1). Liver function monitoring after initiation of treatment showed that two patients tolerated the treatment (NHT1, NHT2) while three patients suffered clinical hepatotoxicity (HT1, HT2, HT3), as observed from the on-treatment high-grade elevations of ALT and AST levels (≥ 3 × upper limit of normal) (Supplementary Fig. 1). Sequencing of patient genomic DNA did not reveal any link with variants in HFE 31 and UGT1A1 32 , reported to be associated with PZ-induced ALT and bilirubin elevation, respectively (Supplementary Table 2), suggesting that other undefined patient factors play a predisposing role in developing hepatotoxicity to PZ.
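The high-grade elevation call above reduces to a simple threshold rule against the upper limit of normal (ULN). A minimal sketch of that rule, in which the ULN values and the helper function are hypothetical stand-ins and not part of the study's actual analysis code:

```python
# Illustrative sketch: classify on-treatment transaminase elevations against
# the >= 3 x upper-limit-of-normal (ULN) criterion used in this study to call
# clinical hepatotoxicity. ULN values and helper names are hypothetical.

ALT_ULN = 40.0  # U/L, assumed adult ULN for ALT
AST_ULN = 40.0  # U/L, assumed adult ULN for AST

def is_high_grade(alt_u_per_l, ast_u_per_l, fold=3.0):
    """Return True if ALT and/or AST is at or above `fold` x its ULN."""
    return alt_u_per_l >= fold * ALT_ULN or ast_u_per_l >= fold * AST_ULN

# Hypothetical peak on-treatment values for an HT-like and an NHT-like patient
ht_like = is_high_grade(alt_u_per_l=310, ast_u_per_l=95)    # True
nht_like = is_high_grade(alt_u_per_l=52, ast_u_per_l=44)    # False
```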
In order to more comprehensively capture patient features, iPSCs were generated for each patient from their EBV-immortalized B-lymphocytes (EBVi) by episomal reprogramming 33,34 with OCT4, SOX2, KLF4, SV40/LT, NANOG and LIN28 (Supplementary Fig. 2). Each patient-specific iPSC line (HT1, HT2, HT3, NHT1, NHT2) expressed OCT4, NANOG, TRA-1-60 and SSEA-4, which were not detectable in the EBVi lines they were derived from (Supplementary Fig. 3a and b). The ability to form three germ layers was confirmed for each iPSC line by in vitro differentiation to embryoid bodies and in vivo teratoma formation (Supplementary Fig. 3c and d). All iPSC lines showed a normal karyotype except HT1 iPSCs (Supplementary Fig. 3e); however, given the equivalent growth rates and differentiation potentials (both to embryoid bodies and, as described below, to HLCs) of all iPSC lines, we did not consider this abnormality to be of consequence. Thus, iPSCs were successfully generated from cells isolated from all five patients that received PZ treatment.
We then generated hepatocyte-like cells (HLCs) from the patient iPSCs to serve as in vitro models to assess PZ-induced hepatotoxicity. A four-step differentiation protocol spanning 20 days was adopted from Roelandt et al. with modification 35 (Fig. 1a). After 20 days of differentiation, the majority of the cells adopted an epithelial cell morphology resembling hepatocytes (Fig. 1b) and exhibited a high proportion of albumin+ cells, ranging from 77.2% (HT1) to 92.3% (NHT2) (Fig. 1c and Supplementary Fig. 4a), comparable to the proportion of albumin+ cells reported for other iPSC-derived HLCs 7 . In undifferentiated iPSCs from both HT and NHT cases this proportion was very low (< 1.5%) (Supplementary Fig. 4b). The percentage of albumin+ cells in iPSC-derived HLCs (referred to as "HLCs" hereafter) was not significantly lower than that of the primary human hepatocyte (PHP) control except for HT1-HLCs (Bonferroni corrected t-test p = 0.008), but this measure was highly comparable among all the HLCs themselves, including across HT and NHT groups (ANOVA p = 0.087) (Fig. 1c).
We next examined the expression of liver-specific markers in HLCs. When compared to freshly-thawed cryopreserved PHPs, the overall gene expression suggested that all HLCs exhibited a more fetal-like phenotype, as indicated by their lower expression of mature hepatocyte markers, such as ALB, AAT, ASGPR, CYP1A2, CYP3A4, UGT1A1 and UGT1A3, and higher expression of hepatoblast or fetal hepatocyte markers, HNF4α, AFP, CK18 and CYP3A7 36 (Fig. 2a). This is consistent with previous reports that stem cell-derived HLCs have fetal hepatocyte-like phenotypes 37 . We observed that the gene expression of liver-specific markers varied by up to two orders of magnitude between the five iPSC-HLC lines, similar to the extent of variation observed by Kajiwara et al., who reported that intrinsic donor variability is a strong determinant of the differentiation propensity of iPSCs to hepatocytes, measured as expression of liver-related genes 38 . Single-factor ANOVA indicated that there were no significant differences among the different HLC lines for all the genes tested, except for AFP, ALB, CYP3A4 and UGT1A3, which differed significantly between disparate pairs of HLC lines (Bonferroni corrected t-test p < 0.05), without any specific distinction between HT and NHT lines. We conclude that apparent differences in the hepatic differentiation capacity of distinct iPSC lines were largely masked by the large variations in gene expression levels within a single iPSC line, often by an order of magnitude (Fig. 2a). We also measured the liver-specific functions of the HLCs. Urea production by all five HLCs was approximately 20-50% of PHP's production rate (Fig. 2b) and was significantly lower than PHP (p < 0.05, Bonferroni t-test) for all lines with the exception of HT1-HLCs. We measured the activity levels of two major cytochrome P450 (CYP) isoforms in the human liver, CYP1A2 and CYP3A4.
The CYP1A2 activities of HLCs, ranging from 11% to 56% of PHP, were not significantly different from that of PHP except for HT3-HLCs (p < 0.05, Bonferroni t-test) (Fig. 2c). The CYP3A4 metabolic activities, ranging from 48% to 217%, were also similar to that of PHP (Fig. 2d), despite CYP3A4 gene expression in HLC lines being between 2% and 32% of PHP (Fig. 2a). The disparity between the expression level and metabolic activity of CYP3A4 could be due to its functional regulation at the post-transcriptional level 39,40 . Notwithstanding these differences, we anticipate that HLCs from all five patient-specific iPSC lines will be capable of metabolizing PZ, as CYP3A4 and CYP1A2 activities primarily mediate metabolism of PZ 30 . Taken together, the results demonstrate that all patient-specific iPSC lines could differentiate into functional HLCs. There was inter- and intra-cell line variability in liver-specific marker expression and function; however, more variability was observed in gene expression than manifested in functional studies. Importantly, we could not distinguish between the HT and NHT lines based on the liver-specific marker expression and functions. This buttresses our initial postulation that PZ-induced hepatotoxicity cannot easily be identified from single or a few variants in liver-specific markers or functions.
Patient-specific HLCs model differential PZ-induced hepatotoxicity in vitro. We then assessed whether differential PZ-induced hepatotoxicity can be detected in vitro in patient-specific HLCs. HLCs were harvested after hepatic differentiation, plated onto 96-well plates and dosed with PZ at five different concentrations, ranging from 0.1 to 100 μM (Fig. 3a). This range of concentrations covered the reported cellular IC50 values for various cell lines and is close to the reported Cmax value in human (i.e., 122 μM) 41 , within the solubility limit of PZ in aqueous solution. We also tested the HLCs with a paradigm hepatotoxin, acetaminophen (APAP), to serve as a positive control drug (Fig. 3b). The cells were incubated for 24 hours with the drugs before hepatotoxicity effects were evaluated by measuring the cellular metabolic activity using the MTS assay.
We observed that the hepatotoxic effects of PZ were less severe than those of APAP. For collagen-plated PHP controls, whereas APAP resulted in a clear dose-dependent toxicity (Fig. 3b(i)), cell viability was still 68.7 ± 7.3% even at the highest concentration tested for PZ (Fig. 3a(i)). HLCs derived from all five patient iPSC lines exhibited similar dose-dependent responses to APAP as PHP (Fig. 3b) but not to PZ (Fig. 3a). At low PZ concentrations (< 1 μM), the HLCs did not show appreciable toxicity, with cell viability remaining at > 95% without significant differences among HLCs (ANOVA, p = 0.95) (Fig. 3a). However, at 100 μM PZ, there was a significant difference in the cell viability of the different HLC lines (ANOVA, p < 0.001). Cell viability for NHT1- and NHT2-HLCs was 66.4 ± 1.5% and 68.6 ± 4.3%, respectively (Fig. 3a(ii,iii)). In comparison, HLCs derived from patients with clinical hepatotoxicity had lower cell viabilities of 42.3 ± 2.9% (HT1), 42.6 ± 2.3% (HT2), and 44.6 ± 3.2% (HT3) (Fig. 3a(iv-vi)). We performed post-hoc analysis using pairwise t-tests with Bonferroni's correction to compare the different HLCs. There were no significant differences within the hepatotoxic and non-hepatotoxic groups, but the differences in cell viability between the hepatotoxic and non-hepatotoxic groups were all statistically significant (Fig. 3c).
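The statistical comparison above (one-way ANOVA followed by Bonferroni-corrected pairwise t-tests) can be sketched as follows. The triplicate viability values are hypothetical, chosen only to be consistent with the reported group means; they are not the study's raw data, and scipy is assumed available:

```python
# Sketch of the ANOVA + Bonferroni-corrected pairwise t-test comparison of
# cell viability at 100 uM PZ across the five HLC lines. Replicates are
# hypothetical, consistent with the reported means; not the study's raw data.
from itertools import combinations
from scipy import stats

viability = {                      # % viability, hypothetical triplicates
    "NHT1": [64.9, 66.4, 67.9],
    "NHT2": [64.5, 68.6, 72.7],
    "HT1":  [39.4, 42.3, 45.2],
    "HT2":  [40.3, 42.6, 44.9],
    "HT3":  [41.4, 44.6, 47.8],
}

# One-way ANOVA across all five HLC lines
f_stat, p_anova = stats.f_oneway(*viability.values())

# Pairwise t-tests with Bonferroni correction (10 comparisons for 5 groups)
pairs = list(combinations(viability, 2))
p_corrected = {
    (a, b): min(1.0, stats.ttest_ind(viability[a], viability[b]).pvalue * len(pairs))
    for a, b in pairs
}
```

With data separated like this, the ANOVA is highly significant, every HT-vs-NHT pair survives correction, and within-group pairs do not, mirroring the pattern reported in Fig. 3c.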
The differences in PZ-induced toxicity were observable only in HLCs and did not arise from intrinsic differences in the properties of the undifferentiated iPSC lines themselves, as no differential toxicity was observed between NHT and HT iPSC lines. All five undifferentiated iPSC lines were significantly more sensitive to both PZ (Supplementary Fig. 5) and APAP (Supplementary Fig. 6) compared to their differentiated HLC counterparts, and to similar extents compared to one another. To confirm that the differential response of the patient iPSC-derived HLCs to PZ-induced hepatotoxicity was not specific to the MTS assay, we performed drug testing with another assay that measures intracellular ATP levels to indicate cytotoxic effects. We observed that there was once again a significant difference between the HLCs at 100 μM PZ (ANOVA, p = 0.002), where approximately 66% of NHT1- and NHT2-derived HLCs remained viable while only 45-47% of HT1-, HT2- and HT3-HLCs survived (Supplementary Fig. 7). Post-hoc analysis for the ATP assay showed similar results as the MTS assay (Fig. 3d). These results indicated that the 20% differential cytotoxic response between the NHT- and HT-HLCs was likely a manifestation of the intrinsic difference in their susceptibility to PZ-induced hepatotoxicity, and not due to assay-dependent fluctuations or to differences in the potentials of the iPSC lines. This demonstrated that the patient-specific HLCs could recapitulate clinical PZ-induced idiosyncratic hepatotoxicity in an in vitro assay.
Transcriptional changes due to PZ exposure have elements of oxidative stress common to all HLCs. Transcriptional changes that occur in hepatocytes can aid the understanding of the mechanistic basis for a drug's hepatotoxic potential 42 . We leveraged the five HLC lines to investigate the mechanism behind PZ-induced hepatotoxicity, specifically the enhanced cytotoxic susceptibility of HT-HLCs to PZ. To achieve this, we extracted RNA from each HLC line treated either with DMSO as control or with 100 μM PZ for 24 hours, and subjected the samples to microarray analysis. Dosing at this concentration was anticipated to produce a significant transcriptional signal in all HLCs, given that cytotoxicity was observed in all HLCs, although it was distinctly lower in NHT-HLCs. Principal component analysis of the expression data revealed that HLCs differ dramatically in their baseline transcriptional profiles, although treatment with PZ does generate a distinct shift in the second principal component for all HLCs (Fig. 4a). To determine whether gene expression changes common and exclusive to HT1-3 could account for their greater susceptibility to PZ, first, the most significantly differentially expressed genes in each HLC line were identified (Supplementary Table 3) and analyzed for overlaps (Supplementary Fig. 8). This and subsequent grouped analysis for HT-HLCs and NHT-HLCs showed that irrespective of the HLCs' PZ sensitivity, the largest gene expression changes due to PZ, including SLC7A11, AKR1C1, AKR1C2 and GDF15, occurred with similar directionality and magnitude in all HLCs (Fig. 4b). Expression of selected genes from the differential expression analysis was verified using qRT-PCR, showing good agreement with the microarray platform (Supplementary Fig. 9).
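The principal component analysis mentioned above can be illustrated with a centered singular value decomposition, which is the standard way PCA is computed. This is a minimal numpy-only sketch using random stand-in data, not the study's microarray matrix:

```python
# Sketch of PCA on HLC expression profiles via a centered SVD (numpy only).
# The expression matrix here is random stand-in data with hypothetical
# dimensions (10 samples = 5 HLC lines x DMSO/PZ, 200 genes).
import numpy as np

rng = np.random.default_rng(0)
expr = rng.normal(size=(10, 200))             # hypothetical log-expression

centered = expr - expr.mean(axis=0)           # center each gene across samples
u, s, vt = np.linalg.svd(centered, full_matrices=False)
scores = u * s                                # sample coordinates on the PCs
explained = s**2 / np.sum(s**2)               # variance fraction per PC

pc1, pc2 = scores[:, 0], scores[:, 1]         # e.g. for a PC1-vs-PC2 scatter
```

In the study's setting, a plot of `pc1` against `pc2` would separate the HLC lines along PC1 (baseline differences) with a treatment shift visible along PC2.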
It was noted that many of the induced genes have documented functions in responding to cellular oxidative stress (OS), including AKR1C1 and AKR1C2 43 and SLC7A11 44 , which are known to be induced by Nrf2, a transcription factor central to cellular defense against oxidative stress 45 . Furthermore, GDF15 and EGR1 belong to a four-gene consensus signature of drug-induced hepatotoxicity developed from large-scale toxicogenomics data 46 . In addition, we used the gene set enrichment analysis method (GSEA) 47 to investigate coordinated functional changes induced in HLCs by PZ. All HLCs exhibited similar induction of IFN-α/IFN-γ response and bile acid metabolism, and repression of MYC and E2F targets and TGFβ signaling (Supplementary Table 4), reinforcing the notion of a universal effect of PZ in HLCs.
Toxicogenomic studies have shown that, at least in the rat liver, oxidative stressors can be detected by the induction of a gene expression signature distinguishable from other classes of hepatotoxicants and representative of Nrf2 activation 48 . As the Nrf2-dependent response to OS is likely a conserved function in mammals, we translated a published signature corresponding to hepatotoxicant-induced OS in rat liver 49 to homologous human genes and visualized changes to their expression in PZ-treated HLCs. Confirming the OS-inducing effects of PZ on HLCs, the majority of the genes from the signature followed the expected pattern of induction or repression, and the overlap was much clearer for induced genes (Fig. 4c). Well-known targets of Nrf2 activity, including NQO1, HMOX1, AKR7A2 and ALDH1A1, were clearly induced in most if not all HLCs. Among genes that did not have strong induction across all lines (including HSP90 and UGDH), minor induction was still observed in at least two out of five HLCs. Some genes, including GRN and CD47, that are repressed in rat liver upon OS showed discrepant induction in HLCs. We cannot rule out a non-specific or a rat liver-specific modulation of repressed genes. The expression of several Nrf2 target genes was verified by qPCR, confirming induction ranging from 1.5- to 10-fold for most genes in at least one PZ-treated HLC (Supplementary Fig. 10).
The degree of coordinated presence of an OS signature in PZ-treated HLCs was quantified using GSEA, which confirmed that the 'OS up' signature (genes that are induced upon OS, Supplementary Table 5) was significantly enriched in HT1 (p = 0.05) and HT2 (p = 0.02) treated with PZ (Fig. 4d). Similar treatment with PZ produced a less significant enrichment of the 'OS up' signature in HT3 (p = 0.21), NHT1 (p = 0.22) and NHT2 (p = 0.08). This suggests that PZ-induced OS is detectable from gene expression changes and is similar to a hepatocyte-relevant signature generated by other oxidative stressor drugs. Being present in all HLCs, however, the degree of enrichment of the OS signature alone does not directly correspond to the susceptibility phenotype of the HLCs.
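The enrichment statistic underlying GSEA can be illustrated with a minimal, unweighted running-sum score: walking down a ranked gene list, stepping up at gene-set hits and down at misses, and reporting the maximum deviation. This is a simplified sketch with toy gene names; the study used the full GSEA tool, which additionally weights steps by the ranking statistic and estimates significance by permutation:

```python
# Minimal, unweighted GSEA-style running-sum enrichment score. Simplified
# sketch with toy data; not the weighted, permutation-tested GSEA method.

def enrichment_score(ranked_genes, gene_set):
    hits = [g for g in ranked_genes if g in gene_set]
    n_hit, n_miss = len(hits), len(ranked_genes) - len(hits)
    running, best = 0.0, 0.0
    for gene in ranked_genes:
        # step up on a gene-set hit, down on a miss; track the peak deviation
        running += 1.0 / n_hit if gene in gene_set else -1.0 / n_miss
        best = max(best, running)
    return best

# Toy example: an 'OS up'-like set concentrated at the top of a PZ-ranked list
ranked = ["NQO1", "HMOX1", "AKR7A2", "g4", "g5", "g6", "g7", "g8", "g9", "g10"]
os_up = {"NQO1", "HMOX1", "AKR7A2"}
es = enrichment_score(ranked, os_up)   # peaks after the third consecutive hit
```

A gene set clustered at the top of the ranking yields an enrichment score near 1, while the same set scattered at the bottom scores near 0, which is the intuition behind the p-values quoted above.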
Differential transcriptional regulation of iron metabolism genes and iron accumulation in susceptible HLCs.
We reasoned that the transcriptional signals of large-scale primary effects of PZ (clearly identifiable in each HLC over its baseline) may mask the differential effects of PZ on HT- and NHT-HLCs. As such, we directly identified genes that are most distinct between HT- and NHT-HLCs treated with PZ by comparing the baseline-normalized expression data (Supplementary Table 6). One of the genes exclusively induced in HT-HLCs was TFRC, which was on average 1.53-fold higher (p = 4.18E-05) in HT-HLCs compared to NHT-HLCs after PZ treatment (Fig. 5a and Supplementary Table 6). TFRC codes for the transferrin receptor protein, which mediates the cellular uptake of transferrin-bound iron via endocytosis, and its expression was significantly increased only in HT-HLCs after PZ treatment (Fig. 5b). Conversely, the transcript for HFE, the function of which is to limit iron uptake through its interaction with TFRC at the surface of cells 50 and polymorphisms in which have been linked to PZ-induced serum ALT elevation 31 , showed an increase only in NHT-HLCs after PZ treatment (Fig. 5b). This suggests that in HT-HLCs the modulation of TFRC and HFE expression can result in a higher intracellular content of iron, potentially exacerbating OS through the generation of highly reactive hydroxyl (·OH) radicals from H2O2 catalyzed by the redox-active form of iron (Fe2+) in a 'Fenton' reaction 51 . Further, SPINK1, which was 1.45-fold enriched in HT-HLCs (Fig. 5a), is known to be highly expressed in hepatocellular carcinoma arising on a hereditary hemochromatosis background 52 , a condition likewise driven by HFE loss. To verify the perturbation of iron levels suggested by the transcriptional profiles of HLCs, we quantified intracellular iron content following exposure to 100 μM PZ and determined the ratio of Fe2+ (the redox-active form of iron) to Fe3+ relative to control-treated cells (Fig. 5c).
PZ induced an increase in the Fe2+/Fe3+ ratio in all HLCs; however, the increase was significantly greater in HT-HLCs, at 6- to 7.6-fold, compared with 2.8- to 3.3-fold in NHT-HLCs (p = 0.001) (Fig. 5c). Therefore, concomitant with the aberrant expression of iron-regulating genes, we confirmed the greater accumulation of redox-active iron in HT-HLCs.
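The group comparison of redox-active iron fold-changes can be sketched as below. The per-line fold values are hypothetical, chosen only to fall within the reported 6- to 7.6-fold (HT) and 2.8- to 3.3-fold (NHT) ranges; scipy is assumed:

```python
# Sketch of the redox-active iron comparison: fold-change in the Fe2+/Fe3+
# ratio (PZ-treated over control) per HLC line, then an HT vs NHT t-test.
# Fold values are hypothetical points within the reported ranges.
from scipy import stats

fold_change = {"HT1": 6.0, "HT2": 7.0, "HT3": 7.6,   # 6- to 7.6-fold
               "NHT1": 2.8, "NHT2": 3.3}             # 2.8- to 3.3-fold

ht = [v for k, v in fold_change.items() if k.startswith("HT")]
nht = [v for k, v in fold_change.items() if k.startswith("NHT")]
t_stat, p_value = stats.ttest_ind(ht, nht)           # two-sample t-test
```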
We examined the expression of a set of genes involved in iron uptake, export and homeostasis (Supplementary Fig. 11) using GSEA and found coordinated enrichment of iron metabolism in HT-HLCs treated with PZ (p = 0.008) over NHT-HLCs (Fig. 5d). Extending GSEA to curated gene sets from MSigDB 47 showed a strong enrichment of the gene set for 'reactive oxygen species' in HT-HLCs (FDR = 0.002) (Fig. 5e). Enrichment was also seen for gene sets for 'oxidative phosphorylation', 'xenobiotic metabolism' and 'linoleic acid metabolism', and among KEGG pathways the strongest enrichment observed was for RNA-related processes in the HT group (Supplementary Fig. 12 and Supplementary Table 7). These reflect the differential effects induced by PZ in HT-HLCs with respect to mitochondrial function, which is exquisitely sensitive to cellular redox status, and on lipid and RNA functions, which are the likeliest targets of cellular oxidative damage 53,54 , particularly that caused by ·OH radicals. This suggests that while the primary general effect of PZ is to induce oxidative stress in HLCs, the burden of this stress is likely higher in susceptible HLCs over the time frame of the cytotoxicity assays, potentially contributed to by the differential regulation of iron metabolism. Therefore, utilizing patient-derived HLCs and gene expression changes, we were able to obtain insight into a potentially significant mode of PZ-induced toxicity to hepatocytes centering on OS-induced damage.
PZ induces glutathione depletion and generation of reactive oxygen species in HLCs.
The transcriptionally inferred induction of OS was further tested by direct cellular measures of OS in HLCs. Drug-induced OS is typically initiated by a drug reactive metabolite (RM) produced via the activity of CYP450s. RMs, being electrophilic, undergo reactions with cellular glutathione (GSH) via chemical or enzyme-mediated processes 55 , converting GSH to oxidized glutathione (GSSG). To measure the direct OS load generated by the PZ reactive metabolite, we measured GSH depletion in HLCs four hours after drug treatment, when secondary effects were minimal. At a high dose of 100 μM, PZ caused a significant depletion of GSH in all HLCs (p < 0.001), except HT3 (Fig. 6a). When treated with 50 mM APAP as a positive control, significant GSH depletion was also evident for all HLCs, in line with the known glutathione-conjugating effect of the APAP reactive metabolite, N-acetyl-p-benzoquinone imine (NAPQI) (p < 0.0001) (Fig. 6a). N-acetyl cysteine (NAC), which serves as a precursor for GSH synthesis, could rescue the viability of HT1- and HT2-HLCs when co-incubated with PZ (Supplementary Fig. 13). In all, the results suggest that PZ initiates early GSH depletion in 4 out of 5 HLCs, which is consistent with the formation of a RM. As GSH is required for the function of several redox-regulating enzymes, an alteration to its levels can disrupt the redox balance of the cell and allow reactive oxygen species (ROS) such as O2·− and H2O2, produced in cellular components like mitochondria, to accumulate. Therefore, we looked for evidence of ROS accumulation in HLCs treated with PZ. Treated cells were exposed to the fluorogenic probe CellROX Green, the fluorescence of which increases upon oxidation by ROS. Relative to control, CellROX fluorescence intensity increased 1.11- to 1.26-fold in all HLCs, except HT2, when treated with 1-10 μM PZ for 4 hours (Fig. 6b and c, Supplementary Fig. 14).
At 100 μM, we observed interference of the CellROX fluorescence signal with possible autofluorescence from PZ (data not shown). In control experiments, 50 mM APAP treatment for 4 hours generated a 1.16- to 1.48-fold increase in CellROX intensity (Fig. 6b and c). We noted that CellROX intensity with 10 μM PZ was strongly correlated with the GSH/GSSG ratio, representing GSH depletion (R = 0.994; p = 0.0157) in HLCs (Fig. 6d). This suggested that at 4 hours after PZ treatment, GSH likely acts as a direct scavenger of ROS and is removed from the cellular pool upon its oxidation by the transient increase in ROS initiated by PZ 56 . Importantly, these data account for the apparent lack of accumulation of ROS in HT2 (Fig. 6b) and the lack of GSH depletion in HT3 (Fig. 6a), as each occupies an extreme position on the correlation plot (Fig. 6d). Notwithstanding the kinetics of variations in ROS and GSH amounts, it is apparent that incubation of HLCs with PZ leads to perturbation of their redox state, as reflected by both GSH depletion and ROS accumulation. A similar correlation between GSH depletion by 50 mM APAP and CellROX intensity in HLCs was also evident (Supplementary Fig. 15a), suggesting an inverse relationship between the cellular GSH pool and the accumulation of ROS for both PZ and APAP, which appears to be a general effect of hepatotoxic drugs on HLCs. In all, at 4 hours, HT3 appeared most prone to accumulation of ROS with an attendant lack of GSH depletion, while HT2 had the greatest GSH depletion and least ROS accumulation (for both PZ and APAP).
Putting the metabolic activity of HLCs in context with the above observations, we show that the degree of GSH depletion upon treatment with 100 μM PZ is correlated with the basal CYP1A2 activity of individual HLCs (R = −0.711), but not with their CYP3A4 activity (R = 0.0947) (Fig. 6e and f). A similar correlation with CYP1A2 activity was also apparent for GSH depletion induced with 50 mM APAP (Supplementary Fig. 15b and c). This strongly suggested that in patient-specific HLCs the metabolic activity of CYP1A2 was the major contributor to the formation of the putative RM (indirectly measured as GSH depletion over 4 hours), both for PZ and APAP. However, at least in vitro, CYP1A2 activity levels were not distinguishable between the HT and NHT groups, suggesting that additional underlying patient-specific features act on the initial OS-inducing effect of PZ to elevate it to phenotypically distinct and measurable levels.
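The correlation analysis above can be sketched with a Pearson test. The per-line values here are hypothetical stand-ins, chosen only to mimic the reported negative relationship between CYP1A2 activity and remaining GSH; they are not the measured data, and scipy is assumed:

```python
# Sketch of the correlation between basal CYP1A2 activity and PZ-induced GSH
# depletion across five HLC lines. Values are hypothetical stand-ins.
from scipy import stats

# basal CYP1A2 activity (% of PHP) per HLC line, hypothetical
cyp1a2_activity = [11, 20, 35, 48, 56]
# remaining GSH/GSSG ratio after 100 uM PZ (fraction of control), hypothetical
gsh_ratio = [0.85, 0.80, 0.65, 0.55, 0.60]

# higher CYP1A2 activity should track with lower remaining GSH (more depletion)
r, p = stats.pearsonr(cyp1a2_activity, gsh_ratio)
```

With only five lines per group, such correlations are sensitive to single points, which is one reason the study paired them with independent assays (ROS probes, NAC rescue) rather than relying on R values alone.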
In support of the theory that factors not related to the primary metabolism of PZ by CYPs play a role in the increased OS and sensitivity in certain patient-derived HLCs, we measured intracellular parental PZ concentration in HLC lines 4 hours after exposure to 100 μM PZ. Using liquid chromatography-mass spectrometry (LC-MS), we show that parental PZ accumulates to highly comparable amounts in all HLC lines (Supplementary Fig. 16). This furthers the notion that differential metabolism and intracellular accumulation alone cannot account for the differential cytotoxic effects of PZ.
Genotyping of individuals for polymorphisms of CYPs relevant to PZ metabolism revealed the occurrence of the CC/CA allele for CYP1A2 rs762551 in the HT cases, and the AA allele in NHT cases (Supplementary Table 8). This suggests that CYP1A2 variants are distinct in HT- and NHT-HLCs and could be contributing factors in a patient's susceptibility to PZ over the longer course of drug administration clinically. Additionally, we examined polymorphisms in genes related to the pharmacokinetics and pharmacodynamics of PZ and to antioxidant defense, to examine the further contribution of distinguishing variants to patient variability in hepatotoxicity. HT cases had specific variants for ABCB1 (mediator of PZ efflux) and VEGFR2 (the pharmacological target of PZ) (Supplementary Table 9). Taken together, our results indicate that PZ-mediated hepatotoxicity is initiated by the drug's intrinsic property of inducing OS, which correlates in vitro with CYP1A2 activity and is measurable as immediate effects. The distinction between susceptibility phenotypes could not be simply attributed to OS induced by reactive metabolites or to the contribution of CYP activity in HLCs. The combination of measurable differences in iron metabolism and genetic polymorphisms identified through the comparison of HT- and NHT-HLCs are candidates for patient risk factors for PZ-induced hepatotoxicity.
Discussion
Being genetic matches to individuals with distinct clinical phenotypes, iPSCs have been applied for the modeling of several monogenic diseases with well-defined causative mutations 57 , and for drug-induced toxicities with clearly associated genetic variations or mutations 20,58 . In this study, we further the application of genetically-matched iPSCs and demonstrate for the first time their utility in modeling drug-induced idiosyncratic toxicities, which result from potentially multiple uncharacterized predisposing genetic factors harbored by patients. We show that patient iPSC-derived hepatocyte-like cells (HLCs) could successfully model adverse drug reactions to pazopanib (PZ), a drug with clinically relevant hepatotoxicity, and delineate the multi-factorial mechanistic nature of its toxicity.
This work relied heavily on a robust system to generate HLCs from pluripotent stem cells. A relatively facile system for generating iPSCs, based on a non-integrating episomal method for reprogramming immortalized lymphocytes, was adopted 33,34 , and for each iPSC line a similarly high degree of differentiation to HLCs with liver-specific functions, including albumin production and CYP activities, was achieved 35 . Largely indistinguishable in their overall hepatic functions, the HLCs were suitable for studying patient-specific drug-induced hepatotoxicity (requiring drug metabolism) 59 . As exemplified with PZ, the successful generation of functional iPSC-derived HLCs essentially provides an unlimited supply of patient-specific cells to perform transcriptional profiling and diverse cellular assays to shed light on the multi-factorial mechanism of PZ hepatotoxicity, which has remained uncertain despite the high rate of associated clinical hepatotoxicity 28 . We believe this approach to studying idiosyncratic reactions is generally applicable to several hepatotoxic drugs of unknown mechanistic basis.
To our knowledge, this is the first demonstration of the application of toxicogenomics analyses to patient-derived hepatocytes, which has so far been limited to primary hepatocytes [60][61][62][63][64] , to seek differences among individuals that may contribute predisposing features for hepatotoxicity. Using GSEA to obtain a meaningful overview of the cellular mechanisms affected by PZ, we found that perturbations to lipid and RNA metabolism occur in HT-HLCs within the context of differential modulation of oxidative phosphorylation and reactive oxygen species pathways. We identified a potential role for altered iron metabolism in exacerbating redox imbalance in HT-HLCs through the modulation of transcript levels of TFRC and HFE. This finding is particularly relevant, as an association between the HFE polymorphism rs2858996, which has been predicted to reduce its expression compared to the wildtype allele, and PZ-induced hepatic dysfunction has been documented 31 . The report suggested that PZ-mediated hepatotoxicity might result from the pharmacological inhibition of its targets, including VEGFRs, since HFE induction is suppressed when VEGF signaling is inhibited 31 . Our study has independently implicated altered iron metabolism in PZ-mediated hepatotoxicity, quantifiable as an increase in redox-active iron in HLCs, and generated an alternative hypothesis that altered iron metabolism exacerbates the oxidative stress load produced by PZ and its reactive metabolites.
The generation of patient-derived HLCs also allowed the recapitulation of cellular responses to PZ-induced toxicity. Even in NHT-HLCs and primary human hepatocytes, PZ reduces viability to ~70% compared to untreated cells, suggesting that an inherent property of PZ induces damage in hepatocytes. First through transcriptional analysis and then through cellular measures, we show generalized GSH depletion and ROS accumulation in HLCs within four hours of PZ administration. This demonstration, made for the first time in HLCs for PZ, agrees with the evidence of RM formation from PZ in microsomal assays, which show time-dependent inhibition of CYP3A activity and the trapping of reactive intermediates of PZ by glutathione 65 . Although circulating RMs have not yet been identified for PZ, future efforts on this front would be of interest in the light of our data and the background knowledge that a number of other TKIs which share physicochemical and metabolism properties with PZ are metabolized to reactive intermediates 29 . Further, the relatively high dose of PZ administration compared to other TKIs makes the potential consequences of RM formation even more relevant 66 .
In placing our results in metabolic context, we found that all three HT patients were carriers of similar germline variants (CC/CA) for CYP1A2 rs762551, for which the 'C' allele has been associated with adverse cardiac effects from chlorpromazine resulting from increased plasma drug exposure 67 . No association has previously been made between CYP1A2 polymorphisms and clinical hepatotoxicity in RCC patients on PZ therapy 32 . There may be differences arising from ethnicity, as polymorphic association studies have been conducted primarily in Caucasians 31,32 , and all five patients in the present study were of East Asian origin. Interestingly, among the five HLCs studied here, we did find a correlation between basal CYP1A2 activity and GSH depletion from PZ treatment. Several lines of evidence therefore implicate CYP1A2 in the metabolism of PZ to potentially reactive intermediates. Additionally, we also found the specific 'TT' variant for ABCB1 rs2032582 in HT cases. PZ is a substrate of ABCB1, which mediates its efflux; however, no concordant outcome relating drug exposure to drug effect has been reported for this ABCB1 polymorphism 68 .
In conclusion, despite the above polymorphic observations, we could not ascribe any one genetic feature as a risk factor for PZ-mediated hepatotoxicity. This reinforces our assertion that unlike disease phenotypes that can be correlated to a single genetic variant previously modeled with iPSC-derived cells 69 , idiosyncratic hepatotoxicity has multiple causative factors, and making associations with single risk factors inevitably has its pitfalls 23 . Hence, we advocate the use of patient-derived HLCs for the assessment of drug and patient features in their totality, with the application of combined measures of cellular viability, oxidative stress and gene expression as readouts for quantifying degrees of hepatotoxicity.
Given the relatively small sample size for this work (5 patients), the utilization of any given degree of differential cellular sensitivity (e.g., ~25% above control lines) as a readout for potential hepatotoxicity may not represent full statistical rigor. We acknowledge this sample-size limitation, which relates to the concern that inter-individual variability may account for the differential observations between the 3 cases and 2 controls. Nonetheless, the concordance between clinical observations and multiple (rather than single) cellular phenotypic assays, including viability, oxidative stress, and iron levels, reduces the possibility that the differential effects of PZ were observed by chance and supports true variability measurable in cellular models. The concordance observed between the patient and cellular phenotypes here may therefore be interpreted as a proof of concept and a foundation for future studies with expanded patient numbers. Despite statistical limitations from sample size, this work provides early support for investigating patient-specific HLCs in modeling a highly specific idiosyncratic drug reaction, when knowledge of the factors underlying such individual hepatotoxicity is scarce. In turn, these models can be further interrogated to determine the potential mechanisms driving such toxicity. Patient-specific HLCs are useful tools for interrogating drug- and patient-specific features that contribute to individual susceptibility. Finally, as parts of cellular banks, characterized HLCs can also be valuable in the context of pharmaceutical drug testing by being representative of population genetic diversity.
Cytotoxicity measurement. HLCs were harvested after 20 days of hepatic differentiation with 2× TrypLE (Life Technologies), and plated onto Matrigel (Corning)-coated 96-well plates at a density of 3 × 10⁴ cells per well for overnight incubation. Cells were then treated with serially diluted concentrations of acetaminophen (APAP, Sigma) and pazopanib hydrochloride (Selleck Chemicals) in the basal differentiation medium supplemented with 20 ng/ml HGF (R&D Systems), 100 ng/ml Follistatin-288 (R&D Systems) and 20 ng/ml Oncostatin (R&D Systems) for 24 hours. Drug dilutions were prepared in DMSO. Cell viability was determined by the MTS assay using the CellTiter 96® AQueous One Solution Cell Proliferation Assay kit (Promega), and the ATP assay was performed with the CellTiter-Glo® Luminescent Cell Viability Assay kit (Promega).
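The normalization implicit in these viability readouts, background subtraction followed by scaling to the vehicle control, can be sketched as below. All absorbance values here are made-up illustrations, not data from the study:

```python
def percent_viability(treated, vehicle, blank):
    """Background-subtract and normalize treated wells to the vehicle control.

    treated, vehicle: lists of replicate absorbances; blank: medium-only wells.
    Returns mean viability of treated wells as % of the vehicle-control mean.
    """
    bg = sum(blank) / len(blank)
    net_vehicle = sum(a - bg for a in vehicle) / len(vehicle)
    net_treated = sum(a - bg for a in treated) / len(treated)
    return 100.0 * net_treated / net_vehicle

# Hypothetical example: PZ-treated wells reading ~70% of the DMSO control,
# the magnitude described for NHT-HLCs in the text.
viab = percent_viability(treated=[0.75, 0.73, 0.77],
                         vehicle=[1.05, 1.07, 1.03],
                         blank=[0.05, 0.05])
```

The same arithmetic applies to luminescence (ATP) readouts, with RLUs in place of absorbances.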
For viability experiments with N-acetyl cysteine (NAC), a similar procedure was adopted except cells were co-incubated with 0.5 mM NAC (Hidonac ® , Zambon S.p.A.) and PZ for 24 hours. Vehicle controls were appropriately modified to include 0.05% EDTA in PBS (in which NAC was dissolved). Cell viability was measured using CellTiter-Glo reagent.
Cryopreserved primary human adult hepatocytes. Cryopreserved primary human adult hepatocytes were purchased from vendors including Life Technologies (Gibco HP4239, HP4248, Lot 4227 and Lot 8105) and BD Biosciences (Lot 246). Human hepatocytes were thawed and pelleted by centrifugation at 50 g for 5 minutes at 4 °C. For RT-PCR assay, the cell pellet was lysed in Buffer RLT Plus (Qiagen) and stored at − 80 °C for future processing. For hepatic function assay, the pellet was resuspended in hepatocyte culture medium and further seeded on Type I bovine collagen (Advanced BioMatrix) coated tissue culture plate.
Transcriptional analysis of HLCs using microarrays. For expression analysis to study PZ-induced transcriptional changes, all five HLC lines were treated with 100 μM PZ or DMSO for 24 hours. For each group, RNA from three biological replicates was isolated using the RNeasy Plus Micro kit (Qiagen), except for the HT2-DMSO group, for which duplicates were used. A total of 29 samples were analyzed. Preparation of samples for microarray hybridization and scanning was done at Biopolis Shared Facilities, Singapore. The Affymetrix Human Genome U133 Plus 2.0 platform was used. CEL files were imported and normalized using rma from the affy package in R version 3.2.1. Data were further analyzed using the genefilter package to retain features detectable in at least five samples (out of 29) with a log2-scale expression value greater than 9 (6,700 probes). Differentially expressed genes were identified using the SAM function of the siggenes package on the filtered dataset by comparing PZ-treated samples to DMSO controls, labeled as PZ or control. To select genes that were differentially expressed between HT-HLCs and NHT-HLCs after PZ treatment, SAM was applied on control-normalized expression data, i.e., data normalized to the respective DMSO controls, with appropriate labels applied for HT and NHT samples. Heatmaps of gene expression were drawn using the heatmap.2 function of the gplots package in R. Principal component analysis (PCA) was done using the selected features, and PCA plots were generated by isolating the first and second principal components and plotting them on the x- and y-axes.
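The genefilter criterion above (log2 expression above 9 in at least five of the 29 samples) is simple to restate outside R; a minimal Python stand-in with made-up probe values:

```python
# Toy log2 expression matrix: probes x samples; values are illustrative only.
expr = {
    "PROBE_A": [9.5, 9.7, 10.1, 9.9, 9.6, 8.0],
    "PROBE_B": [8.1, 8.3, 7.9, 8.0, 8.2, 8.4],   # never exceeds 9 -> dropped
    "PROBE_C": [9.2, 9.3, 9.1, 9.4, 9.6, 9.0],
}

def filter_features(matrix, threshold=9.0, min_samples=5):
    """Keep probes whose log2 expression exceeds `threshold` in at least
    `min_samples` samples, mirroring the genefilter criterion in the text."""
    return {probe: values for probe, values in matrix.items()
            if sum(1 for x in values if x > threshold) >= min_samples}

kept = filter_features(expr)
```

In the study this step reduced the U133 Plus 2.0 probe set to 6,700 features before SAM testing.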
Quantification of glutathione/oxidized glutathione and reactive oxygen species (ROS).
Cellular glutathione and oxidized glutathione content was measured in 96-well plates using the GSH/GSSG-Glo assay (Promega). HLCs were harvested and seeded at 3 × 10⁴ cells per well of a Matrigel-coated plate, allowed to adhere overnight, and then treated with drug or vehicle control for 4 hours before cell lysis to measure the relative amounts of total and oxidized glutathione. The ratio of glutathione to oxidized glutathione was calculated from net relative light units (RLU, after background subtraction) as recommended in the manufacturer's protocol. For the quantification of ROS, the CellROX Green reagent was used (Molecular Probes, Life Technologies). HLCs were seeded in Matrigel-coated 24-well plates at a density of 1 × 10⁴ cells per well and allowed to adhere overnight. HLCs were incubated with PZ at the indicated concentrations or with vehicle (DMSO) for a total of 4 hours. In the final hour, i.e., 3 hours after the start of drug incubation, 10 μM CellROX Green was added. Cells were washed with PBS, collected by trypsinization, and resuspended in 0.5% BSA in PBS for analysis by flow cytometry for green fluorescence; 50,000 events per sample were collected. For imaging of CellROX fluorescence, cells were washed with PBS following incubation with 10 μM CellROX Green and imaged with a fluorescence microscope (IX81 Olympus).

Intracellular iron measurement. Measurement of intracellular iron concentration was done using the Iron Colorimetric Assay Kit (Biovision), as per the manufacturer's instructions with modifications. HLCs were seeded in Matrigel-coated 12-well plates at a density of 500,000-800,000 cells/well and allowed to attach overnight, following which HLCs were treated with 100 μM PZ or DMSO as control. Twenty-four hours after treatment, HLCs were washed with PBS and detached using Accutase. HLCs were centrifuged, washed, and finally resuspended in 100 μl of Iron Assay Buffer, following which the harvested cell count was determined.
Cell lysis did not occur in the Iron Assay Buffer; therefore, to induce lysis, 5 μl of 1 M SDS was added and the cell suspension was thoroughly vortexed. Cellular debris was removed by centrifugation for 10 minutes at 16,000 × g. For separate determination of total (Fe2+ and Fe3+) and reduced (Fe2+) iron amounts, 100 μl lysate was split two ways to give a sample volume of 50 μl per measurement. Colorimetric measurement of iron concentration was done as per instructions; absorbance was measured at a wavelength of 593 nm and converted to nmol amounts based on a standard curve included in each run. Finally, the amounts of Fe2+ and Fe3+ (the latter inferred from the total and Fe2+ amounts) in nmol were normalized per cell using the cell count determined prior to lysis. For each HLC line, the ratio of Fe2+ to Fe3+ was determined based on cell-number-normalized concentration measures for control- and PZ-treated samples.
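The downstream arithmetic described here (standard-curve conversion, Fe3+ inferred as total minus Fe2+, per-cell normalization, and the Fe2+/Fe3+ ratio) can be sketched as follows. The standard-curve slope, absorbances, and cell count are hypothetical, and a zero intercept is assumed for the curve:

```python
def iron_amounts(a_total, a_fe2, slope, cell_count):
    """Convert A593 absorbances to nmol via a linear standard curve
    (nmol = absorbance / slope; zero intercept assumed), infer Fe3+ as
    total minus Fe2+, and normalize per cell. All inputs illustrative."""
    total_nmol = a_total / slope
    fe2_nmol = a_fe2 / slope
    fe3_nmol = total_nmol - fe2_nmol
    per_cell = {"Fe2+": fe2_nmol / cell_count, "Fe3+": fe3_nmol / cell_count}
    per_cell["Fe2+/Fe3+"] = per_cell["Fe2+"] / per_cell["Fe3+"]
    return per_cell

res = iron_amounts(a_total=0.60, a_fe2=0.40, slope=0.10, cell_count=500000)
```

The same ratio computed for control- and PZ-treated wells then gives the per-line comparison described in the text.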
Statistical Analysis. Data are presented as the mean ± s.e.m. Statistical significance was determined by unpaired Student's t-test and two-tailed p value of < 0.05 was considered to be statistically significant. Correlation analysis was done using Pearson's correlation analysis.
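A minimal stdlib-only sketch of the two statistics named above, the unpaired (pooled, equal-variance) Student's t statistic and Pearson's correlation coefficient; the p-value lookup against the t distribution is omitted here since it needs a distribution table or a stats library:

```python
from statistics import mean, stdev

def students_t(a, b):
    """Unpaired two-sample Student's t statistic (pooled-variance form)."""
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2) / (na + nb - 2)
    return (mean(a) - mean(b)) / (sp2 * (1 / na + 1 / nb)) ** 0.5

def pearson_r(x, y):
    """Pearson correlation coefficient between paired observations."""
    mx, my = mean(x), mean(y)
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sx = sum((xi - mx) ** 2 for xi in x) ** 0.5
    sy = sum((yi - my) ** 2 for yi in y) ** 0.5
    return cov / (sx * sy)

r = pearson_r([1, 2, 3, 4], [2, 4, 6, 8])   # perfectly correlated toy data
```

In practice a stats package would supply the two-tailed p value; the formulas themselves are as above.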
Clinical Samples. Informed consent was received from all subjects prior to inclusion in study. All experimental protocols involving human subjects, including generation of iPSCs from peripheral blood mononuclear cells, were approved by the SingHealth Centralised Institutional Review Board (Protocol #MMHPC-2011). All methods were carried out in accordance with the approved guidelines.
Chromatin accessibility analysis reveals regulatory dynamics and therapeutic relevance of Vogt-Koyanagi-Harada disease
The barrier to curing Vogt–Koyanagi–Harada disease (VKH) is thought to reside in a lack of understanding of the roles and regulation of peripheral inflammatory immune cells. Here we perform a single-cell multi-omic study of 166,149 cells in peripheral blood mononuclear cells from patients with VKH, profile the chromatin accessibility and gene expression in the same blood samples, and uncover prominent cellular heterogeneity. Immune cells in VKH blood are highly activated and pro-inflammatory. Notably, we describe an enrichment of transcription targets for nuclear factor kappa B in conventional dendritic cells (cDCs) that governed inflammation. Integrative analysis of transcriptomic and chromatin maps shows that RELA in cDCs is related to disease complications and poor prognosis. Ligand-receptor interaction pairs also identify cDCs as an important regulator of multiple immune subsets. Our results reveal epigenetic and transcriptional dynamics in auto-inflammation, especially in the cDC subtype, that might lead to therapeutic strategies in VKH.
V ogt-Koyanagi-Harada disease (VKH) is a systemic autoimmune disorder characterized by bilateral granulomatous uveitis with meningeal, auditory, and dermal manifestations 1 . It is one of the major sight-threatening uveitis entities in Asia and South America [2][3][4] . Aggressive systemic corticosteroids in combination with immunosuppressive agents remain the mainstay of treatment 5,6 , but a large proportion of patients progress and have a poor prognosis, leading to visual impairment, reduced quality of life, and even blindness. In addition, the undesirable side effects (e.g., hyperglycemia, osteoporosis, and obesity) related to the prolonged use of corticosteroids and immunosuppressive agents highlight the need to develop new therapeutic strategies with fewer complications and less risk of treatment failures [7][8][9] .
A better understanding of how pathogenic networks in immune cells influence inflammation is a prerequisite for treatment success in VKH. Previous studies have shown the involvement of T cells (especially T helper 17 [Th17] and T helper 1 [Th1] cells) as part of the systemic inflammatory process in animal experimental autoimmune uveitis (EAU) models and in blood samples of patients with VKH [10][11][12] . Th1 cells were the first T cell subsets considered to be the etiologic agent of VKH because of their cytotoxicity against melanocytes 13,14 . Several reports have implicated Th17 cells in the pathogenesis of VKH disease via the IL-23/IL-17 pathway 15,16 . A recent single-cell RNA study has provided insight into the atlas of peripheral monocytes in VKH patients and how interferon-stimulated gene changes within monocytes reflect disease activity 17 . However, the role of other immune cell subtypes and their underlying epigenetic dysregulation in the pathogenesis of VKH has not been previously documented.
Single-cell assay for transposase-accessible chromatin sequencing (scATAC-seq) has emerged as a novel approach to delineate single-cell-specific epigenomic regulatory landscapes 18 . This technology enables genome-wide identification of cell-type-specific cis-elements, mapping of disease-associated enhancer activity, and inference of transcription factor (TF) binding and activity at single-cell resolution 19 . In the current study, we aimed to delineate a multiomic landscape in peripheral blood mononuclear cells (PBMCs) derived from healthy individuals and patients with VKH based on an integrative analysis of single-cell RNA sequencing (scRNA-seq) and scATAC-seq datasets. We revealed a wide range of epigenomic and transcriptomic changes in healthy subjects and patients with VKH disease. Notably, we identified conventional dendritic cells (cDCs) as an important regulator of the pro-inflammatory state and revealed that RELA might be a key transcription factor in cDCs associated with highly inflammatory states and with poor prognosis. This study offers insights into therapeutic options for VKH and similar autoimmune diseases.
Results
Single-cell chromatin accessibility and transcription sequencing workflow. We isolated the nuclei and RNA from sex- and age-matched individual PBMC samples from healthy individuals and VKH patients. In the first cohort, the PBMCs were obtained from patients diagnosed with acute VKH disease (n = 12) and a sex- and age-matched healthy control (HC) group (n = 12) ( Fig. 1a and Supplementary Table 1). Both nuclei and RNA were processed through the 10× Genomics platform using the standardized scATAC-seq and scRNA protocols, respectively. The scATAC-seq libraries were sequenced, the reads were de-multiplexed, and the fragments were aligned to the human reference genome and de-duplicated using Cell Ranger ATAC. The scRNA-seq libraries were sequenced, de-multiplexed, aligned to the human reference genome, and de-duplicated using Cell Ranger. The scATAC-seq data were analyzed using ArchR 20 , whereas the scRNA-seq datasets were processed using Seurat 21 . All these data were further analyzed after stringent quality-control filtration; the thresholds for the scATAC-seq and scRNA-seq data are described in the Methods (Supplementary Fig. 1a-c). For both the scATAC-seq and scRNA-seq datasets, we conducted Harmony-based batch correction 22 on each dataset. This allowed for meaningful downstream integrated analyses ( Supplementary Fig. 2a-h and Methods). After quality-control filtering, we retained 133,140 cells for the scATAC-seq and 195,948 cells for the scRNA-seq analysis.
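Quality-control filtering of this kind typically thresholds per-barcode metrics such as unique fragment count and TSS enrichment. A toy sketch follows; the barcodes, metric values, and cutoffs are hypothetical placeholders, not the cutoffs used in the study (those are in its Methods):

```python
# Illustrative per-barcode scATAC-seq QC metrics (all values made up).
cells = [
    {"barcode": "AAAC", "n_fragments": 12000, "tss_enrichment": 8.5},
    {"barcode": "AAAG", "n_fragments": 800,   "tss_enrichment": 2.1},  # low depth -> dropped
    {"barcode": "AAAT", "n_fragments": 9500,  "tss_enrichment": 6.0},
]

def qc_pass(cell, min_fragments=1000, min_tss=4.0):
    """Keep barcodes with enough unique fragments and TSS enrichment."""
    return cell["n_fragments"] >= min_fragments and cell["tss_enrichment"] >= min_tss

kept_barcodes = [c["barcode"] for c in cells if qc_pass(c)]
```

ArchR and Seurat apply analogous filters internally; only barcodes passing both thresholds feed into clustering.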
Identification of cell types in healthy blood using scATAC-seq.
To establish a baseline peripheral immune cell normal chromatin profile, we first identified 74,510 cells from healthy individuals, with an average of 9,047 uniquely accessible fragments per cell (Fig. 2a). We performed latent semantic indexing (LSI) for dimensionality reduction and harmony-based batch correction and applied Seurat to identify clusters 21 . Using these approaches, we identified 25 major scATAC-seq clusters, which were then visualized using uniform manifold approximation and projection (UMAP) (Fig. 2a). We first compared the differentially accessible chromatin regions (DARs) for each cell subset and applied ChIPseeker 23 to annotate the distribution of the DARs in the genome. As expected, the distribution of the peak regions was relatively conserved across the different cell types, and the majority of the peaks were located in a promoter region within 3 kb of the nearest transcriptional start site (Fig. 2b).
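The LSI step mentioned above pairs a TF-IDF weighting of the (near-binary) peak-by-cell matrix with an SVD. A minimal sketch of one common TF-IDF variant follows; ArchR implements several weighting schemes, so the exact formula here is illustrative, and the subsequent SVD is not shown:

```python
import math

# Toy binary peak-by-cell accessibility matrix (rows = peaks, cols = cells).
peaks = [
    [1, 1, 0, 1],
    [0, 1, 0, 0],
    [1, 1, 1, 1],
]

def tfidf(matrix):
    """TF-IDF weighting used as the first step of LSI on scATAC data:
    term frequency = count / per-cell depth; inverse document frequency =
    log(1 + n_cells / n_cells_with_peak). SVD of the weighted matrix
    (not shown) then yields the low-dimensional LSI components."""
    n_cells = len(matrix[0])
    depth = [sum(row[j] for row in matrix) for j in range(n_cells)]
    out = []
    for row in matrix:
        df = sum(row)                       # cells in which this peak is open
        idf = math.log(1 + n_cells / df)
        out.append([idf * (row[j] / depth[j]) for j in range(n_cells)])
    return out

weighted = tfidf(peaks)
```

Clustering is then performed on the SVD components of this weighted matrix rather than on raw counts.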
To comprehensively describe the heterogeneity of the immune cell subsets in the PBMCs, we created a workflow to identify cell type and cell state signatures from scATAC-seq profiles, with reference to the gene expression/ATAC profiles for the cell types/subpopulations identified previously. We identified 75,654 cis-regulatory elements (CREs) across all the clusters and revealed cell-type-specific cis-elements. By applying peak annotation analysis from ChIPseeker 23 to identify the nearest genes to a peak, we could identify the cis-elements within a single gene locus. For example, Fig. 2c shows some of the known gene signatures for cDC (e.g., HLA-DRA 24 , CLEC4C 24 , and CD1C 24 ), monocytes (e.g., S100A12 25 , S100A8 26 , VCAN 27 , MS4A14 28 , LYN 29 , and CEBPA 24 ), progenitors (e.g., GATA1 19 and GATA2 19 ), plasmacytoid DCs (pDC) (IRF8 24 ), and CD4 + T cells (CD4T) (LEF1 30 ).

The sparsity of single-cell cis-element information prompted us to use gene activity scores (GAS) for cell type annotations in the scATAC-seq profiles. We utilized this analytical approach to confirm the cis-element-defined cluster identities and further classify the immune cell subpopulations 20 . In agreement with the surface phenotypes identified using the CRE approach, the progenitor cells showed a high GAS for GATA2 19 and the naïve T cells for IL7R 30 ( Supplementary Fig. 3c, S4a, b). The high GAS for the surface markers CD8 and granzyme B (GZMB) further identified cytotoxic immune subsets 31 , including CD8T and NKs. The high GAS of TCL1A identified naïve B cells (Fig. 2d, Supplementary Fig. 3d, S4c, d) 33 . The GAS analysis across all clusters enabled the identification of phenotypically distinct cell subsets, for example dividing NK cells into three subsets: NCAM1 high FCGR3A low B3GAT1 low NK cells were defined as early NKs (NK1), NCAM1 low FCGR3A high B3GAT1 low NK cells as intermediate NKs (NK2), and NCAM1 low FCGR3A high B3GAT1 high NK cells as late NKs (NK3) 34 (Supplementary Fig. 3e, S4e, f). We were also able to divide myeloid subsets into monocytes (including classical monocytes [CM], intermediate monocytes [IntM], and non-classical monocytes [NCM]) and dendritic cells (including pDC and cDC) ( Supplementary Fig. 3f, S4g, h). CM cells were CD14 + FCGR3A − , IntM were CD14 + FCGR3A + , while NCM were CD14 − FCGR3A + 35 . Twelve T cell subsets were also identified based on the GAS analysis: CD4 + naive T cells (CD4 Naive), CD4 + central memory T cells (CD4 TCM), T regulatory cells (Treg), T helper 2 cells, Th17 cells, cytotoxic CD4 + T cells (CD4 CTL), CD8 + naive T cells (CD8 Naive), CD8 + central memory T cells (CD8 TCM), CD8 + effector memory T cells (CD8 TEM), CD8 + mucosal-associated invariant T (MAIT) cells, and double-negative T cells ( Supplementary Fig. 3c, S4a, b) 36,37 . In B cells, we identified TCL1A + naive B cells (naive B), among other subsets (Supplementary Fig. 3d, S4c, d).
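The marker logic that splits NK cells into NK1/NK2/NK3 can be expressed as a simple rule over gene activity scores. In this sketch, the high/low cutoff is an illustrative placeholder (in practice, high/low would be defined relative to the score distribution across cells):

```python
def classify_nk(ncam1, fcgr3a, b3gat1, cutoff=1.0):
    """Assign an NK maturation state from gene activity scores, following
    the marker logic in the text:
      NK1 (early):        NCAM1-high / FCGR3A-low / B3GAT1-low
      NK2 (intermediate): NCAM1-low  / FCGR3A-high / B3GAT1-low
      NK3 (late):         NCAM1-low  / FCGR3A-high / B3GAT1-high
    `cutoff` separating high from low is an illustrative placeholder."""
    hi = lambda score: score >= cutoff
    if hi(ncam1) and not hi(fcgr3a) and not hi(b3gat1):
        return "NK1"
    if not hi(ncam1) and hi(fcgr3a) and not hi(b3gat1):
        return "NK2"
    if not hi(ncam1) and hi(fcgr3a) and hi(b3gat1):
        return "NK3"
    return "unassigned"

state = classify_nk(ncam1=0.2, fcgr3a=1.8, b3gat1=1.5)
```

The same pattern extends to the CD14/FCGR3A rules that separate classical, intermediate, and non-classical monocytes.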
In addition to assessing the CREs and GAS for key lineage identification, we also measured chromatin accessibility at cis-elements sharing a TF binding motif using chromVAR 40 . In this approach, we incorporated both the TF footprints and TF deviation scores to further annotate and/or validate the rare cell subsets. For example, we identified the pDCs based on the enriched TF deviation scores of the IRF8 factor motif (Fig. 2e) 24 . The naïve T cells showed the activity of the T cell lineage-determining factor LEF1, consistent with the results from the TF footprint analysis (Fig. 2f) 19 . Surprisingly, we noticed that MAIT cells shared, and even had a higher activity on, the RAR-related orphan receptor (ROR) family than the Th17 cells, which are known to be key regulators in autoimmune diseases 37 (Fig. 2f, Supplementary Fig. 3g). As expected, the TF deviation scores for PAX5, a lineage-determining factor for B cells 41 , were increased in all the B cell subsets. It is also important to note that DN2B showed unique activity of TBX21, whereas IRF4 was active in the remaining B cell subsets (Fig. 2g). DN2B cells have been previously documented as an exhausted memory B cell subset 39 . In myeloid cells, we found that cDC showed higher activity for SPI1 24 , whereas CM showed higher activity for CEBPA 24 (Supplementary Fig. 4i). Collectively, our approach allows for the analysis of chromatin accessibility in both common and rare cell types from human peripheral blood.

Fig. 1 The single-cell multiomic experimental design. a Schematic representation of the single-cell profiling of PBMCs from healthy controls (n = 12) and VKH disease patients (n = 12) in this study, sequencing experiments and downstream bioinformatic analyses. All data are aligned and annotated to the hg38 reference genome.
Multi-omics analysis of the peripheral immune-cell profiling.
To study the PBMCs of VKH patients, we first integrated the VKH and HC datasets and performed unbiased iterative clustering followed by Harmony-based batch correction on each sample in the HC and VKH groups (Supplementary Fig. 2a-d). We then used the above-mentioned cell type identification pipeline to identify 25 immune subsets in 195,948 cells (Fig. 3a). Next, we processed the scRNA-seq data of the VKH and HC groups and corrected the batch effect for each sample using Harmony-based batch correction (Fig. 3a). We manually annotated the 25 cell types in the scRNA-seq dataset based on the expression of marker genes, consistent with those used for our scATAC-seq data, to minimize differences in cell-type composition between the two sequencing methods (Fig. 3a, Supplementary Fig. 5a-c). Next, we sought to illustrate the epigenetic regulation in the VKH patients. We utilized a recently developed method 21 that identifies pairwise correspondences (called "anchors") between single cells across two different types of datasets and projects their transformation into a shared space (Supplementary Fig. 6a-e) 42 . The whole procedure was parallelized and separately aligned using ArchR 20 and Seurat 21 by dividing the cells into smaller groups (see Methods). This approach allowed us to integrate the gene expression data from the scRNA-seq dataset into the scATAC-seq dataset by matching gene scores to gene expression to generate an integration matrix of gene expression for the scATAC-seq dataset (Supplementary Fig. 6a-e). As expected, the GAS and gene expression were highly consistent and could distinguish the identified cell types (Fig. 3b). The frequencies of the immune cell subsets between the HC and VKH groups were comparable ( Supplementary Fig. 6f). Our datasets also allowed us to dissect the mechanisms behind causal risk variants previously identified from genome-wide association studies (GWAS) and to identify disease-relevant cell types related to these loci.
It was previously proposed that some of the disease-causing loci, while residing in noncoding regions 43 , could exert their effects by altering gene expression via perturbation of TF binding sites and regulatory element function 44 . We collected known VKH GWAS loci reported in previous publications and mapped the disease-related single-nucleotide polymorphisms (SNPs) onto the cis-elements for each cell type 45 . Two variants, rs78377598 and rs3032304, were located within the IL23R and HLA-DQA1 loci, respectively (Fig. 3c). The IL23R locus is highly accessible in MAIT cells, while HLA-DQA1 is highly accessible in cDCs. These results may be informative for inferring the cellular impact of disease variants in these loci.
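Mapping GWAS SNPs onto cell-type-specific accessible peaks reduces to an interval-overlap query. A toy sketch follows; the rsIDs and genomic coordinates below are placeholders, not the real positions of the IL23R or HLA-DQA1 variants or peaks:

```python
# Toy cis-element intervals per cell type (chrom, start, end); all placeholder.
peaks_by_celltype = {
    "MAIT": [("chr1", 67100000, 67101000)],
    "cDC":  [("chr6", 32600000, 32601000)],
}
snps = {"rs_example1": ("chr1", 67100500), "rs_example2": ("chr6", 32600250)}

def map_snps_to_celltypes(snps, peaks):
    """Report, for each SNP, the cell types whose accessible peaks contain it
    (half-open intervals, BED-style)."""
    hits = {}
    for rsid, (chrom, pos) in snps.items():
        hits[rsid] = [ct for ct, intervals in peaks.items()
                      if any(c == chrom and s <= pos < e for c, s, e in intervals)]
    return hits

hits = map_snps_to_celltypes(snps, peaks_by_celltype)
```

At genome scale one would use an interval tree or bedtools-style sorted sweep instead of this linear scan, but the overlap logic is the same.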
We also explored the potential mechanisms explaining how immune dysregulation results in organ damage in human ocular tissues and human skin. We used previously published ATAC-seq and ChIP-seq datasets of human eye tissues, including retina, macula, and retinal pigment epithelium (RPE)/choroid 46 , and human skin 47 for cell-type-specific peak enrichment in our scATAC dataset (Fig. 3d, Supplementary Fig. 6g). We found that cDC and monocyte subsets were mainly enriched in the RPE/choroid and skin. This might have implications for the pathogenesis of VKH, as the autoimmune attack is known to affect pigmented tissues, resulting in vitiligo and ocular depigmentation 48 . In addition, we also revealed an enrichment of the CD4 + T cells in the retina and macular region, as well as MAIT cells in the skin, retina, and macular regions (Fig. 3d).
Next, we analyzed differential gene expression (DEG) in the scRNA-seq dataset for the six main cell types by comparing HC with VKH (Fig. 3e). We noticed that the T cells were activated in the VKH patients, with CD69 49 , JUNB 50 , and CXCR4 51 being highly expressed (Fig. 3e). TNFAIP3 was also highly upregulated in the T cells; it has been reported to be a common predisposing gene for autoimmune diseases, including VKH 52 . The NKs in VKH showed a higher cytotoxic capacity and higher chemokine levels, with upregulation of ISG20 49 , DUSP2 49 , CCL4 53 , and CCL3 53 , as compared to the HC (Fig. 3e). Moreover, the B cells also had an enhanced antigen-presenting function, with increased expression of HLA-DQA2 54 and CD83 55 and upregulation of genes of the nuclear factor kappa B (NF-κB) and activator protein 1 (AP-1) families ( Fig. 3e) 55 . Myeloid cells were also identified as main pro-inflammatory players in patients with VKH disease. In the monocyte population, we found that genes related to cytokines, chemokines, and adhesion (e.g., IL1B 17 , TNF 17 , CCL3 17 , CCL4 17 , and ICAM1 56 ) were upregulated (Fig. 3e) 17 . Notably, HIF1A, which encodes the hypoxia-inducible factor (HIF) protein, was also highly expressed in monocytes in VKH. Finally, the DC subset, known as the key antigen presenter in immunity, was more mature, with a higher capacity for antigen presentation via CD83 55 as compared to the other subsets, and HLA-DQA2 was upregulated in the patients (Fig. 3e). In addition, DCs acted as pro-inflammatory players, with high expression levels of cytokine and chemokine genes (e.g., TNF 57 , CXCL8 57 , JUN 56 , JUNB 56 , CCL3 57 , CCL4 57 , IL1B 57 , and DUSP2 56 ). In summary, our results demonstrate that immune cells in VKH patients are generally activated and pro-inflammatory.
T cell subsets and response in VKH. To dissect the role of T cell subsets in VKH, we first compared the differences between T cell subsets in the HC and VKH patients in scATAC-seq. We first re-clustered the T regulatory (Treg) cells. This led to the identification of two subsets of Tregs with imbalanced frequencies between the VKH patients and HCs (Fig. 4a-c, Supplementary Fig. 5a). In cluster 1 (effector Treg [eTreg]), effector genes such as RORC, CCR8, and CCR6 were highly expressed. In cluster 2 (resting Treg [rTreg]), resting- and naïve-phenotype genes such as LEF1, TCF7, and CCR7 were expressed (Fig. 4b, Supplementary Fig. 5b) 58,59 . Although these gene differences were not observed between VKH and HC cells among the full Treg population at the RNA-expression level, the rTregs demonstrated a significantly greater frequency, while the eTregs showed a reduced frequency (of borderline statistical significance) in VKH in our scATAC-seq dataset 60 (Fig. 4c).
We further compared the DARs in the five main T cell subsets (Th1, Th17, Treg, CD8 + T effector memory [TEM], and MAIT) between the HC and VKH patients. The MAIT cells exhibited the largest number of peak changes among the T cell subtypes (Fig. 4d). In the VKH patients, CD69 37 , JUNB 61 , CXCR4 61 , and PRDM1 62 were upregulated in the MAIT cells in our scRNA-seq dataset, suggesting the involvement of MAIT cell activation ( Supplementary Fig. 7c). A Gene Ontology (GO) analysis of the nearest-peak annotation of VKH-upregulated DARs showed that Th1 cells were involved in the interferon (IFN) and transforming growth factor (TGF)-beta signaling and T cell receptor signaling pathways, with higher accessibility at the IFNG locus (Fig. 4e, Supplementary Fig. 7d). In the Th17 cells, the AP-1 pathway, Janus kinase (JAK)-signal transducer and activator of transcription (STAT) signaling, and IFN-gamma production were activated, while Treg cells in VKH were involved in the IFN type I signaling pathway, CD28 family costimulation, interleukin (IL) 3 signaling, Th17 differentiation, and T cell activation, with higher accessibility at the IL10 locus (Fig. 4e, Supplementary Fig. 7e). Among the CD8 T cells, the GO analysis of the CD8 TEM illustrated regulation of cell-cell adhesion, an enhanced CD8 T cell receptor (TCR) pathway and immune effector process, and enhanced cytotoxicity and MAPK cascade in VKH (Fig. 4e).
The MAIT subset was involved in the MAPK signaling and CD8 TCR pathways and was positively related to cellular adhesion. Thus, MAIT cells may play an important role in adhesion molecules and integrins and in migration to inflamed tissues (Fig. 4e). As expected, the MAIT cells were also associated with activation of the IL12 cytokine pathway, NF-κB signaling, and the TNF pathway, consistent with their activated phenotype. We further employed TF footprint analysis on the T cells to reveal the distinct TF footprints on the genomic DNA in VKH versus the HCs. Notably, the runt-related TF RUNX1 and the T-box TF TBX21 (also known as T-bet) were enriched in the Th17 cells, which are known to be involved in pathogenic IFN-gamma production 63 ( Supplementary Fig. 7f). In the VKH patients, there was more pronounced DNA occupancy of RELA and NFKB1 in the MAIT cells (Fig. 4f). The footprint analysis also identified higher activity of eomesodermin (EOMES) and TBX21 in the CD8 TEM cells compared with the HCs ( Supplementary Fig. 7g), indicating enhanced effector states of CD8 + T cells.
Overall, the T cells in the VKH patients exhibited activation phenotypes, with compositional and epigenomic alterations.
CD14 + monocyte subsets and response in VKH blood. CD14 + monocytes have previously been recognized as pro-inflammatory players in VKH 17 . To investigate the epigenetic reprogramming underlying enhanced inflammation in CD14 + monocytes, we re-clustered the CMs and observed three sub-clusters (Fig. 5a). Based on peak accessibility, GAS, and gene expression of IL1B and HLA-DQA1 17 , we defined pro-inflammatory CMs, characterized by the highest expression of IL1B; HLA CMs, with high expression of human leukocyte antigen (HLA)-related genes; and the remaining CMs, with high expression at the S100A8 and VCAN loci (Fig. 5b, Supplementary Fig. 8a-c) 64 .
We further compared the DARs between each cell state and noticed that the marker peaks of each state were different (Supplementary Fig. 8d). We used the differential ATAC-seq peaks as input to conduct TF motif enrichment analysis 65 and identify the TFs associated with their differentiation programs (Fig. 5c). We noticed that the pro-inflammatory CMs relied on the AP-1 family and Krüppel-like family (KLF) motifs, which are essential for monocyte activation and maturation 66,67 . As for the HLA CMs, we identified increased accessibility for IRF1 and the ETS family TF SPI1 (also known as PU.1), which are related to IFN stimulation and major histocompatibility complex (MHC) class II gene expression (Fig. 5c) [68][69][70] . The remaining (resting) CMs were enriched in CCAAT/enhancer-binding protein (CEBP) family members and basic leucine zipper ATF-like TF (BATF) (Fig. 5c).
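Motif enrichment tests of this kind commonly ask whether motif-bearing peaks are over-represented among the differential peaks, for example via a hypergeometric tail probability. A minimal sketch follows; the statistic used by the actual enrichment tool may differ, and the counts below are illustrative:

```python
from math import comb

def hypergeom_pval(k, n, K, N):
    """Upper-tail hypergeometric probability P(X >= k): n differential peaks
    drawn from N total peaks, of which K carry the motif; k is the number of
    motif-bearing differential peaks observed. A minimal stand-in for the
    enrichment test behind motif analysis."""
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(n, K) + 1)) / comb(N, n)

# Illustrative counts: 8 of 10 differential peaks carry the motif, while only
# 20 of 100 background peaks do -- a strong enrichment.
p = hypergeom_pval(k=8, n=10, K=20, N=100)
```

In practice such p values are computed per motif across thousands of peaks and then corrected for multiple testing.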
To understand the developmental dynamics of pro-inflammatory monocytes, we constructed a cellular lineage trajectory of CD14+ monocytes based on their differentiation states, progressing from the rest state to a pro-inflammatory state. We ordered single cells along this trajectory (termed 'pseudotime') based on our multi-omic dataset (Fig. 5d, Supplementary Fig. 9a-b). The dynamic TF motif activities across the trajectory were consistent with the differentiation states (Fig. 5e). For instance, IRF1 activity was observed in the HLA CM, consistent with our motif enrichment analysis, followed by the sequential activity of CEBPA, SPI1, and FOS, recapitulating the known order of their functions in the rest CM, HLA CM, and pro-inflammatory CM, respectively (Fig. 5e, f). In addition, HIF1A and KLF TFs were also activated in the pro-inflammatory CM 71 . Moreover, TF footprint analysis showed changes surrounding the HIF1A binding sites in the pro-inflammatory CM but not in the other CM subsets, suggesting a possible role for HIF1A as a key transcription factor driving monocyte maturation and inflammation (Fig. 5f, g). Together, our results revealed epigenetic reprogramming during monocyte development.
Next, we analyzed the differential peaks and genes of CM between the HC and VKH patients. Although we did not observe changes in the frequency of CMs, the differences in chromatin accessibility and gene expression were notable between HC and VKH and were consistent across datasets (Fig. 5h, i, Supplementary Fig. 4b, S9c). In the scRNA-seq dataset, analysis of the upregulated DEGs showed that CMs in VKH were characterized by various cytokine and chemokine genes (CCL3L1, CCL4, CCL3, IL1B, TNF, CXCL8, and CXCR4) 17 , with enhanced cellular adhesion capacity (ICAM1) 56 . The high expression of HIF1A in CM suggests the importance of HIF control in the inflammatory activity of monocytes. The TF footprints also showed increased footprint depth for the NF-κB family (e.g., RELA and NFKB1) in VKH, reflecting the highly inflammatory state of CMs (Fig. 5j). Collectively, our results shed light on the interdependence of innate immune inflammation and hypoxic responses in VKH patients, showing that CD14+ monocytes might maintain a rapid inflammatory response through HIF1A-driven chromatin reprogramming.
Disease-specific TF regulatory patterns in the cDCs. To further characterize the potential function of cDCs in VKH, we conducted differential analyses at the epigenomic and transcriptional levels (Fig. 6a, Supplementary Fig. 9d). Consistent with our scATAC-seq data, the cytokine and chemokine genes (IL1B, CCL3L1, CXCR4) were upregulated and more accessible in VKH than in the HCs, representing the activated states of DCs (Fig. 6a). In addition, we observed increased expression of the HLA genes (HLA-DQA2) and increased accessibility of LAMP3 and CCR7, indicating a mature phenotype and enhanced antigen-presenting capacity of cDCs (Fig. 6a, b).
To assess the pro-inflammatory state of cDCs in VKH, we utilized the marker genes of inflammatory CD1C+ DCs documented in a published study 72 (Supplementary Table 2) to estimate inflammation scores across all cDCs. Comparing VKH with HC, we identified higher inflammation scores in VKH, suggesting a strong potential to secrete immune mediators and drive autoimmune disease (Fig. 6c). Fig. 3 Overview of the immune-cell epigenetic and transcriptional landscape of PBMCs from VKH patients and healthy humans. a Schematic of the multiomics integration strategy for processing the scATAC-seq and scRNA-seq datasets. b Dot plots of gene activity scores (left) and gene expression (right) of the marker genes in the scATAC-seq and scRNA-seq datasets. The dot size indicates the percentage of cells in each cluster in which the gene of interest is detected. The standardized gene activity score level (left) and gene expression level (right) are indicated by color intensity. c Cis-regulatory architecture at the following GWAS loci and cell types in PBMCs: IL23R and HLA-DQA1. Only connections originating in the loci with peak-to-gene accessibility above 0.2 are shown. d ChromVAR deviation enrichment of the peak sets of human tissues (including eyes and skin) from ATAC-seq and ChIP-seq datasets from HCs against the scATAC-seq dataset from healthy peripheral blood cell populations. e Dot plots of the expression levels of the differential genes between normal and VKH CD4+ T cells, CD8+ T cells, natural killer cells, B cells, monocytes, and dendritic cells in the scRNA-seq dataset. All data are aligned and annotated to the hg38 reference genome.
To further elucidate the pathogenic pathways and regulators involved in VKH, we next utilized the nearest genes of the DARs for GO and motif enrichment analyses (Fig. 6d, e). The top signaling pathways of cDCs in VKH included T cell activation and the IL12-STAT4 pathway, indicating a capacity to activate adaptive immunity (Fig. 6d). We also identified pathways involved in DC activation and maturation (involving the MAPK cascade, NF-κB signaling, and TNF signaling), the mammalian target of rapamycin (mTOR) signaling pathway, and pathways for cell adhesion (Fig. 6d). Interestingly, the pro-angiogenic VEGFA-VEGFR2 signaling pathway and IL18 signaling, both implicated in angiogenesis, were also enriched in cDCs (Fig. 6d). Motif analysis of the DARs identified a significantly enriched NF-κB family (NFKB1, NFKB2, RELA, REL, and RELB) in VKH (Fig. 6e). The chromatin accessibility of the AP-1 family motifs (JUNB, FOSL1, JUND, and FOS) and the BTB and CNC homology (BACH) family motifs (BACH1 and BACH2) was also upregulated (Fig. 6e). In accordance with the motif enrichment data, the cDCs in VKH showed notably higher occupancy of RELA and NFKB1 in our footprint analysis (Fig. 6f).
To illustrate the NF-κB-family-centered regulatory program network in the cDCs in VKH, we employed a recently established method to identify putative TF target genes based on the scATAC-seq and scRNA-seq data 73 (see Methods, Supplementary Fig. 6d). First, we identified differentially linked peaks and genes. Next, the NF-κB family motifs were selected and used to identify the linked differentially accessible regions (Supplementary Fig. 10a). Finally, all the linked genes were combined into a linkage score, requiring that the genes exhibit differential expression and accessibility between the groups. Using this approach, we found 1372 genes regulated by the NF-κB family through distal elements in VKH (Supplementary Data 1). We further constructed an NF-κB family regulatory network based on the TFs and TF-targeted genes (Fig. 6g). For instance, one of the NFKB1 target genes in VKH, SIGLEC1, was previously reported to have genetic associations with autoimmune disease 74 . In summary, this approach provides a comprehensive regulatory network to unveil the role of the NF-κB family in cDCs.
To validate the role of activation of RELA in cDCs among the VKH patients, we included the RNA-seq data from Cohort 2, in which 89 VKH patients were recruited and had their peripheral blood drawn at baseline and at the three-month follow-up (Supplementary Data 2). We used the identified RELA target genes in VKH to stratify patients with different prognoses. We observed significantly decreased survival (p < 0.0001) in patients with a high RELA-target-gene signature (Fig. 6h). The NFKB1 target genes also showed a potential for stratifying VKH patients with different prognoses (p < 0.00012) (Supplementary Fig. 10b). Altogether, RELA is an important TF and acts as a prognostic predictor in VKH.
cDC-centric cellular communication network. To identify reciprocal communication between cDCs and other immune effector cell subsets, we surveyed the curated ligand/receptor interaction database CellPhoneDB 75 . VKH patients showed stronger cell-cell interactions involving cDCs than HCs (Fig. 7a, b). Myeloid lineage clusters showed the highest capacity for cell-cell interactions (Fig. 7a, b). The cDCs shared the highest number of predicted interactions with monocyte subsets, and this number was further increased in patients with VKH disease. In line with the essential roles of cDCs and T cells as immune regulators in VKH, cDCs shared 51, 45, 49, 56, and 49 predicted ligand-receptor pairs with Th17, Th1, Treg, MAIT, and CD8 TEM cells, respectively (Fig. 7b).
After examining the differentially expressed receptor-ligand pairs, we further identified enhanced immunomodulation by the cDCs in VKH (Fig. 7c). In terms of immunomodulation, we identified increased interactions of cDCs with CD4+ and CD8+ T cells through predicted ligand/receptor pairs of the TNF superfamily in VKH. In our dataset, the Th17 and Th1 cells shared increased TNF-FAS and TNFSF13 (BAFF)-FAS interactions with cDCs in VKH. Increased TNF and BAFF signaling are important factors orchestrating sustained inflammation in Th1 and Th17 cells 76 . TNF-FAS signaling was also predicted to be activated in the interaction of cDCs with MAIT and CD8 TEM cells (Fig. 7c). The increased TNF signaling in CD8+ T cells resulted in the activation of NF-κB and MAPK cascades, in agreement with our previous findings (Fig. 4e). The TNF-ICOS interaction was increased only in the Treg cells, regulating Treg cell function. We noticed that CD40L/CD40 interactions were predicted to be increased in Th1, Th17, Treg, and MAIT cells; this has been reported as a key regulatory interaction in autoimmune diseases that engages antigen-presenting cells and enhances proinflammatory cytokine production 77 . In our analysis, the cDCs were predicted to show increased chemokine pairs such as CCL3L1-DPP4, CCL3-CCR1, CCL3L1-CCR1, and CCL4L2-VSIR. These interactions, which involve the myeloid cell lineage and T helper cells, promote immune cell chemotaxis and migration (Fig. 7c). Altogether, our data suggest the potential of cDCs to regulate multiple immune cell subsets via cellular interactions, including the TNF superfamily and chemokines. Further experiments are required to investigate whether cDCs are responsible for the regulation that might initiate the inflammatory response in T cells and myeloid cell activation.
Discussion
Our integrated single-cell multiomic analysis of PBMCs provides a comprehensive understanding of the cellular heterogeneity and cellular phenotypes underlying the pathogenesis of VKH. This analysis enabled us to (1) map the single-cell atlas of PBMCs in VKH and identify rare cell-specific TFs in human peripheral blood; (2) illustrate the pro-inflammatory role of NF-κB in VKH; (3) investigate the chromatin and transcriptional reprogramming in cDCs by integrative analysis of scRNA-seq and scATAC-seq datasets; (4) dissect the pathogenic activation of RELA in cDCs and reveal the link with prognosis in VKH; and (5) study the paired ligand-receptor between cDCs and lymphocytes, supporting the key role of cDCs in immune regulation in VKH patients.
Hu et al. 17 have previously performed a single-cell RNA study on the peripheral monocytes of VKH patients. Consistently, we also identified a group of pro-inflammatory monocytes (Supplementary Fig. 8c) that may be responsible for the induction of cytokines 17 . Further animal studies are needed to evaluate whether these pro-inflammatory monocytes are regulated by the TF activity of the NF-κB family.
Epigenetic reprogramming is known to play an important role in the pathogenesis of autoimmune disease 44 . The imbalance of immune responses and overproduction of inflammatory cytokines in VKH disease have been associated with aberrant epigenetic changes [78][79][80] . Our profiling of patients with VKH disease demonstrates that T effector cell subsets have a highly activated phenotype, supporting these findings [14][15][16] . For example, Fig. 4 Epigenomic and transcriptional signatures of T cell subsets in VKH patients. a Subclustering UMAP of 3,182 CD4+ Tregs. Dots represent individual cells, and colors indicate immune cell types (labeled below). b UMAP projection of CD4+ Tregs colored by gene activity scores of the indicated genes. c Differences in the proportions of rTreg and eTreg among HC (n = 12) and VKH groups (n = 12). The adjusted p values were calculated using a two-sided pairwise Wilcoxon test. d Heatmap of Z-scores of DARs in Th1, Th17, Treg, CD8 TEM, and MAIT cells from HC and VKH. e Representative GO terms and KEGG pathways enriched in the nearest genes of upregulated DARs of Th1, Th17, Treg, CD8 TEM, and MAIT cells in the VKH/HC comparison group. f Comparison of aggregate TF footprints for RELA and NFKB1 in MAIT cells from HC and VKH. All data are aligned and annotated to the hg38 reference genome.
the activation of Th17 cells in VKH disease might lead to pathogenicity, potentially driven by the activation of the transcription factor T-bet.
Much attention has been focused on the role of cDCs in autoimmune diseases 47,81 . We identified distinct TF regulatory characteristics in cDCs in VKH. Further analysis of the putative TF-regulated network showed that high RELA activity in the cDCs was associated with poor prognosis. Intriguingly, CCR7+ LAMP3+ cDCs have recently been reported in cancer and have been characterized as mature DCs with a high potential for migration 82,83 . Lysosomal-associated membrane protein 3 (LAMP3)+ cDCs are also involved in the pathogenic cellular microenvironment, with resistance to anti-TNF therapy in Crohn's disease 84 . A recent study utilized scRNA-seq with flow cytometry and low-input proteomics to identify cDCs as an important player in ocular cell infiltration in HLA-B27+ uveitis 85 .
Our study suggests that NF-κB and its subunits might be important regulators of cDC activation and maturation. Further animal experiments are needed to confirm their regulatory effects on the antigen-presenting capacity of cDCs 86 . Consistently, enhanced NF-κB signaling in the cDCs has recently been described as a baseline predictive factor for patients non-responsive to anti-TNF therapy in psoriasis 87 . Importantly, we found that patients with higher RELA activity had a poorer prognosis than those with lower RELA activity. Further studies with long-term observations are required to confirm this finding.
In summary, cDCs might work as a key pro-inflammatory player and lymphocyte activator in VKH. Our single-cell multiomic atlas of human peripheral immune cells offers insights into the pathogenesis of VKH and its therapeutic options.
Materials and methods
Human subjects. This study was approved by The Ethics Committee of Zhongshan Ophthalmic Center (Guangzhou, China, 2019KYPJ114). All participating individuals provided written informed consent. The relevant ethical regulations regarding human research participants were followed in accordance with the Declaration of Helsinki. All healthy individuals and patients were recruited from Zhongshan Ophthalmic Center. Individuals with comorbid conditions including cancer, immunocompromising disorders, hypertension, diabetes, and steroid use were excluded. The 12 healthy subjects (HC) consisted of 6 men and 6 women, with an average age of 39.9 years. In the first VKH patient cohort (Supplementary Table 1), there were seven men and five women aged between 16 and 65 years. No significant differences in gender or age were detected between the HC and VKH groups. The diagnosis of VKH disease was based on the revised diagnostic criteria established by the First International Workshop on VKH Disease 88 . In the second VKH patient cohort, 89 VKH patients (38 men and 51 women) were recruited and followed up to determine whether they developed complications such as cataract, glaucoma, choroidal neovascularization, and subretinal fibrosis. During the 3-month follow-up, 35 patients (39.3%) developed at least one complication (Supplementary Data 2), and they were classified as those with a poor prognosis 5,7,8 .
Cell isolation. Peripheral venous blood samples were collected from healthy donors or patients, heparinized, and PBMCs were isolated by Ficoll-Hypaque density gradient centrifugation for 30 min. Trypan blue staining was used to determine the viability and quantity of PBMCs in single-cell suspensions. For each sample, we ensured that cell viability exceeded 90% for the subsequent experiments. For each sample with more than 1 × 10 7 viable cells, one fraction of PBMCs was used for scRNA-seq analysis, and another fraction was allocated for single-cell assays for transposase-accessible chromatin sequencing (scATAC-seq).
scATAC-seq processing. Nuclei isolation, washing, and counting of nuclei suspensions were performed according to the manufacturer's protocol. Based on the number of cells and the desired final nuclei concentration, an appropriate volume of chilled Diluted Nuclei Buffer (10x Genomics; PN-2000153) was used to resuspend the nuclei. The resulting nuclei concentration was determined using a Countess II FL Automated Cell Counter. Nuclei were then immediately used to generate 10x single-cell ATAC libraries at Berry Genomics Co., Ltd. (Beijing, China). Libraries were uniquely barcoded and quantified using RT-qPCR. Each sample library was loaded on an Illumina NovaSeq 6000 at a 3.5 pmol/L loading concentration after pooling, in pair-end mode. Libraries were sequenced to either 90% saturation or 30,000 unique reads per cell on average. We followed the protocols for sample processing, library preparation, and instrument and sequencing settings on the 10x Chromium platform at https://support.10xgenomics.com/single-cell-atac. Raw sequencing data were converted to fastq format using Cell Ranger ATAC mkfastq (10x Genomics, v.1.0.0). scATAC-seq reads were aligned to the GRCh38 (hg38) reference genome and quantified using the Cell Ranger count function (10x Genomics, v.1.0.0).
scATAC-seq quality control. Arrow files were generated using ArchR v0.9.5 20 by reading in accessible read fragments for each sample, following the default arguments unless otherwise indicated. To ensure that each cell had high signal and was well-sequenced, we filtered out cells with fewer than 2500 unique fragments or a TSS enrichment below 9. Doublets were inferred and filtered using ArchR 20 . We also removed cells that mapped to blacklist regions based on the ENCODE project reference.
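The two QC cutoffs above (at least 2,500 unique fragments and a TSS enrichment score of at least 9) can be sketched as a simple filter. This is an illustrative sketch, not ArchR itself; the cell records and barcodes are hypothetical.

```python
# Hypothetical per-barcode QC records; in practice these metrics come from
# the fragment file and TSS enrichment computation (e.g., in ArchR).
def filter_cells(cells, min_fragments=2500, min_tss=9):
    """Keep barcodes passing both the fragment-count and TSS-enrichment cutoffs."""
    return [c for c in cells
            if c["unique_fragments"] >= min_fragments
            and c["tss_enrichment"] >= min_tss]

cells = [
    {"barcode": "AAAC", "unique_fragments": 5200, "tss_enrichment": 12.1},
    {"barcode": "AAAG", "unique_fragments": 1800, "tss_enrichment": 14.0},  # too few fragments
    {"barcode": "AAAT", "unique_fragments": 9000, "tss_enrichment": 4.2},   # low TSS enrichment
]
passed = filter_cells(cells)  # only "AAAC" survives both cutoffs
```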
scATAC-seq dimensionality reduction and clustering. We performed a layered dimensionality reduction approach using latent semantic indexing (LSI) and singular value decomposition (SVD), followed by Harmony 22 batch correction on a per-sample basis. Subsequently, single-cell accessibility profiles were clustered using Seurat's shared nearest neighbor (SNN) 21 graph clustering with 'FindClusters' at a default resolution of 0.8 on the harmonized LSI dimensions. During the reclustering steps, we ran 'FindClusters' at resolutions of 0.3-1.5 to better identify small clusters. All data were visualized using uniform manifold approximation and projection (UMAP) in two-dimensional space.
scATAC-seq gene activity scores. Gene activity scores were calculated based on accessibility within the gene body, at the promoter, and at distal regulatory elements using ArchR v0.9.5 with default parameters 20 . We additionally applied the imputation method MAGIC 89 to the resulting gene activity scores to reduce noise from scATAC-seq data sparsity.
scATAC-seq pseudobulk replicate generation and peak calling. For differential comparisons of clusters, cell types, and clinical states, non-overlapping pseudobulk replicates were generated from groups of cells using the 'addGroupCoverages' function with different arguments. These pseudobulk replicates were then used to generate the peak matrix (using 'addReproduciblePeakSet'). We further used MACS2 90 to perform peak calling. The pseudobulk peak set was used for downstream analysis.
scATAC-seq motif enrichment and motif deviation analysis. We performed motif enrichment and motif deviation analyses on the pseudobulk peak set. We used the Catalog of Inferred Sequence Binding Preferences (CIS-BP) motifs (from chromVAR) 40 , JASPAR2020 motifs 91 , and HOMER 65 to perform peak annotation. Additionally, the chromVAR deviation scores for these motifs were computed using the ArchR implementation.
scATAC-seq differential analysis. The pseudobulk peak set was used for differential analysis between different cell types and different clinical states using the 'getMarkerFeatures' function. We defined peak intensity as log 2 of the normalized read counts. We used the Wilcoxon test with Benjamini-Hochberg multiple-testing correction to calculate the p value and FDR between any pair of samples. Differentially accessible distal peaks were defined as FDR ≤ 0.1 and log2 fold change ≥ 0.5 92 .
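As a rough illustration of the thresholding step described above (not the ArchR implementation), one can apply a Benjamini-Hochberg correction to per-peak p values and then keep peaks with FDR ≤ 0.1 and log2 fold change ≥ 0.5. The p values and fold changes below are invented.

```python
def bh_fdr(pvals):
    """Benjamini-Hochberg adjusted p values (FDR), returned in input order."""
    n = len(pvals)
    order = sorted(range(n), key=lambda i: pvals[i])  # indices by ascending p
    adj = [0.0] * n
    prev = 1.0
    for rank_from_end, i in enumerate(reversed(order)):
        rank = n - rank_from_end            # 1-based rank of p value i
        prev = min(prev, pvals[i] * n / rank)
        adj[i] = prev
    return adj

# Hypothetical Wilcoxon p values and log2 fold changes for four peaks.
pvals = [0.001, 0.04, 0.20, 0.0005]
log2fc = [1.2, 0.3, 0.8, 0.9]
fdr = bh_fdr(pvals)
significant = [i for i in range(len(pvals))
               if fdr[i] <= 0.1 and log2fc[i] >= 0.5]  # peaks 0 and 3
```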
scATAC-seq Gene Ontology annotation and genomic region annotation. In the differential analysis, we used the "annotatePeak" function in the ChIPseeker package 23 to annotate the nearest genes in the peak regions with default arguments. Subsequently, we used the nearest genes as input to the Metascape webtool (www.metascape.org) 93 , which allows visualization of functional patterns of gene clusters.
Fig. 5 legend (continued): … (Supplementary Fig. 5d). The TF motif accessibilities are indicated by the chromVAR TF-motif bias-corrected deviation. f chromVAR bias-corrected deviation scores for the indicated TFs across CM pseudotime. Each dot represents the deviation score in an individual pseudotime-ordered scATAC-seq profile. The line represents the smoothed fit across pseudotime and chromVAR deviation scores. g Comparison of aggregate TF footprints for HIF1A in CM subsets. h Genome browser tracks showing single-cell chromatin accessibility at the IL1B and TNF loci. i Dot plots of the expression levels of the differential genes between normal and VKH in CM in the scRNA-seq dataset. j Comparison of aggregate TF footprints for NFKB1 and RELA in CM from HC and VKH. All data are aligned and annotated to the hg38 reference genome.
Statistical analyses were performed for DEG Gene Ontology and pathway enrichment. A p value of less than 0.05 was considered statistically significant.
scATAC-seq TF footprint analysis. Motif footprint analysis was performed by measuring Tn5 insertions at genome-wide motif occurrences, normalized by subtracting the Tn5 bias from the footprinting signal. For each peak set, we used CIS-BP motifs (from the chromVAR human_pwms_v1 set) 40 or JASPAR2020 motifs 91 to calculate motif positions. We normalized these footprints using the mean values in the ±200-250 bp flanks from the motif center. We then plotted the mean and standard deviation for each footprint pseudo-replicate.
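A simplified sketch of this normalization, assuming a per-position insertion signal and Tn5 bias track over a ±250 bp window around the motif center (all values invented):

```python
def normalize_footprint(signal, bias, positions, flank=(200, 250)):
    """Subtract the Tn5 bias track, then scale by the mean of the
    +/-200-250 bp flanking windows around the motif center."""
    corrected = [s - b for s, b in zip(signal, bias)]
    lo, hi = flank
    flank_vals = [c for c, p in zip(corrected, positions) if lo <= abs(p) <= hi]
    flank_mean = sum(flank_vals) / len(flank_vals)
    return [c / flank_mean for c in corrected]

positions = list(range(-250, 251))   # bp offsets from the motif center
signal = [1.5] * 501                 # flat hypothetical insertion signal ...
signal[250] = 0.9                    # ... with a protected dip at the center
bias = [0.5] * 501                   # flat hypothetical Tn5 bias track
norm = normalize_footprint(signal, bias, positions)
```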
scATAC-seq chromVAR deviation enrichment of human eye and skin tissues. For chromVAR deviation enrichment 40 , the .bw files were read and processed using the rtracklayer package 94 . We identified the cis-elements in each tissue and extended them by ±2.5 kb. A 'GRangesList' object was created with a feature set of peaks for downstream analysis. Next, we used the pipeline designed by Satpathy et al. 19 to calculate the co-accessibility in our scATAC-seq dataset for each single-cell group using Cicero 95 and created a connection matrix (Supplementary Fig. 6f). To identify co-accessible peaks in each tissue within our scATAC-seq data, we then overlapped the tissue peaks with the connection matrix. We kept peaks with more than 3 co-accessibility connections (Supplementary Fig. 6f). To compute the GC bias-corrected deviations, we used the chromVAR "computeDeviations" and "computeVariability" functions with default parameters (Supplementary Fig. 6f).
scATAC-seq peak-to-gene linkage analysis. To identify peak-to-gene links, we used the ArchR 'addPeak2GeneLinks' function, setting the parameter 'corCutOff' to 0.2 and 'reducedDims' to the batch-corrected dimensionality reduction results. The returned 'GRanges' object was used for visualization.
scATAC-seq GWAS SNP liftover and DAR mapping. We downloaded the GWAS data from the GWAS Catalog (https://www.ebi.ac.uk/gwas/) using the search term 'Vogt-Koyanagi-Harada disease'. The collected GWAS data were those identified by Hou et al. 45 . All gene loci of the SNPs were chosen for inferring peak-to-gene linkages. To map the GWAS SNPs onto our datasets, we used the UCSC utility liftOver (https://genome.ucsc.edu/cgi-bin/hgLiftOver) to lift the GWAS SNPs from hg19 to hg38. We then took the set of differentially accessible peaks (in the positive direction) for each cell type and annotated each SNP according to whether it overlapped one of these peaks. Only loci with SNP-to-gene correlations above 0.2 were kept.
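The final annotation step above reduces to an interval-containment check: after liftOver, each SNP is tested for overlap with a cell type's differentially accessible peaks. The coordinates and peaks below are hypothetical, purely for illustration.

```python
def snp_in_peaks(snp, peaks):
    """Return True if a (chrom, pos) SNP falls inside any (chrom, start, end) peak."""
    chrom, pos = snp
    return any(c == chrom and start <= pos <= end for c, start, end in peaks)

# Hypothetical differentially accessible peaks for one cell type (hg38).
peaks = [("chr1", 67_100_000, 67_100_500), ("chr6", 32_600_000, 32_601_000)]
hit = snp_in_peaks(("chr1", 67_100_250), peaks)   # inside the first peak
miss = snp_in_peaks(("chr1", 99_000_000), peaks)  # overlaps nothing
```

For genome-scale peak sets this linear scan would be replaced by an interval tree or a sorted-interval binary search, but the annotation logic is the same.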
scRNA-seq processing. The scRNA-seq libraries were barcoded and converted using the Chromium Single Cell 5′ Library, Gel Bead and Multiplex Kit, and Chip Kit (10x Genomics). According to the manufacturer's protocols, we prepared the single-cell RNA libraries using the Chromium Single Cell 5′ v2 Reagent kit (10x Genomics, 120237). The libraries were sequenced on an Illumina NovaSeq 6000 in pair-end mode. The quality of the libraries was checked using the FastQC software. The sequenced data were first processed and aligned to the GRCh38 reference for each sample using CellRanger software with default parameters (https://support.10xgenomics.com, version 3.1.0). The Cell Ranger count function in the CellRanger Software Suite (10x Genomics) was used to demultiplex and barcode the sequences derived from the 10x Genomics single-cell RNA-seq platform. The data were filtered and normalized, and dimensionality reduction and clustering were performed. We then used CellRanger aggr to aggregate all the samples for downstream analysis.
scRNA-seq quality control. For quality control, cells with more than 11% mitochondrial gene counts, or with fewer than 200 or more than 3000 detected genes, were filtered out using Seurat v3 21 . We further filtered out cell populations identified as red blood cells and platelets, which expressed the HBB, HBA1, PPBP, and PF4 genes.
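The per-cell cutoffs above (≤11% mitochondrial reads, 200-3,000 detected genes) can be sketched as follows. This is an illustrative filter with hypothetical cell records, not the Seurat implementation, and the boundary handling (inclusive cutoffs) is an assumption.

```python
def pass_qc(cell, max_mito=11.0, min_genes=200, max_genes=3000):
    """True if a cell record passes both the mitochondrial-fraction
    and detected-gene-count cutoffs (inclusive bounds assumed)."""
    return (cell["pct_mito"] <= max_mito
            and min_genes <= cell["n_genes"] <= max_genes)

cells = [
    {"barcode": "A", "pct_mito": 4.5, "n_genes": 1500},
    {"barcode": "B", "pct_mito": 18.0, "n_genes": 1200},  # high mito: likely dying cell
    {"barcode": "C", "pct_mito": 3.0, "n_genes": 4500},   # too many genes: likely doublet
]
kept = [c["barcode"] for c in cells if pass_qc(c)]  # only "A"
```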
scRNA-seq dimensionality reduction and clustering. After normalization, we scaled the data using the top 5000 most variable genes identified with the 'FindVariableFeatures' function in the R package Seurat v3. We performed principal component analysis using the variable genes, and the first 30 principal components (PCs) were used for batch-effect correction with the Harmony package on a per-sample basis. We then performed Seurat clustering on the Harmony batch-corrected dimensions at a resolution of 0.8. Finally, UMAP, a dimensionality-reduction visualization tool, was used to embed the dataset into two dimensions.
scRNA-seq differential analysis. For scRNA-seq differential expression analysis, we used the "FindAllMarkers" function of the Seurat package with default parameters. A p value of less than 0.05 was considered statistically significant 96 .
scRNA-seq signature score analysis. To assess the inflammatory state of circulating dendritic cells, we collected all marker genes of inflammatory CD1c+ dendritic cells 72 . Inflammatory signature scores were estimated for all cells as the average of the scaled Z-normalized expression of the genes in the list. The scores were calculated as follows: the score of the gene set in a given cell subset (denoted X) was computed as the sum of the UMIs of all signature genes expressed in X cells, divided by the sum of all UMIs expressed by X cells 38,97 .
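The UMI-fraction formula described above (summed UMIs of the signature genes divided by the total UMIs) can be sketched as follows; the gene names and counts are invented for illustration.

```python
def signature_score(umi_counts, signature_genes):
    """Fraction of a cell's (or subset's) total UMIs that fall on signature genes."""
    total = sum(umi_counts.values())
    in_set = sum(umi_counts.get(g, 0) for g in signature_genes)
    return in_set / total

# Hypothetical UMI counts for one cell; IL1B and CCL3 stand in for the
# inflammatory CD1c+ DC signature genes.
cell = {"IL1B": 30, "CCL3": 10, "ACTB": 40, "B2M": 20}
score = signature_score(cell, ["IL1B", "CCL3"])  # 40 / 100 = 0.4
```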
Multiomics data processing. To integrate the scRNA-seq and scATAC-seq datasets, we followed the integration pipeline described on the ArchR, Seurat, and Signac 98 websites. First, we implemented the ArchR built-in method to divide the total dataset into smaller groups of cells and performed separate alignments to save computational RAM. We then applied Seurat's canonical correlation analysis (CCA) to integrate our epigenetic and transcriptomic data. No further batch correction method was used. For this purpose, the integration analysis was based on the log-normalized and scaled scATAC-seq gene score matrix with the scRNA-seq gene expression matrix. By directly aligning cells from scATAC-seq with cells from scRNA-seq, the union of the 2000 most variable genes in each modality was used as input to Seurat's "FindTransferAnchors" and "TransferData" functions, with "weight.reduction" set to the dimensionality of the scATAC-seq dataset after Harmony batch correction and other parameters set to default. For each cell profiled by scRNA-seq and each cell profiled by scATAC-seq, we identified the nearest neighbor cell in the respective other modality by applying a nearest-neighbor search in the joint CCA L2 space. These nearest-neighbor-based cell matches were concatenated to obtain dataset-wide cell matches across both modalities.
Pseudotime analysis. To order cells in pseudotime, we identified a trajectory and then aligned single cells across the trajectory in the scATAC-seq dataset, the scRNA-seq dataset, and the merged dataset 42 . Based on the user-defined trajectory backbone, cellular trajectories were established in a low-dimensional space using batch-corrected LSI embeddings. CD14+ monocyte subsets were provided to ArchR 20 using the 'addTrajectory' function with "preFilterQuantile" and "postFilterQuantile" set to 0.95 and other parameters set to default. Then, a k-nearest neighbor algorithm was used to order cells based on the Euclidean distance of each cell to the nearest cluster's centroid. Cells were then assigned pseudotime value estimates, and a heatmap was plotted using differential feature z-scores associated with the pseudotime trajectory.
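The cell-ordering idea above (assigning each cell to its nearest trajectory-cluster centroid by Euclidean distance in the embedding) can be sketched minimally; this is not the ArchR implementation, and the 2-D coordinates and centroid labels are invented.

```python
import math

def nearest_centroid(cell, centroids):
    """Return (index, distance) of the centroid closest to the cell
    in the low-dimensional embedding."""
    dists = [math.dist(cell, c) for c in centroids]
    i = min(range(len(dists)), key=dists.__getitem__)
    return i, dists[i]

# Hypothetical centroids along a rest -> HLA -> pro-inflammatory backbone.
centroids = [(0.0, 0.0), (5.0, 5.0), (10.0, 0.0)]
idx, d = nearest_centroid((4.0, 4.5), centroids)  # closest to the middle state
```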
Identifying TF target genes. To identify significantly enriched TFs and their directly regulated target genes in VKH disease, we used the framework designed by Granja et al. 73 . We first identified a set of TFs whose motifs showed hypergeometric enrichment in differential peaks between VKH patients and healthy subjects and whose expression was correlated with the accessibility of their motifs (see above). Next, for a given TF and all identified peak-to-gene links, we subset these links to those containing the TF motif. For each peak-to-gene link, we determined whether both the peak and the gene were upregulated in the VKH group. In addition, for each gene with at least one differential peak-to-gene link, we summed the squared correlations of its links and defined that sum as the differential linkage score.
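Under the definition above, a gene's differential linkage score is the sum of the squared correlations over its differential peak-to-gene links. A toy sketch (the genes and correlations are invented; SIGLEC1 is borrowed from the text only as a label):

```python
# Hypothetical differential peak-to-gene links: (target gene, link correlation r).
links = [("SIGLEC1", 0.5), ("SIGLEC1", 0.3), ("ICAM1", 0.4)]

scores = {}
for gene, r in links:
    # Accumulate r^2 per gene to form the differential linkage score.
    scores[gene] = scores.get(gene, 0.0) + r * r
# SIGLEC1: 0.25 + 0.09 = 0.34; ICAM1: 0.16
```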
Receptor-ligand pair analysis. Receptor-ligand analysis between cDCs and other immune cell subpopulations was performed using CellphoneDB statistical analysis, v.2.0 75 . We extracted the gene matrix from scRNA-seq data between different clinical state groups to perform this analysis. We selected the ligand/receptor interactions with more significant (p < 0.05) cell-cell interaction pairs in disease states than in healthy groups.
RNA-seq library preparation, sequencing, and analysis. Total RNA was extracted from the blood samples following the manufacturer's instructions. The libraries were sequenced using an MGI-2000 sequencing instrument. The quality control process included adapter trimming and low-quality read removal using Trim Galore (v0.6.4; https://github.com/FelixKrueger/TrimGalore) with parameters '-q 20 -phred 33 -stringency 3 -length 20 -e 0.1'. The clean mRNA data were mapped to the human genome GRCh38 using Bowtie2 99 (v2.3.5.1; http://bowtie-bio.sourceforge.
Fig. 6 Epigenomic and transcriptional signatures of cDC subsets in VKH patients. a Dot plots of the expression levels of the differential genes between normal and VKH cDCs in the scRNA-seq dataset. b Genome browser tracks showing single-cell chromatin accessibility at the CCR7 and LAMP3 loci. c Box plot of the inflammatory signature score in all cells of each group. All p values were calculated using the Kruskal-Wallis test. d Enrichment of biological processes associated with the nearest genes of DARs in VKH compared to HC. e TF binding motif enrichment analysis results for DARs in VKH compared to HC, using the CIS-BP database from chromVAR. f Comparison of aggregate TF footprints for NFKB1 and RELA in cDCs from HC and VKH. g TF regulatory network showing the NF-κB family and its potential target genes in VKH. The width of an edge indicates the peak-to-gene linkage correlation. h Kaplan-Meier curve for patients with VKH (n = 89) stratified by putative RELA target genes (n = 328); average z score of log2(expression) (log-rank test p < 0.001). All data are aligned and annotated to the hg38 reference genome.
Survival analysis. For survival analysis, we matched FPKM gene expression to each sample ID. We computed row-wise z-scores for all genes that were identified as target genes for NFKB1 (n = 347) and RELA (n = 382). Next, we used the column means of this matrix to obtain an average z-score across all NFKB1 and RELA target genes. We then stratified donors based on this average expression score. We computed p values using the R package survival. The Kaplan-Meier curve was plotted using 'ggsurvplot' from the R package survminer.
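The z-score averaging described above can be sketched as follows. This is a minimal illustration rather than the authors' code: the expression matrix layout, the gene names, and the median-based high/low split are assumptions for the example.

```python
from statistics import mean, pstdev

def average_target_gene_zscore(expr, target_genes):
    """Average per-gene z-score across a set of target genes.

    expr: dict mapping gene -> list of FPKM values, one per sample (same order).
    target_genes: genes identified as NFKB1/RELA targets (hypothetical input).
    Returns one average z-score per sample (the column means of the z matrix).
    """
    z_rows = []
    for g in target_genes:
        vals = expr[g]
        mu, sd = mean(vals), pstdev(vals)
        # Row-wise z-score; a gene with zero variance contributes zeros.
        z_rows.append([(v - mu) / sd if sd > 0 else 0.0 for v in vals])
    n_samples = len(next(iter(expr.values())))
    return [mean(row[i] for row in z_rows) for i in range(n_samples)]

def stratify(scores):
    """Split samples into 'high'/'low' groups; a median split is assumed here."""
    med = sorted(scores)[len(scores) // 2]
    return ["high" if s >= med else "low" for s in scores]
```

The resulting group labels would then feed into a Kaplan-Meier fit and log-rank test in the survival package.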
Statistics and reproducibility. Statistical analysis of the frequencies of immune cell subpopulations between groups was performed using one-way ANOVA with Bonferroni's post-hoc correction in GraphPad Prism 8.0. Two-sided p values of less than 0.05 were considered statistically significant. All the statistical details for the experiments can be found in the figure legends as well as in the Method Details section. When comparing gene expression levels between groups, we estimated p values using the two-sided Wilcoxon test in the R package ggpubr with default parameters. For GO biological process and pathway enrichment, p values were derived by a hypergeometric test with the default parameters in the Metascape webtool. Each figure legend includes details of the number of biological replicates and the assays used.
Reporting summary. Further information on research design is available in the Nature Research Reporting Summary linked to this article.
Data availability
The scRNA-seq, scATAC-seq and bulk RNA-seq data analyzed in the article are available from the corresponding author upon request under the Project Accession No. PRJCA004696 and GSA Accession No. HRA001643 (Beijing Institute of Genomics).
Chiral Pyrazolo[4,3-e][1,2,4]triazine Sulfonamides—Their Biological Activity, Lipophilicity, Protein Affinity, and Metabolic Transformations
Referring to our previous laboratory results on tyrosinase and urease inhibition by pyrazolo[4,3-e][1,2,4]triazine sulfonamides, we examined here in silico, by the molecular docking method, the mechanism of action of the investigated pyrazolotriazine sulfonamides at the molecular level. The studied compounds, evaluated for their cytotoxic effect against cancer cell lines (MCF-7, K-562) and for recombinant Abl and CDK2/cyclin E kinase inhibitory potency, turned out to be inactive in these tests. The pyrazolotriazines were also investigated with respect to their lipophilicity and plasma protein binding using HPLC chromatography under isocratic conditions. The observed small affinity for plasma proteins could be advantageous in potential in vivo studies. Moreover, the compounds were sensitive to metabolic transformations with phase I enzymes, which led to hydroxylation and dealkylation products, whereas phase II transformations did not occur.
Introduction
Among 1,2,4-triazines condensed with a five-membered heterocycle, the pyrazolo[4,3-e][1,2,4]triazine system is a novel scaffold and an important source for the construction of bioactive molecules. Moreover, it has been studied less than the other fused pyrazolotriazines. Its natural derivatives, such as pseudoiodinine, nostocine A and fluviols A-E, have been identified among the extracellular metabolites of Pseudomonas fluorescens var. pseudoiodinine [1] and the cyanobacterium Nostoc spongiaeforme [2]. These compounds inhibit the growth of gram-positive and gram-negative bacteria and exhibit antitumor activity [3]. The lack of significant biological properties in the group of simply substituted pyrazolotriazine derivatives forced further functionalization of the heterocyclic core. Combining the naturally occurring pyrazolo[4,3-e][1,2,4]triazine ring system with pharmacophore groups enabled the design of new derivatives with higher potential biological activity. An important group among pharmacophores is the sulfonamide moiety, characteristic of many chemical compounds used in medicine [4,5]. Its importance stems from the diverse biological activity of such substituted compounds, which includes antibacterial, antimalarial, hypotensive, diuretic, hypoglycemic, antithyroid, antiparasitic, anti-inflammatory and antiglaucomatous properties [6]. Literature reports show that sulfonamides can act as inhibitors of enzymes such as phosphodiesterase type 5 (PDE5) [7], carbonic anhydrase [8,9], tyrosinase [10,11] or cyclin-dependent kinases (CDKs) [12,13]. Furthermore, studies have shown that sulfonamides may exhibit a cytotoxic effect by inhibiting the activity of carbonic anhydrase (CA; EC 4.2.1.1) in tumor cells [14][15][16].
It has been shown that two isoenzymes of carbonic anhydrase, such as CA IX and CA XII, are clearly associated with significant overexpression in many tumors [17,18], and they are involved in key processes associated with the tumor progression and the response to treatment [19].
In order to characterize the structural and electronic parameters, as well as the reactivity and stability parameters, of all investigated compounds 8a-m, theoretical calculations at the DFT level were performed. Our previous pharmacological studies showed that all chiral sulphonamides 8a-j exhibited significant inhibitory activity on mushroom tyrosinase and jack bean urease [20]. Therefore, molecular docking studies were carried out to investigate in silico the affinity for the active sites of protein enzymes selected from the PDB database.
Absorption, distribution and other pharmacokinetic properties of the molecules were considered in the next step of the drug discovery process. High-throughput measurements of membrane interactions and plasma protein binding (PPB) are performed using different chromatography techniques. Reversed-phase chromatography on octadecyl (C-18), cholesterol or phosphatidylcholine (immobilized artificial membrane (IAM)) stationary phases is used for lipophilicity determination [21]. To evaluate the extent of compound-to-plasma protein binding, HPLC affinity chromatography with immobilized human serum albumin (HSA) and α1-glycoprotein (AGP) is applied [21]. Continuing our research on the chiral pyrazolo[4,3-e][1,2,4]triazine sulfonamides [22], here we discuss their biological activity, lipophilicity and metabolic transformations. The isocratic HPLC chromatography studies include the assessment of the compounds' lipophilicity using octadecyl (C-18) and IAM stationary phases, while the extent of compound-to-protein binding was determined in relation to human serum albumin (HSA) and α1-glycoprotein (AGP). Moreover, the characterization of the molecular structures and electronic parameters of all investigated compounds was carried out using theoretical calculations at the DFT/B3LYP/6-311++G(d,p) level. Virtual screening by the molecular docking method was performed based on the obtained results of biological tests for the analyzed sulfonamides.
Chemistry
The multistep synthesis of the target sulfonamides 8a-m is presented in Scheme 1. Briefly, in the first step, oxime 2 was obtained and readily transformed into ketone 3 in a good yield [23,24]. Next, the appropriate hydrazone 4 was prepared as a key intermediate for the preparation of 1H-pyrazolo[4,3-e][1,2,4]triazine 5. The hydrazone 4 could be converted into derivative 5 under conventional heating (10% HCl, EtOH, reflux, 1 h) [25] or under solvent-free reaction conditions, according to our previously published procedure [26]. In the next step, using palladium-catalyzed cross-coupling reaction conditions and 2-ethoxyphenylboronic acid in the presence of copper(I) 3-methylsalicylate, derivative 6 was furnished in an excellent yield [27]. The chlorosulfonylation of compound 6 in neat chlorosulfonic acid at 0 °C proceeded smoothly and selectively at the 5′-position of the phenyl ring to give the desired product 7 [28], a key intermediate for the synthesis of the final sulfonamides 8a-m in a high yield.
Theoretical Calculations
The theoretical calculations at the DFT/B3LYP/6-311++G(d,p) level for all investigated sulfonamides were carried out in order to characterize the molecular structures and electronic parameters of molecules 8a-m. In addition, the theoretical calculations provided the molecular structures of the analyzed molecules for the molecular docking study. The view of molecules 8a-m in the conformation obtained after energy minimization and geometric parameter optimization, together with the dipole moment vector, is shown in Figure 1. The conformation of the common pyrazolo[4,3-e][1,2,4]triazine-ethoxybenzene-sulfonamide structural part of the molecules was described by five torsion angles: φ1 = N2-C3-C12-C13, φ2 = C12-C13-O18-C19, φ3 = C13-O18-C19-C20, φ4 = C17-C16-S21-N24 and φ5 = C16-S21-N24-X (Figure 2). The values of these torsion angles are presented in Table 1. The torsion angle φ1 shows that the pyrazolotriazine and benzene rings were twisted relative to each other, adopting the gauche conformation in all molecules. The ethoxy substituent of the benzene ring had a trans-trans conformation, as shown by the torsion angles φ2 and φ3. The sulfonamide part of molecules 8a-m adopted a gauche-gauche conformation, confirmed by torsion angles φ4 and φ5. It should be noted that in all molecules a very similar conformation of their common structural part was observed, with greater differentiation in the conformation of the sulfonamide fragment, where the torsion angles φ4 and φ5 varied within ranges of 33° and 53°, respectively. Moreover, this conformation did not depend on the type of sulfonamide substituent or its chirality.
It should be noted that the conformations of 8a-m obtained from theoretical calculations were similar to that observed in the crystalline state for the structurally similar 3-(1,3-dimethyl-1H-pyrazolo[4,3-e][1,2,4]triazin-5-yl)-4-ethoxybenzenesulfonamide, where the torsion angles φ1, φ2, φ3 and φ4 were 38.1(3)°, 172.94(17)°, 178.53(18)° and −109.85(19)°, respectively [29].
The electronic parameters theoretically calculated for compounds 8a-m in the conformation of the molecules obtained in their minimum energy are presented in Table 2.
For the reactivity and stability descriptors of molecules 8a-m, the frontier orbitals HOMO and LUMO were used. The energies of the HOMO and LUMO orbitals were very similar for all investigated compounds, varying within ranges of 9.124 and 2.253 kcal/mol for EHOMO and ELUMO, respectively. It is worth noting that compound 8m had the lowest ionization potential of 145.160 kcal/mol and the lowest energy gap ∆E = ELUMO − EHOMO of 82.304 kcal/mol, while for the other compounds the energy gap varied within a narrow range from 87.355 kcal/mol for 8l to 89.927 kcal/mol for 8a. A graphical representation of the wave functions of the HOMO and LUMO orbitals for compounds 8a and 8j, which exhibited the highest inhibitory activity among the tested compounds on mushroom tyrosinase and jack bean urease, respectively, is shown in the corresponding figure. Theoretical calculations showed that all molecules were polar, with dipole moment values ranging from 6.287 D for 8e to 8.453 D for 8i. The dipole moment vectors were directed in most cases from the ethoxybenzenesulfonamide substituent to the pyrazolo[4,3-e][1,2,4]triazine system (Figure 1). The value and spatial orientation of the dipole moment vector are strictly connected with the net charge distribution on the atoms. The net atomic charges calculated using the NBO method are presented for selected atoms in Table 3. As expected, relatively large negative charges were observed at the nitrogen and oxygen atoms of the pyrazolo[4,3-e][1,2,4]triazine and sulfonamide systems, while the largest positive charge was observed at the sulphur atom. The atomic charges were very similar in all analyzed molecules; however, slight differences were observed in the charge on the amine nitrogen atom N24, depending on the type of substituent of the sulfonamide group.
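The descriptors quoted above follow from simple relations between the frontier orbital energies. A minimal numerical sketch is given below; the EHOMO value for 8m is back-calculated here from the reported ionization potential, assuming the common Koopmans approximation IP ≈ −EHOMO, so it is illustrative rather than taken from Table 2.

```python
def energy_gap(e_homo, e_lumo):
    """Frontier-orbital energy gap: ΔE = E_LUMO − E_HOMO (here in kcal/mol)."""
    return e_lumo - e_homo

def ionization_potential(e_homo):
    """Koopmans' theorem approximation: IP ≈ −E_HOMO."""
    return -e_homo

# Illustrative values for compound 8m: IP = 145.160 kcal/mol and
# ΔE = 82.304 kcal/mol are reported in the text; E_LUMO is implied.
e_homo_8m = -145.160
e_lumo_8m = e_homo_8m + 82.304
```

With these assumed orbital energies, the two functions reproduce the reported gap and ionization potential for 8m.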
Antiproliferative Activity against Tumor Cell Lines
Scientific reports indicated that some pyrazolo[4,3-e] [1,2,4]triazine sulfonamides showed moderate anti-cancer properties and represented new scaffolds of protein kinase inhibitors, which are still of interest for oncological drug discovery, especially because of the emerging resistance to currently used drugs [30,31]. Therefore, cytotoxic activity was suspected for these molecules. We examined the effect of compounds 8a-l on the viability of breast (MCF-7) and leukemia (K-562) cancer cells, the inhibitory potency against protein kinases Abl and CDK2/cyclin E, as well as protein p53 as a tumor suppressor that triggers apoptosis via multiple pathways, including cell cycle arrest and the regulation of autophagy through transactivating proapoptotic genes and repressing antiapoptotic genes. The obtained results are presented in Table 4 as IC 50 . Unfortunately, none of the compounds expressed cytotoxicity within the tested concentration range.
a Data represent the mean ± SD of each compound from four independent experiments.
Lipophilicity and Protein Affinity
Considering the lack of cytotoxicity against the selected tumor cell lines in the in vitro tests presented above, we undertook a detailed analysis of the physicochemical properties of our compounds, namely lipophilicity and protein affinity, which was expected to explain the reason for these results.
At the beginning, UV-Vis spectra of compound 8g were recorded to find out the influence of the solution pH on the electronic structure of the compound and its retention in the chromatographic system. No influence of the solution pH on the electronic structure of the compounds was demonstrated. As pH = 7.4 is recommended for IAM chromatographic studies, the C-18 chromatographic evaluation was also made at this physiological pH. There were regular changes in the retention of the compounds as a function of the organic modifier (MeOH, ACN) content in the mobile phase for both chromatographic systems. That relationship is described by the Soczewiński-Wachtmeister equation [32]:

log k = log kw − Sϕ

where ϕ is the volume fraction of the organic modifier in the mobile phase, log kw is the intercept and S is the slope of the regression curve. Log kw refers to the retention parameter of a compound with pure water as the mobile phase. The S and log kw quantities, estimated by the extrapolation procedure, are commonly applied as lipophilicity descriptors [33][34][35].
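The extrapolation procedure amounts to a linear least-squares fit of log k against ϕ, with the intercept giving log kw. A sketch of that fit is shown below; the retention data used in the usage example are hypothetical, not values from Table 5.

```python
def fit_sw(phi, log_k):
    """Least-squares fit of the Soczewinski-Wachtmeister equation
    log k = log kw - S*phi.

    phi: volume fractions of the organic modifier in the mobile phase.
    log_k: measured log k values at those fractions.
    Returns (log_kw, S): intercept extrapolated to phi = 0 and the slope S.
    """
    n = len(phi)
    mx = sum(phi) / n
    my = sum(log_k) / n
    sxx = sum((x - mx) ** 2 for x in phi)
    sxy = sum((x - mx) * (y - my) for x, y in zip(phi, log_k))
    slope = sxy / sxx            # equals -S in the equation above
    intercept = my - slope * mx  # log kw: retention with pure water eluent
    return intercept, -slope
```

For example, hypothetical measurements perfectly obeying log k = 3.0 − 4.0ϕ would return log kw = 3.0 and S = 4.0.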
The obtained data are presented in Table 5. The estimated log kw values for the two phases were significantly different, but they were highly correlated; the relationship between the two sets of log kw values is expressed by a linear regression equation. The differences between the log kw(RP-18) and log kw(IAM) values were the result of different interactions of the considered pyrazolo[4,3-e][1,2,4]triazine sulfonamides with the octadecyl and IAM phases. Ong and Pidgeon assumed that partitioning was the principal retention mechanism in IAM retention, including both hydrophobic and polar interactions with the solvated layer(s) of the stationary phases and the ionizable groups of the immobilized phospholipids [36]. The studies showed that the compounds were characterized by a weaker affinity to the immobilized phospholipids than to the octadecyl phase. Significant electrostatic interactions with the IAM phase resulted in different retention of the compounds and different log kw values in the IAM phase compared with those obtained using the C-18 one. A similar trend was observed for other pyrazolo[4,3-e][1,2,4]triazines [37]. A significant though relatively stable difference between the log kw values for the two phases (∆ log kw in the range of 1-1.3) may suggest that it resulted mainly from interactions of the heterocycle-sulphonamide moiety (the unmodified element of the considered compounds) (Table 5).
The molar refractivity (MR) and the polar surface area (tPSA) were estimated according to the fragmentation method introduced by Crippen [38]. ACD log P was calculated using the ACD/Labs methodology [39]. C log P and M log P were estimated for comparison (Table 6). The log P values calculated using the selected methods proved to be significantly different but collinear (the correlation coefficient r was in the range of 0.93-0.98). In particular, the Moriguchi estimation gave low log P values. The obtained log P values expressed the lipophilicity changes of the studied set of compounds, but they did not describe the actual octanol-water distribution coefficient. This phenomenon has been observed for some groups of newly synthesized and studied compounds as well as for applied drugs [40]. The results depended on the algorithm and base data used in the log P calculation [41]. In this group of compounds, the differences were particularly pronounced.
The log P values obtained by the numerical methods C log P and M log P were smaller than the log kw values obtained by RP C-18 HPLC or IAM chromatography, but they were collinear, as observed for some groups of compounds [42]. The relationship between log kw and C log P is expressed by the following equation:

log kw = 0.50818(±0.0539) Clog P + 2.3107(±0.0504), n = 13, r² = 0.8898, s = 0.1238 (3)

The obtained chromatographic and numerical data indicated that the lipophilicity of the compounds increased with the length of the alkyl chain of the substituent at the -SO2N- group. The highest lipophilicity, expressed by the chromatographic as well as the computational descriptors, was found for the compounds with the leucine moiety. A slightly weaker lipophilic character was exhibited by the compounds with the 2-amino-3-methyl-1-hydroxybutyl substituent. The lowest lipophilicity was found for the compounds with amine and two hydroxyl groups (8g, 8h and 8k); their properties were the result of large polar surface areas (tPSAs) (Table 6). The compounds forming pairs of enantiomers had identical lipophilic-hydrophilic characteristics. The tPSAs of the compounds were on the borderline of or above the range indicated as beneficial for potential drugs [43]. a HBDH = the number of hydrogen bond donor protons; M log P = the Moriguchi estimation of log P; C log P; ACD log P = log P calculated using the ACD/Labs algorithm; tPSA = the polar surface area; CMR = the molar refractivity; and pKa = −log Ka.
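As a quick numerical illustration of Equation (3), the fitted regression can be applied directly; the Clog P input value below is arbitrary and does not correspond to any particular compound from Table 6.

```python
def log_kw_from_clogp(clogp):
    """Eq. (3): log kw = 0.50818 * Clog P + 2.3107 (n = 13, r^2 = 0.8898).

    Predicts the chromatographic lipophilicity descriptor log kw from the
    computed Clog P value, within the calibration range of the data set.
    """
    return 0.50818 * clogp + 2.3107
```

For an arbitrary Clog P of 2.0, the equation predicts log kw ≈ 3.33.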
The plasma protein affinity of the compounds was analyzed using immobilized human serum albumin (HSA) and α1-glycoprotein (AGP), which are the main blood proteins. Measurements were performed under isocratic conditions using propan-2-ol/ammonium acetate buffer at pH = 7.4 (15:85, v/v) as the mobile phase. The log k values of all compounds are presented in Table 7. Based on the calibration curve log K = f(log k) (Equations (4) and (5)) for a set of drugs with a known percentage of protein binding, the log K values of the compounds were calculated. Next, they were converted to a percentage of plasma protein binding (% PPB) [44]. The results are presented in Table 7. They show that the compounds bound poorly to the plasma proteins, particularly to AGP. The calculated pKa values of the compounds and the analysis of the distribution of the various microspecies indicated that the compounds existed in their molecular (neutral) forms at pH = 7.4 and, therefore, bound poorly to the proteins [45,46]. Only in the case of compound 8m was 4% of the cationic form found at pH = 7.4, associated with protonation of the piperazine ring. This was revealed by the greater extent of compound-to-AGP binding, confirming the general trend that glycoproteins bind bases better than the other microspecies. The quantitative structure-binding relationship analysis confirmed the positive contribution of lipophilicity to HSA binding. The compounds of the highest lipophilicity exhibited the strongest affinity for HSA (compounds 8i, 8j and 8m) [46]. The HSA and AGP binding was also largely correlated with the molar refractivity (MR) of the compounds.
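The calibration workflow, that is, converting known % PPB values to log K, fitting log K against log k, and converting predictions back to a percentage, can be sketched as below. The linearization log K = log(%PPB/(101 − %PPB)) is the form used by Valko and co-workers (the 101 avoids a singularity at 100% binding); the calibration constants themselves depend on the drug set and columns, so none of the numbers here are the paper's Equations (4) and (5).

```python
import math

def logK_from_ppb(ppb):
    """Linearized plasma protein binding (Valko-style):
    log K = log10(%PPB / (101 - %PPB))."""
    return math.log10(ppb / (101.0 - ppb))

def ppb_from_logK(logK):
    """Invert the linearization back to a binding percentage."""
    K = 10.0 ** logK
    return 101.0 * K / (1.0 + K)

def calibrate(log_k, logK):
    """Fit log K = a*log k + b over the calibration drugs (least squares)."""
    n = len(log_k)
    mx = sum(log_k) / n
    my = sum(logK) / n
    a = sum((x - mx) * (y - my) for x, y in zip(log_k, logK)) \
        / sum((x - mx) ** 2 for x in log_k)
    b = my - a * mx
    return a, b
```

A measured log k for a test compound is then mapped through the fitted line and `ppb_from_logK` to obtain its estimated % PPB.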
Summing up, the above-described chromatographic studies (IAM and RP-18) and the calculated in silico log P values gave very different values of log k w and log P, but they were collinear and described well the lipophilicity changes in the test series of compounds. The studied compounds were characterized by a small affinity for plasma proteins, which could be advantageous in potential in vivo studies.
Susceptibility to Metabolic Transformation
In looking for the reasons for the poor cytotoxicity of the studied compounds, we also considered their susceptibility to the metabolic transformations that are possible in tumor cells. Such metabolism is able to transform active compounds into less active or inactive metabolites, which could be the case for the investigated agents. Three compounds among the studied pyrazolo[4,3-e][1,2,4]triazine derivatives, 8m, 8i and 8j, were selected for the studies on their susceptibility to phase I and phase II metabolic transformations, which were performed with rat liver microsomes.
2.5.1. Phase I Metabolism of 8m, 8i and 8j with Rat Liver Microsomes

Phase I metabolism was considered in the first step of the studies on the transformations of compounds 8m, 8i and 8j. Each compound was incubated with rat liver microsomes (RLMs) and NADPH as a cofactor of phase I metabolism, and the reaction mixtures were monitored by HPLC analysis with UV-Vis detection. The chromatograms recorded after 60 min of incubation are shown for each compound in Figure 4. The chromatographic peaks were analyzed by their ESI-MS spectra, and the m/z values related to the HPLC bands are presented in Table 8. The UV-Vis spectra of 8m, 8i and 8j and their metabolites obtained with RLM in the presence of NADPH are presented in Figure 5. After 60 min of incubation with 8m, we observed in Figure 4a two metabolite peaks of very low intensity: P1 at Rt near 17 min and P2 near 19 min, whereas the Rt of the substrate was much lower. Analyses of the metabolite spectra also indicated strong changes in comparison with that of the substrate 8m: bathochromic shifts were observed in the spectra of both metabolites P1 and P2. Thus, the strong differences in the metabolites' Rt and UV-Vis spectra suggest significant modifications in the structure of the heterocyclic part of the 8m metabolites. The analysis of their ESI-MS spectra also showed that 8m underwent metabolic transformations to two products: P1 of m/z = 448.1 and P2 of m/z = 404.1 (Table 8). Therefore, the attachment of an oxygen atom and the loss of the ethyl group were postulated for the metabolites P1 and P2 of the 8m substrate, respectively. The results indicated a higher polarity of both 8m metabolites and significant changes in their chromophore structures. Thus, the metabolic transformations would occur in the N-methylpiperazinyl substituent as well as in the molecule core. All of these together are suspected to improve the metabolite penetration in the living organism.
Three metabolite peaks of different intensities at Rt near 6.5 min (P3/P6), 9 min (P4/P7) and 13.5 min (P5/P8) were observed after 60 min of incubation of 8i and 8j (Figure 4b,c). The UV-Vis spectra of one metabolite of both compounds (P3/P6) were very similar to those of the substrates, whereas the spectra of the next two metabolites of the 8i and 8j substrates (P4, P5 and P7, P8) were of low intensity. Therefore, significant changes in the structure of the heterocyclic chromophore in these metabolites were proposed. The analysis of the ESI-MS spectra showed that 8i and 8j underwent metabolic transformations to three products described by the following m/z values: 465.1, 435.1 and 421.1 (Table 8). The comparison of the described metabolite characteristics indicated that the long aliphatic chains in 8i and 8j resulted in one more product than for 8m. Metabolites P3 and P6, with a chromophore identical to that of the substrate and a polarity higher than that of the substrate, would be the result of hydroxylation without changes in the chromophore; therefore, it would occur in the aliphatic chain. The next products, P4, P5 and P7, P8, were suspected to be the result of dealkylations. Similar to 8m, the more polar hydroxylation products as well as the dealkylation metabolites should be expected not only to be distributed more easily in the organism, but also to interact more readily with molecular targets, including serum albumins.
Phase II Metabolism with Rat Liver Microsomes
The compounds 8m, 8i and 8j were also studied with respect to their phase II metabolism. The incubation with rat liver microsomes was performed in the presence of glucuronyltransferase (UGT) and the cofactor of this enzyme family, UDPGA. As a result, we did not observe any glucuronidation product of 8m, 8i and 8j after 60 min of incubation of these compounds with rat liver microsomes (RLMs) in the presence of UDPGA. The ESI-MS analysis also did not indicate the mass increase characteristic of a glucuronidation product, equal to m/z + 176 Da. Therefore, the results demonstrated that none of the studied compounds was a substrate for UGT in RLM.
Finally, the three studied compounds 8m, 8i and 8j were incubated with rat liver microsomes (RLMs) together with two activating cofactors, NADPH and UDPGA, to stimulate both phase I and phase II metabolism. However, only the products of phase I (m/z + 16, m/z − 14 and m/z − 28) were found in the chromatograms with ESI-MS detection. The results indicate that the phase I metabolites did not undergo transformation in the following phase II metabolism in the presence of UGTs.
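The metabolite assignments above rest on the characteristic m/z shifts relative to the parent ion (+16 for hydroxylation, −14 for demethylation, −28 for deethylation, +176 for a glucuronide). A hedged sketch of this bookkeeping is shown below; the tolerance and transformation names are assumptions, and the parent m/z of 432.1 used in the example is back-calculated from the reported P1 (448.1 = +16) and P2 (404.1 = −28) values for 8m, not stated directly in the text.

```python
# Characteristic parent-to-metabolite mass shifts mentioned in the text;
# the labels are common biotransformation names, assumed for illustration.
PHASE_SHIFTS = {
    +16.0:  "hydroxylation (phase I, +O)",
    -14.0:  "demethylation (phase I, -CH2)",
    -28.0:  "deethylation (phase I, -C2H4)",
    +176.0: "glucuronidation (phase II, +C6H8O6)",
}

def classify_metabolite(parent_mz, metabolite_mz, tol=0.3):
    """Assign a putative transformation from the parent->metabolite m/z shift.

    tol is an assumed matching tolerance in Da for low-resolution ESI-MS data.
    """
    shift = metabolite_mz - parent_mz
    for ref, name in PHASE_SHIFTS.items():
        if abs(shift - ref) <= tol:
            return name
    return "unassigned"
```

With a parent m/z of 432.1 for 8m, P1 (448.1) classifies as hydroxylation and P2 (404.1) as deethylation, consistent with the interpretation in the text.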
In conclusion, we proposed similar metabolic pathways of pyrazolo [4,3-e] [1,2,4]triazine sulfonamides for compounds 8m, 8i and 8j, which are presented in Figure 6. The studied compounds were sensitive to metabolic transformations with phase I enzymes, which led to oxidized metabolites as hydroxylation and dealkylation products. Phase II transformations were not demonstrated either directly by UGT or upon activation with phase I enzymes. Therefore, the best known detoxification pathway, UGT-mediated glucuronidation, was not observed in the case of the studied compounds. It cannot be excluded that the proposed phase I metabolites were responsible for the deactivation.
Molecular Docking
The investigated pyrazolo[4,3-e][1,2,4]triazine sulfonamides showed no antiproliferative activity against tumor cell lines in the in vitro tests. However, our previous pharmacological studies showed that all chiral sulphonamides 8a-j exhibited significant inhibitory activity on mushroom tyrosinase and jack bean urease [20]. Therefore, the biological activity of 8a-j prompted us to investigate in silico the mechanism of action at the molecular level of the investigated pyrazolotriazine sulfonamides by the molecular docking method.
All chiral sulphonamides 8a-j exhibited significant inhibitory activity on mushroom tyrosinase, with IC 50 values in the range of 27.9-40.17 µM, comparable to the activity of kojic acid (IC 50 = 16.69 µM), which was used as a reference compound in the test [20]. The most active compounds 8j and 8b showed inhibition of tyrosinase at IC 50 of 27.9 and 30.76 µM levels, respectively. Moreover, compounds 8a-j were tested for their inhibitory effects on jack bean urease, exhibiting inhibitory activity that changed from an IC 50 of 0.037 µM for 8a to 0.084 µM for 8b, better than the activity of reference thiourea with an IC 50 value of 20.7 µM [20].
Tyrosinase is the enzyme which is responsible for the synthesis of melanin, a ubiquitous pigment in living organisms. In the crystalline state, Agaricus bisporus mushroom tyrosinase occurs as tetramer H 2 L 2 subunits in complex with its inhibitor tropolone, forming a pre-Michaelis complex with the enzyme and a binuclear copper binding site in H subunit (PDB ID: 2Y9X) [47].
The results of the molecular docking of 8j and 8b, the most active compounds in the in vitro testing, to the binding site of tyrosinase are presented in Figure 7. The ligands 8j and 8b bound to the active site with ChemPLP scoring function values of 62.70 and 63.50, respectively. These values were significantly better than the value of 50.14 obtained for the re-docked molecule of tropolone, which indicates a greater affinity of 8j and 8b for the enzyme than that observed for tropolone. The molecule of 8j bound to the binding site of tyrosinase by the intermolecular hydrogen bonds N2···C (PHE264A) and C11···O (SER282A), with distances between the interacting atoms of 3.152 and 2.921 Å, respectively. Moreover, short contacts between the hydroxyl group of the (R)-(-)-leucinol substituent and the Cu2+ ions, with distances of 2.493 and 3.474 Å, were observed. The molecule of 8b interacted with the active site of tyrosinase through the O22···C (HIS263A; 3.001 Å) hydrogen bond and, similar to 8j, short contacts to the Cu2+ ions with distances of 2.598 and 2.475 Å were observed.
Urease is a nickel-containing enzyme catalyzing the hydrolysis of urea, and its inhibitors play an important role in the therapy of human and plant disorders [20]. For the docking study, the crystal structure of a jack bean urease complex with acetohydroxamic acid, 1,2-ethanediol and two Ni2+ ions in the binding site was used (PDB ID: 4H9M) [48]. The molecular docking study showed that compounds 8a-j, with experimentally confirmed inhibitory activity toward mushroom tyrosinase and jack bean urease, had a better affinity to these enzymes than their ligands in the crystalline state. The Cu2+ and Ni2+ ions present in the active sites of tyrosinase and urease, respectively, were important centers in the interactions of the ligands with the enzymes.
Conclusions
Theoretical calculations performed at the DFT/B3LYP/6-311++G(d,p) level showed that all investigated pyrazolo[4,3-e][1,2,4]triazine sulfonamides exhibited large similarity in their structural and electronic parameters (torsion angles, dipole moments and net atomic charges), and they were characterized by similar reactivity and stability indexes. Therefore, it can be assumed that they should behave similarly under physiological conditions.
The molecular docking of the most active compounds, in terms of inhibitory effects on mushroom tyrosinase and jack bean urease, revealed that the analyzed compounds had high affinities for the active sites of these enzymes; the Cu2+ and Ni2+ ions in the binding pockets of tyrosinase and urease, respectively, may play a key role in the mechanism of these enzymes' inhibition.
All presented sulfonamides were obtained using a multistep procedure and appeared to be inactive against cancer cell lines. They also did not show kinase inhibitory potency toward Abl or CDK2/cyclin E.
The compounds were characterized by a small affinity for plasma proteins, which could be advantageous in potential in vivo studies. Their lipophilicity may be connected with the large polarities of the molecules, confirmed by large values of dipole moments theoretically calculated for the investigated sulfonamides using the DFT method.
The investigated pyrazolotriazines, being not active against the selected tumor cell lines, were sensitive to metabolic transformations with phase I enzymes, which led to the hydroxylation and dealkylation products, whereas phase II transformations did not occur. It cannot be excluded that the observed phase I metabolites would be responsible for the modification of the final activity of the studied compounds. Moreover, polar metabolites would not only be easier for distribution in the organism, but they would also interact easier with molecular targets, including the selected plasma proteins.
Synthesis of Sulfonamides 8a-m
A mixture of chlorosulfonyl chloride 7 (100 mg, 0.29 mmol) and the appropriate amine (1 mmol) in anhydrous acetonitrile (5 mL) was stirred overnight at room temperature, and then the reaction mixture was concentrated in vacuo to afford the crude sulfonamide as a yellow solid. The residue was purified on silica gel using a mixture of CH2Cl2:EtOH (25:1) as the eluent to give the title compounds as yellow solids.

The UV-Vis spectra were recorded in a water (phosphate buffer)-methanol (1:1) solution by means of a UV-160A Shimadzu spectrophotometer. Quartz cuvettes (1 cm) were used for the measurements.
Affinity Chromatography
Human serum albumin (HSA) immobilized on a 5 µm, 100 × 3 mm silica gel column (Chiralpac) and α1-acid glycoprotein (AGP) immobilized on a 5 µm, 100 × 4 mm silica gel column (Chiralpac) were used. The mobile phase was composed of a 50 mM ammonium acetate solution (pH = 7.4) and propan-2-ol at 85/15 (v/v). Its flow rate was 0.5 mL min−1 at room temperature. The measurements were conducted at 280 nm. The retention time of an unretained solute (t0) was determined by the injection of a small amount of citric acid dissolved in water. The log k values for the selected mobile phase were determined for all compounds. The percentage of plasma protein binding (% PPB) values were calculated from the calibration curve according to Valko et al. [35].
HPLC measurements were performed using a Knauer liquid chromatograph (Knauer, Berlin, Germany) with a dual pump and a UV-visible detector.
Calibration of the Protein Columns
The column performance check and calibration were performed before the measurements. A racemic mixture of warfarin was used for the performance evaluation. The following calibration set of drugs was applied: bromazepam, carbamazepine, diclofenac, nicardipine, nizatidine, piroxicam and warfarin for the HSA column and bromazepam, imipramine, nicardipine, nizatidine, propranolol and warfarin for the AGP column. The drugs were dissolved at a 0.5 mg/mL concentration in a 50% mixture of propan-2-ol and ammonium acetate solution (pH = 7.4). The log k values of the drugs were determined under the assumed conditions. The log k values obtained from HPLC were plotted against the calculated log K values (K = binding equilibrium constant; log K = linearized PPB), based on the literature data for plasma protein binding (% PPB).
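The back-calculation of %PPB from measured log k values follows the Valko et al. linearization, log K = log(%PPB/(101 − %PPB)), fitted against the calibration drugs. A minimal sketch with hypothetical calibration data (the paper's measured retention values are not reproduced here):

```python
import math

def fit_line(xs, ys):
    """Ordinary least-squares fit y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

def ppb_from_logk(log_k, slope, intercept):
    """Back-calculate %PPB from a measured log k via the Valko
    linearization: log K = log(%PPB / (101 - %PPB))."""
    log_K = slope * log_k + intercept
    K = 10.0 ** log_K
    return 101.0 * K / (1.0 + K)

# Hypothetical calibration data (log k on the HSA column, literature %PPB);
# these are illustrative numbers, not values from the paper:
drugs = {"bromazepam": (-0.30, 70.0), "carbamazepine": (-0.05, 75.0),
         "diclofenac": (0.90, 99.5), "warfarin": (0.75, 98.0)}
log_ks = [lk for lk, _ in drugs.values()]
log_Ks = [math.log10(p / (101.0 - p)) for _, p in drugs.values()]
slope, intercept = fit_line(log_ks, log_Ks)
```

A compound's measured log k can then be converted directly, e.g. `ppb_from_logk(0.5, slope, intercept)`.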
Computational Methods
The tPSA values and molar refractivity (CMR) were calculated using ChemDraw Ultra 10.0, according to the fragmentation method introduced by Crippen [39]. The Moriguchi estimation of log P was made using MedChem Designer (TM) version 3.0.0.30 (Simulations Plus, Inc.) and ACD log P using ACD/ChemSketch of ACD/Labs [39]. The pKa values were calculated using MarvinSketch 19.9 (ChemAxon Ltd., Lublin, Poland). Statistica 7.1 (StatSoft, Inc., Lublin, Poland) was used for the statistical analysis.
Chemicals
The following chemicals were obtained from Merck KGaA (Darmstadt, Germany): methanol (gradient grade for liquid chromatography), HEPES, NADPH and UDPGA. The ammonium formate was from Fisher Scientific (Loughborough, UK). All other chemicals and solvents were of the highest purity available.
Methods

Metabolism with Phase I Enzymes in Rat Liver Microsomes
The RLMs (2 mg/mL of protein) were assayed for activity toward the tested compounds as follows. The proteins were incubated in a buffer containing 0.1 M HEPES (pH 7.4) and 2 mM MgCl2 with 0.05 mM substrate in a total volume of 70 µL. The substrates were also added in HEPES buffer (pH 7.4). Reactions were started by the addition of NADPH (2 mM) and were incubated for a specified time at 37 °C. The reactions were stopped by the addition of 8.75 µL of 1 M HCl, followed by centrifugation at 13,400 rpm for 10 min to pellet the denatured protein. The supernatant fractions were used for high-performance liquid chromatography (HPLC) analysis. Control reactions omitting the substrate were run with each assay. All incubations were performed in two repetitions.
Metabolism with Phase II (UGT) Enzymes in Rat Liver Microsomes
The RLMs (2 mg/mL of protein) were assayed for activity toward the tested compounds as follows. The proteins were incubated in a buffer containing 0.1 M HEPES (pH 7.4) and 2 mM MgCl2 with 0.05 mM substrate in a total volume of 70 µL. The substrates were also added in HEPES buffer (pH 7.4). Reactions were started by the addition of UDPGA (5 mM) and were incubated for a specified time at 37 °C. The reactions were stopped by the addition of 8.75 µL of 1 M HCl, followed by centrifugation at 13,400 rpm for 10 min to pellet the denatured protein. The supernatant fractions were used for high-performance liquid chromatography (HPLC) analysis. Control reactions omitting the substrate were run with each assay. All incubations were performed in two repetitions.
Metabolism with Phase I (NADPH) and Phase II (UDPGA) Enzymes in Rat Liver Microsomes
The RLMs (2 mg/mL of protein) were assayed for activity toward the tested compounds as follows. The proteins were incubated in a buffer containing 0.1 M HEPES (pH 7.4) and 2 mM MgCl2 with 0.05 mM substrate in a total volume of 70 µL. The substrates were also added in HEPES buffer (pH 7.4). Reactions were started by the addition of NADPH (2 mM) and UDPGA (5 mM) and were incubated for a specified time at 37 °C. The reactions were stopped by the addition of 8.75 µL of 1 M HCl, followed by centrifugation at 13,400 rpm for 10 min to pellet the denatured protein. The supernatant fractions were used for high-performance liquid chromatography (HPLC) analysis. Control reactions omitting the substrate were run with each assay. All incubations were performed in two repetitions.
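The three incubation protocols above share the same dilution arithmetic (C1·V1 = C2·V2) for spiking substrate and cofactors into the 70 µL mix. A small sketch of that calculation, with hypothetical stock concentrations (the methods state only the final concentrations):

```python
def spike_volume(final_conc_mM, stock_conc_mM, total_volume_uL):
    """Volume of stock (µL) to add so the incubation reaches the
    target final concentration, from C1*V1 = C2*V2."""
    return final_conc_mM * total_volume_uL / stock_conc_mM

total = 70.0  # µL total incubation volume, as in the protocol

# Stock concentrations below are illustrative assumptions:
v_substrate = spike_volume(0.05, 5.0, total)   # 0.05 mM substrate final
v_nadph = spike_volume(2.0, 40.0, total)       # 2 mM NADPH final
v_udpga = spike_volume(5.0, 100.0, total)      # 5 mM UDPGA final
```

With these assumed stocks, 0.7 µL of substrate and 3.5 µL each of NADPH and UDPGA would be added; the remainder is made up with HEPES buffer and microsomes.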
HPLC UV-Vis Analysis
HPLC analyses of the supernatants were performed using an LC-2040C 3D HPLC system and the LabSolution software package (Shimadzu, Kyoto, Japan). Samples were separated using a reversed-phase 5 µm Suplex pKb-100 analytical column (0.46 × 25 cm, C18) (Supelco, Bellefonte, PA, USA) maintained at 25 °C. The analyses were performed at a flow rate of 1 mL/min with the two following mobile phase systems, listed individually for the studied compounds.
For compound 8m, a linear gradient from 15% to 60% methanol in an ammonium formate buffer (0.05 M, pH 3.4) was run over 20 min, followed by a linear gradient from 60% to 100% methanol in ammonium formate over 10 min. The column was then re-equilibrated at the initial conditions for 10 min between runs. The elution of each metabolite was monitored at 380 nm.
For compounds 8i/j, a linear gradient from 50% to 100% methanol in an ammonium formate buffer (0.05 M, pH 3.4) was run over 30 min. The column was then re-equilibrated at the initial conditions for 10 min between runs. The elution of each metabolite was monitored at 380 nm.
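The gradient programs above can be tabulated programmatically. The sketch below is a generic helper (not part of the original method) that reproduces the %methanol schedule for compound 8m:

```python
def linear_gradient(t0, t1, pct0, pct1, step_min=5.0):
    """Tabulate (time_min, %B) points along a linear gradient segment."""
    rows, t = [], float(t0)
    while t <= t1 + 1e-9:
        frac = (t - t0) / (t1 - t0)
        rows.append((t, pct0 + frac * (pct1 - pct0)))
        t += step_min
    return rows

# Compound 8m: 15 -> 60% MeOH over 20 min, then 60 -> 100% over 10 min.
segment1 = linear_gradient(0, 20, 15, 60)
segment2 = linear_gradient(20, 30, 60, 100)
```

For example, `segment1` lists %methanol at 5 min intervals (15, 26.25, 37.5, 48.75, 60), matching the stated 15-60% ramp.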
Liquid Chromatography-Tandem Mass Spectrometry Analysis
HPLC-tandem mass spectrometry analyses of the products were performed by electrospray ionization (ESI) with positive ion detection on an LCMS-2020 mass spectrometer (Shimadzu, Kyoto, Japan). Samples were separated according to the procedure described under HPLC UV-Vis Analysis.
Theoretical Calculations
The energy, geometrical and electronic parameters (torsion angles, frontier orbitals, dipole moments and NBO net charge distributions on the atoms) for all investigated compounds were obtained after energy minimization and geometry optimization of molecules 8a-m with GAUSSIAN 03 [49] at the DFT/B3LYP/6-311++G(d,p) level. The initial geometries were built de novo using the semiempirical method AM1, implemented in the HyperChem ver. 8.0.10 package [50]. The visualization of theoretical calculation results was performed using GaussView [51]. Calculations were carried out at the Academic Computer Centre in Siedlce.
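The reactivity and stability indexes referred to in the Conclusions are conventionally derived from the frontier-orbital energies obtained in such DFT calculations. A minimal sketch of the standard conceptual-DFT definitions, using illustrative (not computed) orbital energies:

```python
def reactivity_indexes(e_homo, e_lumo):
    """Global reactivity descriptors from frontier-orbital energies (eV),
    using the standard Koopmans-theorem approximations."""
    gap = e_lumo - e_homo                 # HOMO-LUMO gap
    eta = gap / 2.0                       # chemical hardness
    chi = -(e_homo + e_lumo) / 2.0        # electronegativity (= -mu)
    omega = chi ** 2 / (2.0 * eta)        # global electrophilicity index
    return {"gap": gap, "hardness": eta,
            "electronegativity": chi, "electrophilicity": omega}

# Hypothetical orbital energies, not taken from the paper:
idx = reactivity_indexes(e_homo=-6.2, e_lumo=-2.1)
```

Molecules with similar gaps and hardness values, as reported for 8a-m, are expected to show comparable stability and reactivity.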
Molecular Docking
The crystal structures of tyrosinase from Agaricus bisporus in complex with tropolone (PDB ID: 2Y9X) [47] and jack bean urease in complex with acetohydroxamic acid (PDB ID: 4H9M) [48], downloaded from the Protein Data Bank, were used in a molecular docking procedure carried out for 8a, 8b, 8i and 8j using the GOLD Suite v. 5.8.1 software [46]. The enzyme preparation, including the addition of hydrogens, removal of water molecules and extraction of the original ligand from the protein binding site, was done with the GOLD default settings. The binding site of the tropolone molecule in the crystal structure of tyrosinase and of the acetohydroxamic acid molecule in the crystal structure of urease were used as the active sites, with a selection of atoms within 6 Å in the molecular docking of the investigated ligands. The tropolone and acetohydroxamic acid, as reference ligands, were removed from the X-ray structures of their protein-ligand complexes and docked back into their binding sites with RMS values of 2.875 and 1.713 Å, respectively. The docking simulations were run with the default parameters of GOLD, and the docked ligand was kept flexible, but the amino acid residues of the enzyme were held rigid. The number of dockings performed on each ligand was 10, starting each time from a different ligand conformation. The results of the different docking runs were ranked by fitness score. The pose with the best value of the scoring function was used to analyze the ligand interaction with the active site of the enzyme. The ChemPLP scoring function was used to evaluate the degree of ligand fit to the active site. ChemPLP is an empirical fitness function optimized for pose prediction, which is used to model the steric complementarity between the protein and the ligand [52,53]. The analysis of interactions between amino acid residues and the ligand was performed using Hermes v. 1.10.5 [52].
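The ranking step described above (ten independent runs per ligand, best pose selected by fitness) can be sketched generically; in GOLD, a larger ChemPLP fitness indicates a better fit. The scores below are hypothetical, not actual GOLD output:

```python
def best_pose(poses):
    """Pick the pose with the highest ChemPLP-style fitness score
    (larger fitness = better fit)."""
    return max(poses, key=lambda p: p["score"])

# Hypothetical fitness scores for the 10 independent docking runs:
runs = [{"run": i, "score": s} for i, s in enumerate(
    [61.2, 58.9, 63.4, 60.1, 59.7, 62.8, 57.3, 63.4, 61.9, 60.5])]
top = best_pose(runs)
```

Only `top` would then be inspected in Hermes for protein-ligand interactions.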
Cell Cultures
Detailed descriptions can be found in [54,55].
MTT Assay
The assay was performed according to the method described in [54,55].
Certolizumab Pegol is Effective for Granulocyte Colony-Stimulating Factor-Mediated Disease Exacerbation in Rheumatoid Arthritis
Granulocyte Colony-Stimulating Factor (G-CSF) is widely used for treating neutropenia. However, exacerbation of autoimmune diseases after G-CSF injection has been reported. We herein report a patient with Rheumatoid Arthritis (RA) who experienced a disease flare after receiving G-CSF and was treated with Certolizumab Pegol (CZP). A 70-year-old woman who had RA with moderate disease activity developed drug-induced neutropenia and was treated with G-CSF. Five days after the start of G-CSF treatment, her neutrophil count increased and she developed severe arthritis in both wrists, suggestive of an exacerbation of RA. Since neither discontinuation of G-CSF nor nonsteroidal anti-inflammatory drugs improved her joint pain, she was finally treated with subcutaneous CZP injection, which led to a remarkable improvement of her arthritis. Our case demonstrates the potential efficacy of CZP for arthritis exacerbated by G-CSF therapy.
Introduction
Granulocyte Colony-Stimulating Factor (G-CSF) is widely used for the treatment of neutropenia in the fields of hematology and oncology. G-CSF has been reported to cause flares of autoimmune diseases, including Rheumatoid Arthritis (RA) [1][2][3][4]. However, a therapeutic strategy for G-CSF-mediated exacerbation of arthritis in patients with RA is yet to be established. Here, we report the first case showing the effectiveness of Certolizumab Pegol (CZP), a novel Fc-free, PEGylated, anti-TNF-α monoclonal antibody, for acute aggravation of RA induced by G-CSF.
Case Presentation
A 70-year-old woman had been diagnosed with RA at the age of 52 years and had been treated with Prednisolone (PSL; 2 mg/day) and Methotrexate (MTX; 4 mg/week). She was admitted to our hospital because of RA-associated Interstitial Lung Disease (ILD), although the Disease Activity Score 28-joint count based on C-reactive protein (DAS28-CRP) was 2.14 (remission). After withdrawal of MTX, she was treated with oral PSL (20 mg/day) in combination with Intravenous Cyclophosphamide (IVCY). Sulfamethoxazole-Trimethoprim (ST) for prophylaxis of Pneumocystis jirovecii pneumonia, repaglinide for treatment of glucocorticoid-induced diabetes mellitus, and esomeprazole magnesium hydrate for prophylaxis of gastrointestinal tract disturbance were orally administered. After the first session of IVCY therapy, her ILD improved and she was discharged.
Twelve days after the second session of IVCY therapy and dose tapering of PSL to 12.5 mg/day, she was again admitted to the hospital because of sudden onset of grade 4 neutropenia. Her body temperature was 35.5°C, and she showed swollen joints, including the wrist, metacarpophalangeal, and proximal interphalangeal joints. DAS28-CRP was 2.9, which is suggestive of moderate disease activity of RA. Laboratory data showed leukocytopenia (1500/μL; reference range, 3300-8600/μL), neutropenia (345/μL; reference range, 1155-6278/μL), and an elevated CRP level (2.04 mg/dL; reference range, 0.00-0.15 mg/dL). Her hemoglobin level, platelet count, renal and liver function test results, β-D-glucan level, cytomegalovirus antigenemia assay (C7-HRP) results, and urinalysis results were normal. Chest computed tomography revealed no evidence of infectious pneumonia or aggravation of ILD. After withdrawal of ST, repaglinide, and esomeprazole magnesium hydrate, which are known to cause agranulocytosis, she was treated with subcutaneous injection of the G-CSF filgrastim (75 µg/day) (Figure 1).
Two days after the beginning of treatment, the filgrastim injection dose was increased to 150 µg/day because the neutrophil count had not increased. Five days after the beginning of treatment, the neutrophil count increased to 10295/μL, and filgrastim injection was discontinued. However, 6 days after the beginning of treatment, she developed severe arthritis in both wrists and her serum CRP level was elevated to 13 mg/dL. Neither discontinuation of filgrastim injection nor oral celecoxib improved her arthritis, and her joint pain deteriorated. DAS28-CRP worsened to 5.07, which is suggestive of high disease activity. We considered that the disease activity of RA was aggravated by the G-CSF-mediated elevation of granulocyte count, and we started subcutaneous CZP injection with an initial loading dose of 400 mg at weeks 0, 2, and 4. After the CZP treatment, her joint pain immediately improved and she achieved remission (Figure 1). During the tapering of the PSL dose to 5 mg/day, her physical examination results and laboratory data showed no evidence of RA or ILD relapse over a period of 16 months.
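The DAS28-CRP values cited in this case follow the standard published composite formula. The sketch below uses hypothetical joint counts and patient global assessment (the report states only the resulting scores, not the inputs); note that CRP enters in mg/L (2.04 mg/dL = 20.4 mg/L).

```python
import math

def das28_crp(tjc28, sjc28, crp_mg_L, patient_global_0_100):
    """DAS28-CRP composite score (standard published formula);
    CRP in mg/L, patient global assessment on a 0-100 mm VAS."""
    return (0.56 * math.sqrt(tjc28)
            + 0.28 * math.sqrt(sjc28)
            + 0.36 * math.log(crp_mg_L + 1)
            + 0.014 * patient_global_0_100
            + 0.96)

# Hypothetical inputs, for illustration only:
score = das28_crp(tjc28=4, sjc28=3, crp_mg_L=20.4, patient_global_0_100=40)
```

The logarithmic CRP term explains why a large CRP rise (2.04 to 13 mg/dL here) shifts the score by only a few points while joint counts contribute via square roots.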
Discussion
We report a previously undescribed case of G-CSF-mediated exacerbation of RA that was successfully treated with CZP injection. G-CSF is commonly used for treating neutropenia caused by chemotherapy, infection, and the abnormal immune system observed in patients with rheumatic diseases. Previous studies have shown an association between G-CSF treatment during neutropenia and exacerbation of RA (Table 1) [5-9]. G-CSF is produced by various cells, including macrophages, endothelial cells and fibroblasts, and is involved in the process of inflammation [10]. In a mouse model of RA, G-CSF promoted macrophage-1 antigen-dependent migration of neutrophils and increased the severity of collagen-induced arthritis [11,12]. G-CSF levels in serum and synovial fluid were reported to be elevated in a disease activity-dependent manner in RA patients [13]. However, a therapeutic strategy for G-CSF-mediated exacerbation of arthritis has not been established. In our case, arthritis caused by RA with moderate disease activity was exacerbated after G-CSF injection, and neither discontinuation of G-CSF injection nor oral celecoxib improved the joint pain and swelling. In contrast, the patient immediately achieved remission after the beginning of CZP injection. CZP differs from other TNF-α blockers in its lack of an Fc region, which minimizes Fc-mediated effects such as Antibody-Dependent Cell-Mediated Cytotoxicity (ADCC) or Complement-Dependent Cytotoxicity (CDC) [14,15]. PEGylated certolizumab Fab' binds to and neutralizes both soluble and transmembrane TNF-α with high affinity [16]. It has been reported that treatment with CZP with its initial loading dose resulted in rapid and sustained improvements in disease activity and quality of life in patients with active RA in placebo-controlled, double-blind, randomized studies [17].
Those previous reports and our case suggest that G-CSF is involved in the pathogenesis of RA and that TNF-α may be the major cytokine required for G-CSF-mediated exacerbation of arthritis. TNF-α blockers, especially CZP, may be effective for acute aggravation of RA by G-CSF [9,18].
G-CSF-mediated exacerbation of arthritis is difficult to distinguish from acute-onset arthritis, including gout, pseudogout, and infection. However, in the present case, we observed that arthritis in at least some joints that had been swollen before the G-CSF injection was aggravated with an increasing number of leukocytes, suggesting exacerbation of RA rather than development of gout, pseudogout, or infection, which generally occurs in a single joint. Analysis of synovial fluid obtained through arthrocentesis may be required to confirm the diagnosis.
Conclusion
Our case provides evidence showing that CZP is effective for exacerbation of arthritis mediated by G-CSF in RA patients.
Patient Consent
Written informed consent for this case report was obtained from the patient.
|
Modification and coupled use of technologies are an essential envisioned need for bioaerosol study – An emerging public health concern
The airborne microbiome is one of the relevant topics in ecology, biogeochemistry, environment, and human health. Bioaerosols are ubiquitous air pollutants that play a vital role in the linking of the ecosystem with the biosphere, atmosphere, climate, and public health. However, the sources, abundance, composition, properties, and atmospheric transport mechanisms of bioaerosols are not clearly understood. To screen the effects of climate change on aerosol microbial composition and its consequences for human health, it is first essential to develop standards that recognize the existing microbial components and how they vary naturally. Bioaerosol particles can be considered an information-rich unit comprising diverse cellular and protein materials emitted by humans, animals, and plants. Hence, no single standard technique can satisfactorily extract the required information about bioaerosols. To account for these issues, metagenomics, mass spectrometry, and biological and chemical analyses can be combined with climatic studies to understand the physical and biological relationships among bioaerosols. This can be achieved by strengthening interdisciplinary teamwork in biology, chemistry, earth science, and life sciences and by sharing knowledge and expertise globally. Thus, the coupled use of various advanced analytical approaches is the ultimate key to opening up the biological treasure that lies in the environment.
Introduction
The global interest in bioaerosols has rapidly increased, widening awareness of their distribution, characterization, quantification, and health impacts (e.g., respiratory diseases, allergies, infectious diseases, and cancer). In recent years, studies on exposure to bioaerosols in both occupational and residential environments have provided data about the probable impacts on human health, showing both the beneficial and harmful effects of bioaerosols. However, correctly describing their role in the origination or deterioration of diverse symptoms and diseases remains problematic. The widespread distribution and nature of bioaerosols, together with their survival in the atmosphere, are the major issues of concern that can shape the understanding of their risk to human health. In 2015, the International Astronautical Federation (IAF) published a conference paper presented at the 66th International Astronautical Congress 2015 [1]. The article described a standard small satellite architecture for space microbiology. The description provides a vision toward developing cost-effective, space-based microbiology research, especially for university students and professors. Although the research obstacles have not yet been overcome, the concept has opened an alternative route for bioaerosol studies, especially for real-time measurement aimed at an accurate count of the microbial particles present in the air at a specific time and place.
As illustrated in Fig. 1, interest and research in bioaerosol studies have increased significantly over the last two decades. Indeed, in comparison to other studies on the environment and pollution, bioaerosol analysis has attracted relatively little attention to date. However, some significant steps taken toward the analysis of microorganisms present in the air have raised interest in the study of bioaerosols in recent decades [2]. Identification of specific microorganisms or microbial communities in the atmosphere, particularly in the outdoor environment, is a difficult but necessary undertaking. It is believed that airborne particles (both physical and biological) contain large amounts of information about the environment and microorganisms and their impact on humans and climate change [3]. Several factors impede the risk assessment of bioaerosols, such as the complexity of microorganisms, the techniques of sampling, and the lack of valid quantitative criteria (e.g., exposure standards and dose/effect relationships) [4]. Exposure to some microbes is thought to be beneficial for health, but additional research is necessary to appropriately assess their potential health hazards, such as infectious capabilities, dormant nature, interindividual vulnerability, interfaces with nonbiological agents, and some other proven/unproven health effects (e.g., atopy and atopic diseases). Thus, the primary objective of this article is to provide an overview of the global state of bioaerosol research in terms of pre-existing knowledge, different methodologies adapted for bioaerosol measurement, the current state of technology, and significant advances in bioaerosol quantification, detection, and characterization research. Furthermore, this short review provides perspectives on bioaerosol research progress and limitations and scrutinizes vitally necessary research techniques that include multidisciplinary collaboration. This article provides brief information on some of the sensitive and effective methods for bioaerosol analysis and on the necessity of cross-disciplinary techniques to better understand the biology of air.
Some successful cutting-edge descriptions of aerosol microbiology
Although Charles Darwin was the first scientist to discover the transport of dust particles in the air, Louis Pasteur initiated methods of bioaerosol sampling and microbial research in the air [4]. However, because not all microbes could be cultivated on a culture plate, several microbes were not observed until DNA-based experiments were established. Following this discovery, many researchers have identified numerous microorganisms in the air using modern and advanced tools.
The use of different samplers has produced considerable variation in microbial detection. Bowers et al. (2013) identified the composition of airborne communities, with variability based on size, season, and duration, using coarse (PM10-2.5) and fine (PM2.5) quartz filters [5]. A comparative study of different samplers and analysis methods was performed by Xu et al. (2011). They demonstrated that the BioStage impactor, BioSampler, and MCE filter are powerful samplers for culturable biological particles, such as Alternaria, Cladosporium, and Aspergillus [6].
Miaskiewicz-Peska and Lebkowska (2011) performed a study on filter efficiency using two woven air filters, P2 and P3 (Secura B.C., Warsaw, Poland). The experimental setup used the PALAS set (PALAS GmbH, Karlsruhe, Germany) with the mineral aerosol ISO Fine Test Dust, Model 12103-1-A2 (Powder Technology Incorporated, USA). The results showed that 100% filter membrane efficiency could not be achieved when nonbiological aerosol filters are used to collect biological particles present in the air. However, some potent airborne microorganisms, such as Micrococcus luteus, Micrococcus varians, Pseudomonas putida, and Bacillus subtilis, were successfully detected, which suggested that spherical cells adhere more strongly to filter fibers than cylindrical cells [7].
Compared to conventional culture-dependent methods, metagenome sequencing methods (such as Illumina sequencing, Sanger sequencing, pyrosequencing, and NGS) and hybrid/chip technology methods can be used to identify microarrays of genomes present in atmospheric aerosol samples. These methods are highly sensitive, can be applied to any biological matter containing nucleic acids, and represent quick and dependable approaches for detecting the presence of both living and dead cells, as well as pathogenic and nonpathogenic microbes, in the atmosphere [8]. The first metagenomics-based study performed using the next-generation sequencing technique identified some predominant bacteria in the air, such as Proteobacteria, Firmicutes, Actinobacteria, unclassified Enterobacteriaceae, Staphylococcus, Acinetobacter, Leuconostoc, Pseudomonas, and Lactobacillus. Similarly, Penicillium, Aspergillus, Rhizopus, Wallemia, and Hemicarpenteles represented some of the predominant fungi detected [8,9]. Thus, the high-throughput sequencing method is a promising tool to explore bioaerosol diversity. Virus detection is difficult to achieve in the environment using a simple conventional method, given the size and properties of viruses. As a result, metagenomics research has opened the door to effective virus identification. The sequences of polyomavirus, human papillomavirus, and other active viruses were identified in metagenomics data from a cattle processing area as well as some indoor and outdoor areas [10]. Another emerging tool that shapes microbial diagnosis is whole-cell mass spectrometry (WC-MS). This method has identified a wide range of bacteria, including Gram-positive and Gram-negative bacteria, as well as various classes of fungi, with less time and effort [4].
Similarly, several real-time bioaerosol detection methods have been identified and applied. For instance, the micro-optofluidic platform, the BioTrak™ Real-Time Viable Particle Counter, and the BioLaz® Real-Time Microbial Monitor, a real-time electrostatic sampler useful for the collection, sizing, and enumeration of viable inhalable microbes present in the air, are examples that are being used to monitor and detect bioaerosols [9,11]. These successful attempts have provided new information for understanding and performing further research with improved efficiency and high accuracy, potentially offering innovations for bioaerosol measurement and for controlling their impact on health and the environment.
Major issues of concern
The forthcoming competencies of the technologies discussed above, once their limitations are addressed, would shape the current potential for a reliable, efficient, easy, and fast method with maximum accuracy for the identification and characterization of biological particles present in the atmosphere. Innovative and better systems for quantifying bacterial, fungal, and viral antigens, peptidases, proteases, and other influential enzymes must be developed in the future. The nature, sources, and biological and physical properties of bioaerosols are vast and require intensive study for better understanding (as shown in Fig. 2). A range of serious disease pathogens potentially exists in bioaerosols. However, the vital physical and biological causative agents for such illnesses remain vague. This limited knowledge is apparently due to a lack of valid and accurate methods to assess those biological agents quantitatively. It is essential to consider why some diseases are seasonal, such as seasonal flu, and why some infections become epidemics or pandemics (for example, the debate on the transmission of COVID-19 through aerosols). Bioaerosol particles are generally 0.3-100 µm in diameter, whereas the respirable size fraction is approximately 1-10 µm. Particles with sizes ranging from 1.0 to 5.0 µm generally remain in the air for a longer time, whereas larger particles are deposited more rapidly on surfaces. The deposition of airborne particles depends on size, time, and wind patterns [4]. Therefore, when an infected person coughs, sneezes, breathes vigorously, or speaks loudly, the virus is excreted, dissolves into the aerosol, and becomes a bioaerosol with a particle size of approximately 1-5 µm, which can spread over a distance of approximately 1-2 m (the aerosol can travel hundreds of meters or more) [7]. It is important to note that previous research has confirmed that aerosols are involved in the spread of several respiratory diseases, such as influenza and aspergillosis [4]. COVID-19 may be transmitted through aerosols, but this requires further verification by experiments.
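The size-dependent airborne residence times mentioned above follow directly from Stokes' law for small spheres settling in still air. A minimal sketch, assuming unit-density spherical particles and standard air viscosity (values illustrative, not from the article):

```python
def stokes_settling_velocity(diameter_um, density_kg_m3=1000.0,
                             air_viscosity=1.81e-5, g=9.81):
    """Terminal settling velocity (m/s) of a small sphere in still air
    from Stokes' law: v = rho * d^2 * g / (18 * mu)."""
    d = diameter_um * 1e-6  # convert µm to m
    return density_kg_m3 * d ** 2 * g / (18.0 * air_viscosity)

# A 1 µm droplet settles ~100x slower than a 10 µm one, consistent
# with fine bioaerosols remaining airborne much longer:
v1 = stokes_settling_velocity(1.0)
v10 = stokes_settling_velocity(10.0)
```

Because the velocity scales with the square of the diameter, halving particle size quadruples the time a particle stays aloft, all else being equal.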
Several other diseases that may be spread through aerosols have potentially not been reported. Hence, extensive studies and experimental research are required to identify the source, type, and pathogenicity of bioaerosols in relation to human health. Sharma Ghimire et al. (2019) briefly discussed essential future actions that need to be taken in the field of bioaerosol research [9]. Atmospheric transport of bioaerosols and their ecological interactions are also promising avenues for future investigation, as they are correlated and could provide valuable information about some undefined parameters and advance research on the role of microorganisms in health and the environment. Indeed, an advanced detection system requires equipment designed with specific features, such as a high-capacity power backup, high-quality radiation sources, highly specific wavelengths, high-quality light detection systems, and high-quality laser sources for better performance. Similarly, the interaction of biological agents with chemical and physical agents in the development of diseases and symptoms is another important concern that needs to be addressed.

Fig. 3. Sampling and identification methods that can be combined and applied for bioaerosol study.
On the other hand, the actual progress of this era involves instruments that should be smaller, portable, cheaper, and easier to use, yet faster, more accurate, and more reliable. For example, focusing on the impact of bioaerosols on health and the environment is essential in sensitive areas, such as the polar regions and the Hindu Kush-Himalayas (HKH), also known as the Third Pole, and in economically developing countries. Their remoteness and synoptic atmospheric patterns make polar regions geographically isolated areas that are mainly characterized by changes in albedo, sea-ice extent, ice sheet melt, and glacial retreat, which greatly affect the polar radiation budgets. In addition, the melting of terrestrial ice creates new sites for microbial colonization [2]. The extreme weather conditions and logistics in those areas make regular sampling and sample maintenance difficult. Furthermore, due to the regions' severe conditions, it is extremely difficult to obtain reliable samples and data. Similarly, Asian dust episodes are a predominant phenomenon in which soil-derived dust is transported over long distances across large sections of the HKH and Tibetan Plateau (TP) regions and East Asia, even reaching Arctic regions [9,12]. The Third Pole region, covering the Himalayas and the TP, contains high-elevation areas with harsh environmental conditions that pose difficulties in sampling due to a lack of transport and logistics, making these regions challenging for such research. As a result, these regions are of particular importance to aerobiology in the present context. However, the size, cost, and scarcity of advanced techniques and expertise hinder research in these areas.
An innovative approach that combines multiple techniques is a crucial need
One of the most critical gaps in bioaerosol science is the lack of coupled biological analyses: culture characteristics; genomic, proteomic, and metabolomic approaches; and real-time data (Fig. 3). In the past decade, several studies have been published using techniques focused on molecular and isotopic indicators for tracing bioaerosol particles released from various sources in the environment, such as next-generation sequencing and laser/fluorescence-induced spectroscopy [4]. However, the absence of an accurate, rapid, easy, and inexpensive method for quantifying bioaerosols remains a barrier to assessing biological aerosol concentrations and their health effects. The difficulty persists from sample collection through the analytical procedure. In principle, the sample collection procedures for microorganisms and other particles are similar and are primarily based on filtration, impaction, or liquid impingement. Numerous other factors plague the measurement and analysis of bioaerosol-related parameters, such as the sampling inlet, particle exclusion, biological recovery, growth and survival of individual organisms, assay efficiencies, humidity, temperature, and the pH of the culture medium. Furthermore, the selection of proper sampling and identification media, and even accuracy in microscopic observation, are other factors that interfere with obtaining appropriate results. Moreover, connecting the quantitative characterization of bioaerosols at the surface, in the planetary boundary layer, and in the troposphere is another obstacle to capturing the overall (vertical and horizontal) circulation of biological particles [1]. Similarly, a portable device has been devised that can be carried to any particular area (such as residential areas, sterile areas, dumping sites, and hospitals) to identify and characterize microbes present in the air in real time [4]. For instance, as explained by Saikai et al.
(2015), for culturable microorganisms, a device with a culture chamber has been developed that can incorporate growth media, maintain temperature, and measure the abundance and biochemical properties of microorganisms [1]. Other methods may include a high-magnification lens with a camera sensor to detect living microorganisms and map the structure, size, and characteristics of microorganisms or biofilms, and a device with a real-time sequence analyzer that identifies the microorganisms in a specific volume of air [4]. Similarly, a data storage system for microbial diversity, together with the real-time microbial identification device discussed above, could support remote-sensing studies of airborne microbes and represents another innovative way to explore bioaerosols [1]. To address these concerns, the results from real-time detection, NGS sequencing, mass analysis, and biological and chemical analyses can be compared with climatic studies. These data can be mathematically modeled to predict conditions and interactions in Earth's history and future climate. It is crucial to strengthen teamwork among the interdisciplinary areas of biology, chemistry, earth science, and life science through shared understanding and knowledge among researchers and experts worldwide. Science and Astronomy published a study by Elizabeth Howell on April 15, 2019, revealing that bacteria and fungi are present all over space. Similarly, Capone and Subramaniam (2015) [13] performed an outstanding study on the use of remote sensing as a resource for tracking marine microbial ecosystem dynamics. These discoveries are advancing our knowledge, and if combined with other stable and standard techniques, they will substantially improve the pace of bioaerosol research. The above discussion suggests that bioaerosol studies will be far more advanced and accurate if a simple yet functionally sophisticated device can be used anywhere on the
Earth's surface. These creative ideas may appear unrealistic at present, but nothing is impossible in the field of science and technology.
Conclusion
Future challenges, such as climate change, health deterioration, global economic downturns, and an increase in airborne pathogens, are likely to be aggravated by the negative environmental and social effects of bioaerosols. Hence, extensive investigation is crucial to identify what is present in the air and has a negative impact on the environment and climate. This short article provides a broad overview of analytical opportunities for researchers, such as conventional culture methods followed by gene-based studies, metagenomic studies combined with molecular studies, and mass spectrometric analysis together with biochemical analysis, which could deliver significant amounts of information about the microbial processes occurring in the environment. No single aerosol sampling and measurement method is expected to be appropriate for all purposes (sizes, species, and specific research hypotheses).
These limitations can be overcome by using cross-disciplinary strategies with the combined use of several analytical methods that can exclude possible errors.This perspective has attempted to provide new concepts or designs to untangle some innovative possibilities for future research.
Fig. 1. Global statistics of bioaerosol studies performed worldwide. (a) Number of publications on bioaerosol research (1997-2018). (b) Global distribution of bioaerosol publications by country; the color legend shows the number of publications. The data were obtained from a Web of Science database search using "bioaerosols" as the keyword.
Fig. 2. Airborne bioaerosols: their sources, components, and their impact on health and the environment.
Iron in Cardiovascular Disease: Challenges and Potentials
Iron is essential for many biological processes. Inadequate or excessive body iron can have various pathological consequences. The pathological roles of iron in cardiovascular disease (CVD) have been intensively studied for decades. Convincing data demonstrate a detrimental effect of iron deficiency in patients with heart failure and pulmonary arterial hypertension, but the pathological roles of iron in other cardiovascular diseases remain unclear. Meanwhile, ferroptosis is an iron-dependent form of cell death that is distinct from apoptosis, necroptosis, and other types of cell death. Ferroptosis has been reported in several CVDs, including cardiomyopathy, atherosclerotic cardiovascular disease, and myocardial ischemia/reperfusion injury. Iron chelation therapy seems to be a viable strategy to ameliorate iron overload-related disorders. It remains a challenge to accurately clarify the pathological roles of iron in CVD and to search for effective medical interventions. In this review, we aim to summarize the pathological roles of iron in CVD and especially highlight the potential mechanism of ferroptosis in these diseases.
INTRODUCTION
Iron is an essential mineral nutrient involved in numerous biological processes, including heme synthesis, iron-dependent catalytic reactions, DNA synthesis, and mitochondrial respiration (1). Iron metabolism is complex and has received much attention. Iron homeostasis is maintained by elaborate mechanisms involving iron consumption, uptake, transfer, and storage (1). Inappropriate iron overload or deficiency correlates with a wide range of cardiovascular diseases (CVDs). Iron deficiency can impair cardiomyocyte mitochondrial function and energy supply, leading to cardiac dysfunction (2). An excess amount of iron can also be toxic, producing hydroxyl radicals via the Haber-Weiss-Fenton reactions and causing oxidative damage to cellular components such as lipids, proteins, and DNA (3). Moreover, iron-mediated cell death, namely ferroptosis, has recently been reported to induce cardiomyocyte damage and plays an important role in CVD (4).
In 1981, Jerome Sullivan proposed the iron hypothesis to explain the sex differences in the risk of heart disease and the lower incidence of CVD in premenopausal women (5). Since then, the connection between iron and heart disease has been investigated for decades. Numerous epidemiological studies have indicated that body iron levels are associated with CVD. To date, it is well-established that iron deficiency is prevalent in patients with heart failure (HF) or pulmonary arterial hypertension (PAH), and intravenous iron supplementation can improve the quality of life of these patients and reduce the associated risk of hospitalization (2, 6). However, clinical observations regarding iron status and atherosclerotic cardiovascular disease (ASCVD) are still controversial, despite some studies supporting that elevated iron stores positively correlate with the incidence of coronary artery disease (CAD). Moreover, iron status in other CVDs remains unclear, and clarifying these relationships is challenging.
The precise mechanisms of iron homeostasis in the cardiovascular system are distinct and complicated. In this review, we aim to summarize the pathological roles of iron in CVD, and especially highlight the potential mechanism of ferroptosis in these diseases.
IRON HOMEOSTATIC REGULATION: THE BASICS

Systemic Iron Homeostasis
The adult body contains about 3-5 g of total iron, with two-thirds in the form of hemoglobin and myoglobin. Most of the remaining iron is bound to ferritin, a specialized cytoplasmic iron storage protein. Only about 0.1% of total body iron is extracellular. Normally, senescent or damaged erythrocytes are phagocytized by macrophages in the spleen and other organs, which release their iron into the circulation, where it can be recycled for heme synthesis in the bone marrow (7). In addition to this recycling of erythrocyte iron, dietary iron absorbed via the gut mucosa replenishes body iron losses (8).
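As a rough consistency check, the compartment sizes implied by these figures can be worked out with simple arithmetic. The sketch below is illustrative only; the 4 g total is an assumed midpoint of the 3-5 g range quoted above, not a value from the source.

```python
# Back-of-envelope partition of body iron from the fractions quoted above.
# total_iron_g = 4.0 is an assumed midpoint of the 3-5 g range in the text.
total_iron_g = 4.0
heme_iron_g = total_iron_g * 2 / 3            # hemoglobin + myoglobin (two-thirds)
extracellular_iron_g = total_iron_g * 0.001   # ~0.1% of total body iron
# The remainder is largely ferritin-bound storage iron.
storage_iron_g = total_iron_g - heme_iron_g - extracellular_iron_g

print(f"heme: {heme_iron_g:.2f} g, storage: {storage_iron_g:.2f} g, "
      f"extracellular: {extracellular_iron_g:.3f} g")
```

With a 4 g total, this gives roughly 2.7 g of heme iron, about 1.3 g of stores, and only about 4 mg of extracellular iron, consistent with the proportions stated in the text.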
Dietary iron is absorbed by gut mucosa cells via two distinct mechanisms, based on the heme and inorganic forms of iron (9). The heme form of iron is absorbed at the apical membrane of epithelial cells via a specific heme transporter, heme carrier protein 1 (HCP1). Heme can be degraded by heme oxygenase-1 (HO-1) to release ferrous iron (Fe2+), carbon monoxide, and biliverdin. The absorption of the inorganic ferric ion (Fe3+) requires two key steps: the conversion of insoluble Fe3+ to absorbable Fe2+ by cytochrome b reductase 1 (DCYTB) and the subsequent transport of Fe2+ across the membrane by divalent metal transporter 1 (DMT-1). Internalized Fe2+ enters the cytosolic labile iron pool (LIP). Extra iron can be stored as ferritin or exported through the basolateral membrane by ferroportin (FPN), the only known iron export protein. The exported Fe2+ undergoes re-oxidation to Fe3+ by membrane-bound hephaestin and binds to transferrin (Tf) for long-distance delivery. Circulating Tf-bound iron can be internalized into the cells of peripheral tissues via binding to its receptor, transferrin receptor 1 (TfR1) (10).
Circulating iron levels are predominantly regulated by the transmembrane protein FPN. Hepcidin is a peptide hormone released mainly by the liver that effectively prevents cellular iron efflux by promoting FPN internalization and degradation. When hepcidin is transcriptionally downregulated under conditions of enhanced erythropoiesis or iron deficiency, more iron is released into the circulation from intestinal epithelial cells, macrophages, and hepatocytes (7). High levels of serum iron and chronic inflammatory states lead to increased levels of hepcidin. The hepcidin-FPN axis thus tightly regulates systemic iron homeostasis to meet body requirements (7).
Cellular Iron Metabolism
Iron homeostasis in the body is regulated at both the systemic and cellular levels. Tf-bound iron binds to TfR1 and is internalized by endocytosis, while the uptake of non-Tf-bound iron (NTBI) is mediated by the DMT-1 protein ubiquitously present on the cell surface (11, 12). In addition, the voltage-gated calcium channels of cardiomyocytes can act as transporters for NTBI under iron overload conditions (13). After absorption, iron enters the redox-active LIP, where it is used for storage in ferritin, incorporation into iron-requiring proteins, or trafficking to mitochondria for the synthesis of heme and iron-sulfur (Fe-S) clusters (14) (Figure 1).
Cellular iron homeostasis is post-transcriptionally regulated by the iron regulatory proteins (IRP1 and IRP2) interacting with iron-responsive elements (IREs) (11). IREs are highly conserved hairpin structures present in the 5′ or 3′ untranslated regions (UTRs) of the mRNAs of iron metabolism genes. IRPs inhibit the initiation of translation by binding to the single 5′-UTR IREs of ferritin and FPN, whereas their binding to the multiple IRE motifs within the 3′-UTRs of TfR1 and DMT-1 prevents mRNA degradation (15). The capability of IRPs to bind IREs depends on the intracellular iron concentration. In iron-replete cells, IRP1 ligates an Fe-S cluster and functions as a cytosolic aconitase; this precludes IRP-IRE interaction, allowing ferritin and FPN to be translated while TfR1 mRNA is destabilized. In a low intracellular iron environment, IRPs stabilize the mRNAs of TfR1 and DMT-1 to enhance iron uptake and inhibit iron excretion by suppressing the translation of FPN (16, 17).
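The opposing fates of the 5′- and 3′-UTR IRE targets can be condensed into a two-state rule. The following Python sketch is purely illustrative: the function and its dictionary outputs are our own simplification of the regulatory logic described above, not an established model.

```python
def irp_ire_response(iron_replete: bool) -> dict:
    """Predict qualitative changes in iron-handling proteins.

    When cellular iron is high, IRPs do not bind IREs: the 5'-UTR IRE
    targets (ferritin, FPN) are translated, while the 3'-UTR IRE targets
    (TfR1, DMT-1) lose mRNA protection and are degraded. When iron is
    low, the pattern inverts.
    """
    if iron_replete:
        # IRP1 holds an Fe-S cluster and acts as a cytosolic aconitase;
        # no IRE binding occurs.
        return {"ferritin": "up", "FPN": "up", "TfR1": "down", "DMT-1": "down"}
    # Iron-deficient cells: IRPs bind IREs, blocking ferritin/FPN
    # translation and stabilizing TfR1/DMT-1 mRNAs.
    return {"ferritin": "down", "FPN": "down", "TfR1": "up", "DMT-1": "up"}
```

For example, `irp_ire_response(False)` captures the iron-deficient state in which uptake machinery (TfR1, DMT-1) is upregulated while storage and export (ferritin, FPN) are suppressed.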
The hepcidin-FPN1 axis also plays a pivotal role in controlling cellular iron flux, particularly in cardiomyocytes. Distinct from systemic iron regulation, hepcidin can be produced locally in the heart, and functions as an autocrine protein to regulate iron levels in cardiomyocytes (18). Interestingly, cardiac hepcidin is upregulated in response to hypoxia to retain cellular iron, while systemic hepcidin is downregulated (19). Such regulation may be an adaptive mechanism to maintain cardiac function.
Iron Metabolism and Ferroptosis
Ferroptosis is a novel form of regulated cell death driven by iron-dependent lipid peroxidation. It is distinct from other types of regulated cell death (Table 1) and can be suppressed by iron chelators (e.g., deferoxamine) (20). In the process of ferroptosis, reactive oxygen species (ROS) are overproduced by accumulated intracellular iron, and extensive oxidation of polyunsaturated fatty acids is triggered, resulting in damage to cellular membrane structures and cell death (20). Thus, modulation of iron metabolism-related genes may regulate ferroptosis by affecting cellular iron homeostasis. The nuclear receptor coactivator 4 (NCOA4) is a selective cargo receptor for the autophagic degradation of ferritin, namely ferritinophagy, which can increase intracellular iron and induce ferroptotic cell death (21). Overexpression of ferritin heavy chain 1 impaired ferritinophagy and inhibited ferroptosis in PC-12 cells (22). Senescent cells with impaired ferritinophagy were more resistant to ferroptosis (23). Moreover, blockade of cellular iron export via genetic deletion of FPN has been reported to produce morphological and molecular features of ferroptosis in hippocampal neurons (24). Overexpression of FPN in neurons could alleviate neuronal apoptosis and ferroptosis after intracerebral hemorrhage (25). Taken together, the modulation of cellular iron metabolism might provide a novel therapeutic target for ferroptosis-associated diseases.
Glutathione peroxidase 4 (GPX4) is a selenocysteine-containing, GSH-dependent enzyme capable of catalyzing the reduction of lipid hydroperoxides (26). Genetic manipulation studies revealed that constitutive deletion or an inactive mutant of GPX4 leads to early embryonic lethality. As a result, conditional GPX4 knockout mice were generated to study the mechanisms of GPX4 deficiency-induced cell death. Inducible ablation of GPX4 causes mitochondrial damage and lipid peroxidation-mediated ferroptosis (28). Thus, several ferroptosis inducers, such as Ras-selective lethal 3, can trigger the accumulation of lipid hydroperoxides and cause cell death by directly inhibiting GPX4 (29). Conversely, overexpression of GPX4 has been reported to protect against oxidative injury in various cell types (30). ApoE−/− mice overexpressing GPX4 showed decreased oxidized lipids and atherosclerotic lesions in the aorta compared with ApoE−/− control mice (31). Overexpression of mitochondrial GPX4 can also protect against ischemia/reperfusion (I/R)-induced cardiac injury (32).
GSH is synthesized from glutamate, cysteine, and glycine in two steps under the catalysis of the cytosolic enzymes glutamate-cysteine ligase and glutathione synthetase, and it participates in the regulation of ferroptosis (33). Cysteine is the most limiting amino acid for GSH synthesis, and inhibiting its import through system Xc− is sufficient to trigger ferroptosis in vitro (34). System Xc− is a cystine/glutamate antiporter that facilitates the exchange of cystine and glutamate across the plasma membrane (34). Thus, inhibition of system Xc− can deplete cellular GSH and impair the ability of GPX4 to suppress lipid peroxidation, thereby promoting ferroptosis.
Recent evidence indicates that the FSP1-CoQ10 pathway cooperates with GPX4 and GSH to suppress phospholipid peroxidation and ferroptosis as a stand-alone parallel system. Ferroptosis suppressor protein 1 (FSP1), also called apoptosis-inducing factor mitochondrial 2 (AIFM2), was originally predicted to induce apoptosis via a caspase-independent pathway, owing to its biochemical similarities to AIFM1 (35). However, instead of inducing apoptosis, FSP1 is recruited to the plasma membrane by myristoylation, where it functions as an oxidoreductase that catalyzes the regeneration of coenzyme Q10 (CoQ10) (35). Ubiquinol, the reduced form of CoQ10 generated by the mevalonate pathway, can act as a lipophilic radical-trapping antioxidant that regulates ferroptosis by halting the propagation of lipid peroxides (35).
Ferroptosis and Cardiomyopathy
Although the physiological role of ferroptosis remains elusive, it has been studied in several cardiomyopathies, including diabetic cardiomyopathy and doxorubicin (DOX)-induced cardiotoxicity.
Diabetic cardiomyopathy, characterized by cardiac hypertrophy, diastolic dysfunction, and intracellular lipid accumulation, is the common complication of diabetes (36). It has been reported that GPX4 expression was reduced in both high glucose-treated cardiomyocytes and the left ventricular myocardial tissues of db/db mice (37). A recent study found that inhibition of cardiac autophagy could activate nuclear factor E2-related factor 2 (Nrf2)-mediated ferroptosis and lead to myocardial damage in type 1 diabetic mice (38).
DOX is a second-generation anthracycline chemotherapeutic drug used in many malignancies, and it often causes cardiotoxicity (39). In a mouse model of DOX-induced cardiomyopathy, inhibition of ferroptosis significantly improved cardiac function and reduced mortality, which was associated with the release of free cellular iron caused by HO-1 upregulation (40). DOX treatment could downregulate GPX4 and induce ferroptosis triggered predominantly in the mitochondria (41). Another study also showed that ferroptosis was involved in DOX-treated murine hearts and that acyl-CoA thioesterase 1, an important enzyme in fatty acid metabolism, might exert an anti-ferroptotic effect in DOX-induced cardiotoxicity (42). These studies highlight that ferroptosis plays a crucial role in cardiomyopathy and might be a therapeutic target.
Role of Iron in Atherosclerotic Cardiovascular Disease
In 1981, Jerome Sullivan first proposed the hypothesis that the increased incidence of heart disease in men and postmenopausal women compared with premenopausal women could be explained by higher body iron stores (5). Based on this hypothesis, numerous epidemiologic studies have investigated the role of iron in the pathogenesis of ASCVD. The Kuopio Ischemic Heart Disease Risk Factor Study (KIHD), carried out in eastern Finnish men, was the first to demonstrate that a high level of stored iron, assessed by increased serum ferritin, is a risk factor for myocardial infarction (MI) (43). Iron deposition was detected in the coronary plaques of patients with atherosclerotic lesions and was associated with increased cholesterol levels (44). Plaques of symptomatic patients also showed higher iron concentrations and risk of cap rupture than plaques of asymptomatic patients (45). Several clinical studies have revealed that iron chelation therapy is beneficial to patients with CAD (46, 47). However, a systematic review and meta-analysis of 17 prospective studies showed no significant association between serum ferritin, total iron-binding capacity, or serum iron and CAD/MI, while a significant negative association was identified between transferrin saturation and CAD/MI (48). This contradicts the hypothesis that higher body iron stores represent a risk factor for heart disease. The discrepancy may be attributed to the inconsistency of the serum iron markers evaluated in those clinical studies, and to the fact that systemic and local iron levels cannot be accurately distinguished.
The mechanisms whereby iron may stimulate atherogenesis have been intensively investigated. Many studies have shown an association between iron overload and atherosclerosis (49). The pathological roles of iron in atherogenesis may largely rely on its catalytically active form, which generates ROS and induces lipid peroxidation (49). Within atherosclerotic lesions, iron overload is present in monocytes/macrophages, endothelial cells, vascular smooth muscle cells (VSMCs), and platelets, all of which participate in the process of atherosclerosis (50). Iron overload drives endothelial dysfunction through its prooxidant and proinflammatory effects in endothelial cells, and promotes proliferation, apoptosis, ROS production, and phenotypic switching in VSMCs (51, 52). By catalyzing the atherogenic modification of low-density lipoprotein, excess iron facilitates the conversion of macrophages into foam cells (52). A recent study has reported that iron overload also enhances glycolysis and inflammation in macrophages and exacerbates the severity of atherosclerosis (53).
During the development of atherosclerosis, lipid oxidative modification and iron deposition are well observed in plaques, so it is reasonable to speculate that ferroptosis may occur in this process. A quantitative proteomic analysis revealed downregulation of GPX4 in the heart tissue of MI mice (54). Exosomes from human umbilical cord blood-derived mesenchymal stem cells have been reported to exert cardioprotective effects in mouse models of MI by inhibiting ferroptosis through suppression of DMT1 expression (55). Iron chelation therapy using desferrioxamine (DFO) has been shown to inhibit atherosclerotic lesion development (56), suggesting that ferroptosis might participate in the process of myocardial ischemia. Therefore, targeting ferroptosis might enable a precise therapy for ASCVD.
Role of Iron in Myocardial Ischemia/Reperfusion Injury
Myocardial ischemia/reperfusion (I/R) injury is an important complication of percutaneous coronary intervention or thrombolysis for acute MI (57). Recanalization of an obstructed coronary artery is effective in restoring blood flow and rescuing the ischemic zone, but paradoxically, reperfusion can also cause cardiac damage and necrosis due to the massive production of ROS (58). Accumulating evidence suggests that iron overload is implicated in the pathology of myocardial I/R injury (59, 60). Early studies showed that high levels of iron were mobilized into the coronary flow following prolonged ischemia, and that cardiac cytosolic iron levels increased in rat hearts subjected to I/R (61, 62). A hereditary hemochromatosis model of HFE gene knockout mice subjected to I/R injury showed increased iron deposition, cardiomyocyte apoptosis, and ROS production compared with wild-type mice (63). Furthermore, elevated mitochondrial iron was observed in mice with myocardial I/R injury and in human cardiac tissue samples with ischemic cardiomyopathy, while pharmacological reduction of mitochondrial iron in vivo protected against I/R damage (64).
Myocardial I/R can activate hypoxia-inducible factor-1 signaling and increase TfR1 expression to facilitate iron uptake (65); upregulation of TfR1 expression in I/R-treated rat hearts was accompanied by increased iron content (66). These findings illustrate that I/R can induce iron overload. Recent investigations have provided evidence that ferroptosis is involved in I/R-induced cardiomyocyte damage, and targeting ferroptosis might be beneficial under I/R conditions (40). Mitochondria-specific overexpression of GPX4 alleviates cardiac dysfunction following I/R (32). Inhibition of glutaminolysis, a component of the GSH generation pathway, can also attenuate I/R-associated heart injury by blocking ferroptosis (67). Cyanidin-3-glucoside, a subgroup of flavonoids, exhibits a protective effect in a rat model of myocardial I/R injury via inhibition of USP19/Beclin1-mediated ferroptosis (68). Thus, targeting ferroptosis can serve as a potential strategy to prevent I/R-induced myocardial injury. Considerable efforts have been made to ascertain whether iron depletion using iron chelators could exert cardioprotective effects. In some animal models, iron chelation therapy improves contractile function, increases cell viability, attenuates cardiac remodeling, and reduces infarct size after I/R injury (40, 59). However, these results were not reproduced in some experimental animals (69, 70). A potential reason for the discrepancy may be species specificity. Further studies are needed to test the potential clinical implications of this therapeutic strategy.
Role of Iron in Heart Failure
The role of iron deficiency is highly pronounced and has been deeply investigated in HF patients (71). Iron deficiency occurs in about 50% of chronic HF patients and is independently associated with increased morbidity and mortality (72). Several mechanisms may explain HF-associated iron deficiency, such as dietary nutritional deficiency, reduced absorption caused by gut edema or proton pump inhibitor use, and gastrointestinal bleeding due to the use of antiplatelet and anticoagulant agents (72).
Iron content was significantly decreased in the left ventricular tissues of failing human hearts compared to HF-free organ donors, independent of anemia (73). Moreover, cardiac iron deficiency in HF was accompanied by reduced activity of aconitase and citrate synthase and reduced expression of ROS-protective enzymes (catalase, glutathione peroxidase, and superoxide dismutase 2), indicating that myocardial iron deficiency may contribute to the exacerbation of the mitochondrial dysfunction that exists in HF (73). These findings are consistent with the recent research of Hoes et al., who demonstrated that energy production and contractile function are reduced in iron-deficient human cardiomyocytes (73). Cardiomyocytes with genetic deletion of TfR1 developed mitochondrial dysfunction and interrupted mitophagy and shifted their metabolism toward a fetal-like pattern (74). Thus, iron deficiency-induced mitochondrial dysfunction may reciprocally impair cellular energy supply and cardiac function.
Because iron deficiency is associated with the pathophysiology of HF, iron repletion seems to be a reasonable therapeutic strategy for HF patients. Three main clinical trials (FAIR-HF, EFFECT-HF, and CONFIRM-HF) have shown that intravenous iron supplementation improves quality of life and exercise tolerance and reduces the risk of hospitalization for worsening HF (75-77). Other smaller trials have strengthened this evidence (78, 79). However, no significant clinical benefit was found for oral iron preparations in the IRONOUT-HF trial (80), which may be explained by the impaired iron absorption in HF (81). Although intravenous ferric carboxymaltose has been recommended in guidelines for symptomatic patients with HF with reduced ejection fraction (HFrEF) and iron deficiency, several issues about iron repletion remain unresolved. The long-term safety of intravenous iron supplementation in HF patients remains to be determined, and the methods of iron administration and the potential side effects of iron should be fully considered given the risk of iron overload-related oxidative damage.
Role of Iron in Calcific Vascular and Valvular Disease
Vascular and valvular calcification refers to ectopic mineralization in vessel walls and heart valve leaflets and is an important risk factor for adverse cardiovascular events (82). Although the morphology and structure of heart valves and vasculature differ, the biological characteristics of vascular and valvular calcification are similar: an osteoblast-like phenotypic transition of VSMCs and valvular interstitial cells (VICs), respectively, contributes directly to ectopic calcium deposition (83, 84). Both vascular and valvular calcification are clinically associated with diabetes, smoking, hypertension, and dyslipidemia (84). Oxidative stress has long been verified experimentally to participate in pathological vascular and valvular calcification (85-87). In this context, iron-triggered oxidative stress can reasonably be speculated to be involved in vascular and valvular calcification.
Valve calcification mainly occurs in the aortic valves. VICs differentiate into pathological myofibroblasts and osteoblast-like cells, which promote inappropriate extracellular matrix remodeling and calcification (88). Previous studies have observed that intraleaflet hemorrhage is associated with the progression of valve calcification and that iron deposition occurs within calcific valves. Interestingly, iron deposition can also be detected in non-calcified valves, suggesting that iron deposition precedes calcium deposition at the sites of valve calcification (89). Valvular iron accumulation was observed in human calcific aortic valves and correlated positively with the degree of calcification (90). Furthermore, VICs differentiated by tumor necrosis factor-α and transforming growth factor-β showed significantly decreased expression of the iron exporter FPN, as did VICs isolated from stenotic aortic valves. In the presence of ferrous sulfate, VICs expressed increased levels of ferritin subunits and exhibited proliferative capacity (90).
Ectopic vascular calcification is generally located either in the atherosclerotic intima or in the non-atherosclerotic tunica media (91). Intimal calcification is related to arterial obstruction and atherosclerotic plaque rupture, whereas medial calcification leads to vascular stiffness and elevated blood pressure and pulse pressure (92). Although vascular calcification was long regarded as a degenerative aging process, calcification in both the intima and media layers is now recognized as an actively regulated process driven partly by VSMCs (91, 93). It was reported that holo-transferrin iron could promote human aortic VSMC calcification via upregulation of interleukin-24 (94). Some circumstantial evidence supports a relationship between iron accumulation and atherosclerosis progression (95, 96). However, there are conflicting results: iron citrate could reduce high phosphate-induced calcium deposition in VSMCs by preventing apoptosis and inducing autophagy (97) and inhibit the osteochondrogenic shift of VSMCs (98). Moreover, ferritin heavy chain exerted inhibitory effects on vascular calcification owing to its ferroxidase activity and antioxidant properties (99).
Role of Iron in Pulmonary Arterial Hypertension and Systemic Hypertension
PAH is an abnormal hemodynamic state characterized by a sustained increase in pulmonary artery pressure (≥25 mmHg) and normal pulmonary capillary wedge pressure (≤15 mmHg) in the absence of other causes of precapillary pulmonary hypertension (100). PAH is classified into idiopathic, heritable, drugs and toxins-induced, and other origins (e.g., congenital heart disease, connective tissue disease, and chronic hemolytic anemias) (101). Clinical evidence supports that iron deficiency is prevalent and correlates with reduced exercise capacity and poor outcomes in both idiopathic and heritable PAH patients (101). Intravenous iron supplementation could improve quality of life and exercise endurance capacity in PAH patients with iron deficiency in two placebo-controlled studies (102,103). To confirm the long-term effects of iron repletion, 117 PAH patients were recruited and received placebo or intravenous iron supplementation. Eighteen-month treatment with intravenous iron supplementation brought long-term clinical benefits, namely, improved risk status and reduced PAH-associated hospitalization (104). On the other hand, oral iron appeared ineffective because of impaired gastrointestinal iron absorption caused by upregulated hepcidin (105).
The pathological mechanisms linking iron deficiency to PAH may involve hypoxia, inflammation, and functional alterations of pulmonary vascular cells. Hypoxia exposure can induce vasoconstriction in pulmonary arteries, resulting in increased pulmonary artery systolic pressure (PASP). Furthermore, hypoxia-induced vasoconstriction and the rise in PASP could be augmented by iron chelation with DFO in healthy adults (106). The pulmonary hypertensive response caused by altitude-induced hypoxia could be reversed by iron infusion, which reduced PASP by 6 mmHg in sea-level residents, whereas progressive iron removal by venesection in patients with chronic mountain sickness resulted in a 25% increase in PASP (107). It is therefore reasonable to speculate that iron deficiency, analogous to hypoxia, can increase PASP, which may partly account for the pathogenesis of PAH.
It is well-known that the pathologic hallmarks of PAH comprise sustained vasoconstriction, vascular remodeling, and perivascular inflammation. Since the VSMCs in pulmonary arteries are pivotal in controlling vasoconstriction, intensive attention has been paid to deciphering the role of pulmonary arterial smooth muscle cells (PASMCs) in pulmonary vascular remodeling and hypertension. Chelation of iron in vitro increased the metabolic activity and proliferation of human PASMCs, whereas iron supplementation inhibited this process. Rats fed an iron-deficient diet developed pulmonary vascular remodeling and hemodynamic changes similar to those of PAH patients, which could be reversed by iron supplementation (108). Systemic iron homeostasis is controlled by FPN and its antagonist peptide hepcidin. Hepcidin treatment caused cellular iron accumulation by internalizing FPN in human PASMCs (109). In mice expressing the hepcidin-resistant isoform fpnC326Y, iron deficiency restricted to PASMCs was sufficient to cause pulmonary hypertension, which was associated with markedly increased endothelin-1 (110). These results highlight the importance of intracellular iron deficiency, rather than systemic iron deficiency, in the pathogenesis of PAH.
In contrast to pulmonary pressure, systemic blood pressure appears to be positively associated with iron markers. Two cross-sectional studies in Korea reported that serum ferritin was positively associated with the prevalence of hypertension (111,112). Moreover, in a large-scale longitudinal study of a Chinese population, hemoglobin and transferrin levels were positively correlated with blood pressure and the risk of incident hypertension (113). Hypertensive patients with iron overload showed sympathetic overactivation, whereas the parasympathetic component of cardiovascular autonomic function was unaffected (114). In experimental animals, dietary iron restriction attenuated cardiovascular hypertrophy, fibrosis, and inflammation in hypertensive Dahl salt-sensitive rats (115). Dietary iron restriction also prevented the development of hypertension and renal fibrosis in aldosterone/salt-induced hypertensive mice (116). These data suggest that dysregulation of iron metabolism may be an important independent risk factor for hypertension. However, detailed mechanistic information on the role of iron in systemic hypertension is lacking.
Role of Iron in Arrhythmogenesis
Iron overload in the heart can lead to a gradual deterioration in both cardiac mechanical function and electrical activity. Chronic iron overload has been demonstrated to induce prolonged PR intervals, heart block, and atrial fibrillation in mice (117). Abnormal electrocardiograms, including prolonged PR and QRS intervals, were also observed in isolated hearts of iron-treated gerbils (118). Long-term iron overload resulted in frequent arrhythmias in gerbils in vivo, including premature ventricular contractions and supraventricular/ventricular tachycardia (119). However, arrhythmias did not occur in gerbils and guinea pigs receiving iron overload treatment despite significantly increased cardiac and hepatic iron concentrations (120,121). The molecular mechanisms of iron-induced arrhythmias remain elusive. Many in vitro studies in isolated cardiomyocytes have verified that free iron can directly interact and interfere with a variety of cardiomyocyte ion channels, including the L-type calcium channel, the ryanodine-sensitive calcium channel, the voltage-gated sodium channel, and the delayed rectifier potassium channel (13,122). Furthermore, excessive ROS production induced by iron overload could trigger opening of the mitochondrial inner membrane anion channel, resulting in mitochondrial depolarization through cytoplasmic anion efflux, which may be one of the causes of arrhythmias (123,124).
In clinical studies, the incidence of arrhythmias associated with iron overload has been well-described in β-thalassemia and hereditary hemochromatosis (125). Moreover, patients with severe thalassemia and hemochromatosis may develop HF simultaneously. Iron toxicity may contribute to cardiac structural remodeling, which disturbs cardiac electrophysiological conduction. Thus, the occurrence of arrhythmias induced by iron overload is confounded by the presence of HF and may not reflect an isolated effect of iron overload. Despite limited information on arrhythmias occurring with iron overload before the development of HF, one study demonstrated that arrhythmias increased significantly with myocardial iron deposition in patients with β-thalassemia and preserved left ventricular systolic function (126), suggesting an independent arrhythmogenic effect of iron toxicity. In addition, cardiac arrhythmias have been reported to be ameliorated by chelation in patients with iron overload, which further supports the association between iron toxicity and cardiac arrhythmias.
CONCLUSION AND FUTURE DIRECTIONS
Iron is an indispensable micronutrient for basic biological processes. Dysregulation of iron homeostasis, whether inappropriate iron overload or deficiency, is harmful to a living organism. Although the understanding of the role of iron in the cardiovascular system has advanced considerably in recent years, some issues remain unclear. Traditional methods cannot accurately reflect iron distribution and metabolism, especially across different tissues. Application of novel instruments or methods to measure iron, such as T2 star (T2*) cardiac magnetic resonance imaging, is important for identifying the pathophysiological roles of iron. Although iron repletion has been employed for the treatment of HF or PAH patients with iron deficiency, future studies should pay more attention to the clinical significance of iron status and clarify the exact association between iron homeostasis and CVD (Figure 3). Ferroptosis is closely associated with the pathogenesis of CVD, including cardiomyopathy, ASCVD, and myocardial I/R injury. However, the mechanisms of ferroptosis in the heart and vasculature remain elusive. The safety and efficacy of iron chelation for treating ferroptosis-related CVD require further verification.
Confocal Laser Endomicroscopy and Optical Coherence Tomography for the Diagnosis of Prostate Cancer: A Needle-Based, In Vivo Feasibility Study Protocol (IDEAL Phase 2A)
Background: Focal therapy for prostate cancer has been proposed as an alternative to whole-gland therapies in selected men to diminish side effects in localized prostate cancer. As current imaging cannot offer complete characterization of prostate cancer, multicore systematic biopsies (transrectal or transperineal) are recommended. Optical imaging techniques such as confocal laser endomicroscopy and optical coherence tomography allow in vivo, high-resolution imaging. Moreover, they can provide real-time visualization and analysis of tissue and have the potential to offer additional diagnostic information. Objective: This study has 2 separate primary objectives. The first is to assess the technical feasibility and safety of in vivo focal imaging with confocal laser endomicroscopy and optical coherence tomography. The second is to identify and define characteristics of prostate cancer and normal prostate tissue in confocal laser endomicroscopy and optical coherence tomography imaging by comparing these images with the corresponding histopathology.
Introduction
Prostate cancer (PCa) is the leading noncutaneous cancer in men and the third cause of cancer-related death [1]. To date, patients with a clinical suspicion of PCa, based on elevated serum prostate-specific antigen (PSA) and/or suspicious digital rectal examination (DRE), are recommended to undergo transrectal ultrasound (TRUS; +/− multiparametric magnetic resonance imaging, mpMRI)-guided systematic biopsies [2]. This work-up for PCa diagnosis carries some important drawbacks. Due to the heterogeneous nature of PCa, this procedure has a known risk of missing PCa lesions or underestimating PCa aggressiveness, besides overdiagnosis of insignificant lesions [3,4]. In the last decade, the diagnostic pathway for PCa has, therefore, moved more and more toward imaging-based targeted biopsies instead of random systematic biopsies. Reliable prostate imaging is key for the reduction of unnecessary biopsies and insignificant PCa detection, increasing detection of significant PCa, reducing the number of cores, and facilitating monitoring during active surveillance. Moreover, reliable imaging would play a pivotal role in treatment planning and monitoring of focal treatment for low- to intermediate-risk localized PCa [5][6][7][8]. Especially, mpMRI of the prostate has evolved as an increasingly appealing tool in the PCa diagnostic armamentarium and is recommended in men with suspicion of PCa following a negative initial biopsy; currently, it is even proposed to select patients for biopsies [2,5]. For focal therapy, in which the aim is to target treatment of significant disease with minimal toxicity, accurate disease identification, localization, demarcation, and grading of a lesion are essential. Focal therapy selection with mpMRI-targeted biopsies may be an option in experienced hands, but to date, there is a substantial proportion of false positives in lesions scored 3/5 or 4/5 with the prostate imaging reporting and data system (PI-RADS) [9]. Moreover, the assessment of mpMRI-negative
areas or the prostate as a whole using a transperineal prostate mapping biopsy with a template-guided approach is recommended [2,10,11]. Transperineal template mapping biopsies (TTMB) are able to sample the prostate every 5 mm, and coordinates are correlated to the tumor location. Limitations of this procedure are the large number of cores needed per prostate, the rate of urinary retention, and the operating room time with its accompanying hospital admission [12,13]. Moreover, pathologists face a substantial increase in workload with a high number of biopsies, which often turn out to be benign.
Optical imaging technologies offer real-time imaging with excellent spatial and temporal resolution and are easily integrated into the operating room. In conjunction with mpMRI/TRUS-fusion image targeted biopsy, these real-time technologies in a needle-based form could provide valuable information on tissue characteristics. Adding real-time, in vivo diagnostic information on prostate tissue structure and architecture to already known information could improve PCa disease characterization. Optical imaging has the potential to make the diagnostic procedure less invasive, speed up the pathway, and reduce the currently existing workload of histopathological analysis.
Two optical imaging techniques currently used for needle-based optical biopsies are confocal laser endomicroscopy (CLE) and optical coherence tomography (OCT) [14][15][16][17]. CLE and OCT differ in background technology and image geometry and, therefore, show different images of the scanned tissue (see Figures 1 and 2). CLE uses low-power laser bundles in a fiber optic probe, which can be inserted into the lumen of a needle to obtain real-time microscopic images of the tissue under investigation. Backscattered light from one specific tissue plane is focused through a pinhole, whereas backscattered light from surrounding tissue is rejected. This leads to high-resolution imaging of one specific plane of tissue in focus. The fluorescent light originates from the fluorescent dye nested in the extracellular matrix after topical or intravenous application. The most commonly used fluorescent dye is fluorescein. CLE is under investigation for gastrointestinal, urothelial, and pulmonary diseases, whereas for PCa, so far, only one study on CLE has been reported [14,[18][19][20]. Lopez et al performed CLE during robot-assisted laparoscopic prostatectomy (RALP) in 21 patients to investigate the ability of CLE to assess surgical margins and nerve tissue, with promising CLE-based characteristics of prostatic and periprostatic tissue [20]. In addition, no adverse events related to the CLE procedure were reported. However, these authors did not assess the ability to differentiate malignant from benign prostate cells.
OCT is the optical equivalent of ultrasound imaging, based on the backscattering of near-infrared light. Flexible OCT probes, which can be inserted into a needle lumen, enable side-looking real-time imaging with an axial resolution up to 10 μm and an effective penetration depth of around 2 mm [21]. Cross-sectional images are generated using an automated pullback system while the probe rotates a small laser light bundle over the tissue. Within urology, OCT has been applied in evaluating malignancy of bladder, upper urinary tract, kidney, testis, and prostate lesions [22]. In PCa, OCT has been applied for intraoperative identification of neurovascular bundles, surgical margins, and extracapsular extension with the goal of preserving patients' functional and oncological outcomes [23][24][25][26]. A limited number of studies have looked at OCT's diagnostic role in differentiating benign and malignant microscopic tissue of the prostate gland. Muller et al demonstrated, with the use of a histopathologic validation tool, that ex vivo needle-based OCT measurements of radical prostatectomy specimens could differentiate between cancer and healthy prostate tissue [27][28][29]. Quantitative analysis of the OCT signal showed that the attenuation coefficient was significantly higher in malignant than in benign tissue, with an area under the curve ranging from 0.64 to 0.89 depending on the histopathological analysis used [29].
The development of CLE and OCT toward real-time optical biopsies of prostate carcinoma may lead to advances in diagnosis and (focal therapy) treatment. Following phase 2a of the IDEAL criteria [30], we have separated the study protocol into 2 sequential aims with different procedures.
Study Objectives
The objective of procedure 1 is to assess the technical feasibility and safety of in vivo, needle-based, focal imaging of prostate tissue with CLE and OCT.
The primary objective of procedure 2 is to identify and define characteristics of PCa on CLE and OCT images. The secondary objectives are to correlate CLE and OCT images with histopathology, to develop an in vivo CLE and OCT image atlas of the prostate, and to assess procedure-related adverse events. The atlas will differentiate prostate tissues including benign glands, cystic atrophy, regular atrophy, stroma, inflammation, and fat, as well as different grades of malignant tissue using the Gleason score. Procedure-related adverse events will be evaluated using the Common Terminology Criteria for Adverse Events.
Study Design
This study is an investigator-initiated, multicenter, prospective in vivo feasibility study using needle-based imaging with CLE and OCT. Approval of the local institutional review board (IRB) was obtained for the study protocol under registry number NL57326.018.17 on July 7, 2017, and the study was registered in the clinicaltrials.gov database (NCT03253458) on August 18, 2017. Any amendments to the trial protocol will be submitted for review by the IRB. Trial registrations will be updated, and participants will be informed about the risks and benefits of participation both verbally by one of the investigators and in writing in the form of an extensive patient information brochure. Participants will only be included after written informed consent has been obtained. Patients can leave the study at any time for any reason if they wish to do so without any consequences. The investigator can decide to withdraw a subject from the study for urgent (medical) reasons. Patient data will be anonymized and stored in a secure database.
The study design consists of 2 sequential procedures. CLE images are recorded with the AG-Flex 19 fiber optic mini probe-based system (Cellvizio System, Mauna Kea Technologies, Paris, France) with an outer diameter of 0.9 mm, a field of view of 325 µm, and a resolution of 3.5 µm. OCT images are recorded with a small rotating C7 Dragonfly Imaging Probe using the Light Lab OCT system (St. Jude Medical, Saint Paul, Minnesota, USA). Both devices and probes are illustrated in Figure 3.
For CLE imaging, a fluorescent contrast agent is needed to stain the extracellular matrix. Fluorescein (fluorescein sodium, Fresenius Kabi, Zeist, the Netherlands), a nontoxic and commonly used fluorescent dye, will be administered intravenously through an intravenous cannula [31]. Two boluses of 2.5 mL of 10% sodium fluorescein will be administered, one bolus per CLE measurement. The probes are introduced transperineally through a 17-gauge needle under ultrasound guidance. CLE images are recorded at a scan rate of 12 frames per second using a push-and-scan technique after placing the probe in direct contact with prostate tissue.
The OCT probe will be placed with a trocar needle in the prostate tissue under ultrasound guidance. After removal of the trocar needle, the inner part of the probe (the laser lens system) is automatically pulled back while rotating, which creates a 3D image of the tissue.
In procedure 1, patients scheduled for TTMB will undergo in vivo CLE or OCT imaging, before colocalized biopsy for standard histopathological assessment.
If procedure 1 shows that in vivo CLE and OCT imaging are technically feasible and safe to perform, procedure 2 will be initiated. In procedure 2, patients scheduled for RALP will undergo in vivo CLE or OCT imaging during surgery, before prostate removal. In general, 2 recordings of 90 s each will be made for the per-patient chosen modality. Recorded CLE and OCT imaging will be analyzed, at a later stage, by blinded independent observers and compared with the corresponding histopathologic evaluation of the prostatectomy specimen. Histopathological analysis is performed according to the standard clinical protocol by a uropathologist blinded to the OCT and CLE imaging results. In addition to the standard examination procedure, the uropathologist will perform a detailed reporting method; prostate tissue will be analyzed and annotated for various structures (benign glands, cystoid atrophy, regular atrophy, stroma, malignant tissue using the Gleason score, inflammation, and fat) on the whole-mount histology slices or biopsy specimens. Histopathology is correlated with CLE and OCT data in a 3D computer environment. Adverse events are registered with a follow-up of 30 days.
Population
Patients (aged ≥18 years) who are indicated for a TTMB will be included for study procedure 1. For procedure 1, all patients will be recruited in the AMC Hospital (Amsterdam, the Netherlands), and all study procedures will be performed in this institution. A total of 14 patients will be included in this study (Figure 4). Four patients will be included for procedure 1: 2 for optical imaging with CLE and 2 with OCT. For procedure 2, 10 patients scheduled for RALP will be included; 5 of these patients will be imaged by CLE and 5 by OCT. For procedure 2, patients will be recruited in the AMC Hospital and VU Medical Center (Amsterdam, the Netherlands), and study procedures will be performed in both institutions. To increase the focal targeting of a PCa lesion, patients included in procedure 2 should have prostate mpMRI data available before the RALP with a visible (>5 mm) and suspect (PI-RADS v2: ≥3) region of interest. The other inclusion and exclusion criteria are listed in Textboxes 1 and 2, respectively. These sample sizes are based on prior publications and comply with the IDEAL 2a recommendation of a low number of selected patients [29,30,32].
Procedure 1: Transperineal Template Mapping Biopsy (4 Patients, 2 Confocal Laser Endomicroscopy Imaging and 2 Optical Coherence Tomography Imaging)
The standard TTMB protocol is performed using local, spinal, or general anesthesia, and patients are positioned in the lithotomy position. Hereafter, the biopsy stepper is placed using a stabilizer and table mount. A clinical ultrasound scanner (HI VISION Preirus, Hitachi Medical Systems, Japan) with a biplanar probe (EUP-U533, Hitachi Medical Systems, Japan) and an endocavity balloon is used. After transrectal probe placement, dimensions and prostate volume are measured, including checking for pubic arch interference. The perineum is cleaned for surgery and draped. A sterile, disposable (brachy) template grid, consisting of rows and columns with holes spaced 5 mm apart, is used to guide the imaging probe/biopsy needle. The optical imaging acquisition is then started. As the CLE measurement technique differs from the OCT measurement technique, both techniques are described separately below. The measurement trajectories will be mapped with the ultrasound console. A corresponding biopsy will be taken following the same trajectory as the focal imaging technique (CLE or OCT). When the CLE or OCT measurements are completed, the standard biopsy cores will be taken, and the procedure is finished. A flowchart of procedure 1 is displayed in Figure 5.

Textbox 1. Inclusion criteria.

To be eligible to participate in this study, a subject must meet all of the following criteria:

• Multiparametric magnetic resonance imaging data are available (only for procedure 2)
• Visible (≥5 mm diameter) and suspect (prostate imaging reporting and data system, PI-RADS v2: ≥3) region of interest (only for procedure 2)
Textbox 2. Exclusion criteria.
A potential subject who meets any of the following criteria will be excluded from participation in this study:
Confocal Laser Endomicroscopy Measurement Technique
For the CLE measurement, 0.5 mL of fluorescein (2.5% fluorescein diluted in saline) is intravenously injected for contrast. The CLE probe is inserted using a 17-gauge trocar needle. When the CLE probe is in contact with prostate tissue, the measurement begins; while recording, the probe and needle are pushed from apex to base. During this push-and-scan technique, the probe stays in contact with the tissue.
Optical Coherence Tomography Measurement Technique
The OCT probe is inserted through a 17-gauge trocar needle. The needle is placed at the end of the measurement trajectory. Then, the trocar needle is pulled back so that the probe is in contact with the surrounding tissue. When the probe is in contact, an OCT measurement will be made. The measurement is performed from base to apex.
Procedure 2: Robot-Assisted Laparoscopic Prostatectomy (10 Patients, 5 Confocal Laser Endomicroscopy Imaging and 5 Optical Coherence Tomography Imaging)
In the operating theater, before the RALP, the CLE or OCT measurements will be obtained in the same fashion as in procedure 1. Dimensions of the prostate will be measured on the ultrasound console. Following the regions marked on the mpMRI, the CLE or OCT measurements will be made using the technique described earlier. After each measurement, a plastic cannula will be left in the specific trajectory as a localization marker. This marker shows the measurement location necessary for analysis once the prostate has been removed. After cannula placement, the TRUS probe and stepper will be removed, and the standard RALP can start. The plastic cannulas will remain in place during the removal of the prostate. Figure 5 shows the flowchart of procedure 2.
Multiparametric Magnetic Resonance Imaging
MpMRI is a combination of T2-weighted MR imaging, diffusion-weighted MR imaging, and dynamic contrast-enhanced MR imaging. MpMRI of the prostate enables detection of prostate tumors with reasonable sensitivity and specificity values [33]. MpMRI will be evaluated by a uroradiologist for evidence of PCa localization according to the PI-RADS v2 criteria [34].
Data Analysis
Demographic and disease-specific characteristics of the study populations (eg, age, PSA, DRE, biopsy localization, tumor location on imaging and pathology, tumor size, and Gleason score) will be collected. First, CLE and OCT data will be evaluated in a qualitative way. The data will be compared with histopathology, and characteristics of the following tissues in the prostate will be described: benign glands, cystoid atrophy, regular atrophy, stroma, malignant tissue using the Gleason score, inflammation, and fat. The data will be obtained and analyzed by nonblinded investigators, and subsequently, investigators blinded to the results will interpret all individual measurements for diagnostic evaluation. An independent uropathologist, blinded to the CLE and OCT results, will perform the histopathology. Second, OCT data will be analyzed quantitatively. We will determine and report the attenuation coefficient, the decay of light in tissue, per tissue type in the prostate [15,35].
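To illustrate the quantitative step, the attenuation coefficient is commonly estimated by fitting an exponential decay to an averaged OCT depth profile. The sketch below assumes a simple single-scattering Beer-Lambert model and uses illustrative variable names; it is a rough sketch of the general idea, not the study's actual analysis pipeline.

```python
import numpy as np

def estimate_attenuation(intensity, depth_mm):
    """Estimate the OCT attenuation coefficient (mm^-1) from an
    averaged A-scan by a linear fit to the log-transformed signal.

    Assumes a single-scattering Beer-Lambert model,
        I(z) = I0 * exp(-2 * mu * z),
    so log(I) is linear in depth z with slope -2*mu
    (the factor 2 accounts for the round trip of the light).
    """
    log_i = np.log(np.asarray(intensity, dtype=float))
    slope, _intercept = np.polyfit(np.asarray(depth_mm, dtype=float), log_i, 1)
    return -slope / 2.0

# Synthetic check: a noiseless profile generated with mu = 4 mm^-1
# over the ~2 mm effective penetration depth is recovered by the fit.
z = np.linspace(0.1, 1.5, 50)           # depth in mm
signal = 100.0 * np.exp(-2 * 4.0 * z)   # synthetic A-scan
mu = estimate_attenuation(signal, z)
print(round(mu, 2))  # → 4.0
```

In practice the fit would be restricted to the depth range where the signal is above the noise floor, and noisy profiles would be averaged over many A-scans first.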
Safety
The investigators will monitor patient safety. They can withdraw a patient from the study for medical reasons. In accordance with section 10, subsection 4, of the "Wet Medisch-Wetenschappelijk Onderzoek met Mensen" (the Dutch medical research involving human subjects act), the investigators will suspend the study if there is sufficient ground that continuation of the study will jeopardize patients' health or safety. The investigators will notify the accredited IRB if this is the case.
In case of an adverse event or serious adverse event, the responsible authorities will be informed.
Benefits and Risks
As the patients included in this study are already scheduled for radical prostatectomy or TTMB, no direct benefit exists. The results of this study may be relevant for future patients regarding PCa diagnosis, grading, and staging. CLE and OCT are promising imaging techniques that, in conjunction with the TRUS/mpMRI fusion-guided biopsy procedure, can provide real-time, high-resolution 3D microscopic imaging and tissue characteristics of PCa.
Previous in vivo studies using CLE or OCT did not report any adverse events, and these modalities are performed under needle guidance with the same or smaller diameter as standard biopsy needles. In case of a RALP, 2 plastic cannulas will be placed using an intravenous needle. The plastic cannulas will stay in the prostate during surgery and could, therefore, harbor an increased risk of infection, a positive surgical margin, or other (unknown) complications during surgery. Standard antibiotic prophylaxis (ciprofloxacin) will be administered 2 h before surgery to reduce the risk of infection. The proposed needle-based imaging techniques also imply a puncture into the prostate and, therefore, carry a risk of complications such as bleeding. However, bleeding is believed to be limited as only 2 needles will be placed; complications will be documented and critically analyzed in this safety and feasibility study.
Fluorescein is a commonly used fluorescent dye that will be administered intravenously through an intravenous cannula. Previous reports have proven that it is safe and easy to administer [36][37][38]. Possible side effects include nausea, vomiting, abnormal taste sensations, thrombocytopenia, and allergic reactions. Patients with a known allergic reaction to fluorescein are excluded from participation in this study.
Standard care and pathological evaluation as stated by the internal protocols will not be affected in this study. In conclusion, we believe that the burden and risk associated with participation in this study are limited.
Results
Presently, recruitment of patients is ongoing in the study. Results and outcomes are expected in 2019. Summarized raw data will be made available through publication in an international peer-reviewed medical journal. This first part contains multiple similarities with the protocol of Wagstaff et al using needle-based OCT in the kidney [16].
Using ultrasound guidance, a trocar needle was placed to guide the OCT needle and subsequently the standard biopsy needle, both sampling the same location. Instead of a trocar needle, this protocol uses a transperineal grid as a guidance tool. This transperineal grid will allow targeting of the suspected lesion based on cognitive fusion with prostate mpMRI, which has been shown to be as good as automatic fusion [39]. The expected burden for the patients is thought to be minimal, as only 2 extra needles are used; by targeted placement of these 2 needles, the probability of sampling the lesion is as high as possible.
The second part of the protocol enables one-to-one comparison of in vivo data with histology for both CLE and OCT. Our approach is similar to that of Muller et al [28], who compared ex vivo needle-based OCT measurements of radical prostatectomy specimens with histology by cutting through the measurement trajectories. In our measurements, data will be obtained from in vivo tissue, in which red blood cells absorb and scatter light differently from regular cells. Due to the perfusion of prostate tissue, the acquired data will most probably differ from the ex vivo measurements [29,40]. Nonetheless, this study will enable us to understand in vivo OCT and CLE images and the challenges in colocalization of acquired in vivo data with ex vivo histology.
In the described study, safety and feasibility of both imaging techniques are assessed in patients under operating theater circumstances with general or spinal anesthesia. Although safety and feasibility could differ in patients under local anesthesia in an outpatient setting, we expect that both proposed needle-based imaging techniques can be easily translated to an outpatient setting, as both use diameters equal to or smaller than standard biopsy guns and both are designed to be integrated into the outpatient workflow.
Several studies have provided in vivo CLE images, but they do not show a comparison with histology or an in-depth interpretation of the prostate images [38]. On the basis of histopathology, benign prostate tissue is expected to differ in extracellular structure from malignant prostate tissue. The fluorescein, administered by intravenous injection, gives contrast to the extracellular matrix on the CLE images, which could potentially allow discrimination between benign and malignant prostate tissue. The described protocol will compare histology and CLE to provide knowledge of the visual characteristics on CLE images.
Locating and recording the position of the in vivo measurements is difficult and will be less precise than in an ex vivo measuring environment. The in vivo measurement locations will be mapped by ultrasound, and the measurement trajectory will be marked for ex vivo histology comparison. Despite this precise measurement mapping, the size and shape of the prostate will change after removal and formaldehyde fixation, which could cause correlation errors [41]. These changes in dimensions will be recorded by measuring the size of the prostate in vivo by ultrasound and again when fixated, in order to correct for prostate shrinkage. During the comparison of in vivo measurements and ex vivo histopathology, the length of the measurement trajectory will be scaled according to the shrinkage of the prostate. Shrinkage of tissue over the trajectory is not uniform, but in our opinion this is the best available option to correct for it.
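The scaling step described above amounts to a simple linear rescaling of positions along the needle trajectory. The sketch below illustrates it with hypothetical lengths (the function name and the example numbers are ours, not from the protocol); it assumes uniform shrinkage along the trajectory, which the authors note is only an approximation.

```python
def scale_trajectory(positions_mm, in_vivo_length_mm, fixated_length_mm):
    """Linearly rescale in vivo measurement positions (mm along the needle
    trajectory) by the ratio of the fixated prostate length to the in vivo
    ultrasound length, to match positions on the fixated specimen."""
    factor = fixated_length_mm / in_vivo_length_mm
    return [p * factor for p in positions_mm]

# Hypothetical example: a prostate measuring 45 mm in vivo that shrinks
# to 38 mm after formaldehyde fixation.
scaled = scale_trajectory([0.0, 10.0, 20.0, 30.0], 45.0, 38.0)
print([round(p, 2) for p in scaled])  # [0.0, 8.44, 16.89, 25.33]
```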
This study is an essential first step toward the clinical evaluation of optical imaging in PCa diagnosis. In the clinic, a tool for optical histology could guide a biopsy needle with instant feedback on the region of interest for reliable diagnosis and treatment of PCa.
Procedure 1 aims to evaluate the technical feasibility of needle-based in vivo imaging with CLE and OCT in the prostate. Procedure 2 aims to describe characteristics to be used for PCa detection, allowing us to create an atlas of CLE and OCT characteristics of normal and malignant prostate tissue based on a one-to-one comparison with histology.
Figure 1. Two examples of confocal laser endomicroscopy (CLE) images with the Cellvizio AQ-Flex 19 probe of ex vivo prostate tissue soaked in fluorescein solution for 2 min.
Figure 2. One B-scan of fixated ex vivo prostate tissue visualized with optical coherence tomography (OCT) using the C7-XR Imaging System interfaced to a C7 Dragonfly Imaging Probe (St. Jude Medical, St. Paul, Minnesota, USA).
This protocol describes the first in vivo study of needle-based optical biopsies using CLE and OCT in the prostate. Both techniques may provide real-time pathological information by showing cellular characteristics on CLE images and microarchitecture on OCT images. The study comprises 2 parts: feasibility of the technology and comparison with histology.
|
v3-fos-license
|
2020-04-30T09:07:39.289Z
|
2020-04-28T00:00:00.000
|
216646350
|
{
"extfieldsofstudy": [
"Medicine",
"Geography"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0232281&type=printable",
"pdf_hash": "2c106ba6dde9f3e1588dc42c9a7093f4a51e7851",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2320",
"s2fieldsofstudy": [
"Medicine",
"Political Science"
],
"sha1": "907b5b6bab0fdeb83d4d2d511d95421fab262856",
"year": 2020
}
|
pes2o/s2orc
|
Implications of multimorbidity on healthcare utilisation and work productivity by socioeconomic groups: Cross-sectional analyses of Australia and Japan
Background Multimorbidity, the presence of 2 or more non-communicable diseases (NCDs), is a major contributor to health inequalities in Australia and Japan. We use nationally representative data to examine (i) the relationships of multimorbidity with healthcare utilisation and productivity loss and (ii) whether these relationships differed by socioeconomic group. Methods Cross-sectional analyses using the Household, Income, and Labour Dynamics in Australia (HILDA) and the Japanese Study of Aging and Retirement (JSTAR) surveys. We examined 6,382 (HILDA) and 3,503 (JSTAR) adults aged ≥50 years. We applied multivariable linear, logistic, and negative binomial regression models. Results Prevalence of multimorbidity was 38.6% overall in Australia (46.0%, 36.1%, and 28.9% amongst those in the lowest, middle, and highest education groups, respectively) and 28.4% overall in Japan (33.9%, 24.6%, and 16.6%, respectively). In both Australia and Japan, more NCDs were associated with greater healthcare utilisation, a higher mean number of sick leave days amongst the employed, and lower odds of being employed despite being in the labour force. The association between multimorbidity and lower retirement age was found in Australia only. Conclusion Having more NCDs poses a significant economic burden on the health system and wider society in Australia and Japan. Targeted policies are critical to improve financial protection, especially for lower income groups, who are more likely to have multiple NCDs. These individuals incur both high direct and indirect costs, which lead to a greater risk of impoverishment.
Introduction Non-communicable diseases (NCDs) are the leading cause of premature morbidity and mortality [1][2][3][4], and are a major contributor to health inequalities in many countries [5,6]. In Australia, about a third of the population has multimorbidity, with 17% of the population suffering from complex multimorbidity, whereby 3 or more body systems are each affected by at least 1 NCD [7]. In Japan, a study of adults aged 75 years and above in Tokyo found that the prevalence of those with 3 or more NCDs was 65%, and multimorbidity was associated with an increased number of outpatient visits and hospital admissions [8]. As Australia and Japan continue to face rapidly ageing populations and increased exposure to risk factors [9,10], multimorbidity will likely worsen.
Evidence from high-income countries has established that apart from negative healthrelated outcomes, multimorbidity imposes significant economic costs to individuals [11,12]. Studies have shown that patients with more NCDs have higher healthcare utilisation, such as having more outpatient visits, hospitalisations, medical equipment, and medicines [13]. These economic costs from higher treatment burden may not only include substantial medical expenditures, but also encompass loss of potential income due to involuntary absence from work [14][15][16]. There are only a few studies on the associations between multimorbidity and productivity in high-income countries, including Australia and Japan, and no study has used nationally representative datasets to the best of our knowledge.
The current literature has highly heterogeneous findings on work productivity, and the existing studies differ in populations, study designs, ranges of NCDs, and outcome measures [17][18][19][20][21]. Existing studies have also primarily examined productivity loss and decreases in work performance in working adults [14,22]. Importantly, studies have primarily addressed the impact of self-perceived ill health or single NCDs like hypertension, diabetes, or mental illness [23,24], rather than multimorbidity in terms of increasing numbers of NCDs. There is a need to fill the knowledge gap on how an increasing number of NCDs is associated with work productivity loss in nationally representative populations. In addition, the burden of being unemployed or having to retire early is likely compounded by the treatment burden of multimorbidity, such as higher healthcare utilisation. Moreover, the majority of previous studies have not investigated whether the association between number of NCDs and healthcare utilisation and work productivity loss holds across all levels of income and education [14,15,22,25,26].
This study utilises nationally representative data at the population level to examine (i) the relationships of multimorbidity with healthcare utilisation and productivity and (ii) whether these associations differed by socioeconomic group in both countries.
Sample and data
This study conducted secondary data analyses using cross-sectional data from the Household, Income, and Labour Dynamics in Australia survey (HILDA) Wave 17 (2017-2018) and the Japanese Study of Aging and Retirement survey (JSTAR) (2013).
The HILDA survey is a household-based panel study on nationally representative Australian residents aged 15 years and above. The HILDA survey is an annual survey that provides longitudinal data on personal well-being, economics, labour market dynamics and family life [27]. It was conducted to provide insights about Australia to policymakers in the areas of health, education and social services [27]. The JSTAR survey is a longitudinal study on nationally representative samples of subjects aged 50 years and above from Japanese residents of 10 cities across Japan [28]. The JSTAR survey was conducted by the Research Institute of Economy, Trade and Industry (RIETI), Hitotsubashi University, and the University of Tokyo, and covers in depth information regarding living aspects, including the economic, social, and health conditions of older adults [28].
The majority of current studies focus on the working population (i.e. those employed at the time of the study) and their decrease in work performance [14,22], with a dearth of studies on how having more NCDs affects involuntary exit from the labour force through unemployment and early retirement. There are also currently few to no studies that use nationally representative datasets to examine multiple chronic diseases and work productivity loss. The HILDA survey Wave 17 and the JSTAR survey Wave 4 are among the few high-quality surveys of nationally representative populations in high-income countries (HICs) that included questions on the different aspects of work productivity (sickness absence, being unemployed while in the labour market, and early retirement), as well as healthcare utilisation and multiple NCDs [29][30][31][32].
For the HILDA survey, households were selected using a multi-staged approach. A stratified random sample of 488 Census Collection Districts, each containing 200 to 250 households, was selected from across Australia [29,30]. A random sample of 22 to 34 dwellings was then selected based on the expected response and occupancy rates of the areas, and within each dwelling, up to 3 households were randomly selected [29,30]. Smaller states and territories were not over-sampled, in order to produce nationwide representative population estimates [29,30]. For data collection, there were approximately 175 interviewers in Wave 17, with 145 conducting face-to-face interviews and 30 conducting telephone interviews [29,30].
For the JSTAR survey, subjects were selected using multi-staged sampling, whereby predefined sites within each municipality were randomly selected, followed by randomly selecting individuals within each site [31,32]. The samples were also weighted based on socio-demographics like age, city of residence, and employment status. Collaborative efforts with governmental officials in each municipality allowed better response rates [31,32]. At the time of this study, the JSTAR survey had conducted 4 survey waves (2007,2009,2011,2013), with the most recent wave in 2013 including data collection on both healthcare utilisation and labour force participation [31,32].
Both datasets were nationally representative as HILDA was weighted for the multi-stage sampling technique, and the sample from JSTAR was weighted based on socio-demographics like age, city of residence, and employment status.
The response rate for HILDA and JSTAR was 96.4% and 82.9%, respectively. For HILDA, our study included respondents aged 50 years and above (n = 7,285 of 23,415) and subsequently excluded respondents with missing data on covariates and NCDs (remaining n = 6,382 of 7,285; 12.4% dropped). JSTAR had 4,091 respondents aged 50 years and above, and we likewise excluded respondents with missing data on covariates and chronic conditions (remaining n = 3,503 of 4,091; 14.4% dropped) (S1 and S2 Figs). The missing data were checked and found to be missing at random, so the likelihood of selection bias is low; that is, the associations between exposures and outcomes were not expected to differ among those with missing data.
Ethics statement. This study obtained ethics approval from the National University of Singapore Institutional Review Board (NUS-IRB) with reference code S-19-178. In addition, all the data were fully anonymised before we accessed them, and participants provided written consent for their responses in the WHO SAGE survey to be used for research purposes.
Variables
Predicting variable. The predicting variable was the number of NCDs each subject self-reported. Subjects were defined as having multimorbidity if they had 2 or more NCDs.
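The multimorbidity definition above reduces to counting affirmative self-reports and applying a threshold of 2. A minimal sketch, with a hypothetical respondent record (the field names are illustrative, not the surveys' actual variable names):

```python
def count_ncds(responses):
    """Count affirmative self-reports across the survey's NCD questions."""
    return sum(1 for answered_yes in responses.values() if answered_yes)

def is_multimorbid(responses):
    """Multimorbidity as defined in the study: 2 or more self-reported NCDs."""
    return count_ncds(responses) >= 2

# Hypothetical respondent reporting 3 of the surveyed conditions.
subject = {"hypertension": True, "diabetes": True, "asthma": True, "cancer": False}
print(count_ncds(subject), is_multimorbid(subject))  # 3 True
```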
The HILDA survey had questions on 12 NCDs, which included arthritis/osteoporosis, asthma, cancer, chronic bronchitis/emphysema, type 1 diabetes, type 2 diabetes, depression, anxiety, other mental illness, heart disease, high blood pressure/hypertension, and any other serious circulatory condition. We defined respondents as having the NCD if they answered affirmatively to the following question: "Have you been told by a doctor or nurse that you have any of these conditions?".
The JSTAR survey had questions on 18 NCDs: heart disease, high blood pressure, hyperlipidemia, cerebral/cerebrovascular accident, diabetes, chronic lung disease, asthma, liver disease, ulcer/other gastrointestinal disorder, joint disorder, osteoporosis, eye disease, ear disorder, bladder disorder, Parkinson's disease, depression/emotional disorder, dementia, and cancer. We defined respondents as having an NCD if they answered affirmatively to any one of the following 3 questions: "Have you been newly diagnosed with or advised to seek medical advice for any of the listed illness since the time of the last interview?", "If you are receiving treatment for any other illness, is it the same illness as what you had at the time of the last interview?", or "Is the illness relapse of an illness that you had prior to the last interview?". These additional and relapsed illnesses still refer to the 18 NCDs in the survey.
Outcome variables. The 2 outcome variables were healthcare utilisation and work productivity loss. Healthcare utilisation referred to the mean number of outpatient visits in the past 12 months and the mean number of nights spent in hospital in the past 12 months. Productivity loss was assessed via three outcomes: mean retirement age for retired subjects, mean number of days of sick leave in the past 12 months for employed respondents, and the odds of being unemployed despite being in the labour force. We defined individuals in the labour force as subjects who are employed as well as those who are unemployed but actively looking for employment. Individuals who are not in the labour force are commonly stay-at-home parents, retired, studying, or voluntarily unemployed. Detailed survey questions are in S1 Table. All subjects were stratified by education level to study the effect of educational attainment on the associations of multimorbidity with healthcare utilisation and productivity loss. For HILDA, the education levels were: (i) lower education (year 11 and below); (ii) mid-level education (year 12, certificate 3 or 4, advanced diploma, diploma); and (iii) higher education (bachelor or honours, graduate diploma, graduate certificate, post-graduate). For JSTAR, the education levels were: (i) lower education (no education, primary school, middle school); (ii) mid-level education (high school, junior college); and (iii) higher education (university, technical college, graduate school).
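The labour-force definition above can be made explicit as a small classification rule. This is our own illustrative encoding (the surveys ask these as separate questions, not as two booleans):

```python
def labour_force_status(employed, actively_seeking_work):
    """Classify a respondent per the study's labour-force definition:
    the labour force comprises the employed plus the unemployed who are
    actively looking for work; everyone else is outside the labour force
    (e.g. stay-at-home parents, retired, studying, voluntarily unemployed)."""
    if employed:
        return "employed"
    if actively_seeking_work:
        return "unemployed, in labour force"
    return "not in labour force"

print(labour_force_status(False, True))  # unemployed, in labour force
```

The binary outcome "unemployed despite being in the labour force" then compares the second category against the first, with the third category excluded.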
Statistical analysis. The analyses were conducted on HILDA and JSTAR separately; this was not a comparative study. We summarised sample characteristics for each dataset and presented the prevalence of individual NCDs and the prevalence of multimorbidity, both unstratified and stratified by education level and age. Multivariable negative binomial, linear, and logistic regression models were used to examine associations between multimorbidity and each outcome (S2 Table). Specifically, the multivariable negative binomial regression model was fitted for the outcomes of mean number of outpatient visits, mean number of nights in a hospital, and mean number of sick leave days, given the skewed nature of count data. The multivariable linear regression model was used to examine the continuous outcome of mean retirement age. Finally, a multivariable logistic regression model was applied to examine unemployment despite being in the labour force (binary outcome).
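For the binary unemployment outcome, the quantity the logistic model estimates is an (adjusted) odds ratio. The unadjusted building block is the odds ratio from a 2×2 table, sketched below with a Wald confidence interval on the log scale. The counts are hypothetical and not from HILDA or JSTAR; the paper's actual estimates come from multivariable models fitted in Stata, not from this formula.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio for a 2x2 table (a/b = exposed with/without outcome,
    c/d = unexposed with/without outcome) with a 95% Wald CI computed
    on the log-odds scale."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical: 40/200 multimorbid labour-force members unemployed
# versus 30/300 of those without multimorbidity.
or_, lo, hi = odds_ratio_ci(40, 160, 30, 270)
print(round(or_, 2))  # 2.25
```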
We adjusted for covariates listed above. These sociodemographic covariates were considered important potential confounders for the relationships between number of NCDs with healthcare utilisation and work productivity. All the regression models were tested for collinearity.
To assess the impact of socioeconomic status on the associations of multimorbidity with healthcare utilisation and productivity loss, subjects were stratified by education level (both HILDA and JSTAR) and income quintiles (HILDA only, as JSTAR did not have data on household income), and the same analyses for healthcare utilisation and productivity loss (i.e. multivariable negative binomial, linear, and logistic regression models) were conducted. Income in the HILDA survey referred to actual earnings and not assets.
We performed the analyses using Stata 15 (Stata Corp.) and at 5% level of statistical significance.
Sample characteristics
Prevalence of multimorbidity was 38.6% overall in Australia (46.0%, 36.1%, and 28.9% amongst those in the lowest, middle, and highest education groups, respectively) and 28.4% overall in Japan (33.9%, 24.6%, and 16.6%, respectively). Table 1 shows the sample characteristics of subjects in HILDA and JSTAR separately. S3 and S4 Figs show the prevalence of individual NCDs. Figs 1 and 2 show the prevalence of multimorbidity (with error bars of 95% CIs) stratified by education level in each country. In both Australia and Japan, the prevalence of subjects with no NCDs was greatest for those in the highest education group and lowest for those in the lowest education group. In contrast, the prevalence of subjects with 2 or more NCDs was greatest in the lowest education group and lowest in the highest education group.
Figs 3 and 4 show the prevalence of multimorbidity stratified by both education level and age in each country. In both Australia and Japan, the prevalence of multimorbidity increases with age, and in every age group it is highest for subjects in the lowest education group.
Healthcare utilisation
Australia. An increasing number of NCDs was associated with a higher mean number of outpatient visits and mean number of nights in a hospital (β coefficients >0) (Fig 5). There were no statistically significant differences among socioeconomic groups. However, there appears to be a positive trend between number of nights in a hospital and income level.
Japan. An increasing number of NCDs was associated with a higher mean number of outpatient visits and mean number of nights in a hospital (Fig 6). There were no differences among education levels. However, there appears to be a positive trend between number of nights in the hospital and educational level, and subjects in the highest education group had a statistically significantly higher mean number of nights in a hospital (β = 2.29, 95% CI = 1.26 to 3.33, P-value <0.001) compared to those in the middle education group (β = 0.60, 95% CI = 0.32 to 0.88, P-value <0.05).
Productivity loss
Australia. An increasing number of NCDs was associated with a lower mean retirement age, a greater mean number of sick leave days, and lower odds of being employed despite being in the labour force (Fig 7). There were no differences among socioeconomic groups. However, there appears to be a positive trend for the outcomes of mean retirement age and the odds of being unemployed, and subjects in the lowest household income quartile had a higher mean number of sick leave days (β = 1.1, 95% CI = 0.71 to 1.49, P-value <0.001).
Japan. An increasing number of NCDs was associated with a greater mean number of sick leave days and lower odds of being employed despite being in the labour force (Fig 8). There was no association between more NCDs and mean retirement age, and there were no differences among education levels. However, there appears to be a positive trend for the odds of being unemployed.
Principal findings
Prevalence of multimorbidity was 38.6% overall in Australia (46.0%, 36.1%, and 28.9% amongst those in the lowest, middle, and highest education groups, respectively) and 28.4% overall in Japan (33.9%, 24.6%, and 16.6%, respectively). A higher proportion of individuals in the lowest socioeconomic and education groups had multimorbidity.
In both Australia and Japan, having more NCDs was associated with a higher mean number of outpatient visits and number of nights in the hospital. In both countries, more NCDs were also associated with greater productivity loss, including a higher mean number of sick leave days amongst the employed and lower odds of being employed despite being in the labour force. In Australia only, having more NCDs was associated with a lower mean retirement age. Another finding is that the association between number of NCDs and both healthcare utilisation and work productivity loss holds across all levels of income and education. This suggests that adults with multiple NCDs from lower socioeconomic levels could face a substantial financial burden, since they have fewer resources to cope with the burden of healthcare utilisation and being unable to work.
Comparison with literature
Our study is the first national assessment of the relationship between having more NCDs and healthcare utilisation and work productivity loss among adults and the elderly in Australia and Japan. Our finding of a positive association between multimorbidity and healthcare utilisation is consistent with the limited local studies in Australia, Japan, and other high-income countries, as are our findings that increasing numbers of NCDs are associated with more outpatient visits, more hospital admissions, and longer hospitalisations [26,[33][34][35].
Our study findings on the adverse association between having more NCDs and productivity are consistent with the limited number of studies in the literature [14,15,36,37]. Furthermore, these studies only examined narrow aspects of productivity loss, such as poorer work performance in employed adults [22]. Consistent with our results on multimorbidity and early retirement in Australia, a local Australian study of civil servants found that older workers with chronic health conditions were less likely to work beyond 65 years of age [38]. The lack of an association between multimorbidity and early retirement in Japan may be attributed to culture and societal pressure on employers to provide employment until pension age, and to the stigma of early retirement and unemployment [39,40]. Another plausible explanation is that, unlike in Australia, subjects in Japan may delay retirement because they have an incentive scheme to work beyond retirement age and are not subject to asset and income limits on pension payouts [41][42][43]. Third, we found an association between number of NCDs and both healthcare utilisation and work productivity loss across all levels of income and education. This suggests that individuals with the least resources to cope with the burden of healthcare utilisation and work productivity loss may face more substantial financial hardship from having multiple NCDs. A study in South Korea showed that catastrophic health expenditure is associated with having an individual with a chronic disease in the household, being of lower socioeconomic status, and/or having an elderly person in the household [44].
A study that examined catastrophic health payments in 59 countries found that risk factors include the over-reliance of health systems on out-of-pocket payments, the low capacity of households to pay, and the general lack of health insurance and financial risk protection [45]. Catastrophic health expenditure is an important issue for healthcare systems, as it is associated with severely reduced quality of life and drives impoverishment [46][47][48][49][50].
Study limitations
Self-reported medical history may correlate poorly with actual medical status, likely more so in less educated, poor, and rural populations [51]. Outpatient visits were not specific to NCDs and might include visits for other, unrelated conditions [27,52]. Hospitalisations were studied as the number of nights in the hospital; the study did not examine the number of hospitalisation episodes or the severity and nature of each hospitalisation [27,52]. Productivity loss, in terms of individuals in the labour force being unemployed, may also be due to non-chronic conditions like acute and infectious illnesses or physical injury unrelated to NCDs [27,52]; however, it is much less likely for individuals in the labour force to face unemployment due to acute and infectious conditions. Another limitation is that work performance or on-the-job productivity was not examined, as this was not assessed by the surveys. Self-reported number of sick leave days may be subject to recall error. Actual retirement age may be younger than what respondents self-report (respondents may report a higher age) due to social desirability bias and underreporting of being unemployed or having involuntary early retirement [39,40].
A direct comparison between the results from HILDA and JSTAR was limited by differences between the surveys in the exact wording and number of questions for each outcome measure. However, our findings were still able to show how outcomes can be similar across high-income countries.
This study was based on a limited number of NCDs, so further work could examine more conditions, like the large-scale Scottish study with 40 NCDs [53]. The cross-sectional design does not allow causal interpretation, and further studies using prospective cohort designs are needed to examine how multimorbidity drives healthcare utilisation and productivity loss over an individual's life course [3].
Policy, clinical, and research implications
Our study presents evidence that a higher proportion of individuals with less education experience more NCDs. In addition, across all levels of income and education, there is an association between number of NCDs and both healthcare utilisation and work productivity loss. This suggests that those from lower socioeconomic levels could bear more financial burden, because they tend to have fewer resources to cope with greater healthcare utilisation and being forced to exit the labour force. Policymakers should consider health financing strategies, such as the removal of user charges or subsidies for the poorer population [11,54]. Flexible payment plans that allow instalments, subsidised premiums, and the removal of co-payments are measures that can reduce the catastrophic financial burden borne by the poor [48]. The substantial burden of increased healthcare utilisation from multimorbidity in Australia and Japan needs urgent attention. Health systems, from a planning, delivery, and evaluation perspective, need to shift from single-disease models to a paradigm that accounts for the complexity of multimorbidity [3,55].
Clinical guidelines for patients with multiple NCDs could be refined to consolidate and coordinate the management of multiple NCDs, and to adopt patient-centred approaches (e.g. reflecting patients' preferences in treatments and medications) that minimise the impact of multimorbidity on high healthcare utilisation [55][56][57].
Our findings that having more NCDs is associated with substantial productivity loss highlight that the implications and costs of multimorbidity go beyond the health system to individual and household finances and wider society [14,15,25,58]. Policies need specific aims of curbing involuntary early retirement, absence from work, and not being employed despite being in the labour force and actively seeking employment [14,18,25,58,59].
Employers and governments should be aware that having more NCDs affects workforce productivity and should implement prevention programmes to reduce the impact of chronic conditions on the workforce. For instance, employers could implement health programmes that promote healthier lifestyles, such as more balanced diets, increased physical activity, and workplace environmental modifications that reduce sedentary behaviour [60][61][62]. Policies are needed to motivate companies and relevant stakeholders to implement flexible work schedules for workers who need time off for treatment and therapies [63], in order to minimise unnecessary forced unemployment. Strategies to mitigate the adverse effect of multimorbidity on productivity should be considered an investment rather than a cost.
Clinicians should consider the impact on patients' employment (sickness absence or needing to leave employed work). Clinicians should also take into account the challenges of treatment, such as potential side effects from medications that affect cognitive function and physical agility, and time-consuming treatments that require substantial time away from work, like kidney dialysis or chemotherapy.
This is one of the first studies to use nationally representative data to examine work productivity loss, and there is still a major dearth in the literature on the impact of multimorbidity on productivity loss in nationally representative samples in both high- and middle-income countries. Hence, future chronic disease surveys need to include questions on the frequency and duration of sickness absence, not being employed due to NCDs, and early retirement. There should also be questions pertaining to work performance or on-the-job productivity. While this paper examined how having more NCDs was linked to unemployment and early retirement, further work could quantify the loss of income from being unemployed and retiring early for persons with different numbers and clusters of NCDs. Additionally, building on our study, further studies could investigate the financial impoverishment from healthcare utilisation and productivity loss faced by persons with multiple NCDs in lower socioeconomic groups. Sex differences, such as differences in retirement age or unemployment between males and females, would also be interesting to investigate in future work.
Existing studies on work productivity loss have focused primarily on the impact of self-perceived ill health or single NCDs like hypertension, diabetes, or mental illness [23,24]. Additionally, the majority of the existing literature focuses on the working population (i.e. employees), with a dearth of studies on how having more NCDs impacts involuntary exit from the labour force through unemployment and early retirement [21,64]. Hence, future studies can build on this paper by looking at the impact of NCD combinations, as well as nationally representative data from other countries. Future research could also examine additional aspects of healthcare utilisation, such as pharmaceuticals, laboratory testing, and medical equipment.
Conclusion
Having more NCDs poses a significant economic burden to the health system and wider society in Australia and Japan, and the impact appears to occur across all socioeconomic groups. Decisive action is critical for improving universal health coverage and improving financial protection, especially for lower income groups, who are more likely to have multiple NCDs. These individuals incur both high direct and indirect costs, which lead to a greater risk of impoverishment.
Supporting information S1
|
v3-fos-license
|
2016-10-14T01:18:46.145Z
|
2015-10-19T00:00:00.000
|
34127599
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://doi.org/10.1016/s1808-8694(15)30760-6",
"pdf_hash": "e5084b16e4e272cc99633397284bbc200ce2cc09",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2322",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "ce688d306228f611981f85b8acc1237f0067bc5b",
"year": 2015
}
|
pes2o/s2orc
|
Comparative study between school performance on first grade children and suppression of otoacoustic transient emission
Summary School learning can be hampered by defects in central auditory processing. Since those with auditory deficiency can be rehabilitated, it is fundamental that we identify them. The otoacoustic emissions test has low cost and is operationally easy. Study design: clinical and experimental. Aim: to study the relationship between school learning and transient otoacoustic emission suppression by contralateral stimuli. Material and Methods: 39 individuals, from 7 to 12 years of age, were evaluated: 19 (48.7%) with good school performance and 20 (51.3%) poor performers. Results: A failure of transient otoacoustic emission suppression by contralateral acoustic stimuli was more frequently found among children with poor school performance. We established a value of 1.6 dB SPL of emission reduction that characterized those children as belonging to the poor learning performance group: sensitivity 65%, specificity 72.2%, accuracy 68.4%, positive predictive value 72.2%. Conclusion: The contralateral emission suppression test of the right ear can be predictive of school difficulties in individuals from six to twelve years of age.
INTRODUCTION
The auditory function, and most specifically that of auditory communication, has been extensively studied. The hypotheses that losses in auditory perception may be associated with difficulty in learning the sound-symbol relationships which make up the very basis of phonetic rules, and that there is a relationship between acquiring reading and writing skills and the underlying speech and hearing skills, have been gaining increasing support. Some researchers have studied the relationships between problems in the temporal processing of auditory stimuli and losses of certain speech and hearing skills and of phonemic segmentation. First grade schools are true laboratories for cognition evaluation. A number of factors apparently interfere with the school performance of children; among them is the functional integrity of the auditory system. Identifying this deficiency can be relevant in the process of rehabilitating these children.
The peripheral component of hearing, although highly complex, is well understood in most of its aspects. Notwithstanding, the central physiology associated with auditory communication is still an open field, both for basic and for applied research. As to the efferent pathways, Kimura 1 noticed that when auditory stimuli are presented in a dichotic fashion, the ipsilateral pathways are suppressed by their contralateral counterparts. According to her, verbal auditory information that reaches the right ear would go to the left cerebral hemisphere, which is dominant for verbal language, by means of the contralateral auditory pathways, going through the commissure of the corpus callosum.
In 1978, Kemp 2 concluded that the sound generated by the physiological activity of the outer hair cells is taken through the middle ear to the external acoustic meatus, where its emission can be recorded. Since then, many papers have discussed the suppression of otoacoustic emissions in human beings by means of contralateral stimulation. [3][4][5][6][7] This phenomenon is due to stimulation of the efferent synapses of the outer hair cells 7 , which occurs through the olivocochlear bundle and depends on descending pathways originating in cortical and sub-cortical regions. Thus, emission suppression could be influenced by the most varied central pathological conditions.
The possibility of assessing otoacoustic emissions has helped the semiology of hearing peripheral organs because it is an objective, sensitive and specific method. Observing a reduction in the otoacoustic emission amplitudes evoked by the contralateral sound stimulus, it was considered that this phenomenon may be used to assess not only the acoustic nerve, but also the central efferent pathways of the auditory system.
Anatomical and physiological evidence indicates that the function of both ears is interdependent and coordinated by the efferent neural pathways, which connect one side of the auditory system to the other through the medial and lateral components of the olivocochlear system. The medial olivocochlear bundle is made up of approximately 80% crossed nerve fibers and 20% ipsilateral nerve fibers, and projects its nerve endings mainly to the contralateral cochlea, ending just below the outer hair cells.
The lateral olivocochlear bundle, which is made up of about 90% of ipsilateral nerve fibers and 10% of crossed fibers, projects its endings mainly to the ipsilateral inner hair cells, ending at the efferent radial auditory endings that leave these cells.
Many researchers have shown that the contralateral inhibition of emissions is a neural phenomenon, caused by the efferent system. [8][9][10] In their studies, they measured transient and distortion product otoacoustic emissions, with and without a narrow band contralateral stimulus to activate the efferent olivocochlear nervous system, and the results led their authors to consider the test a useful tool in the set of procedures for the diagnosis of retrocochlear disorders. [8][9][10] The current paper aims at checking the failure of otoacoustic emission inhibition in the right ear by a contralateral stimulus, which could serve as a screening tool when the physician suspects auditory processing dysfunction in children between six and twelve years of age who underperform at school.
MATERIALS AND METHODS
This study was submitted to and approved by the Ethics in Research Committee of our institution under protocol no. 390/04. A municipal first grade school in a town neighboring São Paulo was chosen for the study, and the guardians of the participating children signed an informed consent form. All regularly enrolled students who fit the methodology criteria were included. The children included in the study were divided between those with the best and those with the worst school performance in their classes.
The following inclusion criteria were observed: no family history of hereditary hearing deficiency, no family history of recurrent otitis, no use of ototoxic medication, no exposure to occupational noise, hearing thresholds of up to 25 dB HL at the frequencies of 250, 500, 1000, 2000, 3000, 4000, 6000, and 8000 Hz bilaterally, type A immittance curves, and bilateral presence of the contralateral stapedial acoustic reflex, with normal otoscopic examination.
Exclusion criteria were: psychological problems, uncorrected visual deficiency, hearing deficit, neurological dysfunction or low IQ.
Evaluation procedures
In the clinical history, we questioned them on the prior existence of otologic diseases, the use of ototoxic medication, tinnitus, other diseases and complaints, and current ear problems. The physical exam involved inspection of the face, ears, external auditory meatus and tympanic membranes. The audiologic exams were: tonal audiometry, speech understanding, speech recognition threshold, immittance testing and tympanometry, stapedial reflex thresholds at 500, 1000, 2000, and 4000 Hz, and reflex decay testing at 500 and 1000 Hz. In order to collect otoacoustic emissions, we used the Echoport ILO 288 system, from Otodynamics (England), sold in Brazil by Siemens. We used linear and non-linear clicks.
In order to capture transient otoacoustic emissions, we used non-linear sound clicks, three of them in one polarity and one of inverse polarity with an amplitude three times that of the first, lasting 100 ms, with intensities between 70 and 80 dB SPL, up to a total of 3,000 accepted stimuli; linear clicks were also used.
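The non-linear acquisition mode described above exploits a simple arithmetic property: in a train of three clicks of one polarity followed by one inverse-polarity click at triple amplitude, any purely linear response sums to zero across the train, so only non-linear (cochlear) components survive averaging. A minimal sketch of that cancellation (illustrative numbers only, not the device's actual signal processing; the gain and saturation functions are hypothetical stand-ins):

```python
import math

# Non-linear click train: three unit clicks of one polarity plus one
# inverse-polarity click at three times the amplitude.
scales = [1.0, 1.0, 1.0, -3.0]

# A purely linear system responds proportionally to the stimulus, so
# the summed response cancels exactly (hypothetical gain of 0.8).
linear = [0.8 * a for a in scales]
print(round(sum(linear), 9))  # -> 0.0

# A saturating (non-linear) response, such as that of the outer hair
# cells, does not cancel: the residual is what is recorded as the
# emission (tanh is only a stand-in for cochlear compression).
nonlinear = [math.tanh(a) for a in scales]
print(round(sum(nonlinear), 2))  # -> 1.29, a non-zero residual
```

The same cancellation removes linear stimulus artifacts in the ear canal, which is why the non-linear mode is preferred when the stimulus itself might contaminate the recording.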
Whether or not the child had suppression was checked with linear and non-linear clicks, depending on the type of the original stimulus. The frequencies encompassed by the stimulus were between 500 Hz and 4000 Hz. The clicks presented as stimuli were condensed, in such a way that the first part of the stimulus pushes the tympanic membrane medially and, consequently, the base membrane is moved outwardly.
For the contralateral stimuli, we used a concurrent narrow band noise on the contralateral ear. The noise intensity should be of approximately 10 dB above the sound stimuli that caused otoacoustic emissions, however below the stapes reflex level on the tested ear.
For contralateral stimulation we used an AC 33 audiometer, from Interacoustics, a Danish device, sold in Brazil by Siemens.
To try to achieve transient otoacoustic emission suppression, the noise range was fixed between 750 Hz and 3000 Hz. The suppressive noise was presented approximately 9 ms after the onset of the stimulus used to acquire the otoacoustic emissions.
Statistical Analysis
All variables were analyzed descriptively (Table 1). For the quantitative variables, this analysis was carried out by observing the minimum and maximum values and calculating the means, standard deviations and medians. For the qualitative variables, we calculated the absolute and relative frequencies.
In order to test the hypothesis of equality between the groups, we used Student's t test; when the assumption of data normality did not hold, we used the non-parametric Mann-Whitney test for independent samples.
In order to test for group homogeneity in relation to the proportions, we used the chi-squared test. To assess whether any of the measures could predict poor school performance, we used a logistic regression model. We obtained a cut-off point for the measure and calculated the efficiency indices. We used a 5% significance level for the tests.
RESULTS
Our results are shown in Tables 1, 2 and 3. We noticed that the performance groups did not differ with regard to age and gender. They did not show significant differences in the left ear measures with and without suppression (Table 2).
On the right ear, there were statistically significant differences between the different measures with and without suppression; the poor performance group had significantly lower values when compared to the group with good performance ( Table 2).
Analyzing the difference of the measures with and without suppression in the right ear by means of logistic regression 11 , we noticed that this variable was associated with school performance (p = 0.034). Graph 1 shows the likelihood of poor performance as a function of the difference measured with and without suppression in the right ear.
Table 3 shows poor performance likelihoods estimated by the logistic regression model for selected values of the difference measured with and without suppression in the right ear.
Thus, a child with a measured difference of zero has an 83% likelihood of poor performance; for a value of 3, this likelihood falls to 18%.
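The two probabilities just quoted are enough to recover the fitted curve approximately: on the logit scale a logistic model is linear, so intercept and slope follow from the points (0, 0.83) and (3, 0.18), and the 50% crossover of the recovered curve lands close to the 1.6 dB cut-off reported by the authors. This is a back-of-the-envelope check using the rounded percentages, not the authors' actual fit:

```python
import math

def logit(p):
    """Log-odds of a probability p."""
    return math.log(p / (1 - p))

# Reported points: suppression difference 0 dB -> 83% chance of poor
# performance; 3 dB -> 18%.
x1, p1 = 0.0, 0.83
x2, p2 = 3.0, 0.18

slope = (logit(p2) - logit(p1)) / (x2 - x1)   # change in log-odds per dB
intercept = logit(p1)

def poor_performance_prob(x):
    """Probability of poor performance at suppression difference x (dB)."""
    return 1 / (1 + math.exp(-(intercept + slope * x)))

# The 50% crossover of the recovered curve:
x50 = -intercept / slope
print(round(x50, 2))  # -> 1.53, close to the reported 1.6 dB cut-off
```

The small discrepancy from 1.6 is expected, since the published probabilities are rounded to whole percentages.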
Through the logistic regression model, we can find a cut-off point below which there is a greater chance of poor performance.
This value is 1.6 and gives us a sensitivity of 65.0%, specificity of 72.2%, accuracy of 68.4%, positive predictive value of 72.2% and negative predictive value of 65%.
The children with differences below 1.6 have a 4.83-fold higher chance (95% confidence interval: 1.21 to 19.22) of poor performance when compared to those who presented differences above 1.6.
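The reported indices are mutually consistent with a single 2 × 2 table. Back-solving the cell counts from the percentages (sensitivity 65% of 20 poor performers, specificity 72.2%, which implies 18 rather than 19 good performers entered this analysis) gives counts of 13/7 and 5/13, and these reproduce every index, including the 4.83 odds ratio. The counts below are an inference from the rounded figures, not counts published in the paper:

```python
# Implied 2 x 2 table (counts inferred from the reported percentages):
#                        poor performance   good performance
# difference <  1.6 dB          13                  5
# difference >= 1.6 dB           7                 13
a, b = 13, 5    # below cut-off: poor, good
c, d = 7, 13    # at/above cut-off: poor, good

sensitivity = a / (a + c)                 # 13/20
specificity = d / (b + d)                 # 13/18
accuracy    = (a + d) / (a + b + c + d)   # 26/38
ppv         = a / (a + b)                 # 13/18
npv         = d / (c + d)                 # 13/20
odds_ratio  = (a * d) / (b * c)           # cross-product ratio

print(round(sensitivity, 3), round(specificity, 3), round(accuracy, 3))
# -> 0.65 0.722 0.684
print(round(ppv, 3), round(npv, 3), round(odds_ratio, 2))
# -> 0.722 0.65 4.83
```

That all five indices and the odds ratio agree with the published values supports the internal consistency of the paper's classification results.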
Future studies with larger samples are necessary in order to validate this method as being useful for the screening to identify those with auditory processing dysfunction among children with learning disorders.
DISCUSSION
Poor performance at school is a source of great concern to parents and teachers. The causes for this deficiency are numerous, such as social, nutritional, family, teaching system, even problems intrinsic to children such as neurologic, psychiatric, psychological, visual and hearing problems, besides a lack of maturation or dysfunction of the cognitive nervous system.
Children with learning disabilities end up having a lower intellectual and social development than their conditions allow. Locating the cause of this deficiency and overcoming it can change their lives. When we observed a reduction in the otoacoustic emission amplitude values evoked by a contralateral sound stimulus, we considered the possibility that such a phenomenon could be used to assess, in a practical way, not only the acoustic nerve, but also the efferent central pathways of the auditory system, certainly connected with auditory communication.
Elementary schools are true laboratories where cognition is assessed. The present investigation aims at comparing the otoacoustic emission amplitude values evoked by the contralateral sound stimulus of the students ranked in first and last in performance from an elementary school at the State of São Paulo.
The efferent pathways were identified and studied by numerous authors 3,4,5,6,12,13 , and contralateral suppression was initially studied by Collet 8 and later confirmed by many others 10,14,15 . In 1999, Pialarassi 10 studied the suppression of transient and distortion product otoacoustic emissions with a contralateral narrow band noise stimulus in 48 individuals with normal hearing and 9 individuals with retrocochlear disease. In the normal group there was significant suppression of otoacoustic emissions. In the group with the disease, suppression was sometimes mild, sometimes absent, and sometimes there was intensification. The results show that otoacoustic emission suppression with contralateral stimuli is a useful tool in the set of procedures used to diagnose retrocochlear auditory disorders.
Laterality is an important factor for the satisfactory performance of multiple body functions, including hearing and auditory processing. Research 16 has shown that the left brain hemisphere prevails over the right in the auditory processing of speech, while the right side prevails in the processing of tones and musical stimuli. Kimura 1 stated, in a basic research paper published in 1963, that verbal auditory information presented to the right ear reaches the left hemisphere, which is dominant for verbal language, through the contralateral auditory pathways, going through the commissure of the corpus callosum. In the sample analyzed, the failure of auditory inhibition by a simultaneous contralateral stimulus manifested clearly and significantly when hearing was assessed in the right ear.
The meaning of this observation is, to start with, an indication that if this test is used in the study of auditory processing disorders, it must be made with a stimulus being presented to the right ear and a competitive sound in the left contralateral ear. The same thinking must be used when we rehabilitate individuals with auditory processing disorders, especially those that have concurrent auditory impairment, giving preference to amplification and rehabilitation stimuli in the right ear.
Tests such as SSW were applied to identify auditory processing problems in school-age children. In 1984, Berrick et al. 17 , studying the performance of 93 children without learning complaints and 97 children with learning disabilities in the age range between 8 and 11 years with the SSW test, observed that the children without school complaints presented a statistically significantly better performance when compared to the children with learning disabilities 19 . These tests proved efficient and in certain ways objective; however, their application requires complex equipment. Both SSW and PSI are screening tests which are not specific for the type of auditory processing deficiency, though very reliable in their results. Later studies should apply the SSI and SSW tests, besides the suppression failure study, to a similar group in order to validate the importance of this research in the diagnosis of processing dysfunction; as stressed in the introduction, auditory processing is not the only cause of learning disorders.
(Graph 1. Logistic regression model: suppression failure in decibels.)
Musiek and other authors 20,21,22 observed that central auditory processing disorders are, usually, cortical or subcortical dysfunctions that can be secondary to maturation delays or morphological abnormalities.
The possibility of using a simple screening test for children with low school performance in an attempt to identify those with processing problems, is important to indicate the need to refer these students to more complex tests and finally guide their rehabilitation.
Our study showed very stimulating results as to the chances of obtaining a low-cost and efficient test with a reasonable predictive value to identify potential auditory processing disorders. We need longitudinal tests with larger cohorts and broader samples to assess test specificity and sensitivity. The confirmation of learning disorders in children previously considered at risk may turn this test into an accurate and mandatory instrument in the assessment of pre-school-age children.
Knowing that children with auditory processing dysfunction, once properly diagnosed, may be rehabilitated through speech and hearing training, changing not only their immediate school performance but also their long-term lifestyle and quality of life, is a powerful stimulus to carry out new studies in this field.
Future studies with larger series and comparing the results with SSI and SSW are necessary to validate this method as useful in the screening of children with auditory processing disorders among those with learning disorders.
CONCLUSION
The present investigation suggests that failure of contralateral suppression of otoacoustic emissions in the right ear can be predictive of school performance disorders in individuals between six and twelve years of age.
|
v3-fos-license
|
2023-08-19T15:29:02.404Z
|
2023-08-01T00:00:00.000
|
260995662
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/2072-6694/15/16/4132/pdf?version=1692190389",
"pdf_hash": "ba1a8fc4b36c97740072afd05e950e3eee9d5b6f",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2325",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "4ca6f5933ac688726f2f7113be009f2a490b4397",
"year": 2023
}
|
pes2o/s2orc
|
Conditional Survival in Prostate Cancer in the Nordic Countries Elucidates the Timing of Improvements
Simple Summary Prostate cancer (PC) is the most common male cancer, and the numbers of new cases increased hugely when prostate-specific antigen (PSA) testing became commonplace. The consequence was that the diagnostic age shifted toward younger men with less-advanced PC. Such changes are known to improve cancer survival, and in the Nordic countries, the 5-year survival for PC increased from about 60% to 90%; however, since testing stabilized, this improvement has slowed, and the 5-year survival had reached 95% by the year 2020. By analyzing survival in different periods after diagnosis, we observed that the most critical time for death was between years 1 and 5, presumably because of metastatic deaths. Some metastases are difficult to detect at diagnosis, and some arise later in the course of the disease. For continued improvement in survival, advances in early diagnosis and more effective treatment will be required. Abstract Background: The incidence of prostate cancer (PC) increased vastly as a result of prostate-specific antigen (PSA) testing. Survival in PC improved in the PSA-testing era, but changes in clinical presentation have hampered the interpretation of the underlying causes. Design: We analyzed survival trends in PC using data from the NORDCAN database for Denmark (DK), Finland (FI), Norway (NO) and Sweden (SE), examining 1-, 5- and 10-year relative survival and conditional relative survival over the course of 50 years (1971–2020). Results: In the pre-PSA era, survival improved in FI and SE and improved marginally in NO but not in DK. PSA testing began toward the end of the 1980s; 5-year survival increased by approximately 30%, and 10-year survival improved even more. Conditional survival from years 6 to 10 (5 years) was better than conditional survival from years 2 to 5 (4 years), but by 2010, this difference disappeared in countries other than DK.
Survival in the first year after diagnosis approached 100%; by year 5, it was 95%; and by year 10, it was 90% in the best countries, NO and SE. Conclusions: In spite of advances in diagnostics and treatment, further attention is required to improve PC survival.
Introduction
The incidence of prostate cancer (PC) increased vastly upon the introduction of prostate-specific antigen (PSA) testing in the public domain, with concomitant changes in the clinical presentation of PC. In the Nordic countries, opportunistic PSA testing began in the late 1980s/1990, but it began later in Denmark [1,2]. Clinical changes in the PSA era included a lower diagnostic age, a lower T stage and a lower proportion of patients presenting with distant metastases [2,3]. The vast increase in the incidence of PC with stable or decreasing mortality raised concerns about overdiagnosis, which has been estimated to vary from 10 to 80% depending on many factors, such as age at testing and PSA level [2,4,5].
In Sweden, nation-wide data on the principal reasons for a diagnosis of PC are available from 2004 onwards (National quality register for prostate cancer, https://statistik.incanet.se/npcr/ (accessed on 22 June 2023)); 28.6% of patients were non-symptomatic men diagnosed due to elevated PSA levels, and this proportion increased to 52.9% in 2020. Lower urinary tract symptoms and other symptoms accounted for more than 30% each in 2004, and by 2020, both accounted for no more than 20% of the diagnosed cases of PC. The PSA level may also increase in cases of benign prostatic hyperplasia, which is one of the most common urological diseases affecting elderly men and often requires surgical treatment [6]. Unfortunately, PSA determination cannot distinguish between cancer and hyperplasia, which is one of the reasons for the overdiagnosis of PC. Nevertheless, the diagnostic PSA level is used in risk stratification and treatment planning for PC patients (https://statistik.incanet.se/npcr/ (accessed on 22 June 2023)) [7,8].
In the Nordic countries, between 7 and 14% of PC patients have been diagnosed with metastases (de novo/synchronous, M1 in TNM staging), but for a large proportion of patients, the metastatic status remains unverified at diagnosis (Mx) [9,10]. Cancer registries consider metastases only at the time of diagnosis, and information on metastases (recurrent and metachronous) that appear later is limited; this is also true of many clinical studies, which do not specify the timing of recurrent metastases. A US estimation of PC metastases assigned 45% to de novo and 55% to recurrent types, and the same proportions were found in a patient cohort [11,12]. A Swedish study covering the years 1987-2006 found that in 50% of PC deaths, the cause was assigned to PC [13]. The longer PC patients survive after diagnosis, the larger the proportion of deaths assigned to non-cancer causes [14]. Data from the Swedish hospital discharge register showed that 89% of all PC metastases (including multiple metastases in the same patient) were located in the bone, 10% in the liver and 7% in the lung [15]. According to that study, which investigated the bone metastases of all common cancers, about 75% of bone metastases among male cancers diagnosed at an age of more than 70 years originated from PC. Bone scanning has been the common means of diagnosing metastatic PC.
A Danish study analyzed the clinical characteristics of patients who died of PC in two periods: 1995-1999 and 2009-2013 [16]. The proportion of metastatic tumors decreased from 49.4% to 38.3%, while the proportion of locally advanced tumors (clinical T3-4 and/or N+ and M0) increased from 8.6% to 27.3%; the median survival increased from 1.11 to 2.15 years in the metastatic group and from 1.41 to 3.75 years in the locally advanced group. As in this study, increasing survival of metastatic patients has been reported in other Nordic studies. The median survival from 2010 to 2015 was 2.7 years in Sweden, and from 2015 to 2018 it was 3.3 years in Norway [9,10]. The traditional treatment is androgen deprivation therapy (ADT), and for castration-resistant tumors, several new drugs have become available [8][9][10]. Some 10% of PC patients have been diagnosed with locally advanced tumors, characterized by T3 or T4 (PSA < 100 ng/mL); among the patients diagnosed in the 2008-2011 period, 83% survived for 5 years [17]. In Sweden, some 15% of patients with locally advanced tumors received radical treatment in the year 2000, but this increased over 15 years to over 40%. Radical radiotherapy (with ADT) was more commonly applied than radical prostatectomy, for which robotic surgery was introduced after the year 2000 [8,17].
Survival is commonly reported for up to 1 or 5 years and sometimes up to 10 years. The routine 1- and 5-year survival data were sufficient at times when most cancer patients died within 5 years after diagnosis [18]. The situation has completely changed in the past 50 years, and in the Nordic countries, the relative 5-year survival exceeds 60% for most solid cancers [18]. With increasing survival times, we must be aware of the life-threatening periods for patients beyond years 1 and 5. Conditional survival is a useful survival metric for this purpose, as it estimates survival probabilities in those who have already survived X years [19]. In fatal cancers, deaths are often due to metastases, but in cancers such as PC, for which many metastases appear after diagnosis, conditional survival may pinpoint critical periods. Conditional survival has become increasingly important in clinical survival estimation through its relationship to event-free survival [20,21]. In the present study, we assessed relative PC survival rates in Denmark (DK), Finland (FI), Norway (NO) and Sweden (SE) over a period of 50 years, until 2020. Cancer registration was initiated early in these countries and is generally characterized by high coverage and minimal loss to follow-up [22]. We obtained PC survival data from the NORDCAN database for 1-, 5- and 10-year relative survival and developed conditional survival data for the years 2 to 5 (5/1), 5 to 10 (10/5) and 2 to 10 (10/1), allowing for the assessment of changes in survival at various intervals in the four countries and correlations with known developments in PC diagnostics and treatment.
Methods
The source of the data on the incidence and survival of PC was the NORDCAN database 2.0, and we examined data from the years 1971 to 2020; the database was accessed in the winter of 2023 [22,23]. The database is located at the International Agency for Research on Cancer (IARC) and was accessed at the following website: https://nordcan.iarc.fr/en (accessed on 22 June 2023) [24]. Relative survival data for 1-, 5- and 10-year survival were obtained. The NORDCAN 5- and 10-year survival data are based on the cohort survival method for all but the last period, for which the hybrid method is applied [25,26]. Age standardization for relative survival applies the Pohar Perme estimator, using national life tables to derive the expected rates [27]. Age groups 0 to 89 were considered.
For statistical modeling and data visualizations, R statistical software (https://www.r-project.org (accessed on 22 June 2023)) was used in the RStudio environment (https://posit.co/ (accessed on 22 June 2023)) [28]. Relative survival trends (NORDCAN 5-year periodic %) were generated using Gaussian generalized additive models (GAMs) with thin plate regression splines in a Bayesian framework [28]. The methods for the estimation of the conditional relative survival are described elsewhere [28]. Changes in survival trends were estimated through annual % changes and through "breakpoints", which marked times at which the annual changes in survival could be defined with at least 95% plausibility. These are described in the legends of the figures, and the detailed estimation methods are available in Reference [28].
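The "annual % change" and "breakpoint" estimates come from the Bayesian GAM fits described in Reference [28]. A crude finite-difference analogue conveys the underlying idea: difference the survival trend between periods and flag where the annual change is largest. The series below is purely illustrative, not NORDCAN data:

```python
# Hypothetical 5-year-periodic 5-year relative survival (%); these are
# illustrative numbers only, not NORDCAN values.
years =    [1973, 1978, 1983, 1988, 1993, 1998, 2003, 2008, 2013, 2018]
survival = [57.0, 59.0, 61.0, 63.0, 71.0, 80.0, 87.0, 91.0, 93.0, 94.0]

# Annual change in percentage points between consecutive periods.
annual_change = [
    (s2 - s1) / (y2 - y1)
    for (y1, s1), (y2, s2) in zip(zip(years, survival),
                                  zip(years[1:], survival[1:]))
]

# The interval with the steepest improvement is a crude "breakpoint"
# candidate; in this made-up series it follows the start of PSA testing.
steepest = max(range(len(annual_change)), key=annual_change.__getitem__)
print(years[steepest], years[steepest + 1], annual_change[steepest])
# -> 1993 1998 1.8
```

The published analysis is more careful than this sketch: the GAM smooths the periodic series first, and a change only counts as a breakpoint when it is credibly non-zero at the 95% level.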
The approximate initiation of opportunistic PSA testing in FI, NO and SE was around 1990, despite the national authorities' recommendations against screening [29]. Such a recommendation probably caused the delay in the initiation of PSA testing in DK until about 1995 [29].
Results
Numbers of PC patients are shown for 1971-75 and 2016-20 in the Nordic countries (Table 1). The number of cases increased the most for FI, increasing by 7.6-fold, and the least for SE, demonstrating a 3.1-fold increase between the two periods. The age-standardized (world) incidence of PC for each Nordic country is shown in Figure 1. The plots show the raw incidence data (A) and the smoothened data with bandwidths of 0.1 (B) and 0.2 (C).

Relative 1-, 5- and 10-year survival for PC is shown in Figure 2 for each Nordic country; the exact values are shown in Supplementary Table S1. For FI, NO and SE, the 1-year survival started at over 80% (in DK, it was below 80%) and had approached 100% by the year 2010. In NO and SE, the 5- and 10-year survival curves were quite similar, with upward shifts occurring around the introduction of PSA screening in 1990, after which the average annual improvements reached 2% for 5-year survival and 3% for 10-year survival. These improvements stagnated by 2010. In FI, and particularly in DK, the shapes of the curves resembled those for NO and SE, but as the starting levels were lower, the annual increases were steeper; in DK, they were 4% for 5-year survival and over 5% for 10-year survival. In DK, the 5- and 10-year survival curves remained stable until 1990. In FI, the curves had already plateaued after the year 2000, and in DK, the final plateaus remained below those of the other countries.
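The age-standardized (world) rates plotted in Figure 1 come from direct standardization: age-specific rates are weighted by a fixed standard population distribution. A minimal sketch with hypothetical age bands, counts and weights (illustrative only, not the full world standard population or NORDCAN data):

```python
def age_standardized_rate(cases, person_years, std_weights):
    """Directly standardized rate per 100,000: the sum of age-specific
    rates weighted by a fixed standard population distribution."""
    assert len(cases) == len(person_years) == len(std_weights)
    total_w = sum(std_weights)
    asr = 0.0
    for c, py, w in zip(cases, person_years, std_weights):
        asr += (c / py) * (w / total_w)
    return asr * 100_000

# Hypothetical data for three broad age bands (illustrative only)
cases = [10, 200, 800]
person_years = [500_000, 400_000, 200_000]
std_weights = [0.6, 0.3, 0.1]  # standard population shares

print(round(age_standardized_rate(cases, person_years, std_weights), 1))  # → 56.2
```

Because the weights are fixed across countries and calendar periods, the resulting rates are comparable even when the underlying age structures differ.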
In Figure 3, we plot the 1-year relative survival together with the conditional 5/1- and 10/5-year relative survival to allow for a stepwise assessment of survival in year 1, between years 2 and 5 and, further, between years 6 and 10; the exact values are shown in Supplementary Table S2. The curves for conditional survival did not improve until 1990 except in FI. At all times, the conditional 10/5-year survival was on top of the 5/1-year survival, with the largest margin in DK, a smaller margin in FI and a diminishing margin in NO and SE after the year 2000. The annual changes were the largest in DK, but these peaks occurred about 5 years later than the peaks in the other countries.

In Figure 4, we plot the 5-year survival together with the conditional 10/5-year survival. The starting levels in DK and FI were below those of NO and SE but with steeper increases, and the final plateau was approximately equal for countries other than DK, which shows a lower level. The curve for 10/5-year survival was on top of the 5-year curve, with the largest margins in DK and FI.

Discussion

In the pre-PSA-screening era, survival differed extensively between the 1-, 5- and 10-year metrics. During the implementation of PSA screening, the differences between these three survival metrics narrowed, and in the final period, the 1-year survival approached 100%, and the difference between the 5- and 10-year survival rates had stabilized to about 5% units. For the pre-PSA-screening era, the present results show that conditional 10/5-year survival from the years 6 to 10 (5 years) was about 15% units better than survival in the first 5 years (Figure 4). During the implementation of PSA screening, the difference between these survival metrics narrowed, and in the last 15 years, they merged in SE and narrowed to less than 2% units in the other countries. In a global PC survival study covering the years 2010-2014, the Nordic countries FI, NO and SE were placed in the >90% category for 5-year survival [31]. The current results show further improvements for NO and SE, which are approaching 95%, while FI is at 94% and DK has surpassed 90%. However, even the best figures were below the present US 5-year survival of 97.1%, a rate probably achieved via intense PSA testing [32].
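The conditional survival metrics used throughout follow directly from the cumulative estimates: for example, the conditional 10/5-year survival is simply the 10-year relative survival divided by the 5-year relative survival. A minimal sketch (the survival values below are hypothetical, not NORDCAN figures):

```python
def conditional_survival(s_later, s_earlier):
    """Conditional relative survival: the probability of reaching the later
    time point given survival to the earlier one (e.g. 10/5-year)."""
    return s_later / s_earlier

# Illustrative cumulative relative survival values (hypothetical)
s1, s5, s10 = 0.99, 0.95, 0.90

print(round(conditional_survival(s5, s1), 3))   # 5/1-year  → 0.96
print(round(conditional_survival(s10, s5), 3))  # 10/5-year → 0.947
```

This is why the 10/5-year curve can sit above the 5/1-year curve even when cumulative 10-year survival is lower than 5-year survival: each conditional metric only counts deaths within its own interval.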
The present novel observations for the pre-PSA-screening era until 1990 were the slow improvements in the 5- and 10-year survival rates in FI and SE, the marginal improvement in NO and the lack of improvement in DK. An additional novelty was the demonstration of the catching-up of the 5-year survival with the 10/5-year survival rates during the PSA-screening-implementation phase and the final culmination of these survival metrics. The changes could probably be largely rationalized by a complete change in the previous pool of PC patients, with a huge number of PSA-diagnosed early-onset PC patients (approximately a fourfold increase in patient numbers). PSA-tested patients were characterized by a low T stage and a low proportion of patients presenting with distant metastases [2,3]. According to the present results, in the pre-PSA year of 1980, about 15% of PC patients died in FI and NO during year 1 after diagnosis (somewhat less in SE and more in DK), 50% of patients died by year 5 and 65% died by year 10. In the post-PSA era of 2016-20, 1% of patients died by year 1, 5% by year 5 and 10% by year 10 (with a higher death rate in DK). In the last period, conditional survival data show that 1% of patients died by year 1, an additional 4% died from years 2 to 5, and an additional 5% died from years 6 to 10. A Korean relative survival study on PC patients diagnosed until 2013 showed improved survival and decreased mortality after 4 years post diagnosis [33]. The NORDCAN data extend to the year 2020, which implies that the present data are as up to date as is achievable by any national cancer registry. These data show that in the last 15 years, survival improvement has slowed down, probably indicating that PSA screening has reached its peak, and further survival improvements depend on novel gains in diagnostics, treatment and patient care.
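The stepwise mortality figures quoted above decompose cumulative relative survival into deaths per interval: the share dying between two time points is the difference of the cumulative survival values at those points. A sketch using the post-PSA-era figures cited in the text (1%, 5% and 10% cumulative mortality by years 1, 5 and 10):

```python
def interval_deaths(survival):
    """Given cumulative survival S(t) at increasing time points
    (implicitly starting from S(0) = 1), return the share of the
    original cohort dying within each successive interval."""
    out = []
    prev = 1.0
    for s in survival:
        out.append(prev - s)
        prev = s
    return out

# Post-PSA-era figures from the text: 1% die by year 1, a further 4%
# in years 2-5 and a further 5% in years 6-10.
print([round(d, 2) for d in interval_deaths([0.99, 0.95, 0.90])])  # → [0.01, 0.04, 0.05]
```

The three interval shares necessarily sum to the total 10-year mortality (here 10%), which is why the conditional and cumulative presentations in the text are mutually consistent.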
Improvements in survival have been reported for metastatic and locally advanced PC in the Nordic countries and in the Netherlands [9,10,17,34]. Survival in metachronous metastatic PC has been reported to be better than in synchronous metastatic PC [35]. ADT has been the basis of treatment for metastatic PC for decades, with few improvements made until the last 20 years. A survival benefit of several months was first shown with docetaxel and later extended with enzalutamide, abiraterone and radiotherapy with 223Ra [9,10]. In recent years (not affecting our presented results), more treatments with survival benefits have been introduced and used earlier in the hormone-sensitive setting, including upfront triplet treatments (the use of ADT, docetaxel and either abiraterone or darolutamide) [8,36]. In locally advanced PC in SE, the use of radical radiotherapy and prostatectomy increased from 15% to 43% of patients over 15 years [17].
Simultaneously, fewer active treatments have been used in low-risk cancers as active surveillance has gained popularity [8,36]. While this has benefitted most patients in the form of fewer treatment-related adverse events, some patients might have missed the possibility of achieving a cure [37]. Knowledge of biopsy-related complications might also have led to more PSA-only or PSA- and MRI-based follow-ups [38]. These changes in practice, as well as the aging PC population, might explain why minimal improvements have been seen in the recent survival data points. Another plausible explanation is that the overall survival benefits seen in the selected trial populations have not thus far affected the epidemiological landscape in spite of positive reports [9,10].
In the Nordic countries, national guidelines for the diagnosis and treatment of PC are regularly updated, and these are greatly inspired by the guidelines of the European Society for Medical Oncology (ESMO) and the European Association for Urology (EAU) [8,36].The guidelines recommend risk/stage adaptive therapies and diagnostics.Active surveillance is preferred in less-aggressive PC, while more aggressive local cancers should typically be treated with prostatectomy or radiotherapy [8,36].In more advanced states, ADT and chemotherapy should be applied with the addition of novel agents in castration-resistant cases [8,36].
The limitations of the present study are the lack of any diagnostic (PSA and clinical) and pathological (TNM) information about cancers at diagnosis and the lack of any treatment data. It is, however, not realistic to assume that comparable pathological data would be available over 50 years, as even the closely collaborating Nordic cancer registries have had difficulties comparing data on tumor characteristics (stage) over the last decades [22,39]. A further limitation is that NORDCAN does not allow for survival analysis by age, which is an important determinant of survival.
Conclusions
We showed shifts in PC-related relative survival coincident with the introduction of PSA testing.Relative 5-year survival was around 50% in the pre-PSA-screening era, and it increased to almost 95% in the post-PSA-screening era.Using conditional survival, the critical period was shown to be from year 1 to year 5 after diagnosis.As the major improvement, survival in the year 1 to 5 period almost reached the level of survival from year 6 to year 10.For PC, mortality in year 1 is low, but later mortality requires attention and is likely related to an unverified metastatic status at diagnosis or recurrent metastases.
Figure 1.
Figure 1. Age-standardized incidence of prostate cancer in the Nordic countries, showing raw incidence data (A) and smoothened data with bandwidths of 0.1 (B) and 0.2 (C). The approximate starting times for opportunistic PSA screening are shown by arrows on top of the x-axes (the first arrow is for FI, NO and SE, and the second one is for DK). For FI and NO, sharp incidence peaks emerged in 2003; in SE, the first discrete peak occurred in 2007, and in DK, a sharp peak emerged in 2008. In the raw data, the discrete peaks may indicate random variations or regional introductions of PSA screening [30].
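The bandwidth behind the smoothened panels controls how strongly neighbouring years influence each smoothed value. The text does not specify which smoother was used, so this Nadaraya-Watson kernel sketch, with the bandwidth expressed as a fraction of the year range, is an assumption for illustration only (the incidence values are hypothetical):

```python
import math

def kernel_smooth(years, rates, bandwidth_frac):
    """Nadaraya-Watson smoother with a Gaussian kernel whose width is a
    fraction of the total year span (illustrative; not NORDCAN's method)."""
    h = bandwidth_frac * (max(years) - min(years))
    smoothed = []
    for x in years:
        weights = [math.exp(-0.5 * ((x - xi) / h) ** 2) for xi in years]
        total = sum(weights)
        smoothed.append(sum(w * r for w, r in zip(weights, rates)) / total)
    return smoothed

years = list(range(1971, 1981))
rates = [40, 42, 41, 45, 44, 47, 49, 48, 52, 51]  # hypothetical incidence per 100,000
print([round(v, 1) for v in kernel_smooth(years, rates, 0.2)])
```

A larger bandwidth (0.2 vs. 0.1) averages over a wider window, which is why the discrete peaks visible in the raw data of panel (A) are progressively flattened in panels (B) and (C).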
Figure 2.
Figure 2. Relative 1-, 5- and 10-year survival in DK (A), FI (B), NO (C) and SE (D). The vertical lines mark significant changes in the survival trends ("breakpoints"), and the bottom curves show the estimated annual changes in survival. The curves are solid if there is >95% plausibility of the growth or decline. Shadow areas indicate 95% credible intervals. All curves are color-coded (see the insert). The approximate starting times for opportunistic PSA screening are shown by arrows on top of the x-axes.
Figure 3.
Figure 3. Relative 1-, 5/1- and 10/5-year survival in DK (A), FI (B), NO (C) and SE (D). The vertical lines mark significant changes in the survival trends ("breakpoints"), and the bottom curves show the estimated annual changes in survival. The curves are solid if there is >95% plausibility of the growth or decline. Shadow areas indicate 95% credible intervals. All curves are color-coded (see the insert). The approximate starting times for opportunistic PSA screening are shown by arrows on top of the x-axes.
Figure 4.
Figure 4. Relative 5- and 10/5-year survival in DK (A), FI (B), NO (C) and SE (D). The vertical lines mark significant changes in the survival trends ("breakpoints"), and the bottom curves show the estimated annual changes in survival. The curves are solid if there is >95% plausibility of the growth or decline. Shadow areas indicate 95% credible intervals. All curves are color-coded (see the insert). The approximate starting times for opportunistic PSA screening are shown by arrows on top of the x-axes.
Table 1.
Numbers of prostate cancer patients in the Nordic countries in the pre- and post-PSA periods.
NADPH Oxidases in Aortic Aneurysms
Abdominal aortic aneurysms (AAAs) are a progressive dilation of the infrarenal aorta and are characterized by inflammatory cell infiltration, smooth muscle cell migration and proliferation, and degradation of the extracellular matrix. Oxidative stress and the production of reactive oxygen species (ROS) have been shown to play roles in inflammatory cell infiltration, and smooth muscle cell migration and apoptosis in AAAs. In this review, we discuss the principles of nicotinamide adenine dinucleotide phosphate oxidase (NADPH oxidase/NOX) signaling and activation. We also discuss the effects of some of the major mediators of NOX signaling in AAAs. Separately, we also discuss the influence of genetic or pharmacologic inhibitors of NADPH oxidases on experimental pre-clinical AAAs. Experimental evidence suggests that NADPH oxidases may be a promising future therapeutic target for developing pharmacologic treatment strategies for halting AAA progression or rupture prevention in the management of clinical AAAs.
Introduction
Aortic aneurysms are defined as a localized dilation of the aorta and can be classified into three sub-types by their location within the aorta: ascending aortic aneurysms (AAs), descending thoracic aortic aneurysms (dTAAs), and abdominal aortic aneurysms (AAAs). While all types are generally clinically silent until impending rupture, TAAs tend to dilate until impending dissection, with approximately 30% of cases having a genetic component from either (1) a clinical syndrome such as Marfan, Loeys-Dietz, or vascular Ehlers-Danlos syndrome, or (2) a genetic predisposition such as bicuspid aortic valve or familial thoracic aortic aneurysm and dissection [1]. In contrast, AAAs tend to dilate until impending rupture, and most cases have no genetic predisposition to formation or progression [2][3][4][5][6]. Often asymptomatic, AAAs can quickly expand and undergo rupture, resulting in 80-90% mortality after rupture and 13,000-15,000 deaths per year in the United States [7,8]. Common to both types of aneurysms are the destruction of the extracellular matrix, an influx of inflammatory cells, the activation of pro-inflammatory cytokines, and apoptosis of smooth muscle cells, which lead to eventual dissection or rupture.
Aneurysms tend to display heterogeneity in terms of their clinical severity, and continued progressive dilation of the aorta without intervention will eventually lead to lethal aortic rupture with high morbidity and mortality rates [7,8]. In terms of AAAs, to date, the most significant predictive factor of aortic rupture is maximal aortic dilation [9,10]. Other factors suggested to affect AAA progression include smoking status, biological sex, rate of growth, and hemodynamic conditions [11][12][13][14][15][16][17][18][19]. Current clinical recommendations suggest surgical intervention using either open or endovascular therapy for AAAs at a diameter of 55 mm; however, some AAAs rupture at sizes smaller than current recommendations [20]. The lack of known causes of AAA progression or rupture suggests that the mechanisms of AAA progression and rupture remain poorly understood. These currently unknown mechanisms complicate the identification of medical treatment therapies specific for AAAs that could halt progression or prevent rupture [2][3][4]. Advances in imaging technology and screening programs have led to increased identification of early-stage AAAs [21][22][23][24][25][26][27][28][29]; however, without a medical treatment therapy, these AAAs can only be monitored until clinical recommendations suggest surgical intervention. Recent evidence suggests that, despite these increased screening efforts and attempts at prevention management, AAA mortality remains high in the United Kingdom for both men and women, with age-standardized death rates (ASDR) of 7.5 per 100,000 and 3.7 per 100,000, respectively, and the ASDRs for AAAs in many European countries have increased slightly since 2012 despite national screening efforts [30]. These data suggest that additional methods are needed to monitor AAAs and that AAAs remain a relevant health concern despite declines from smoking cessation.
In this review, we discuss the role of nicotinamide adenine dinucleotide phosphate (NADPH) oxidases and their known downstream mediators in aortic aneurysm formation and rupture.
Nicotinamide Adenine Dinucleotide Phosphate Oxidases (NADPH Oxidases/NOX)
Nicotinamide adenine dinucleotide phosphate oxidase (NADPH oxidase/NOX) comprises a multi-subunit enzyme complex that utilizes nicotinamide adenine dinucleotide phosphate to produce both superoxide anions and other reactive oxygen species. Under normal circumstances, reactive oxygen species (ROS) mediate several important cellular functions, including the maintenance of blood pressure and adaptive immunity. However, high levels of ROS over prolonged periods of time can lead to cellular damage, oxidative stress, and DNA damage, which elicit either cell survival or apoptosis mechanisms depending on the severity and duration of exposure. Prolonged exposure to ROS is hypothesized to be a hallmark of AAA formation [31,32]. ROS include free radicals such as superoxide (O2•−) and the hydroxyl radical (•OH), and non-radicals such as hydrogen peroxide (H2O2). Increased endothelial permeability has been linked to ROS activation and is believed to play a critical role in the initiation of vascular diseases such as atherosclerosis and aortic aneurysms. The effects of ROS on the endothelium are discussed in greater detail in Section 3 of this review.
The main function of NADPH oxidases/NOX, a family of enzymes implicated in cardiovascular diseases, is to produce ROS [33]. The first characterized NADPH oxidase was defined in neutrophils and macrophages and is known to be a multi-component complex that catalyzes the formation of O2•− during phagocytosis [34]. In the resting cell, the NADPH oxidase contains a membrane-bound catalytic core of the enzyme, flavocytochrome b558, and cytosolic regulatory subunits p47-phox, p40-phox, p67-phox, and the small G-protein Rac1 or Rac2 (Figure 1). Flavocytochrome b558 is the catalytic center of NOX and comprises two tightly complexed membrane-integrated subunits: gp91-phox and p22-phox [35]. Meanwhile, the cytosolic components of the complex comprise p47-phox, p67-phox, p40-phox, and the small GTPase Rac1/Rac2, with the p40-phox and p67-phox proteins often being complexed prior to activation [36][37][38]. In resting phagocytic cells such as macrophages, p47-phox, p67-phox, and p40-phox are demonstrated to exist as complexes in the cytosol that are stabilized by SH3 domain interactions [39]. Conversely, Rac is tethered to RhoGDI, a RhoGDP-dissociation inhibitor, which prevents its binding to the rest of the NOX complex [40]. In the resting state, binding to the flavocytochrome is prevented because p47-phox exists in an auto-inhibited conformation, a positional conformation that prevents its tandem SH3 domains from being exposed; they are masked through intramolecular interaction with the C-terminal segment, preventing activation of the complex.
During NOX activation in the phagocytic cell, phosphorylation unmasks a binding region on p47-phox, allowing it to bind p67-phox to form a trimeric cytosolic complex [39,41]. The phosphorylation sites consist of multiple serine residues in the C-terminus of p47-phox, changing the physical conformation of p47-phox to enable the N-terminal SH3 domain to allow for eventual interaction with the proline-rich region of p22-phox at the membrane [42][43][44][45]. Following the formation of the trimeric cytosolic complex, p47-phox mediates translocation of the cytosolic complex to the membrane, where it then binds to p22-phox; the active NOX complex is assembled; and activation of gp91-phox can then occur [46]. Since gp91-phox is the catalytic core, gp91-phox levels are an established measurement for the extent of NOX complex formation. The gp91-phox NOX protein family contains NADPH- (or NADH-) binding domains, which use NADPH as electron donors to produce superoxide anions (O2•−, the precursor for other reactive oxygen species) [36,47]. Thus, glucose metabolism and the electron transport chain provide the NADPH necessary for full NOX function [48,49].

The NOX family consists of several catalytic isoforms and includes seven members: NOX1-5, Duox1, and Duox2 [35,40]. The structures of NOXes 1-4 are related and contain an N-terminal transmembrane region with six α-helical domains with four conserved histidines.
These histidine amino acids are conserved, and two each are located in the third and fifth α-helical domains, spanning two asymmetrical hemes. The cytoplasmic C-terminal dehydrogenase domains are also similar and contain conserved binding sites for FAD and NADPH. NOX5 is distinct and contains a calmodulin-like EF domain with four Ca2+-binding sites in the long N-terminus. The EF domain makes NOX5 sensitive to elevated cytosolic Ca2+ levels, allowing it to activate quickly in response to them [51,52]. Finally, the DUOX proteins are the most divergent members of the NOX family, as they are characterized by an N-terminal peroxidase-like domain connected to the EF domain by an additional transmembrane domain [53,54].
Evidence suggests that the catalytic subunits of the NOX family members function distinctly despite all being involved in ROS production. NOX1 and NOX2 function by generating O2•− through the transfer of two electrons from NADPH in the cytosol to FAD and then to the two heme groups via the electron transport chain [35]. NOX4 generates H2O2 through two conserved cysteines, Cys226 and Cys270, and a highly conserved His222 residue in the third extracytosolic loop [55]. It is postulated that the histidine serves as a source of protons for the spontaneous dismutation of O2•−, forming H2O2 [56]. ROS production from this family can be extracellular or intracellular depending on the biological membrane where the NOX family member is located. Activated NOX family members have been found on the plasma membrane, endosome, phagosome, caveolae, endoplasmic reticulum, mitochondria, and nucleus [57]. NOX family members have been shown to be expressed across various cell types in the vascular system, and over-activation has been suggested to be critical for diseases such as atherosclerosis and aortic aneurysms [72], and cardiomyocytes [73,74]. NOX5 has been shown to be present in SMCs [75] and endothelial cells [76]. Interestingly, NOX3 and DUOX2 expression have not been reported in vascular cells, while DUOX1 has been reported in SMCs [77]. These data suggest that NOX family member expression varies across different vascular cell types and could play distinct roles in vascular disease mechanisms.
Mediators of NADPH Oxidases/NOX in Experimental AAAs
The NOX Family is known to exert effects on down-stream mediators in multiple cell types in vascular diseases. We specifically discuss the effects of several major mediators of NOX signaling within the context of experimental AAAs [78].
Reactive Oxygen Species (ROS)
Inflammatory vascular diseases, such as atherosclerosis and AAAs, are believed to be strongly linked to the over-production of ROS. An important initiating event in the inflammatory vascular disease process is the disruption of flow in the vasculature, leading to increased ROS production and activation of endothelial cells. The overproduction of ROS may subsequently induce inflammation, matrix metalloproteinase (MMP) activity, smooth muscle cell apoptosis, or changes in collagen properties [79]. Studies have found that NADPH oxidases and iNOS are producers of superoxide (O2•−) anions in AAAs [79,80] (Table 1). These studies demonstrated that superoxide anions are increased in human AAAs and that this increase could be linked to increased NADPH oxidase and iNOS production in AAAs. Studies in experimental AAAs also demonstrate that NADPH oxidases and iNOS are critical for free radical production in aneurysms [81]. A loss of these enzymes was demonstrated to prevent the development of AAAs through reduced expression of MMP-2 and MMP-9 in the aortic tissues. Finally, as part of the same study, mice treated with the oxidase inhibitor apocynin were shown to be protected from AAA formation [81].
Separate studies in humans have also verified elevated levels of superoxide and lipid peroxidation products in human AAAs and linked these elevated levels to NADPH oxidase activity in AAAs [31]. The inducible form of nitric oxide synthase (iNOS), a source of reactive oxygen species (ROS) and reactive nitrogen species, is upregulated in human AAAs [82], while patients also demonstrated decreased catalase activity, an enzyme known to degrade hydrogen peroxides in vivo [83]. Catalase was shown to be decreased in the aortic wall in human AAAs, and treatment of experimental models of AAAs with catalase was able to attenuate AAA formation [84,85]. These studies demonstrate that ROS are elevated in AAA progression and that the inhibition of ROSs with oxidase inhibitors can attenuate disease formation in pre-clinical experimental models of AAAs.
eNOS
Endothelial nitric oxide synthase (eNOS) protects vascular cells from oxidative damage via the production of nitric oxide (NO•), which rapidly inactivates superoxide (O2•−) and other reactive oxygen species. Evidence has demonstrated that, when the eNOS cofactor tetrahydrobiopterin (H4B) is deficient, eNOS becomes dysfunctional and begins to produce O2•− rather than NO• [86]. This uncoupling of eNOS is thought to contribute to endothelial dysfunction in vascular disease progression. Interestingly, the aneurysm infusion induction agent angiotensin II has been shown to partially function via the uncoupling of eNOS [86]. Gao et al. demonstrated that Ang II uncouples eNOS via transient activation of NADPH oxidase (NOX) in a hydrogen peroxide-dependent manner, with endothelium-specific deficiency in the H4B salvage enzyme dihydrofolate reductase (DHFR) [86]. Folic acid treatment was shown to ameliorate the effects of the uncoupling of eNOS in experimental AAAs [87]. Recent studies have also attempted to link changes in eNOS expression to aged AAAs in humans and mice and found that eNOS levels decline in aged AAAs and could be linked to increased aortic diameters in aging [88].
eNOS has been shown to be activated by TGF-β signaling in vascular diseases such as atherosclerosis and aortic aneurysms [89]. In Marfan syndrome-associated ascending aortic aneurysms, a key molecule, transforming growth factor-beta (TGF-β), normally bound to the extracellular matrix, is freed, activated, and allowed to activate eNOS unchecked. In an experimental setting, TGF-β blockade prevents the aortic root structural damage and dilatation seen during ascending aortic aneurysm formation [90]. Angiotensin receptor 1 blockers (also known as sartans) exert an anti-TGF-β effect; trials are now ongoing to evaluate the effect of losartan compared with atenolol in Marfan syndrome [90][91][92]. A clinical trial is currently underway to examine the effects of sartans (angiotensin II receptor blockers) in Marfan syndrome-associated aortic dissection.
HMGB1
High-mobility group box 1 (HMGB1) is a widely expressed protein that acts as an extracellular signal upon active secretion by immune cells or passive release by dead, dying, and injured cells. HMGB1 plays pivotal roles through both intracellular and extracellular regulation of the cellular response to stress. Although the mechanisms contributing to HMGB1 biology in AAAs are still under investigation, it appears that oxidative stress is a central regulator of HMGB1's ability to translocate, be released, and activate inflammation and cell death in AAAs. HMGB1 has been shown in several studies to be elevated in human AAAs [100], and its genetic inhibition has resulted in attenuated experimental AAAs [93,94]. In the first of these studies, by Kohno et al., HMGB1 was elevated in human AAAs and positively correlated with matrix metalloproteinase 2 and 9 (MMP2 and MMP9, respectively) expression. Following inhibition using a neutralizing antibody against HMGB1 in experimental AAAs, the expression of MMP2, MMP9, CD68, and TNF-α declined during AAA formation [93]. In separate studies, HMGB1 expression was found to decrease following mesenchymal stem cell treatment of elastase-induced experimental AAAs. Sharma et al. then eliminated HMGB1 with a neutralizing antibody and found decreased experimental AAAs and decreased IL-17 production [94]. Finally, the study linked HMGB1 activation to NOX2 by eliminating a single allele of NOX2, finding decreased experimental AAAs; decreased MMP2 and MMP9 activity; and decreased cytokine production of IL-17, IL-23, and IFN-γ [94]. These studies suggest a link between NOX2 and HMGB1 in experimental AAAs and demonstrated that NOX2 mediates its effects in part via macrophages in the context of experimental AAAs.
HIF-1α
Hypoxia-inducible factor-1 (HIF-1) is a transcription factor found in mammalian cells under reduced oxygen tension that plays an essential role in cellular and systemic homeostatic responses to hypoxia and has growing importance in vascular diseases. HIF-1 is a heterodimer composed of a 120-kD HIF-1α subunit complexed with a 91- to 94-kD HIF-1β subunit. HIF-1α accumulates in the cytoplasm under hypoxic conditions and translocates to the nucleus to heterodimerize with HIF-1β, forming an active transcription factor. The HIF-1 complex is believed to be a master regulator of oxidative stress gene regulation and has been implicated in the pathogenesis of atherosclerosis, AAA formation, and pulmonary hypertension [101]. HIF-1 has also been shown to be an essential regulator of angiogenesis and macrophage function [95]. HIF-1α was shown to be elevated in human and experimental AAAs, and HIF-1α over-expression could be found at the rupture edge of human AAA tissues [96]. On the other hand, iron chelation has been shown to stabilize HIF-1α by inhibiting the HIF-1α degradation enzyme prolyl hydroxylase (PHD). Treatment with the HIF-1α inhibitors 2-methoxyestradiol and digoxin decreased experimental AAAs, while treatment with the PHD inhibitors cobalt chloride and JNJ-42041935 did not attenuate experimental AAAs or MMP expression [95]. Finally, studies further investigating the role of iron chelation and HIF-1 expression in experimental AAAs found that deferoxamine (DFO) stabilized HIF-1α expression and promoted increased activation of MMP2 and MMP9 [97].
NF-κB
NF-κB transcription factors regulate the expression of hundreds of genes that are involved in regulating cell growth, differentiation, development, and apoptosis. The mammalian NF-κB proteins consist of five related family members that bind as homodimers or heterodimers to 10-base-pair κB sites. All of these family members have a Rel-homology domain (RHD) essential for DNA binding and dimerization. RelA (also known as p65), RelB, and cRel have C-terminal transcription activation domains (TADs) that serve to positively regulate gene expression. The two other mammalian NF-κB proteins are synthesized as larger p105 and p100 precursor proteins, which have C-terminal ankyrin repeats that inhibit DNA binding until partially processed by the proteasome to the smaller p50 and p52 products [102]. All NF-κB proteins are capable of homodimerization or heterodimerization with the other NF-κB proteins, with the exception of RelB, which can only form heterodimers. Although there are a few exceptions in which NF-κB contributes to cell death, in most cases the expression of NF-κB target genes promotes cellular survival. The possible roles of NF-κB in relation to NOX family activation in AAAs are therefore complex and require further investigation, as their relative roles remain unknown.
In the context of AAA formation, NF-κB acts as a cytokine-responsive transcription factor that promotes macrophage MMP expression to drive the destruction of the aorta during AAA progression. Several studies have investigated mechanisms of NF-κB signaling in AAAs [98,99,103]. The first study found that inhibition of NF-κB and ETS in rats could decrease experimental AAAs [103]. Additional studies found that NF-κB expression was elevated in AAAs and that treatment with pyrrolidine dithiocarbamate (PDTC), a pharmacologic inhibitor of NF-κB, resulted in decreased IL-1, IL-6, and MMP9 in experimental AAAs [98]. The importance of endothelial NF-κB signaling was demonstrated by Saito et al. using transgenic mice expressing dominant-negative IκBα selectively in endothelial cells (E-DNIκB mice) [99]. These mice demonstrated both decreased intimal hyperplasia and decreased experimental AAAs following endothelium-specific transgenic inhibition of NF-κB in pre-clinical AAA studies.
While these factors are some of the major mediators of NOX family and ROS signaling in vascular diseases, there are additional factors, such as JNK and PKC, whose roles in AAA formation and rupture remain to be determined by future studies. As evidenced by the current review, ROS signaling is complex, and its mechanisms as they pertain to AAA rupture remain unclear.
Influence of Genetic Deficiency of NADPH Oxidase/NOX Components on Experimental AAAs
Several studies have demonstrated that NOX1 is important for aortic aneurysm formation using the angiotensin II infusion murine model (Table 2). The first of these studies demonstrated that NOX1-/- mice had decreased AAA size through a change in tissue inhibitor of matrix metalloproteinase 1 (TIMP1) expression [104]. A second study on the hph1 background examined the effects of NOX1-/-, NOX2-/-, and NOX4-/- in angiotensin II infusion models [105]. The study found that elimination of each of the three factors decreased aortic rupture rates, aortic aneurysm size, oxidative stress, and eNOS production [105]. In the case of NOX1-/-, in vitro studies were able to link the downregulation of fibulin 5 to NOX1 in SMCs in aortic dissection, suggesting a possible mechanism for changes in fibulin 5 expression during vascular disease progression [106]. Studies using conditional NOX1 mice have been performed in atherosclerosis and demonstrated that smooth muscle-specific elimination of NOX1 increased migration and proliferation and increased phenotypic switching to a macrophage-like state [107]. However, these studies have yet to be performed using aortic aneurysm murine models. The NOX2 isoform has also been investigated in experimental mouse models in several recent studies. The first study coupled the elimination of NOX2 with the hph1 background followed by angiotensin II infusion and found decreased rupture, decreased aortic aneurysm size, and decreased eNOS production in the NOX2-/- experimental aneurysm model [105].
The second study investigated the role of NOX2 in the context of NADPH oxidase-dependent high-mobility group box 1 (HMGB1) expression and found that the elimination of a single allele of NOX2 decreased experimental AAAs; decreased MMP2 and MMP9 activity; and decreased the cytokine production of IL-17, IL-23, and IFN-γ [94]. Interestingly, in the reverse experiments, over-expression of NOX2 in the endothelium followed by treatment with angiotensin II did not result in increased aneurysm size or severity despite increased superoxide formation [114]. NOX2 conditional mice have been created and used to study the effects of myeloid-specific elimination of NOX2 followed by high-fat diet feeding for 16 weeks [115]. Following high-fat diet feeding, the NOX2 myeloid-specific conditional knock-outs demonstrated lower body weight, delayed adiposity, attenuated visceral inflammation, and decreased macrophage infiltration and cell injury in visceral adipose tissue relative to control mice [115]. In addition, the effects of a high-fat diet on glucose regulation and circulating lipids were attenuated in the myeloid-specific NOX2 conditional knock-out mice. However, no published studies have yet investigated NOX2 conditional knock-out mice in aortic aneurysm formation.
There have been several studies investigating possible mechanisms of NOX4 in aortic aneurysm formation. The first study, as previously mentioned above, combined elimination of NOX4 on the hph1 background and found that the elimination resulted in decreased rupture, decreased aneurysm size, and decreased eNOS production [105]. A second study examined the role of NOX4 in the Fbn1 C1039G/+ ascending aneurysm murine model [108]. These studies found that NOX4 was elevated in human and murine aortic aneurysm tissue and that the elimination of NOX4 in the fibrillin ascending aneurysm models decreased aneurysm size and preserved the elastic lamina. NOX4 conditional mice have been investigated in the lung epithelium; however, no studies have investigated the role of cell-specific elimination of NOX4 in aortic aneurysm formation [116].
The possible mechanisms of NOX5 in aortic aneurysm formation appear to diverge from those of some of the other family members. Interestingly, endothelial-specific NOX5 over-expression in the ApoE-/- model was found to have no effect in animals on a Western diet; however, once these animals became diabetic and were no longer insulin-responsive, they had twice the number of aortic aneurysms of their controls [109]. These studies suggest that NOX5 could play a role in diabetic aortic aneurysms; however, additional studies will be required to further examine the mechanism of this effect.
Three of the NOX family members have unknown effects in aortic aneurysm formation. First, NOX3 levels have been suggested to be undetectable by qPCR in human AAA patient samples, and the effects of genetic elimination of NOX3 in experimental AAAs in mice currently remain unknown [80]. Mice homozygous for the NOX3 het-3J mutation are characterized by head tilting and lack otoconia in the utricle and the saccule of the ear; these vestibular effects could therefore prevent aortic aneurysm studies in adult mice [117]. Second, DUOX1 has been suggested to be elevated in aortic aneurysms in several studies investigating NOX family isoforms; however, genetic murine models in experimental AAAs remain to be investigated [110]. DUOX1 mice have been created and investigated in urothelial cells; however, the effects of genetic elimination of DUOX1 in aortic aneurysm formation remain unknown [118]. Finally, the effects of DUOX2 on experimental AAAs are currently unknown. A Duox2 thyd mutation has been mapped to chromosome 2 and identified as a T > G base pair change in exon 16 of DUOX2 in mice [119]. These mice have been used to study the effects of Duox2 in the thyroid and in thyroid-related illnesses.
A number of the additional components of the NOX complex have also been investigated in experimental models of AAAs. In the angiotensin II experimental models, studies on the LDLR-/- background with p47 phox elimination demonstrated decreased aortic rupture rates, decreased oxidative stress, and decreased MMP2 levels [112]. A second set of studies on the hph1 background with angiotensin II administration found similar decreases in rupture rates, aneurysm size, and eNOS production [105]. Other components such as p22 phox [31,80,111], p67 phox [80,111], Rac1 [120], and Rac2 [113] are known to be elevated in thoracic and abdominal aortic aneurysms, but their specific effects via genetic elimination in experimental murine AAAs remain to be determined. These data suggest that there is a knowledge gap in the mechanisms surrounding NADPH oxidase function in aortic aneurysm formation and that tools now exist that could help to fill that gap in the near future.
Influence of Pharmacologic NADPH Oxidases in Experimental AAAs
There is a growing body of evidence in experimental AAAs to suggest that pharmacologic targeting of the NOX family and ROS could result in attenuated AAAs. First, there have been a series of studies in humans designed to link ROS inhibition with antioxidant consumption [121,122]. These studies suggest that the consumption of vitamins C, E, or β-carotene could reduce ROS production and, thus, AAA formation. Relatedly, the treatment of murine AAAs with resveratrol, a polyphenol, resulted in decreased AAA formation and elevated p47 phox levels [123]. In separate studies, quercetin, a polyphenol and antioxidant, also led to decreased AAA formation and decreased HIF-1α and VEGF signaling in experimental AAAs [124]. The quercetin studies also treated separately with celecoxib, a known anti-inflammatory, as a positive control to confirm the ability to attenuate AAA formation. Together, these studies suggest that antioxidant or polyphenol consumption could attenuate AAA formation and progression by alleviating oxidative stress. Finally, the drug azathioprine was found to be a direct inhibitor of Rac1 and cJun in endothelial cells and could decrease AAA progression in an angiotensin II murine model of experimental AAAs [120].
Increasing focus in experimental AAAs has been placed on inhibiting several of the downstream mediators of ROS activation. PPARα is a known downstream mediator of ROS activation, and the PPARα agonist pemafibrate was shown not to decrease AAA size but to decrease rupture rates in an angiotensin II murine model of AAA formation [125]. Pemafibrate was also shown to decrease ROS production in the angiotensin II murine model and in human SMCs [125]. Studies have also investigated whether inhibition of HIF-1α, a downstream mediator of ROS activation, could result in attenuated experimental AAAs. These studies found that 2-methoxyestradiol and digoxin, known inhibitors of HIF-1α, could attenuate AAA formation in pharmacologic prevention models and in small AAAs [95]. Together, these studies suggest that the inhibition of downstream mediators of ROS signaling could attenuate AAA formation.
Finally, studies are ongoing to investigate NOX family pharmacologic inhibition in other diseases [5,126], and as isoform-selective inhibitors become more widely available, they may also be used to attenuate AAAs. Recent studies have highlighted specific inhibitors for NOX1 (ML171, GKT136901, and GKT137831 [127][128][129]); NOX2 (GSK2795039 [130], CYR5099 [131], the bridged tetrahydroisoquinolines CPP11G and CPP11H [132], perhexiline, and the cell-impermeable suramin [133,134]); NOX4 (GLX7013114 [135], GKT137831 [129], GKT137928 [136], ACD084 [137], and rosmarinic acid [138]); and DUOX1 (acrolein [139]). These inhibitors were studied in models of other diseases and have not yet been investigated as possible inhibitors of aortic aneurysm formation via NADPH oxidase inhibition. Among the cases examined in this review, few investigated pharmacologic treatment of chronic, well-established AAAs such as those seen in humans. Past candidate pharmacologic therapies for AAAs with great potential in experimental pre-clinical models, such as doxycycline, failed to translate into clinical treatment therapies [4]. Part of the reason for the failure to develop therapies that halt progression or prevent rupture is the remaining knowledge gap in the causal mechanisms of AAA rupture. A second unmet need in experimental pre-clinical models is the treatment of well-established, chronic, large AAAs that better resemble human disease. As more studies treat well-established AAAs and the knowledge gap closes, more medical treatment therapies should translate to human disease. The treatment of established AAAs with these as-yet-untested inhibitors could provide new knowledge of the function of NADPH oxidases in AAA formation and rupture and potential treatment therapies for this deadly disease.
Conclusions
In summary, there is a growing body of evidence to suggest the importance of NADPH oxidases in AAA formation, progression, and rupture. Future studies investigating the roles of these enzymes in processes such as mitophagy, apoptosis, and senescence in aging in AAAs may provide new insight into the mechanisms of these enzymes in AAAs. Furthermore, investigation into mechanisms of known NADPH oxidase pharmacologic inhibitors could provide greater insights into the development of novel pharmacologic treatment therapies to halt AAA progression and to prevent AAA rupture. AAA rupture carries an approximate 50% morbidity and mortality rate, and there remains an unmet clinical need to provide medical treatment therapies to prevent AAA rupture. Perhaps targeting of NADPH oxidases in clinical applications could help halt AAA progression or prevent rupture.
Conflicts of Interest:
The author declares no conflict of interest, and the funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.
Changes in Cigarette Smoking and Vaping in Response to the COVID-19 Pandemic in the UK: Findings from Baseline and 12-Month Follow up of HEBECO Study
This study investigated UK adults’ changes in cigarette smoking and vaping during the COVID-19 pandemic and factors associated with any changes. Data were from an online longitudinal study. A self-selected sample (n = 332) of 228 smokers and 155 vapers (51 participants were both smokers and vapers) completed 5 surveys between April 2020 and June 2021. Participants self-reported data on sociodemographics, COVID-19-related, and smoking/vaping characteristics. During the 12 months of observations, among smokers, 45% self-reported a quit attempt (27.5% due to COVID-19-related reasons) since the onset of COVID-19 pandemic and the quit rate was 17.5%. At 12 months, 35.1% of continuing smokers (n = 174) reported smoking less and 37.9% the same, while 27.0% reported an increase in the number of cigarettes smoked/day. Among vapers, 25.0% self-reported a quit attempt (16.1% due to COVID-19-related reasons) and the quit rate was 18.1%. At 12 months, 47.7% of continuing vapers (n = 109) reported no change in the frequency of vaping/hour, while a similar proportion reported vaping less (27.5%) and more (24.8%). Motivation to quit smoking and being younger were associated with making a smoking quit attempt and smoking cessation. Being a cigarette smoker was associated with vaping cessation. Among a self-selected sample, COVID-19 stimulated more interest in reducing or quitting cigarette smoking than vaping.
Introduction
Given the known impact of cigarette smoking on respiratory disease and immune function [1], the onset of the coronavirus (COVID-19) pandemic raised concerns among public health professionals that smokers may be at a greater risk of COVID-19 infection, severe disease, and death [2]. While there is some evidence that current compared with never smokers admitted to hospital with COVID-19 are at increased risk of severe disease and death [3][4][5], other systematic reviews suggest that current compared with never smokers have a reduced risk of initial COVID-19 infection [6,7]. E-cigarette use (hereafter referred to as vaping) is associated with substantially reduced levels of measured carcinogens and toxins relative to cigarette smoking and is thus less harmful [8]. However, there is a similar public health concern that vaping may increase harm from COVID-19 [2], though evidence is lacking [9,10]. These inconclusive findings notwithstanding, cigarette smoking and vaping remain serious concerns during the COVID-19 pandemic, both regarding the impact of these behaviours on COVID-19 outcomes and the impact of COVID-19 on these behaviours.
Apart from understanding the direct health effects of COVID-19 on people who smoke and/or vape, characterising the pandemic's impact on smoking and vaping behaviour is another important area of investigation. However, findings have thus far been equivocal and limited to the first few months of the pandemic. For instance, a nationally representative survey in the UK found that the first national lockdown (beginning of March 2020 to July 2020) was accompanied by an increase in motivation to quit smoking and the number of quit attempts [11]. Further, reductions in smoking behaviour have been found across several surveys in many countries. Within samples in the UK and US, a substantial proportion of smokers reported an increase in motivation to quit smoking (35%; [12]), increased quit attempts (12-23%; [12,13]), or an actual reduction in smoking frequency during the pandemic (28%; [12,14]). Although a non-trivial proportion of smokers have engaged in smoking cessation attempts during the pandemic, research also suggests that the majority of smokers did not change their behaviour, and in fact, that many smokers increased their smoking frequency [12][13][14]. This may be attributed to the fact that cessation attempts triggered by the pandemic were largely unaided [15]. Research in the UK suggests no change in downloads of a popular smoking cessation app during the initial months of the COVID-19 pandemic [16], though a study in the US from the same time period reported an increase in traffic on the Smokefree website and adult-focused digital intervention platforms in 2020 [17]. Likewise, research on vaping suggests that a proportion of vapers decreased their product use in the first months of the COVID-19 pandemic (10-24%; [10,14]), though some vapers reported an increase (24-40%; [10,14]), and half did not change their use (50%; [10]).
For some, boredom and restrictions in movement during lockdowns or other behavioural restrictions might have stimulated increased smoking and vaping [10,18], while for others, concerns about contracting COVID-19 and becoming severely ill might have motivated them to improve their health by stopping smoking and vaping [10,19]. For instance, perceived risk of severe infection from COVID-19 was found to be a positive predictor of motivation to quit smoking [20], and smokers who reported that COVID-19 was a greater risk to smokers than non-smokers also reported a reduction in their smoking behaviour in the first months of the pandemic [14]. In addition, smokers and vapers who had a direct experience with COVID-19, such as someone in their household testing positive, reported a stronger desire to quit smoking and/or vaping [19]. Irrespective of COVID-19-related reasons for quit attempts and quit rate, research suggests that quit attempts are linked with motivation to quit, while quit rate is linked with nicotine dependence [21,22].
Critically, although extant studies provide useful information on changes in smoking and vaping behaviour in response to the COVID-19 pandemic, they have been limited to the first few months of the pandemic, and they have largely been cross-sectional. Since COVID-19 restrictions are constantly changing and COVID-19 vaccines have been widely administered since the beginning of 2021 (at least in high-income countries), these may have changed attitudes and behaviours towards COVID-19. Longitudinal studies have reported that many health behaviours (i.e., physical activity [23] and dietary behaviour [24]) have changed dynamically over the course of the pandemic and in response to changing COVID-19 restrictions. As such, smoking and vaping behaviour may not have remained constant over the course of the COVID-19 pandemic.
This study uses data collected over 12 months to investigate the long-term effects of the COVID-19 pandemic on smoking and vaping in UK adults. Specifically, this study aimed to investigate how smoking and vaping have changed during the first year of the COVID-19 pandemic, to identify factors associated with any changes, and to explore whether COVID-19 has acted as a source of motivation for smokers and vapers to quit. Among cigarette smokers and vapers at baseline (April-June 2020; covering the period of the first national COVID-19-related lockdown in the UK), the research questions were:
RQ1. What proportion at a 12-month follow up (May-June 2021; ease of COVID-19-related restrictions in the UK) reported having made a quit attempt and a successful quit attempt?
RQ2. What proportion at a 12-month follow up (May-June 2021) reported having changed their cigarette or vaping consumption?
RQ3. Which, if any, sociodemographic, smoking/vaping, and COVID-19-related characteristics are associated with quit attempts, quit rate, and changes in consumption?
RQ4. What proportion self-reported COVID-19-related reasons for making a quit attempt?
Study Design
Analysis of longitudinal data from a prospective online survey of adults residing in the UK: the HEalth BEhaviours during the COVID-19 pandemic (HEBECO) study (https://osf.io/sbgru/, accessed 17 December 2021). The study was approved by the Ethics Committee at the UCL Division of Psychology and Language Sciences (CEHP/2020/759). Baseline data collection occurred between April and June 2020, and follow-up surveys were administered at 1 month (FU1), 3 months (FU2), 6 months (FU3), and 12 months (FU4) from the baseline participation date. This analysis uses data from the baseline and 12-month follow up, apart from quit attempts, which were measured at each follow up.
Study Sample
This study included a self-selected sample of UK-based adults (18+ years) who were either smokers or vapers (some were both smokers and vapers, i.e., dual users), who completed the baseline survey of the HEBECO study between 23 April 2020 (initiation of recruitment) and 14 June 2020 inclusive (marking the end of the first national UK lockdown), and who were successfully followed up after 12 months (FU4; ease of COVID-19-related restrictions in the UK). A total of 2994 participants completed the baseline questionnaire of the HEBECO study, of whom 751 were smokers and/or vapers (556 smokers; 337 vapers; among these, 142 were dual users) and thus potentially eligible for this study. Of these 751 eligible participants at baseline, 332 (44.2%) were successfully followed up at 12 months. Participants who were successfully followed up at 12 months were significantly older (p < 0.001), more often of white ethnicity (p = 0.004), more often had post-16 education qualifications (p = 0.004), and were more often non-smokers (p = 0.003) than those who did not complete the 12-month follow-up (Table 1).
Initial recruitment at baseline was online and involved sharing study invitations via multiple channels, including unpaid and paid advertisements on social media (e.g., Facebook, Twitter, Reddit) and an email campaign across the network of UCL, other universities in the UK, Public Health England, Cancer Research UK, charities, and local authorities across the UK. The full recruitment strategy is available online (https://osf.io/sbgru/, accessed 17 December 2021). Participants gave their written consent prior to data collection. Data were captured and managed within the REDCap electronic data system [25,26]. Participants were followed up via email (except for participants who explicitly opted out), with up to three reminders to complete the survey sent at each follow up. Reasons for not completing the follow-up surveys were not assessed.
Measures
All measures were self-reported.
Outcomes
Smoking status was assessed at baseline and at each follow up with the question 'Which statement about tobacco use and cigarette smoking best describes you?' [27], with the options (i) I smoke cigarettes (including hand-rolled) every day; (ii) I smoke cigarettes (including hand-rolled), but not every day; (iii) I do not smoke cigarettes at all, but I do smoke tobacco of some kind (e.g., pipe, cigar or shisha); (iv) I have stopped smoking completely in the last year; (v) I stopped smoking completely more than a year ago; and (vi) I have never smoked any cigarettes. Those who selected (i) or (ii) were classified as smokers and all others as non-smokers. Participants who were smokers at baseline and non-smokers at the 12-month follow up were considered to have quit smoking successfully since the onset of the COVID-19 pandemic. Smoking quit attempts (among smokers at baseline) were assessed at baseline with the question 'How many quit attempts have you made since COVID-19?', with response options ranging from zero upwards, and at each of the four follow-up waves with the question 'Have you tried to stop smoking for good in the [timeframe since the previous follow-up]?' [27], with the options (i) yes or (ii) no. Participants who reported at least one quit attempt either at baseline or at any follow up were categorised as having made a quit attempt since the onset of the COVID-19 pandemic.
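The status and quit-attempt coding described above can be expressed as a small classification routine. The sketch below is an illustrative reconstruction, not the study's actual analysis code; the option labels and function names are hypothetical.

```python
# Illustrative sketch of the smoking-status and quit-attempt coding described above.
# Option labels and function names are hypothetical, not from the study's codebook.

# Options (i) and (ii) of the smoking-status question are classified as 'smoker'.
SMOKER_OPTIONS = {"smoke_daily", "smoke_not_daily"}


def is_smoker(response):
    """Options (i)-(ii) -> smoker; all other responses -> non-smoker."""
    return response in SMOKER_OPTIONS


def quit_successfully(baseline_response, fu12_response):
    """Smoker at baseline and non-smoker at 12 months counts as a successful quit."""
    return is_smoker(baseline_response) and not is_smoker(fu12_response)


def made_quit_attempt(baseline_attempts, followup_attempts):
    """At least one attempt reported at baseline or at any of the four follow-up waves."""
    return baseline_attempts >= 1 or any(followup_attempts)
```

The same logic applies to the vaping measures, with the vaping-status options substituted for the smoking ones.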
Vaping status was assessed at baseline and at each follow up with the question 'Which statement about vaping (e-cigarette use) best describes you?' (adapted from [27]), with the options (i) I vape or use e-cigarettes every day; (ii) I vape or use e-cigarettes but not every day; (iii) I stopped vaping or using e-cigarettes completely in the last year; (iv) I stopped vaping or using e-cigarettes completely more than a year ago; (v) I have never vaped or used e-cigarettes. Those who selected (i) or (ii) were classified as vapers and all others as non-vapers. Participants who were vapers at baseline and non-vapers at the 12-month follow up were considered as having quit vaping successfully since the COVID-19 pandemic.
Vaping quit attempts (among vapers at baseline) were assessed at baseline with the question 'How many quit attempts have you made since COVID-19?' with response options from zero to any number, and at each of the four follow-up waves with the question 'Have you tried to quit vaping for good in the [timeframe since the previous follow-up]?' (adapted from [27]) with the options (i) yes or (ii) no. Participants who reported at least one quit attempt either at baseline or at the follow ups were categorised as having made a quit attempt since the onset of the COVID-19 pandemic.
Changes in the number of cigarettes smoked per day (assessed among smokers at baseline who were also smokers at the 12-month follow up) were derived from a question asking about the number of cigarettes smoked per day (answer options: 1-40+ cigarettes per day; [28]) at two time points: (i) at baseline and (ii) at the 12-month follow up. Three variables for change in smoking from baseline to the 12-month follow up were developed: (1) increased smoking, (2) decreased smoking, (3) no change.
Changes in frequency of vaping per hour (assessed among vapers at baseline who were also vapers at the 12-month follow up) were derived from a question asking about the number of times per hour of e-cigarette use (answer options: (i) less than once, (ii) once, (iii) up to 5 times, (iv) up to 10 times, (v) nearly all the time, (vi) don't know, adapted from [28]) at two time points: (i) at baseline and (ii) at the 12-month follow up. Three variables for change in vaping from baseline to the 12-month follow up were developed: (1) increased vaping, (2) decreased vaping, (3) no change. People who responded 'don't know' were excluded from this analysis.
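Deriving these three change categories amounts to a simple comparison of the two waves; a minimal sketch (illustrative only, not the study's actual code):

```python
# Illustrative sketch (not the study's code): classifying change in
# consumption between baseline and the 12-month follow up.
def change_category(baseline, followup):
    """Return 'increased', 'decreased', or 'no change'."""
    if followup > baseline:
        return "increased"
    if followup < baseline:
        return "decreased"
    return "no change"

# A smoker going from 20 to 10 cigarettes per day is classified as 'decreased'.
print(change_category(20, 10))
```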
Reasons for making a smoking/vaping quit attempt were assessed at all waves among smokers and vapers at baseline who made a quit attempt, and included: (1) Rules around social distancing/self-isolation due to COVID-19, (2) Children/parents moved back home due to COVID-19, (3) Money is tighter due to COVID-19, (4) Decided it was too expensive, (5) Health problems/concerns related to COVID-19, (6) Health problems/concerns unrelated to COVID-19, (7) Advice from a GP, (8) Government/TV/radio/press advert, (9) Social campaign, (10) Being contacted by local NHS Stop Smoking Services, (11) Being faced with restrictions already before COVID-19, (12) I knew someone else who was stopping, (13) Seeing a health warning on a packet, (14) Something said by family/friends/children, (15) Improve fitness, (16) Other. Participants could select one or more reasons. Participants who selected at least one of reasons 1, 2, 3, or 5 were classified as motivated to quit smoking/vaping due to COVID-19-related reasons.
Predictors/Covariates
Predictors were assessed at baseline, unless otherwise stated. Sociodemographic characteristics included age (continuous in years), gender (female vs all other), education (post-16 qualification vs no post-16 qualification), ethnicity (any white ethnicity vs all other including 'prefer not to say'), household income (≥£50,000 vs. <£50,000 GBP vs prefer not to say; 'prefer not to say' was categorised separately as almost 10% of participants selected this answer option), health conditions (no vs. yes including 'prefer not to say').
Smoking/vaping characteristics included motivation to quit smoking (among smokers at baseline)/vaping (among vapers at baseline), assessed with the question 'Which of the following best describes you?' (Motivation to Stop Scale; [29]) with the options '(i) I REALLY want to stop smoking/vaping and intend to in the next month, (ii) I REALLY want to stop smoking/vaping and intend to in the next 3 months, (iii) I want to stop smoking/vaping and hope to soon, (iv) I REALLY want to stop smoking/vaping but I don't know when I will, (v) I want to stop smoking/vaping but haven't thought about when, (vi) I think I should stop smoking/vaping but don't really want to, (vii) I don't want to stop smoking/vaping'. Those who selected (i-v) were considered motivated to quit smoking/vaping.
COVID-19-related characteristics included perceived COVID-19 risk to one's health, assessed with the question 'What risk does COVID-19 pose to your health?'. This was dichotomised into major risk or significant risk versus all other (moderate risk, minor risk, no risk at all, don't know).
We also assessed diagnosed or suspected COVID-19 (measured at 12-months). Participants were asked whether they had been tested for COVID-19 with a swab test (to check current infection) and whether they had been tested for COVID-19 with an antibody/blood test (to check past infection), with the response options (i) yes and tested positive at least once, (ii) yes and tested negative every time, (iii) yes and awaiting results, (iv) no, and (v) prefer not to say for both questions. Participants who reported not having had a positive COVID-19 test were asked "The key symptoms for COVID-19 are high temperature/fever or a new, continuous cough, and loss or change to your sense of smell or taste. Do you think you HAVE or HAD COVID-19?" with the answer options (i) I think I have COVID-19, (ii) I think I had COVID-19, (iii) I do not think I have or have had COVID-19, (iv) don't know, and (v) prefer not to say. All participants reporting 'yes and tested positive at least once' to question 1 and/or 2, or who reported thinking they have or had COVID-19 to question 3 were considered as diagnosed/suspected COVID-19, with all other responses were considered as not diagnosed/suspected COVID-19.
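The decision rule across the three questions can be summarised as follows (an illustrative sketch; response strings abbreviated from the wording above, not the study's actual code):

```python
# Illustrative sketch of the diagnosed/suspected COVID-19 decision rule.
POSITIVE = "yes and tested positive at least once"
SUSPECTED = {"I think I have COVID-19", "I think I had COVID-19"}

def diagnosed_or_suspected(swab_answer, antibody_answer, symptom_answer=None):
    """True if either test was positive at least once, or the participant
    thinks they have/had COVID-19; all other responses count as not
    diagnosed/suspected."""
    if POSITIVE in (swab_answer, antibody_answer):
        return True
    return symptom_answer in SUSPECTED

# A participant with only negative tests but self-suspected past infection:
print(diagnosed_or_suspected("yes and tested negative every time",
                             "no", "I think I had COVID-19"))
```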
Statistical Analysis
The protocol and analysis plan were pre-registered on Open Science Framework (accessed 17 December 2021. https://osf.io/cdpqf/). Data were analysed in SPSS version 27 (IBM, New York, NY, USA).
Descriptive statistics were calculated to characterise the sample. Independent t-tests and chi-squared tests were conducted to assess differences in baseline characteristics between participants identified as smokers/vapers at baseline who completed the 12-month follow up versus those who did not.
For RQ1, we conducted a descriptive analysis of the proportions (and 95% confidence interval (CI)) of smokers/vapers at baseline who reported a quit attempt at any time between the onset of COVID-19 pandemic and the 12-month follow up (i.e., FU1-FU4), and of the proportions (and 95% CI) of participants who were smokers/vapers at baseline and non-smokers/non-vapers at the 12-month follow up.
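The proportions and CIs reported below can be reproduced, up to rounding and the choice of interval method, with a standard normal-approximation interval; a sketch:

```python
import math

def proportion_ci(successes, n, z=1.96):
    """Point estimate and normal-approximation 95% CI for a proportion."""
    p = successes / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p, max(0.0, p - half), min(1.0, p + half)

# e.g., 102 of 228 baseline smokers making a quit attempt (see Results)
p, lo, hi = proportion_ci(102, 228)
print(f"{p:.1%} (95% CI {lo:.1%}-{hi:.1%})")
```

The exact bounds reported in the paper may differ slightly if a Wilson or other interval was used.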
For RQ2, we calculated the proportions (and 95% CI) of smokers/vapers at baseline who remained smokers/vapers at the 12-month follow up reporting an increase, decrease, and no change in the number of cigarettes smoked per day/frequency of vaping per hour from baseline to the 12-month follow up.
For RQ3, logistic regression analyses were conducted to examine the association of sociodemographic, smoking/vaping, and COVID-19-related characteristics with making a quit attempt versus not (referent) between the onset of the COVID-19 pandemic and the 12-month follow up. Logistic regression analyses were also conducted to examine the association of quitting smoking/vaping successfully versus not (referent) between baseline and a 12-month follow up with potential explanatory covariates (sociodemographic, smoking/vaping, and COVID-19-related characteristics) included in the model. Additionally, multinomial logistic regression analyses were conducted to examine the association of sociodemographic, smoking/vaping, and COVID-19-related characteristics, with (i) decreased smoking/vaping, and (ii) increased smoking/vaping, versus no change (referent).
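For a single binary predictor, the logistic regression coefficient corresponds to the log odds ratio from a 2x2 table; a minimal sketch with hypothetical counts (not data from this study):

```python
import math

def odds_ratio(a, b, c, d, z=1.96):
    """Odds ratio and 95% CI for a 2x2 table:
    a/b = events/non-events in the exposed group,
    c/d = events/non-events in the unexposed group."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: quit attempts among motivated vs unmotivated smokers
print(odds_ratio(60, 40, 20, 40))
```

The study's models additionally adjust for covariates, which a raw 2x2 odds ratio does not.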
For RQ4, descriptive analysis of the proportion (and 95% CI) of those citing COVID-19-related reasons for making a quit attempt was conducted.
Results
Of the analytic sample (n = 332) of smokers and vapers, 68.7% (228) were smokers and 46.7% (155) were vapers at baseline (among smokers and vapers there were 51 dual users, 15.4%). Overall, participants' mean age was 49.1 (SD = 13.5), more than half were female (58.1%), the majority were of white ethnicity, most had post-16 education qualifications, and three quarters had an annual income of less than £50,000. Almost half of them reported having a health problem and less than one-third perceived being at high risk of COVID-19, while one third had been diagnosed with/suspected that they had COVID-19 (Table 1). At baseline, most current smokers (93.9%) reported low/medium cigarette dependence compared with 71.6% of vapers reporting low/medium e-cigarette dependence. More than half (58.8%) of smokers and a third (28.4%) of vapers were motivated to quit smoking and vaping, respectively.
RQ1: Quit Attempts and Quit Rate among Smokers and Vapers
Of the 228 smokers who were followed up successfully at 12 months, 45.0% (95% CI 38.0-51.0%, n = 102) made a quit attempt at any time between the onset of the COVID-19 pandemic and the 12-month follow up, and 17.5% (95% CI 12.6-22.5%, n = 40) had stopped smoking cigarettes at the 12-month follow up.
RQ3: Factors Associated with Quit Attempts, Quit Rate, and Changes in Smoking and Vaping
Making a quit attempt between the onset of the COVID-19 pandemic and the 12-month follow up and smoking cessation at the 12-month follow up were both associated with being motivated to quit smoking and being younger (Table 2). No significant predictors were identified for any change observed in the number of cigarettes smoked per day between baseline and the 12-month follow up (Table 3).
No significant predictors were identified for making a vaping quit attempt, while vaping cessation at the 12-month follow up was associated with being a cigarette smoker at baseline (Table 4). Comparisons of exclusive vapers and dual users at baseline showed that more dual users than exclusive vapers quit vaping (29.4% and 12.5%, respectively, p = 0.01). No significant predictors were identified for any changes observed in the frequency of vaping per hour between baseline and the 12-month follow up (Table 5).
RQ4: Quit Attempts Due to COVID-19-Related Reasons
Of the 103 smokers who made a quit attempt at any time between the onset of the COVID-19 pandemic in the UK and the 12-month follow up, 27.5% (95% CI 13.0-41.9%) reported COVID-19-related reasons for making a quit attempt. The most popular COVID-19 reason was 'Health problems/concerns related to COVID-19' (25.0%), followed by 'Money is tighter due to COVID-19' (17.5%).
Of the 39 vapers who made a quit attempt at any time between the onset of COVID-19 pandemic in UK and the 12-month follow up, 16.1% (95% CI 10.3-22.0%) reported COVID-19-related reasons for making a quit attempt. Similar to smokers, the most popular COVID-19 reason for vapers was 'Health problems/concerns related to COVID-19' (15.4%), followed by 'Money is tighter due to COVID-19' (12.8%).
Summary of Findings
Using longitudinal data from a sample of UK adults who smoked and/or vaped, we examined quit attempts and quit rates, as well as changes in smoking and vaping, during the first year of the COVID-19 pandemic. Results showed that almost half of smokers in our sample made a quit attempt since the onset of the COVID-19 pandemic, while a quarter of vapers tried to quit vaping in the same period. Similar proportions of smokers and vapers (~18%) reported that they quit successfully during the study period. Additionally, similar proportions of smokers reported smoking less or the same number of cigarettes during the study period, while fewer smokers reported an increase in the number of cigarettes smoked per day. Half of the vapers reported no change in the frequency of vaping per hour, while the remainder were split roughly evenly between vaping less and vaping more. Motivation to quit smoking was associated with making a quit attempt and with quitting among smokers, while being a cigarette smoker was associated with stopping vaping among vapers.
Comparison to Previous Research and Implications
Similar to research conducted in the early days of the COVID-19 pandemic (i.e., [11][12][13]), our findings indicate that a substantial number of smokers made a quit attempt during the first year of the COVID-19 pandemic, and 17.5% quit cigarette smoking successfully. It should, however, be noted that such findings are based on a small sample, which is not representative of the UK population. However, data from the Smoking Toolkit Study suggests that in England in 2020, there was an increase in quit attempts compared with 2018, and an increase in the quitting success rate from 14% to 23% [30]. Additionally, Action on Smoking and Health reports that a million people have stopped smoking since the COVID-19 pandemic in Britain [31]. Potential explanations for such changes include that the COVID-19 pandemic and lockdown periods prompted healthy behaviour change, or changes in usual daily routines and social activities providing the opportunity to change smoking behaviour. Indeed, our findings indicate that a substantial proportion of quit attempts were triggered by the COVID-19 pandemic. Cross-sectional studies during the earlier stages of the COVID-19 pandemic also suggest that some cigarette smokers were motivated to quit smoking because of COVID-19 though the proportions were lower than the present findings (e.g., approximately 12% in a representative sample in England; [13]). It can be argued that as the COVID-19 pandemic progressed and disease severity and mortality rates were elevated, people might have been more motivated to follow a healthier lifestyle and tried to quit smoking to protect themselves from COVID-19. Indeed, the most popular COVID-19-related reason for making a quit attempt was health problems or concerns related to COVID-19.
Findings from the present data suggest an association between motivation to quit and both quit attempts and quit rate among smokers. Previous research also indicates that motivation to quit is positively associated with quit attempts, while it has been suggested that higher levels of nicotine dependence are negatively associated with quit success in those making an attempt [21,22]. The present sample of smokers had low to medium levels of cigarette addiction, which may be a reason for not finding a significant association between cigarette dependence and quit rate. It was also found that being younger was associated with quit attempts and quit rate. Closures of schools during the COVID-19 lockdowns in the UK meant that children were housebound, and parents might have had fewer opportunities to smoke because of home-schooling and not wanting to expose their children to second-hand smoke. Additionally, closures of university campuses made many young adults return to their parents' home, which might have triggered quit attempts and quit success. Indeed, research suggests that many college students paused their smoking and vaping during the first COVID-19 lockdown in the US [32].
Similar proportions of smokers self-reported smoking less or the same number of cigarettes per day since the COVID-19 pandemic, and fewer smokers self-reported an increase in the number of cigarettes smoked per day. Research from the first COVID-19 lockdown in England indicated a higher proportion of smokers increasing their product use [11]. Similarly, research in the US suggests that more smokers increased than decreased their product use during the early stages of the COVID-19 pandemic [12,33]. It could be the case that increased stress levels during the beginning of the pandemic [34], along with reports that nicotine may be protective against COVID-19 [35], may have triggered increases in smoking in the early days of the pandemic. A clearer understanding of the impact of smoking on COVID-19 outcomes and reductions in stress levels during the later stages of the pandemic might have motivated smokers to reduce cigarette smoking.
Among vapers, our findings indicate that a quarter made a quit attempt and most of them (72%) reported quit success. However, quitting was associated with being a cigarette smoker, and a comparison of exclusive vapers with dual users indicated that more dual users quit vaping since COVID-19, possibly because they simply switched back to smoking. In continuing vapers, this longer-term 12-month study confirmed findings from an earlier short-term study at the beginning of the pandemic [10], showing that the majority did not change their vaping use. However, a higher proportion increased vaping in the early stages of the pandemic, probably because they were staying at home, where there were fewer or no restrictions and more opportunities to vape; only 10% decreased their vaping during the first lockdown in the UK compared with around a quarter a year later. Our results also suggest that a minority of vapers (a smaller proportion than smokers) were motivated to change their product use due to the COVID-19 pandemic, reflecting previous work [10,36]. Such findings may be attributed to contradictory media reports on the possibility of nicotine being protective against COVID-19 [35], reports that nicotine-containing vaping is generally safer than cigarette smoking [37], and inconclusive findings regarding the association of vaping with COVID-19 infection, disease severity, and death [9,10].
Strengths and Limitations
The present study has several strengths. It is one of only a few reporting changes in smoking and vaping over a 12-month period during the pandemic in the UK. Much of the available work assessing the effect of the COVID-19 pandemic on smoking and vaping is dependent on cross-sectional, retrospective studies, which have the potential for recall bias. Furthermore, the variety of measures collected is another advantage of the present study, permitting a detailed analysis of a broad range of potential correlates of changes in smoking and vaping during the COVID-19 pandemic. The measurement of quit attempts at multiple points during the 12-month follow up is another strength of the study, as a large proportion of unsuccessful quit attempts fail to be reported if they lasted a short time and occurred long ago [38]. However, the study also had several limitations. First, smoking and vaping status and product use were exclusively self-reported, though self-reporting of smoking behaviours in low-demand surveys has been shown to be reliable [39]. Second, there is the possibility of selection bias, as the sample was self-selected, and there were also differences between the baseline and follow-up samples. Additionally, people who participated in a study examining the influence of COVID-19 on health behaviours may have had a greater interest in helping to tackle the pandemic than the general population. Third, for some analyses the sample size was small, resulting in wide confidence intervals. Future research is needed to examine changes in cigarette smoking and vaping in representative samples, to examine how COVID-19 may stimulate interest in reducing or quitting smoking and vaping as well as initiation of these behaviours, and to explore how the pandemic could serve as a novel opportunity to promote cessation or harm reduction during current and future respiratory viral pandemics.
Conclusions
In conclusion, our findings suggest that many smokers and vapers attempted to stop smoking or vaping, though more smokers than vapers did so, and a high proportion were successful. Additionally, most smokers reported a decrease or no change in cigarette consumption during the COVID-19 pandemic. Similarly, half of vapers reported no change in their vaping consumption and a quarter of them reported vaping less. On the one hand, the pandemic has provided motivation to stop smoking in particular; on the other, it may have pushed vapers to switch back to smoking.

Funders had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review or approval of the manuscript; or the decision to submit the manuscript for publication.
Institutional Review Board Statement:
The study has been approved by UCL Research Ethics Committee at the UCL Division of Psychology and Language Sciences (PaLS) (CEHP/2020/579) as part of the larger programme 'The optimisation and implementation of interventions to change behaviours related to health and the environment'. All participants provided fully informed consent. The study is GDPR compliant.
Effect of Steel Surface Roughness and Expanded Graphite Condition on Sliding Layer Formation
The aim of the research was to evaluate the influence of the initial roughness of a steel pin cooperating with a graphite ring—dry and wet—on the mechanism of sliding layer formation. A ring–pin friction pair was used for the study, where the rings were made of expanded graphite, while the pins were made of acid-resistant steel. In the first case, the steel pin interacted with a dry graphite ring, and in the second case, the graphite rings were moist. To determine the effect of initial surface roughness, the pins were divided into three roughness groups. To determine changes in surface geometry due to material transfer, the Ra and Rz parameters were measured. This project investigated how the initial roughness of the steel pin surface cooperating with expanded graphite influences the formation of the sliding layer. Increasing the initial roughness of the steel surface interacting with the graphite contributes to faster layer formation and reduced roughness. The state of the expanded graphite—dry and wet—influences the formation of the sliding layer of graphite—a wet graphite component causes a faster smoothing of the steel surface. The running time of the wear apparatus has an effect on the resulting layer. The highest roughness group is the most favorable from the viewpoint of sliding layer formation.
Introduction
The material transfer phenomenon in a graphite-metal pair consists of the transfer of graphite to the metal surface, so that, with time, the pair can work in a graphite-graphite configuration. The literature data on this phenomenon mainly refer to the material transfer of plastics containing graphite as a filler [1,2]. These studies show that graphite-modified materials form a sliding layer on the metal surface, which is a factor retarding the destruction process on friction surfaces and increases the product of the pressures and speeds at which the exploitation of the tested materials is possible [3][4][5][6].
In contrast to graphite-modified materials, the phenomenon of transfer of pure graphite to a steel surface is not well recognized. The author of [7] states that graphite, due to its properties, has the ability to adsorb on friction surfaces and form a strong sliding layer oriented in the direction of motion. However, the nature and course of the layer formation depend on many factors. These include, but are not limited to, the operating environment, the temperature, the material and the roughness of the mating surfaces [8,9]. An important factor affecting the resulting sliding layer is the moisture content of the sliding pair's operating environment. Studies [10] performed on a friction pair of bronze and a graphite-stainless steel composite have shown that the presence of water significantly reduces surface wear. The positive effect of water on the work of a graphite-graphite pair was confirmed by another study [11]. An example of sliding pairs widely used in aqueous environments are face seals operating in a ceramic-graphite ring configuration [12,13]. According to one theory [7], graphite shows good lubricating properties due to water adsorption. The adsorbed water molecules on the main sliding planes reduce the adhesion between them and thus reduce the coefficient of friction as they slide [11,14,15]. The problem turns out to be that too much water washes away the resulting graphite wear particles, which are necessary for the formation of the sliding layer [16]. The existence of the problem is confirmed by studies [17,18], which indicate that a small amount of water in the sliding pair contributes to a decrease in friction, while too much contributes to an increase in wear. Temperature also has an influence on layer formation in pairs containing a graphite material [19]. For selected material pairs, it was found that the higher the temperature, the higher the coefficient of friction [20].
As the temperature increases, the contact area between the mating materials increases due to the destruction of the protective sliding layer. Moreover, graphite oxidizes at high temperatures, causing instability of the resulting layer. The authors of [21] showed that at temperatures of up to 50 °C, the formed layer is visible as a slight tarnishing of the metal surface. Above 100 °C the resulting layer is thicker, while above 300 °C the graphite covers the original machining marks. The reason for this is the increase in graphite consumption at higher temperatures. Research [20] confirms that the wear rate of the graphite surface increases with increasing temperature. The resulting wear products become embedded in surface unevenness and form a film. In addition to the effect of temperature on the resulting layer, the authors of [22,23] also investigated the effect of the roughness of the surface mating with the graphite component. Observing steel mandrels operating at 150 °C in valve stuffing boxes in association with rings made of expanded graphite, it was noted that the surface roughness of the mandrels affects the layer's appearance. Polished chrome surfaces with a roughness of Ra = 0.15 µm had virtually no traces of the graphite layer. The material did not have the ability to anchor into the microcavities. As the mandrel roughness increased to Ra = 2.6 µm, the visibility and thickness of the layer also increased. However, at ambient temperature, it was difficult to find a correlation between the layer formed and the surface roughness of the mandrel. The effect of surface roughness on the resulting layer was also addressed in [24]. Surfaces with higher roughness parameters were judged to be more favorable because they acted as microscopic reservoirs of solid lubricants. The thick and heterogeneous layers formed remained intact and withstood higher shear loads.
The study showed that even though the adhesion was poor, the ability to anchor the graphite particles in the microcavities contributed to improved layer durability. Subsequent researchers [25] evaluated the effect of roughness and time of operation on the layer formed. They observed that in the early stages of the operation of carbon materials with steel, the wear rate of graphite increased. In addition, the process was characterized by a large number of fine carbon particles that were outside the contact zone. In further operation of materials, the carbon impurities formed a layer of compacted particles on both the carbon surface and the opposite harder mating surface. The presence of the transfer layer provided a cushioning effect and a decrease in the wear coefficient. In the study of [26], the author observed that a significant amount of graphite wear particles produced remain in the contact zone. The wear particles, after repeated deformation and fragmentation, agglomerated at appropriate locations on the wear surfaces and formed a stable layer. The sudden increase in the coefficient of friction was due to the disruption of the graphite layer. An interesting conclusion regarding roughness was made by the authors of [16] who reported that if the difference in hardness is greater than 20%, the surface roughness will play an important role in reducing wear on mating surfaces.
Summarizing the analysis of the state of the art, it can be said that the literature quite unambiguously indicates a positive effect of the formed graphite layer on the durability of the cooperating materials. The effect of the layer depends on a number of factors, including the roughness of the mating surfaces, the temperature and the operating environment. The large number of physical and chemical factors affecting graphite transfer means that a comprehensive explanation of this phenomenon is lacking to date. It is also difficult to determine which of the elementary wear phenomena dominates under specific friction conditions. This paper presents the results of a preliminary study of the material transfer phenomenon in the pair of expanded graphite (dry and wet) and steel with different roughness to determine the mechanism of graphite sliding layer formation on a steel component.
Materials and Methods
A ring-pin pair was used for the study (Figure 1). The rings were made of expanded graphite, which is the carbon material most commonly used in industry due to its easy and inexpensive manufacturing process [27][28][29][30][31]. Unlike traditional graphite, expanded graphite is a material with a lower degree of crystalline ordering; it is porous, with a low bulk density of about 0.001 g/cm³, and susceptible to rolling. Expanded graphite can increase in volume by hundreds or even thousands of times its original dimensions. For example, graphite flakes with thicknesses in the range of 0.4-60 µm in their initial state can increase their dimensions up to about 20 mm [27].
The pins were made of acid-resistant steel (AISI 304). Information about the tested materials is presented in Table 1. The tests were performed on a pin-ring test stand (Figure 1), where the pin moved in reciprocating motion on the surface of the ring made of expanded graphite. The reciprocating motion of the pin was achieved using an eccentric mechanism. The tests were conducted with a load of 18.5 N and an average pin velocity of 25 mm/s over a 6 mm section. The friction surfaces of the pins were tested before mating, after a static load at rest for 30 s, and then after 10 s, 30 s, 5 min, 15 min and 30 min of mating. In the first stage, friction was dry (i.e., the steel pin cooperated with the dry graphite ring), and in the second stage, the graphite rings were moist (they were soaked in water for 24 h before the test). Before the test, the rings were removed from the water container and dried on blotting paper. The working conditions are shown in Table 2. The designed working conditions of the graphite rings and steel pins correspond to the actual conditions of their work. This association is used for the sealing of industrial fittings. The seals used in valves work in a motion-rest system. This system may be disadvantageous, e.g., when it causes vibrations that disturb the movement of the rubbing elements, the so-called stick-slip phenomenon, which is characteristic of graphite. In order to determine the influence of the initial roughness of the pin on the mechanism of sliding layer formation, tests were carried out in which the graphite ring had the same roughness parameters in each case, while the surfaces of the cooperating steel pins were made in three different roughness groups, hereinafter designated I, II and III.
To determine the changes in surface geometry due to material transfer, the following pin surface roughness parameters were measured: Ra and Rz (Table 3). Three repetitions of the roughness measurements were performed for each surface. After the roughness measurements, the steel surfaces of the samples were examined with a scanning microscope (Nikon, Tokyo, Japan), which enabled a preliminary assessment of the formed sliding layer. Measurements of 2D surface stereometry were made with a Zeiss contact profilometer (Carl Zeiss AG, Oberkochen, Germany) equipped with heads with an induction transducer and SUFORM software by SAJD METROLOGIA (Kielce, Poland), which allows measurement and analysis of straightness deviations and surface roughness. The measurements were made with a sampling length λc = 0.8 mm and a corresponding measuring length Ln of 4 mm.
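The Ra and Rz parameters used above follow the standard definitions: Ra is the arithmetic mean deviation of the profile from its mean line, and Rz is the mean peak-to-valley height over the sampling lengths. As an illustrative sketch only (the profile below is a synthetic sinusoid, not the measured data, and the segment count is an assumption), the calculation can be reproduced as:

```python
import numpy as np

def roughness_ra_rz(profile, n_segments=5):
    """Illustrative Ra/Rz computation from a sampled height profile.

    Ra: arithmetic mean deviation from the mean line.
    Rz: mean peak-to-valley height over n_segments sampling lengths.
    """
    z = np.asarray(profile, dtype=float)
    z = z - z.mean()                       # remove the mean line
    ra = np.mean(np.abs(z))                # arithmetic mean deviation
    segments = np.array_split(z, n_segments)
    rz = np.mean([s.max() - s.min() for s in segments])
    return ra, rz

# Hypothetical example: a sinusoidal profile of amplitude 1 (arbitrary units)
# sampled over a 4 mm measuring length with 0.8 mm wavelength (= the cut-off).
x = np.linspace(0.0, 4.0, 4000)
z = np.sin(2 * np.pi * x / 0.8)
ra, rz = roughness_ra_rz(z)
```

For a pure sinusoid of amplitude A, this yields Ra ≈ 2A/π and Rz ≈ 2A, which is a quick sanity check for the implementation.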
Surface Roughness
The surface roughness parameters of steel pins before and after testing for dry and wet samples are shown in Figures 2 and 3. The results are for pins of roughness group I.
Under dry running conditions, a significant increase in the surface roughness of the pin occurs after 5 min. In further operation, the roughness does not drop below the initial value. The surface of a steel sample working with a moist ring shows a decrease in roughness after 15 min. Already at this stage of the research, the results confirm the established finding that the presence of water affects the lubricating properties of these materials [11,14,15].
A summary of the roughness parameters Ra and Rz of the pins working in dry conditions and in cooperation with moist rings for roughness group I is shown in Figures 4 and 5. The surface tests performed for roughness group I showed that dry operation increased the roughness of the steel samples. When the steel pins worked with moist counter samples, the surface roughness began to decrease after 5 min of operation. The results of the Ra and Rz parameters for group II roughness are shown in Figures 6 and 7. During dry operation, the highest increase in roughness occurred after 5 min. After 30 min, a decrease in roughness values was observed compared to group I. The operation of the friction node with a moist ring showed a significant decrease in roughness after 30 min of operation. The summary of the roughness parameters of the pins working in dry conditions and in association with moist rings is shown in Figures 8 and 9.
The greatest differences are seen after 5 min of friction node operation. The longer running time resulted in a decrease in the surface roughness of the pin. After 30 min of operation, the friction node with moist rings had a lower roughness than before the test. A decrease in roughness was also noted for the friction node running dry. The pin surface roughness measurements performed for group III are shown in Figures 10 and 11.
During dry operation, a decrease below the baseline roughness was noted after 30 min of operation. This distinguishes group III from the previous groups, which retained higher roughness values. For samples mated with moist rings, a decrease in roughness was noted after only 5 min of operation. A summary of the roughness parameters of pins operating in dry conditions and with moist rings for group III is shown in Figures 12 and 13. Group III had the highest initial surface roughness. There was no significant increase in roughness for the dry-worked samples; instead, the roughness decreased after 30 min of operation. Samples working with moist rings had a lower roughness than the initial roughness after only 5 min of operation, and this condition persisted until the end of the friction test. The roughness of dry and wet pins was comparable after 30 min of operation.
In all the analyzed roughness groups, the working time of the graphite-steel pair had an impact on the sliding layer formed, as was also noted in [25].
In order to evaluate the effect of the initial roughness and the operating conditions (dry and moist counter samples) on the sliding layer formation, the results of the surface roughness measurements of the tested pins after 30 min of association operation were summarized (Figures 14 and 15).
The graph in Figure 14 shows that after 30 min of operation the initial roughness of group I increased, while for group II it returned to the initial value. Group III showed a decrease in roughness from the baseline value. The steel surfaces interacting with the moist counter samples after 30 min of operation had, for all groups, a lower roughness compared to the initial roughness.
Physical Model of Sliding Layer Formation
The analysis of the chemical composition of the layer, carried out in the first stage, showed that the material forming the layer was carbon. Roughness profilograms and photographs of the pin surfaces after testing allow us to present a preliminary model of the formation of a graphite sliding layer due to the transfer of graphite to the steel component. In the initial stage, graphite particles are anchored to the uppermost tips of the microcavities (1). Then, as a result of the movement, the amount of graphite material on the steel surface increases (2). After 5 min of work, the visible grooves of the steel surface are filled with graphite; however, the steel asperity tips are still noticeable (3). This is practically no longer perceptible after 30 min of work for samples cooperating with wet rings (4). The photographs, along with the corresponding diagrams, are summarized in Table 4 (schematic diagram of the formation of a graphite sliding layer on the surface of a steel sample and photographs of its surface after interaction with dry and moist counter samples of expanded graphite, roughness group I).
After 30 s of static contact, there is transfer of graphite particles to the metal surface for both dry and moist ring contact; however, it is difficult to see any regularity in the arrangement of the graphite particles. An initial movement lasting 10 s brings no significant change. A movement lasting 60 s causes the roughness of the pin to increase. The reason for this change is the increasing amount of graphite material transferred systematically to the steel surface, first of all onto the tips of the microcavities. The highest values of the roughness parameters are obtained after five minutes of movement. After this time, the surface roughness profile of the pin begins to smooth out. This smoothing proceeds much faster for samples working with moist rings: after 15 min of operation, these samples had a lower surface roughness than before the test.
Studies [22,23] suggested that the sliding layer becomes visible faster on surfaces with higher roughness. This is confirmed by the results obtained here.
In conclusion, it can be said that the graphite layer is formed by three mechanisms. The first is the adhesive transfer of the expanded graphite material to the uppermost tips of the microcavities. At the same time, a second process occurs: the formation of a sliding layer of loose, fragmented particles which, after separating from the surface of the graphite rings, accumulate in the recesses between the microcavities of the steel sample.
The third mechanism occurs when large graphite particles are detached from the ring surface due to the cutting action of the tips of the steel surface microcavities and placed in the recesses between the tips of the microcavities. This proves that in addition to the adhesive interactions between graphite and steel substrate, mechanical interactions play an important role in the formation of the sliding layer, as a result of which the particles detached from the ring surface and not removed from the friction surface participate in the sliding layer formation [24]. During subsequent movements, these particles are pressed between the tips and smeared on their surface. Further loose particles can also adhere to them by means of adhesion. The layer formed in this way consists of long strands at the tips of the microcavities and small and larger particles in the recesses. During further movements, the loose graphite particles are pressed against the graphite particles already in the recesses.
During the operation of the steel samples with both dry and moist rings, wear products are formed and carried out of the friction node area. Loose particles of these products are visible to the unaided eye on the surface of the rings. A significant number of particles outside the contact zone was also mentioned in works [25,26].
Conclusions
The results obtained allow us to formulate the following conclusions:
1. The initial roughness value of the steel surface working with expanded graphite affects the formation of the sliding layer and thus the roughness of the steel surface after testing. Increasing the initial roughness of the steel surface interacting with the graphite contributes to faster layer formation and reduced roughness. This is seen especially for the pins working with dry graphite.
2. The state of the expanded graphite (dry or wet) influences the formation of the graphite sliding layer: a wet graphite component causes a faster smoothing of the steel surface.
3. The running time of the friction node has an effect on the resulting layer:
- the transfer of material to the metal surface already occurs at static contact;
- a significant increase in the surface roughness occurs after 5 min of operation, especially for dry expanded graphite;
- after 30 min of operation, samples interacting with moist expanded graphite have a lower roughness than before the test.
4. Comparing the roughness groups selected for the study, group III is the most favorable from the point of view of the formation of the sliding layer. For this group, the roughness of the dry-worked pins dropped below the initial value after 30 min of work, which was not observed for groups I and II; moreover, the greatest decrease in pin roughness occurred for the moist samples.
Author Contributions: Conceptualization, A.R., G.K. and K.P.; A.R. and G.K. performed most of the experiments and analyses; A.R., G.K. and K.P. created all the figures and the graphical abstract; A.R. undertook the statistical analyses; A.R. wrote the manuscript, with input from all authors. All authors have read and agreed to the published version of the manuscript.
Funding: This research received no external funding.
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement: Not applicable.
Conflicts of Interest: The authors declare no conflict of interest.
Antimicrobial resistance: A new threat in the COVID-19 era?
During the COVID-19 pandemic, antibiotic use increased considerably, partially justified by the fear of bacterial infection. Antibacterial resistance (ABR), driven by increased and unjustified antibiotic use, is a major threat to the economy and global health. In pediatric practice, antibiotics are the most commonly prescribed substances, in both the community and hospital setting. Unjustified use, inappropriate doses and prescription duration promote antibacterial resistance and increase mortality. The majority of current health problems in children are viral infections, even though the rates of infection are higher than in adults, a fact that leads to more frequent diagnostic uncertainties. Moreover, respiratory infections in children carry a greater potential for excessive and incorrect use of antibiotics, which justifies protocol-based evaluations, risk assessment and targeted treatment. The diversity and magnitude of the clinical presentations of SARS-CoV-2 infection, along with early and long-term complications and sequelae also noticed in pediatric practice, created the premises for increased use of antibiotics, aggravating ABR. This is a challenge for the clinician during a period when the development of new antibiotics is no longer a priority for the pharmaceutical industry. Research into the mechanisms that contribute to ABR, innovative therapies, expansion of genetics and implementation of antibiotic stewardship, together with stimulating the pharmaceutical industry to develop new substances, may have the potential to decrease antibacterial resistance in an era of medical and economic uncertainties generated by the COVID-19 pandemic.
INCREASE OF ANTIBACTERIAL RESISTANCE: A GLOBAL PHENOMENON, AGGRAVATED BY THE COVID-19 PANDEMIC
In December 2019, the World Health Organization (WHO), after being notified about an unusual outbreak of pneumonia cases in Wuhan, China, investigated and identified a novel beta-coronavirus causing a severe acute respiratory syndrome (SARS-CoV-2) (1).
Starting in March 2020, the spread of COVID-19 around the globe determined a considerable increase in antibiotic use, justified by the fear of bacterial superinfection.
The increased, unjustified and even abusive use of antibiotics determines the emergence and dissemination of antimicrobial resistance (AMR), which poses a challenge for the global economy and health. AMR is not exclusively a burden of hospital or intensive care patients; it has also become a problem of the community, as well as of consumers.
Current estimates from international antibiotic stewardship programs show that in 2050 AMR will be responsible for the death of 10 million people and, furthermore, will cost as much as US$100 trillion. It has been appreciated that the spectrum of resistance varies widely from region to region, underlining the importance of adapting interventions to the specificity of health problems and the risk evaluation for particular populations and geographic areas (2).
Even in developed countries with large health budgets, AMR is a serious challenge: in the US, 2019 data from the Centers for Disease Control and Prevention (CDC) report 2.8 million antibiotic-resistant infections each year, with over 35,000 associated deaths.
Estimates of the extent of ABR in countries with limited health budgets are poorer, which underlines the importance of implementing antibiotic stewardship programs coordinated through governmental intervention (3). Studies need to be organized and conducted at an international level to evaluate the AMR phenomenon, because pathogens do not respect borders. This issue is even more important in developing countries, where research data are scarce and conditions for the spread of infectious pathogens may be optimal, especially when availability of treatment is sub-optimal (4).
Moreover, since the beginning of the pandemic there have been growing concerns about an additional, aggravating increase in AMR generated by prescriptions for COVID-19 itself, in adults as well as in pediatric practice, even though children have been shown to have mild outcomes (5,6).
With millions of cases globally, the COVID-19 pandemic has had a devastating impact on society as a whole, and its long-term repercussions on AMR are considered an aggravating concern for the healthcare system (7,8,9). This was partially due to elevated antibiotic use in patients infected with SARS-CoV-2, who were considered prone to bacterial co-infection despite the viral nature of the syndrome (10,11,12). Such reports were most pronounced at the beginning of the pandemic.
It has been shown that, despite frequent antibiotic prescription to patients with COVID-19, the prevalence of bacterial co-infection or secondary infection in hospitalized patients is low (3.5% and 14.3%, respectively) (11). This created a potential increase in the selective pressure driving AMR, especially in high-risk patients in a strained healthcare system with scarce surveillance capacity. It could ultimately lead to a long-lasting aggravation of an already elevated AMR with limited capacity for intervention (13,14).
ORIGINS OF AMR IN PEDIATRIC PRACTICE AND CHALLENGES FOR THE CLINICIANS
In pediatric practice, antibiotics are the most commonly prescribed substances, both in the community and in the hospital setting. Unjustified use, inappropriate doses and prescription durations promote antibacterial resistance and increase mortality.
Tackling antibiotic resistance during the COVID-19 pandemic is a new challenge for the pediatrician, especially in an era when multi-drug resistant bacteria will soon overwhelm this medical field (15).
The majority of current health problems in children are viral infections, with infection rates even higher than in adults, which leads to more frequent diagnostic uncertainty. Moreover, respiratory infections in children carry a greater potential for excessive and incorrect antibiotic use, which justifies protocol-based evaluation, risk assessment and targeted treatment. Uncertainty may lead to more antibiotics being prescribed, even in mild cases of COVID-19 in children, as has been shown for complications such as the multisystem inflammatory syndrome in children (MIS-C) (16). Antibiotic prescription was high, mostly in severely affected children and sometimes because of poorly understood underlying pathophysiological mechanisms.
As the majority of children undergo other viral infections during the SARS-CoV-2 waves, the use of antibiotics has to be rationally evaluated.
Antibiotic resistance in pediatric infections is a concern for all clinicians and part of a global emerging threat, raising questions about the origins of the phenomenon.
The increase of AMR in general is driven by several main mechanisms (17): abuse of antibiotics (there is a direct relationship between consumption and the dissemination of resistant bacteria, favoured by the lack of prescription regulations and by easy access); inappropriate prescription (frequent incorrect or unnecessary therapies, even in intensive care units; sub-inhibitory or sub-therapeutic concentrations may promote AMR through genetic alteration); and extensive use in agriculture (growth promotion, transfer of resistant bacteria, alteration of the microbiome).
The origins of AMR in pediatrics, by contrast, involve additional factors and may be regarded as a sensitive tool for predicting the near future, as this is an emerging threat that needs immediate action (18).
According to WHO data, infections caused by multi-drug resistant (MDR) bacteria produce more than 700,000 deaths across all ages, one third of them in newborns (19). Interestingly, even though Sir Alexander Fleming warned the scientific and medical community about antibiotic overuse as early as 1945, an era of abuse in agriculture, veterinary and human medical practice began after World War II and became the driving force of bacterial resistance. Inappropriate use and medical malpractice further contributed to the selection of MDR bacteria, which constitute the overwhelming current burden of AMR (20).
The main driving mechanisms of the pediatric phenomenon are: antibiotic misuse and overuse in hospitalized children and in outpatient care, reflected in the lack of de-escalation or discontinuation of antibiotics according to culture results, even in intensive care units; the use of older antibiotics versus newer ones, depending on available hospital, ward or country resources; posology and dosing in the pediatric population, characterized by variability in pharmacokinetics in children and by a lack of clinical trials, which drives extrapolation of pharmacodynamics and pharmacokinetics from adult studies, calling for a "developmental pharmacology" adapted to pediatric needs and for the intervention of regulatory bodies such as the Food and Drug Administration and the European Medicines Agency (21,22,23); the lack of options when the antibiotics of choice are contraindicated in children; and modern predisposing conditions such as biofilms (bacterial adaptation mechanisms that lead to AMR) and chronic diseases (24).
FUTURE DIRECTIONS: ANTIBIOTIC STEWARDSHIP AND STEPPING BACK FROM THE EDGE
Antibiotic stewardship (AS) is a coordinated set of strategies meant to measure and improve the prescription of antibiotics by clinicians and their use by patients. The aim is to improve patient care and prognosis through optimal therapies, to reduce collateral effects by decreasing antibiotic resistance, and to lower the costs of antimicrobial therapy. The concept was introduced by the Infectious Diseases Society of America in 2007 (25).
The core elements of hospital antibiotic stewardship programs are the following: leadership commitment (dedicating the necessary human, financial and technology resources); accountability (appointing a single leader responsible for program outcomes); drug expertise (appointing a single pharmacist leader responsible for improving antibiotic use); action (implementing at least one recommended action, such as systematic re-evaluation 48 hours after initial treatment); tracking (monitoring antibiotic resistance and prescribing patterns); reporting (regularly reporting information on antibiotic use and resistance to the medical staff); and education (educating the medical staff on AMR and optimal prescribing).
All major medical regulatory organizations implement the principles of antibiotic stewardship to promote responsible use and prescribing by clinicians, with patient health as the goal.
In pediatric practice, AS has to be centered on the differences in common infectious conditions, drug-specific considerations and treatment recommendations, which depend on the setting where it is applied (outpatient/inpatient, personnel, infrastructure), on the approaches used to evaluate effectiveness and on the level of knowledge (26).
THE FUTURE OF AMR, ARTIFICIAL INTELLIGENCE AND THE HUMAN MIND
Artificial intelligence (AI), a revolutionary field that has proved extremely useful in many human activities, including medicine, could be adapted to the needs of the fight against AMR in pediatrics.
In pediatric infectious disease, AI could play a role in enhancing the effectiveness of AS (27). Integrating machine intelligence with human reasoning could make it possible to predict and evaluate infectious disease and even to support appropriate antibiotic prescription.
Based on modern research, it may become possible to predict AMR, to collaborate with the pharmaceutical industry, to discover new molecules, to enhance diagnostic procedures and to reduce costs. Most of the proposed solutions are not intended to replace personal medical judgment and expertise, but to provide a useful tool in the battle against AMR.
Limitations of AI have to be taken into consideration, including the lack of large pediatric datasets and randomized controlled trials, as well as data protection and security.
In terms of relevance to global health research, AI-driven interventions that aim to fight AMR may be grouped into four categories: diagnosis; patient morbidity and mortality risk assessment; disease outbreak prediction and surveillance; and health policy and planning (28). If these directions are accomplished, and ethical, practical and regulatory considerations are clarified, the global health community may benefit in fighting the overwhelming pandemic of multi-drug resistant bacteria.
CONCLUSIONS
The diversity and magnitude of clinical presentations of SARS-CoV-2 infection, along with early and long-term complications and sequelae, also seen in pediatric practice, created the premises for increased antibiotic use during the COVID-19 pandemic, aggravating AMR.
Diagnostic uncertainty, associated comorbidities, the polymorphism of clinical presentation in MIS-C, limited antibiotic stewardship incentives in low- and middle-income countries, and poor surveillance of antibiotic prescription, along with the decreased interest of the pharmaceutical industry in developing the new molecules needed in the fight against AMR, are the challenges for the clinician during the COVID-19 pandemic.
Rethinking strategies to stimulate the pharmaceutical industry to invest in research on new molecules, fostering international collaboration through medical and scientific societies and governments and, last but not least, responsibility and clear judgment in decisions regarding antibiotic use may brighten the perspective of the fight against AMR, amid the current and future pandemics.
Yielding to percolation: a universal scale
A theoretical and computational study analysing the onset of yield-stress fluid percolation in porous media is presented. Yield-stress fluid flows through porous media are complicated by the non-linear rheological behaviour of these fluids, rendering the conventional Darcy-type approach invalid. A critical pressure gradient must be exceeded to commence the flow of a yield-stress fluid in a porous medium. As a first step towards generalising the Darcy law for yield-stress fluids, a universal scale for the critical pressure gradient is derived from the variational formulation of the energy equation, reducing it to a purely geometrical feature of the porous medium. The proposed scaling is then validated both by exhaustive numerical simulations (using an adaptive finite element approach based on the augmented Lagrangian method) and by previously published data. The considered porous media are constructed from randomised obstacles of various topologies, namely square, circular and polygonal obstacles, the latter generated from Voronoi tessellations of the circular cases. Moreover, computations for bi-dispersed obstacle cases are performed, further demonstrating the validity of the proposed universal scaling.
Introduction
Yield-stress fluid flows through porous media are inherent to many industries, including filtration, oil & gas and mining (Frigaard et al. 2017), and also to numerous other applications such as biomedical treatments (Keating et al. 2003). Although many aspects of Newtonian flows in porous media are well covered in the literature, when it comes to yield-stress fluids our understanding of the phenomenon is limited, mainly because modelling this problem is cumbersome due to the computational costs and/or the complexity of the experiments needed to carry out the analysis.
To overcome these barriers, several studies focused on pore-scale features of this problem (Bleyer & Coussot 2014; Shahsavari & McKinley 2016; De Vita et al. 2018; Bauer et al. 2019; Waisbord et al. 2019; Chaparian et al. 2020). However, it is still unclear how to link/upscale the micro-scale studies to the macro-scale, especially because the non-linearity of the constitutive equations renders the bulk transport properties unpredictable from pore-scale dynamics. Nevertheless, in the intricate transport mechanism of yield-stress fluids through porous media, several mutual features can be identified regardless of the scale on which the previous studies focused. In a number of studies (Talon & Bauer 2013; Liu et al. 2019; Chaparian & Tammisola 2021; Talon 2022), four regimes are detected in terms of flow rate (Q) versus applied pressure gradient (∆P/L): (i) when the applied pressure gradient is less than a critical pressure gradient (∆P_c/L), there is no flow (Q = 0); (ii) if the applied pressure gradient slightly exceeds the critical value, the flow is extremely localised in a single channel and the flow rate scales linearly with the excess pressure gradient, while the rest of the fluid is quiescent; (iii) a third regime emerges as the applied pressure gradient increases further: more and more channels open (moderate pressure gradients) and the flow rate scales quadratically with the excess pressure gradient; (iv) finally, when the applied pressure gradient is much higher than the critical value, the flow rate again scales linearly with the excess pressure gradient.
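The four regimes above can be sketched as a purely illustrative piecewise relation between flow rate and excess pressure gradient. The prefactors k1, k2, k3 and the crossover points g1, g2 below are hypothetical placeholders, not values from the paper, and the pieces are not matched continuously; in reality the regimes cross over smoothly:

```python
def flow_rate(g, gc, k1=1.0, k2=1.0, k3=1.0, g1=0.1, g2=10.0):
    """Illustrative Q(∆P/L) curve reproducing the four reported regimes."""
    e = g - gc  # excess pressure gradient beyond the critical value
    if e <= 0:
        return 0.0          # (i) below the critical gradient: no flow
    if e < g1:
        return k1 * e       # (ii) single open channel: linear in the excess
    if e < g2:
        return k2 * e**2    # (iii) progressive channelisation: quadratic
    return k3 * e           # (iv) far above critical: linear again
```

The qualitative point is only the sequence zero / linear / quadratic / linear, which is the signature reported in the cited studies.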
Although these generic features/scales have been evidenced in a large number of studies, an inclusive Darcy-type expression for the bulk properties is still lacking. The very first step towards such a generic model is to thoroughly understand the pressure-gradient threshold and, more generally, the yield limit, which underpins any further progress towards this aim.
Despite previous efforts to address the yield limit of the present problem (Liu et al. 2019; Chaparian et al. 2020; Fraggedakis et al. 2021), the findings are mostly case-dependent, which limits their application to more complicated practical systems. As discussed, in this limit the flow is extremely heterogeneous; hence pore-scale studies are not fully reliable, since they do not contain statistical data on "real" porous media, where a wider range of length scales is involved. Thus, the aim of the present study is to derive a theoretical model based on the principles of yield-stress fluid flows and then to validate the proposed model with exhaustive simulations.
To this end, we construct our porous media from randomly distributed obstacles of various shapes and sizes to avoid any biased results. Namely, three major types of obstacles are considered: circles, squares and polygons. Fluid flow simulations based on the adaptive augmented Lagrangian approach (Glowinski & Wachs 2011; Roquet & Saramito 2003) are then performed, which has been shown to be a reliable tool for investigating the present problem, especially at the yield limit, where a non-regularised rheology is essential (Frigaard & Nouar 2005). To be fit for purpose, both mono-dispersed and bi-dispersed systems are considered. We have recently delved into mono-dispersed circular obstacles by means of pore-network approaches, generating a large data set (Fraggedakis et al. 2021). This data set is adopted here, in conjunction with the present computational data, for further validation of the proposed theory.
The outline of the present paper is as follows: the problem is described in §2.1, together with the details of the numerical method and of the porous media construction. The numerical results are presented in §3. The theory is developed in §4 and compared with the computational results. Conclusions are drawn in §5.
Mathematical formulation
We consider incompressible two-dimensional Stokes flow through a set of obstacles (i.e. X) in a box of size L × L (i.e. Ω), governed by

$$ \nabla \cdot u = 0, \qquad 0 = -\nabla p + \nabla \cdot \tau, \qquad (2.1) $$

where p, τ and u represent the pressure, the deviatoric stress tensor and the velocity vector of the fluid, respectively. We use the Bingham model to describe the fluid's rheology,

$$ \begin{cases} \tau = \left( 1 + \dfrac{B}{\|\dot{\gamma}\|} \right) \dot{\gamma} & \text{iff } \|\tau\| > B, \\[4pt] \dot{\gamma} = 0 & \text{iff } \|\tau\| \leqslant B, \end{cases} \qquad (2.2) $$

in which γ̇ is the rate-of-strain tensor (i.e. ∇u + ∇uᵀ) and ‖·‖ is the second invariant of the tensor. Therefore, yielding obeys the von Mises criterion.
The above equations are non-dimensional and $B = \hat{\tau}_y \hat{l} / (\hat{\mu} \hat{V})$ is the Bingham number, where μ̂ is the plastic viscosity of the Bingham fluid, V̂ is the mean inlet velocity and l̂ is the characteristic length scale, which will be fixed later in §2.2. Hence, the Bingham number is the ratio of the yield stress of the fluid to the characteristic viscous stress. To derive equations (2.1) and (2.2), lengths are scaled with l̂, velocities with V̂, and the pressure and stresses with $\hat{\mu}\hat{V}/\hat{l}$, where x and y are the coordinates in the streamwise and spanwise directions, respectively (see figure 1). Please note that hatted variables are dimensional throughout the paper; the same symbols without hats are used for the dimensionless quantities. As mentioned above, V̂ is the mean inlet velocity; hence

$$ \hat{V} = \frac{\hat{Q}}{\hat{L}_{inl}}, \qquad (2.3) $$

where Q̂ is the flow rate and L̂_inl is the length of the domain's inlet, i.e. the length obstructed by the solid obstacles is subtracted from L̂ to calculate L̂_inl (see figure 1). Therefore, in this setting, the dimensionless flow rate is always equal to L_inl, irrespective of the Bingham number. This approach to formulating the present problem is called the Resistance formulation, or [R]. Indeed, the yield limit in this type of problem setup corresponds to B → ∞.
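As a minimal numerical illustration of this non-dimensionalisation (the dimensional values below are arbitrary placeholders, not parameters from the paper), the Bingham number B = τ̂_y l̂/(μ̂ V̂) can be computed as:

```python
def bingham_number(tau_y, l, mu, v):
    """B = yield stress * characteristic length / (plastic viscosity * mean inlet velocity)."""
    return tau_y * l / (mu * v)

# Hypothetical dimensional values (SI units): tau_y = 10 Pa, l = 1e-3 m,
# mu = 0.1 Pa.s, v = 1e-3 m/s  ->  B = 100, i.e. yield stress dominates viscous stress.
B = bingham_number(10.0, 1e-3, 0.1, 1e-3)
```

Large B corresponds to the yield limit of the [R] formulation discussed above.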
We will predominantly use this approach in the following simulations and analytical derivations. Alternatively, another formulation is possible: in the [M] approach, the applied pressure gradient is used to scale the pressure and the stress tensor (i.e. with $(\Delta\hat{P}/\hat{L})\,\hat{l}$), while the velocity vector is scaled with $(\hat{l}^2/\hat{\mu})\,(\Delta\hat{P}/\hat{L})$. Hence, the non-dimensional applied pressure gradient in [M] is always equal to unity.
In the [M] formulation, the independent flow parameter is

$$ Y = \frac{\hat{\tau}_y}{(\Delta\hat{P}/\hat{L})\,\hat{l}}, $$

which is known as the yield number. Indeed, the flow rate changes as the yield number varies: it is zero when Y ⩾ Y_c and increases as the yield number drops below Y_c. The yield limit in [M] is thus marked by the critical yield number Y_c: if Y < Y_c, the applied pressure gradient is enough to overcome the yield-stress resistance and the fluid flows inside the medium.
There is a one-to-one map between the [R] and [M] approaches: the two formulations are linked through Y (∆P/L) = B. This makes the interpretation of the results feasible, no matter whether the analysis (analytical, computational, etc.) is done in the [R] or [M] setting. For more detailed explanations of these two formulations in porous media flows, or in more general pressure-driven flows, readers are referred to Chaparian & Tammisola (2021).
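A small sketch of this one-to-one map (the numerical values are arbitrary; only the relation Y · (∆P/L) = B comes from the text):

```python
def yield_number(B, dP_dL):
    """[R] -> [M]: Y = B / (∆P/L), from the identity Y * (∆P/L) = B."""
    return B / dP_dL

def pressure_gradient(B, Y):
    """[M] -> [R]: ∆P/L = B / Y, the inverse map."""
    return B / Y

B, dP_dL = 1000.0, 4000.0      # hypothetical dimensionless values
Y = yield_number(B, dP_dL)     # Y = 0.25
assert pressure_gradient(B, Y) == dP_dL   # the round trip recovers ∆P/L
```

This is why results obtained in either setting can be translated into the other without loss.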
Porous media construction
To construct the porous media for the fluid flow simulations, we randomly distribute non-overlapping obstacles (X) inside a square domain (Ω) of size L × L = 50 × 50; see figure 2. The centre of each obstacle is chosen randomly, with uniform distribution, in the interval [−ϵ, L + ϵ] × [−ϵ, L + ϵ], and it is then checked whether the obstacle satisfies the non-overlapping condition. Here ϵ is introduced to let the obstacles cross the computational borders.
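A minimal sketch of this rejection-sampling placement for circular obstacles of unit radius (the margin eps and the attempt cap are illustrative choices, not values from the paper):

```python
import random

def place_obstacles(n, L=50.0, R=1.0, eps=1.0, max_tries=100000, seed=0):
    """Randomly place n non-overlapping circles of radius R in [-eps, L+eps]^2."""
    rng = random.Random(seed)
    centres = []
    tries = 0
    while len(centres) < n and tries < max_tries:
        tries += 1
        x = rng.uniform(-eps, L + eps)
        y = rng.uniform(-eps, L + eps)
        # accept the candidate only if it overlaps no previously placed circle
        if all((x - cx) ** 2 + (y - cy) ** 2 >= (2 * R) ** 2 for cx, cy in centres):
            centres.append((x, y))
    return centres
```

The resulting solid fraction is roughly ϕ ≈ n π R² / L², ignoring corrections for obstacles crossing the borders.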
Three different obstacle topologies are used: circles, squares and polygons, in both mono-dispersed and bi-dispersed cases. In the mono-dispersed circular cases, the radius of the obstacles is used as the length scale, l̂ = R̂, so that each individual (dimensionless) obstacle area equals π. In the mono-dispersed square cases, for consistency, the individual obstacle area is again π, i.e. $\hat{l} = \hat{L}_s/\sqrt{\pi}$, where L̂_s is the length of the squares' edges.
In the bi-dispersed cases, the area of the larger obstacles is 25π, while the area of the smaller ones is still π. This is the only parameter fixed in the construction of the bi-dispersed cases. To ensure generality of the results, both the positions and the number of the larger obstacles are chosen completely at random.
For the polygon cases (see panels (c,f) of figure 2), the domain is first partitioned by a Voronoi tessellation in which the centres of the circles are adopted as the set of points in the Euclidean plane. Each Voronoi cell (cell edges are depicted in red in figure 2) is then shrunk (or expanded, in the bi-dispersed cases) to reach the desired area of π (or 25π in the bi-dispersed cases). Hence, this method provides a variety of shapes for the polygon cases.
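The cell-shrinking step can be sketched as follows: given a polygon as a vertex list (in practice a Voronoi cell, e.g. from scipy.spatial.Voronoi, but any closed polygon works), rescale it about its vertex centroid so that its shoelace area matches a target (π here). The 2 × 2 square below is a stand-in for a real cell, and using the vertex centroid rather than the area centroid is a simplification for illustration:

```python
import math

def polygon_area(pts):
    """Signed shoelace area of a polygon given as a list of (x, y) vertices."""
    n = len(pts)
    s = sum(pts[i][0] * pts[(i + 1) % n][1] - pts[(i + 1) % n][0] * pts[i][1]
            for i in range(n))
    return 0.5 * s

def scale_to_area(pts, target):
    """Uniformly scale a polygon about its vertex centroid to a target |area|."""
    cx = sum(p[0] for p in pts) / len(pts)
    cy = sum(p[1] for p in pts) / len(pts)
    k = math.sqrt(target / abs(polygon_area(pts)))  # linear scale factor
    return [(cx + k * (x - cx), cy + k * (y - cy)) for x, y in pts]

# A 2x2 square cell (area 4) shrunk to the target obstacle area pi:
cell = [(0.0, 0.0), (2.0, 0.0), (2.0, 2.0), (0.0, 2.0)]
shrunk = scale_to_area(cell, math.pi)
```

Because the scaling is uniform, the variety of Voronoi cell shapes is preserved while every obstacle ends up with the same area.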
As mentioned, we are interested here in 2D flows; hence the solid "volume" fraction of the porous medium is denoted by ϕ = meas(X)/meas(Ω), and the porosity of the medium (i.e. the void fraction) is simply 1 − ϕ.
Note that in the polygon bi-dispersed cases, the obstacles may weakly overlap because of the expansion of the cells associated with the larger obstacles.In these cases, the effective solid volume fraction is considered.
Computational details
We implement the augmented Lagrangian method to simulate the viscoplastic fluid flow (Glowinski & Wachs 2011; Roquet & Saramito 2003). This method is capable of handling the non-differentiable Bingham model by relaxing the rate-of-strain tensor. An open-source finite element environment, FreeFEM++ (Hecht 2012), is used for discretisation and meshing, which has been widely discussed and validated in our previous studies; for more details (choice of elements, etc.) please see Chaparian & Frigaard (2017) and Iglesias et al. As discussed in §2.1, in the [R] setting the flow rate must equal L_inl; hence the imposed pressure gradient ∆P/L (a body-force term in the numerical implementation) is iterated to match the flow rate (Roustaei et al. 2015).
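The flow-rate matching step can be illustrated on a configuration with a known solution: for plane Poiseuille flow of a Bingham fluid (half-gap h, plastic viscosity mu, yield stress tau_y), the flow rate under a pressure gradient G is analytic, so G can be bisected until a target flow rate is met. This mimics the iteration on ∆P/L described above with the analytic formula standing in for the FEM solve; it is not the paper's actual solver:

```python
def bingham_poiseuille_Q(G, h=1.0, mu=1.0, tau_y=1.0):
    """Flow rate (per unit width) of plane Poiseuille Bingham flow, half-gap h."""
    if G * h <= tau_y:
        return 0.0                       # below the critical gradient: no flow
    m = tau_y / (G * h)                  # relative plug half-width y0/h
    return (2.0 * G * h**3 / (3.0 * mu)) * (1.0 - 1.5 * m + 0.5 * m**3)

def match_flow_rate(Q_target, G_lo=1e-6, G_hi=1e6, **pars):
    """Bisect the pressure gradient until the analytic flow rate hits Q_target."""
    for _ in range(200):                 # Q is monotone in G, so bisection converges
        G = 0.5 * (G_lo + G_hi)
        if bingham_poiseuille_Q(G, **pars) < Q_target:
            G_lo = G
        else:
            G_hi = G
    return 0.5 * (G_lo + G_hi)
```

In the Newtonian limit (tau_y = 0) the formula reduces to Q = 2Gh³/3μ, which provides a quick sanity check of the iteration.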
In the present study, a number of simulations are performed at different porosities to validate the scaling derived in §4. As mentioned in §1, the main aim here is to present a mathematical model for the yield limit based on the physical features of the problem, and then to validate it with the present simulations and previously published data. Due to the high computational cost of the full fluid flow simulations, we do not follow a statistical approach by simulating the flow in many realisations here. Rather, we use our data previously published in Fraggedakis et al. (2021).
Universal scale
For the present problem defined in §2.1, the energy equation at steady state implies that the work done by the applied pressure gradient, $(\Delta\hat{P}/\hat{L})\int_{\Omega\setminus X}\hat{u}\,\mathrm{d}\hat{A}$, balances the total dissipation, $\int_{\Omega\setminus X}(\hat{\tau}:\hat{\dot{\gamma}})\,\mathrm{d}\hat{A} = \hat{\mu}\int_{\Omega\setminus X}(\hat{\dot{\gamma}}:\hat{\dot{\gamma}})\,\mathrm{d}\hat{A} + \hat{\tau}_y\int_{\Omega\setminus X}\|\hat{\dot{\gamma}}\|\,\mathrm{d}\hat{A}$, which in dimensionless form reads

$$ \frac{\Delta P}{L}\int_{\Omega\setminus X} u\,\mathrm{d}A = a(u,u) + B\,j(u), $$

where a(u, u) is the viscous dissipation and B j(u) is the plastic dissipation. At the yield limit (B → ∞), the viscous dissipation (which is quadratic in γ̇) is at least one order of magnitude less than the plastic dissipation (Frigaard 2019; Chaparian et al. 2020); hence the critical yield number (or, indeed, the inverse of the non-dimensional critical pressure gradient) can be predicted by

$$ Y_c \approx \frac{\int_{\Omega\setminus X} u\,\mathrm{d}A}{j(u)}. $$

One can re-write the numerator as $\int_{\Omega\setminus X} u\,\mathrm{d}A \approx Q\,L = L_{inl}\,L$, since the flow rate is equal to L_inl; see expression (2.3). At the yield limit, the flow in the porous medium is localised in a single channel. Thus, to find the scalings for j(u) and L_inl in this limit, it is worth revisiting the two-dimensional Poiseuille flow of a yield-stress fluid. In this type of flow, the fluid moves as a core unyielded region with constant velocity, sandwiched between two sheared regions in which the velocity profile is parabolic. In the yield limit, these two sheared regions are viscoplastic boundary layers (Piau 2002; Balmforth et al. 2017). In our recent study (Fraggedakis et al. 2021), we have shown that the mean height of the first open channel scales with the porosity, i.e. ⟨h_ch⟩ ∼ 1 − ϕ, and the mean relative length of the first channel scales with the solid volume fraction, i.e.
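The energy argument can be checked numerically on the analytic plane Poiseuille Bingham profile, used here as a proxy for the channelised flow (all parameter values are illustrative): the pressure work balances the sum of viscous and plastic dissipation, and for a sizeable yield stress the plastic part dominates.

```python
def dissipation_split(G, h=1.0, mu=1.0, tau_y=5.0, n=20000):
    """Plane Poiseuille Bingham flow, half-gap h, pressure gradient G (G*h > tau_y).
    Returns (pressure work G*Q, viscous dissipation, plastic dissipation)."""
    y0 = tau_y / G                      # plug half-width
    m = y0 / h
    Q = (2.0 * G * h**3 / (3.0 * mu)) * (1.0 - 1.5 * m + 0.5 * m**3)
    visc = plast = 0.0
    dy = (h - y0) / n
    for i in range(n):                  # midpoint rule over one sheared layer
        y = y0 + (i + 0.5) * dy
        rate = (G * y - tau_y) / mu     # |du/dy| in the sheared region
        visc += mu * rate**2 * dy       # viscous contribution mu * |du/dy|^2
        plast += tau_y * rate * dy      # plastic contribution tau_y * |du/dy|
    return G * Q, 2.0 * visc, 2.0 * plast   # factor 2: two sheared layers

work, visc, plast = dissipation_split(G=10.0)
assert abs(work - (visc + plast)) < 1e-3 * work   # energy balance holds
assert plast > visc                               # plastic dissipation dominates
```

The plug itself contributes no dissipation, consistent with the picture of an unyielded core carried by the sheared boundary layers.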
⟨L_ch⟩/L ∼ ϕ, where ⟨·⟩ stands for the mean quantity, acquired by ensemble averaging over different simulations and various porosities. To elaborate, in a condensed system of obstacles (i.e. at low porosities), h_ch is smaller, since the fluid path is squeezed between the obstacles, or, equivalently, the mean void length between the obstacles becomes smaller as the solid volume fraction increases. On the other hand, the mean relative length of the first channel, or tortuosity (i.e. L_ch/L), scales with the solid volume fraction, since in a denser system the minimum path is zigzag-shaped rather than straight, the latter being more probable in a more dilute system of obstacles. These interpretations are evidenced in figure 5. Inserting the scales for the mean height and the mean relative length of the first channel into expression (4.3), the critical yield number can be re-written as

$$ Y_c \sim \frac{1-\phi}{\phi}, \qquad (4.4) $$

which means that the critical yield number scales with the ratio of the void space to the solid (i.e. obstructed) space.
In figure 6, we present a comparison of the theory (i.e. expression (4.4)) with the data from the simulations performed in the current study and with previously published data: the non-dimensional critical pressure gradient (i.e. 1/Y_c) is plotted versus ϕ/(1 − ϕ). The dashed orange line is the scale derived above, i.e. expression (4.4). The hollow symbols are the present computed data: black and purple are devoted to the mono-dispersed and bi-dispersed cases, respectively; circles, squares and pentagrams represent the circle, square and polygon obstacles, respectively. The filled circle symbols with uncertainty bars are data borrowed from Fraggedakis et al. (2021), where a pore-network approach was utilised to analyse a large number of realisations (∼ 500 for each porosity) with circular obstacles, each colour representing a specific R̂/L̂ ratio. Indeed, the filled circle symbols are ensemble averages of all previously performed simulations, and the uncertainty bars represent the range of obtained values. For further clarification of the data used, please see Fraggedakis et al. (2021). However, as explained in §2.3, the current data are acquired through individual simulations (i.e. they are not ensemble averages of many simulations); hence no uncertainty bars are associated with the new data (i.e. the hollow symbols).
A reasonable agreement can be observed between the derived scale (with a fitted slope ≈ 3.14, or π) and the computational data for all classes of considered topologies. Moreover, the bi-dispersed data also fit the proposed theory reasonably well.
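Putting the scale and the fitted prefactor together, the predicted critical pressure gradient at a given solid fraction can be sketched as follows (treating the fitted slope π as exact is our simplification; it is a fit to the data, not an analytical result):

```python
import math

def critical_pressure_gradient(phi, slope=math.pi):
    """Non-dimensional critical pressure gradient 1/Y_c ≈ slope * phi/(1 - phi),
    using the universal scale Y_c ~ (1 - phi)/phi with the fitted slope ≈ pi."""
    if not 0.0 < phi < 1.0:
        raise ValueError("solid fraction must lie in (0, 1)")
    return slope * phi / (1.0 - phi)

# e.g. at phi = 0.5, void and solid space balance and 1/Y_c ≈ pi
g = critical_pressure_gradient(0.5)
```

As expected from the scale, the predicted threshold grows monotonically with the solid fraction and diverges as ϕ → 1.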
In a very recent study, using a "variational linear comparison" homogenisation method, Castañeda (2023) has derived an upper bound for the critical pressure gradient, in which the solution for Newtonian fluids is used as a test function in the dissipation-rate potential of viscoplastic fluids. This upper bound is shown by the cyan line in the inset of figure 6, along with the proposed universal scale, for comparison. Note that the upper bound proposed by Castañeda (2023) is linear in ϕ/(1 − ϕ), which further validates the universal scale derived here, although its slope is steeper, which is not surprising as it is an upper bound.
Figure 6 (caption, continued): each colour intensity corresponds to a different value of R̂/L̂ between 0.02 and 0.1 (see Fraggedakis et al. (2021) for more details). The black and purple hollow symbols denote the mono-dispersed and bi-dispersed cases, respectively; circles, squares and pentagrams represent the circle, square and polygon obstacles, respectively. Inset: comparison between the upper bound of the critical pressure gradient (cyan line) derived by Castañeda (2023) and the proposed universal scale (dashed orange line); the axes of the inset are the same as in the main figure.
Concluding remarks
Adaptive finite element simulations based on an augmented Lagrangian scheme were performed to study yield-stress fluid flows in porous media. The specific objective was to fully understand the yield limit of this type of flow and to propose a theory for the critical applied pressure gradient, which must be exceeded for flow-assurance purposes. This is a vital, foundational step towards a generic Darcy-type expression for the bulk transport properties of yield-stress fluid flows in porous media.
To this aim, and to avoid biased analysis, flows in various porous media constructed with a wide range of obstacle shapes were investigated. The studied geometries were generated by randomly distributing non-overlapping obstacles of circular and square shape. In addition, more complicated topologies (i.e. polygonal obstacles) were generated using Voronoi tessellations of the circular cases. The computational data include both mono-dispersed and bi-dispersed systems.
In the yield limit, which is the main focus of the present study, the flow is restricted to a single channel connecting the inlet to the outlet, while the fluid outside of it is unyielded and thus quiescent. The configuration of this very first channel was investigated in our previous study (Fraggedakis et al. 2021), where statistical geometrical properties (e.g. height and length) were reported as functions of the solid volume fraction (ϕ), or alternatively the porosity of the domain (1 − ϕ), and can be summarised as ⟨h_ch⟩ ∼ 1 − ϕ and ⟨L_ch⟩/L ∼ ϕ.
A theory was proposed based on the variational formulation of the energy equation.
The leading-order plastic dissipation was approximated by a channel Poiseuille flow at the yield limit, where the channel dimensions are borrowed from the statistical results discussed above (Fraggedakis et al. 2021). Indeed, in the very first channel the transport mechanism is predominantly set by the core unyielded plug in the middle of the channel, and the leading-order plastic dissipation occurs in the sheared boundary layers between the quiescent fluid outside the channel and the mobilised core unyielded region. It should be noted that, owing to the complex shape of this limiting channel in the porous medium, the mechanism is not as simple as outlined above, since the limiting channel is not straight and its height varies (especially in dense systems); see figure 5. Thus, the core unyielded plug and the adjacent boundary layers are not uniform. Nevertheless, since the mean height and length of the channel are used in our model, the proposed scaling remains valid at leading order. This was assessed using the computational data obtained for the wide range of obstacle topologies mentioned above, as well as previously published data. We have shown that our theoretical approach is capable of predicting the numerical data with reasonable agreement. Due to the high cost of unregularised numerical simulations of yield-stress fluid flows, and of handling various obstacle shapes, the available data, especially at the yield limit, are limited. This limitation is even more evident in three-dimensional flows. Although in some studies (Bittleston et al. 2002; Pelipenko & Frigaard 2004; Hewitt et al. 2016; Izadi et al. 2023) the Hele-Shaw approximation for yield-stress fluids has been developed, a compelling study linking this pore-scale approximation to bulk transport mechanisms/features in 3D is still lacking. This is left for future investigations, both theoretical and computational, and would be a massive step forward for many industrial applications.
Figure 1 .
Figure 1. Schematic of the coordinate-system directions and the inlet length L_inl, which in this case consists of two segments depicted in blue.
Figure 2 .
Figure 2. Schematic of the porous media: top row are mono-dispersed topologies and bottom row are bi-dispersed ones.(a,d) square obstacles; (b,e) circular obstacles; (c,f) generated polygon obstacles based on Voronoi tessellation of panels (b,e).
Fraggedakis et al. (2021), which will be discussed later in §4. Recent advances in computational methods for viscoplastic fluids (e.g. the PAL and FISTA methods) accelerate simulations of this type of fluid, yet the implementation of these methods is beyond the scope of this work. Interested readers are referred to Dimakopoulos et al. (2018) and Treskatis et al. (2016, 2018).
Figure 3 .
Figure 3. Mesh generation for a sample case: (a) initial mesh ("uniform" coarse grid), (b) final mesh after 6 cycles of adaptation. This mesh is associated with the simulation illustrated in panel (d) of figure 4. Note that only the part of the mesh inside the white window of panel (d) of figure 4 (at the pore scale) is shown here.
Figure 4
Figure 4 shows the flow in the six sample geometries at ϕ = 0.45 and B = 10^3. As discussed in §2.1, the yield limit in the [R] setting corresponds to B → ∞, so in the illustrated examples, at this relatively large Bingham number, the channelisation is clear. However, different geometries clearly require different "large" Bingham numbers to obtain only the very first open channel. This translates to different critical yield numbers, which is expected for different topologies and will be discussed in §4 together with other features of the flows.
Figure 4 .
Figure 4. Contours of the velocity magnitude (i.e. |u|) for 6 sample simulations at ϕ = 0.45 and B = 10^3. Top-row panels are mono-dispersed cases and the bottom panels are the bi-dispersed ones. The white window in panel (d) marks where the mesh represented in figure 3 belongs.
with thickness δ. To simplify the plastic dissipation functional j(u) substantially, we approximate the flow in the first open channel with the discussed Poiseuille flow. Hence, the leading-order contribution of ∥γ̇∥ can be approximated as ≈ 2 (U_ch/δ) δ L_ch ∼ U_ch L_ch in the boundary layers, where the index "ch" stands for the first open channel. Indeed, U_ch and L_ch represent the velocity of the core unyielded region and the length of the first channel, respectively; see figure 5(a). Moreover, the continuity equation at leading order obeys Q = L_inl ≈ U_ch h_ch, which allows us to rewrite U_ch in terms of the flow rate and the channel height. Thus,
Figure 6 .
Figure 6. Comparison between our theory and the computational results: non-dimensional critical pressure gradient versus ϕ/(1 − ϕ). The dashed orange line is the scaling derived in (4.1). The filled circle symbols with uncertainty bars are the data borrowed from Fraggedakis et al. (2021). Each colour intensity corresponds to a different value of R/L between 0.02 and 0.1 (see the reference for more details). The black and purple hollow symbols denote the mono-dispersed and bi-dispersed cases, respectively. Circles, squares, and pentagrams represent the circle, square, and polygon obstacles, respectively. Inset: comparison between the upper bound of the critical pressure gradient (cyan line) derived by Castañeda (2023) and the proposed universal scaling (dashed orange line). Please note that the axes of the inset are the same as those of the main figure.
|
v3-fos-license
|
2024-03-31T15:55:09.739Z
|
2024-03-25T00:00:00.000
|
268762316
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/qj.4698",
"pdf_hash": "75919a9a54cb2966dc9f13c79ea24c5c6bb2ad6b",
"pdf_src": "Wiley",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2334",
"s2fieldsofstudy": [
"Computer Science",
"Engineering",
"Mathematics"
],
"sha1": "487a3dc4aec61abd3dd5747c8a1a6e71a469623d",
"year": 2024
}
|
pes2o/s2orc
|
Online state and time‐varying parameter estimation using the implicit equal‐weights particle filter
A method is proposed for resilient and efficient estimation of the states and time‐varying parameters in nonlinear high‐dimensional systems through a sequential data assimilation process. The importance of estimating time‐varying parameters lies not only in improving prediction accuracy but also in determining when model characteristics change. We propose a particle‐filter‐based method that incorporates nudging techniques inspired by optimization algorithms in machine learning by taking advantage of the flexibility of the proposal density in particle filtering. However, as the model resolution and number of observations increase, filter degeneracy tends to be the obstacle to implementing the particle filter. Therefore, this proposed method is combined with the implicit equal‐weights particle filter (IEWPF), in which all particle weights are equal. The method is validated using the 1000‐dimensional linear model with an additive parameter and the 1000‐dimensional Lorenz‐96 model, where the forcing term is parameterized. The method is shown to be capable of resilient and efficient parameter estimation for parameter changes over time in our application with a linear observation operator. This leads to the conjecture that it applies to realistic geophysical, climate, and other problems.
INTRODUCTION
Online parameter estimation is the process of inferring values that are often included in numerical models as unobservable quantities, using sequentially collected observations. Since such parameters in numerical models are simplified representations of the modeled characteristics, parameter estimation plays an important role in obtaining accurate and reliable predictions. There are several approaches to parameter estimation, such as using an optimization algorithm under given state variables in the model and using data assimilation (DA) techniques (Evensen et al., 2022).
DA is known as the procedure of incorporating observations into numerical models and obtaining posteriors of the state variables, especially in high-dimensional dynamical systems. Although DA usually focuses on generating an optimal initial state and forecasting the temporal evolution of millions of time-varying state variables (Clayton et al., 2013), parameter estimation is often combined with it to calibrate the models (i.e., estimate the appropriate model characteristics). Therefore, parameter estimation is key to improving prediction accuracy and is as complex as state estimation due to nonlinearities, even for linear dynamical models (Evensen et al., 1998).
Further, parameters can be considered not only static but also time-variant. For example, in hydrological modeling, parameters are usually assumed to be constant and calibrated using a particular data record to obtain an optimal parameter set or stationary parameter distributions. Still, it is necessary to use time-variant parameters to accurately simulate state variables when the calibration period may contain different climate conditions and hydrological regimes compared with the simulation period (Deng et al., 2016). As another example, according to Zhu et al. (2017), state and parameter estimation plays an important role in process monitoring, online optimization, and process control. The difficulty in these applications lies in identifying changes in model parameters when the operating conditions of the processing system have changed or some faults have occurred in it. From the above examples, it can be seen that estimating time-varying parameters plays an important role not only in improving prediction accuracy but also in determining when model characteristics change abruptly. However, the challenging issue is to distinguish whether the cause of the inaccuracy is incorrectly estimated state variables or a change in the model characteristics (i.e., parameters).
A typical method for time-varying state and parameter estimation in high-dimensional dynamical systems is the state augmentation technique, in which the parameter vector is incorporated into the state vector. This technique is also called joint estimation. Generally, Kalman-filter-based methods are used for linear Gaussian systems, whilst particle filter (PF) based methods can be applied to nonlinear non-Gaussian systems. Santitissadeekorn and Jones (2015) indicate that the state augmentation method may become ineffective when the impact of the parameters on the state is weak, and they propose a two-stage filter that combines a PF and an ensemble Kalman filter. This method alternates between estimating the static parameters and tracking the dynamic variables. Although similar approaches using an independent dual PF (Cooper & Perez, 2018) and a nested hybrid filter (Pérez-Vieites et al., 2018) have been proposed, they are only applicable to the estimation of static parameters. Extension to time-varying parameters requires identifying whether a change in the observed states originates from the state variables or the parameters, and its amenability in practical contexts depends on the cross-covariance between states and parameters. In particular, detecting abrupt changes of characteristics in high-dimensional and partially observed nonlinear systems may be problematic because of the relatively low correlation between the observed states and the parameters.
Another issue concerns nonlinearities due to the temporal evolution of the system and the augmented state vector. As in the example using the PF above, parameter estimation methods combined with the PF can deal with nonlinearities, but filter degeneracy can be a critical obstacle for high-dimensional systems such as geophysical and climate systems. To overcome this problem, several approaches have been proposed, including the PF method hybridized with the ensemble Kalman filter (EnKF: Santitissadeekorn & Jones, 2015), as mentioned above. The approach of the equivalent-weights particle filter (EWPF: e.g., Van Leeuwen, 2010; Ades & Van Leeuwen, 2015) allows the proposal density to depend on all particles at the previous time step and assigns equivalent weights to most particles to avoid filter degeneracy. Zhu et al. (2016) proposed the implicit equal-weights particle filter (IEWPF), which combines the method of the EWPF and implicit sampling (Chorin & Tu, 2009) to eliminate the need for parameter tuning. Skauvold et al. (2019) proposed a two-stage IEWPF method to correct the systematic bias in predictions caused by a gap in the proposal distribution of the IEWPF (Zhu et al., 2016). Other approaches to eliminating filter degeneracy are reviewed in Van Leeuwen et al. (2019). However, the above methods focus on estimating state variables or constant parameters.
In this article, we focus on a nonlinear time-varying system where the dimension of the state vector is large while that of the model parameters is comparatively small, with a view to application in geophysical, climate, and other high-dimensional contexts. We then propose a new PF-based parameter estimation method and assess its capability of detecting abrupt changes in characteristics by applying it to the above system. We provide a methodology and results based on the IEWPF of Zhu et al. (2016) as an example of avoiding filter degeneracy. In our application, we assume a linear observation operator and require partial derivatives with respect to the parameters, depending on the dimension of the parameters, although the methodology does apply to nonlinear observation operators and can work with approximate derivatives.
The remainder of the article is organized as follows. Section 2 describes the methodology for estimating time-varying parameters. First, to estimate states and parameters simultaneously, we extend the IEWPF to an augmented state-space model with a correlated covariance matrix. We then propose an IEWPF-based method that incorporates an optimization algorithm from machine learning into the parameter time-evolution model by taking advantage of the flexibility of the proposal density in particle filtering. In Section 3, the effectiveness and advantages of the proposed method are evaluated through comparison with a method without an optimization technique, using the linear model and the Lorenz-96 model (Lorenz, 1996). A summary and conclusions are put forward in Section 4.
Correlated perturbation in augmented state-space model
A typical state-space model for a nonlinear system containing model parameters is described as

x^n = f(x^{n−1}, θ) + β^n,  y^n = H_x(x^n) + ε^n, (1)

where x^n is the state variable at time step n and y^n is the observation vector at time step n. f is the known, possibly nonlinear, function that maps the state from time t^{n−1} to t^n, and H_x is the known nonlinear observation operator. θ is the vector of model parameters, the true values of which are unknown and possibly time-varying. β^n is a random model perturbation drawn from the model-error probability density function (pdf) N(0, Q_x), while the observation error ε^n is drawn from the observation-error pdf N(0, R).
To estimate time-varying parameters sequentially, the state vector is updated according to the following dynamical system by augmenting the parameters as artificial states:

x^n = f(x^{n−1}, θ^{n−1}) + β^n. (2)

Here, η^n is a random parameter perturbation drawn from the pdf N(0, Q_θ), and we require that f is a differentiable function with respect to the parameter. Then, the above state-updating function f can be approximately expressed by a first-order Taylor series expansion at the previous parameter θ^{n−2}:

f(x^{n−1}, θ^{n−1}) ≈ f(x^{n−1}, θ^{n−2}) + (∂f/∂θ)|_{θ^{n−2}} (θ^{n−1} − θ^{n−2}). (3)

Then, by using the time-evolution model at the previous time step n − 1,

θ^{n−1} = θ^{n−2} + η^{n−1}, (4)

we can rewrite Equation 2 as

z^n = f̃(z^{n−1}) + ρ^n, (5)

where we introduce the augmented vector z^n = [x^{nT}, θ^{(n−1)T}]^T, the augmented model f̃, and the augmented perturbation ρ^n.
We also rewrite the observation operator H_x in Equation 1 as follows:

y^n = H(z^n) + ε^n. (6)

The augmented perturbation ρ^n can be drawn from the error pdf N(0, Q̃^n), which is expressed as

Q̃^n = E[ρ^n (ρ^n)^T] = [ E(ζ′^n (ζ′^n)^T)  E(ζ′^n (η^{n−1})^T) ; E(η^{n−1} (ζ′^n)^T)  E(η^{n−1} (η^{n−1})^T) ], (7)

where ζ′^n = (∂f/∂θ) η^{n−1} + β^n. Since the model perturbation and the parameter perturbation are independent of each other and both have zero mean, each matrix element in Equation 7 can be calculated as follows:

E(ζ′^n (ζ′^n)^T) = Q_x + (∂f/∂θ) Q_θ (∂f/∂θ)^T, (8)
E(ζ′^n (η^{n−1})^T) = (∂f/∂θ) Q_θ, (9)
E(η^{n−1} (η^{n−1})^T) = Q_θ. (10)

Then, Equation 7 can be expressed as

Q̃^n = [ Q_x + (∂f/∂θ) Q_θ (∂f/∂θ)^T   (∂f/∂θ) Q_θ ; Q_θ (∂f/∂θ)^T   Q_θ ]. (11)

Note that the Taylor expansion in Equation 3 is used up to the first-order term, so the augmented perturbation ρ^n from Q̃^n includes the linear impact of the parameters on the model evolution over one time step.
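To make the augmentation concrete, here is a minimal numerical sketch, not the paper's code: a toy model f, illustrative dimensions and covariances, assembling the augmented vector z = [x, θ] and the block covariance of the augmented perturbation.

```python
import numpy as np

# Toy nonlinear model x^n = f(x^{n-1}, theta); an illustrative choice,
# not the model used in the paper.
def f(x, theta):
    return x + 0.05 * np.sin(x) + 0.1 * theta

def dfdtheta(x, theta):
    # Jacobian of f with respect to theta for the toy model above
    return np.full((x.size, 1), 0.1)

Nx, Ntheta = 4, 1
Qx = 0.04 * np.eye(Nx)           # model-error covariance (states)
Qtheta = 0.01 * np.eye(Ntheta)   # parameter-perturbation covariance

x, theta = np.zeros(Nx), np.array([0.5])
J = dfdtheta(x, theta)

# Augmented covariance with blocks [[Qx + J Qθ Jᵀ, J Qθ], [Qθ Jᵀ, Qθ]]
Qaug = np.block([[Qx + J @ Qtheta @ J.T, J @ Qtheta],
                 [Qtheta @ J.T,          Qtheta]])

# One correlated augmented perturbation, then one augmented-state update
rng = np.random.default_rng(0)
rho = rng.multivariate_normal(np.zeros(Nx + Ntheta), Qaug)
theta_new = theta + rho[Nx:]
x_new = f(x, theta) + rho[:Nx]
z = np.concatenate([x_new, theta_new])   # augmented state z = [x, θ]
```

The off-diagonal blocks are what carry the cross-covariance between states and parameters that the augmented filter relies on.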
State and parameter update with IEWPF
In this section, we explain how to apply the IEWPF to the update equation, Equation 5, and how to avoid filter degeneracy. Considering a Markovian system with observational errors that are independent from one time to another, the prior pdf can be written as

p(z^n) = ∫ p(z^n | z^{n−1}) p(z^{n−1}) dz^{n−1}. (12)

Then, plugging Equation 12 into Bayes' theorem as a prior pdf, the posterior pdf of the model state given the observations can be written as

p(z^n | y^n) = p(y^n | z^n) p(z^n) / p(y^n). (13)

Suppose we run a particle filter, and the particle weights for the ensemble at the previous time step n − 1 are given by

p(z^{n−1}) = Σ_i w_i^{n−1} δ(z^{n−1} − z_i^{n−1}). (14)

Then, plugging Equation 14 into Equation 13, we obtain

p(z^n | y^n) = Σ_i w_i^{n−1} p(y^n | z^n) p(z^n | z_i^{n−1}) / p(y^n). (15)

Introducing the proposal density q(z^n | Z^{n−1}, y^n), which is conditioned on all particles at time n − 1 (indicated by Z^{n−1}), Equation 15 can be expressed as

p(z^n | y^n) = Σ_i w_i^{n−1} [ p(y^n | z^n) p(z^n | z_i^{n−1}) / ( p(y^n) q(z^n | Z^{n−1}, y^n) ) ] q(z^n | Z^{n−1}, y^n). (16)

The well-known problem of filter degeneracy means that the weight will concentrate on only a few particles, and most particles will have a negligible weight after a few propagations. Snyder et al. (2015) described that the particle filter using the optimal proposal yields minimal degeneracy and provides performance bounds. This can be a serious obstacle to implementing the particle filter when the number of states and observations increases, that is, in a high-dimensional system. Therefore, we use the IEWPF (Zhu et al., 2016), which can avoid this filter-degeneracy problem. From Equation 14, Equation 16 can be expressed as

p(z^n | y^n) = Σ_i w_i q(z^n | Z^{n−1}, y^n), (17)

where w_i is the weight for particle i and is expressed as follows using the proposal density in Equation 16:

w_i = w_i^{n−1} p(y^n | z_i^n) p(z_i^n | z_i^{n−1}) / ( p(y^n) q(z_i^n | Z^{n−1}, y^n) ). (18)

Instead of drawing directly from the proposal density q, we can draw from a standard Gaussian distributed proposal density q(ξ), which is related by

q(z^n | Z^{n−1}, y^n) = q(ξ) / ‖dz/dξ‖, (19)

where ‖dz/dξ‖ denotes the absolute value of the determinant of the Jacobian matrix, which expresses the following transformation:

z_i^n = z̄_i^n + α_i^{1/2} P^{1/2} ξ_i^n, (20)

where z̄_i^n expresses the mode of q(z^n | Z^{n−1}, y^n), P is a measure of the width of that pdf, and α_i is a scalar factor. Note that this expression is similar to the original IEWPF (Zhu et al., 2016), but z_i^n denotes the augmented vector z^n = [x^{nT}
, θ^{(n−1)T}]^T. This means that the transformed variable also has the dimension of the augmented vector. Then, Equation 18 can be expressed as follows:

w_i = w_i^{n−1} p(y^n | z_i^n) p(z_i^n | z_i^{n−1}) ‖dz/dξ‖ / ( p(y^n) q(ξ_i^n) ). (21)

In general, z̄_i^n can be obtained via a minimization of −log q(z^n | Z^{n−1}, y^n), similar to, for example, a 3DVar, and the equal weights can also be obtained numerically. In this article, we will follow Zhu et al. (2016) and assume a linear observation operator, which allows an analytical solution for the equal weights.
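A minimal sketch of the sampling step implied by this transformation (the mode, the width matrix, and the scalar factor are all illustrative values; the Cholesky factor is just one choice of matrix square root):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 5                                     # augmented dimension (illustrative)
zbar = np.zeros(N)                        # mode of the proposal for one particle
P = 0.2 * np.eye(N) + 0.05 * np.ones((N, N))  # an SPD "width" matrix (illustrative)
alpha = 0.8                               # scalar factor for this particle

# Draw xi from the standard Gaussian proposal and map it through
# z = zbar + sqrt(alpha) * P^{1/2} xi
L = np.linalg.cholesky(P)                 # one choice of P^{1/2}
xi = rng.standard_normal(N)
z = zbar + np.sqrt(alpha) * (L @ xi)
```

Because the map from ξ to z is affine, the Jacobian determinant needed in the weight is simply that of √α · P^{1/2}.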
Linear observation model and Gaussian error
Assuming the linear observation model H and Gaussian model and observation errors as shown in Equations 5 and 6, z̄_i^n in Equation 20 can be expressed as explained in Zhu et al. (2016):

z̄_i^n = f̃(z_i^{n−1}) + K ( y^n − H f̃(z_i^{n−1}) ), (22)

where

K = Q̃ H^T (H Q̃ H^T + R)^{−1}, (23)

and P in Equation 20 is

P = ( Q̃^{−1} + H^T R^{−1} H )^{−1}. (24)

Note that Q̃ is the model-error covariance matrix described in Equation 11 and R is the observation-error covariance matrix. Therefore, from Equations 20–24, the equal-weight particles z_i sampled from the posterior pdf in Equation 16 can be constructed using the scalar factor α_i.
The factor α_i needs to be determined so that the weight of each particle i, represented by Equation 21, equals the same target weight for all particles. Introducing w_i^{prev}, which denotes the weight from previous time steps, we can express Equation 21 accordingly (Equation 25). With the above Gaussian assumption, we can write the likelihood and transition densities explicitly (Equations 26 and 27). Taking the logarithm of Equation 25 (Equation 28) and substituting Equations 26 and 20 into Equation 28, then using Equation 20 and the simplified expression for the Jacobian in Zhu et al. (2016), we can rewrite log w_i as a function of α_i (Equations 29 and 30), where N_x is the dimension of the model state. Setting the weights of all particles to the target weight is equivalent to setting all log w_i equal to the constant C, which leads to an equation for α_i (Equation 31). Here, let c_i denote the log-weight offset of each particle i from the target weight C (Equation 32). In practice, this c_i can be determined using the corresponding values for all particles (Equation 33). Therefore, α_i is obtained as a solution satisfying Equation 31 with c_i determined by Equation 33, further assuming that the factor α_i depends on ξ_i^n only through its norm (see the Appendix in Zhu et al., 2016). For every particle to reach the target weight, c_i ≥ 0 should be satisfied; therefore 0 < exp(−c_i/2) ≤ 1 in Equation 34. Furthermore, since the function on the left-hand side, exp(−g_i/2) (g_i)^{N_x/2−1}, has an extremum, Equation 34 can be integrated from N_x/2 to ∞, which yields Equation 35, in which Γ(·,·) is the monotonically decreasing upper incomplete gamma function. Therefore, the solution α_i for every particle i that satisfies Equation 35 can theoretically be both ≤ 1 and ≥ 1. Although α_i ≥ 1 solutions are known to lead to a systematic bias (Zhu et al., 2016), the bias decreases when the state-space dimension N_x increases, that is, in the high-dimensional case. As another solution, Skauvold et al. (2019) proposed the two-stage IEWPF, which can eliminate this bias.
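Since Equation 35 reduces, per particle, to a scalar root-finding problem on the monotonically decreasing upper incomplete gamma function, it can be solved by standard bracketing methods. A schematic sketch (the target value is illustrative, not a value from the paper; the exact left-hand side follows Zhu et al. (2016)):

```python
from scipy.special import gammaincc   # regularized upper incomplete gamma Q(a, x)
from scipy.optimize import brentq

Nx = 1000       # state dimension
target = 0.3    # illustrative right-hand side derived from the offset c_i

# Q(Nx/2, g/2) decreases monotonically from 1 to 0 as g grows, so a unique
# root exists whenever 0 < target < 1 and bracketing is straightforward.
g = brentq(lambda g: gammaincc(Nx / 2, g / 2) - target, 1e-8, 1e6)
```

Monotonicity is what makes the per-particle solve cheap: one bracketed root-find per particle, with no risk of multiple solutions.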
In practice, the following should be considered when generating the posterior distribution by calculating the α_i that satisfies Equation 35. The first point is the computational cost of finding α_i numerically for each particle. To avoid this calculation, Zhu et al. (2016) proposed an approximation in the limiting case N_x → ∞. The solution can then be expressed analytically using the Lambert W function (Corless et al., 1996), which has two branches: α > 1, which gives a large ensemble spread, and α < 1, which gives the opposite effect. The authors proposed adjusting the ratio of sampling α_i for each particle i from either branch, in order to bring the shape of the distribution closer to the ideal one. The results of this dependence will be shown later. The second point is the guarantee of convergence to the posterior distribution. The IEWPF can equalize the weights of all particles, but the convergence of the filter distribution to the posterior distribution was only confirmed experimentally by Zhu et al. (2016) and not shown theoretically.
Parameter nudging with proposal density
The effectiveness of the method proposed in the previous section, which augments the parameters as artificial states, depends on the cross-covariance between states and parameters. To improve the accuracy and resilience of time-varying parameter estimation, we introduce an optimization algorithm from machine learning into the parameter time-evolution model, using the flexibility of the proposal density in particle filtering. According to Equation 11, the model transition density is expressed as

p(z^n | z^{n−1}) = N( f̃(z^{n−1}), Q̃^n ). (36)

The prior pdf expressed in Equation 12 allows us to both divide and multiply the model transition density by a proposal transition density q, leading to

p(z^n) = ∫ [ p(z^n | z^{n−1}) / q(z^n | Z^{n−1}, y^n) ] q(z^n | Z^{n−1}, y^n) p(z^{n−1}) dz^{n−1}. (37)

Drawing from p(z^n | z^{n−1}) corresponds to using the original model transition density, Equation 36. Still, we could instead draw from q(z^n | Z^{n−1}, y^n), which would correspond to any other model transition that we choose. This allows us to control the transition of both states and parameters by choosing the proposal density q.
Sequential observation data can be considered as samples for a stochastic gradient descent (SGD) algorithm, based on the similarity between sequential DA and online learning or stochastic optimization, in that the data are given sequentially. Ideas in stochastic optimization have advanced in recent years in machine learning and deep learning with large-scale data. The basic problem-structure classification and associated solutions are summarized in Hannah (2015). The effectiveness of SGD for large-scale learning problems, that is, cases with large-scale data, is also described in Bottou (2010). The optimization algorithm used in the proposed method is described in the next section. Assume an objective function L_i^n(θ) and consider the problem of minimizing this function, where the optimal parameter minimizes L_i^n(θ). The parameter θ^n can be updated by the following iteration:

θ^n = θ^{n−1} − η g^n(θ^{n−1}), (38)

where η is the step size, sometimes called the learning rate in machine-learning contexts. The function g^n expresses the update rule for the parameter.
Here, we consider introducing the above parameter-update analogy into the modification of the transition density. In the step following the last observation n, that is, n + 1, assume that instead of the original transition density, Equation 36, the proposal density q at time step n + 1 for the augmented state z can be described as

q(z^{n+1} | z^n, y^n) = N( f̃(z^n) − η g̃^n, Q̃^{n+1} ), (39)

where the augmented nudging term is denoted g̃^n = [0^T, g(θ_i^{n−1}, y^n)^T]^T. Therefore, the step size η and the function g(θ_i^{n−1}, y^n) have the same role as in Equation 38 and together express the nudging term forcing the estimated model parameters towards the true values, and y^n is the last observed data vector.
Q̃^{n+1} is the same augmented model-error covariance matrix as described in Equation 11, with correlated perturbations. The update of the augmented state vector after the last observation step n is then given as follows, instead of the original update expressed in Equation 5:

z^{n+1} = f̃(z^n) + ρ̃^{n+1}, (40)

where

ρ̃^{n+1} = ρ^{n+1} − η g̃^n. (41)

This corresponds only to a modification of the augmented perturbation ρ^{n+1}, which shifts the mean value of the parameters. Note that sampling from this proposal transition density instead of the original model is compensated by an extra weight, as described in Ades and Van Leeuwen (2015):

w_i^{extra} = p(z_i^{n+1} | z_i^n) / q(z_i^{n+1} | z_i^n, y^n). (42)
Adam-method-based parameter nudging
As mentioned above, we introduced a nudging term for the parameters by taking advantage of the flexibility of the proposal density in particle filtering. One of the main points of this article is that we can choose any term that forces the parameters toward the true values. Therefore, our scheme is combined with a well-known gradient-descent optimization algorithm of the kind that has evolved in recent years as deep learning has progressed (Alom et al., 2018). In general, a task in machine learning and deep learning is often expressed as the problem of finding parameters that minimize (or maximize) an objective function, and the key is how quickly the optimal parameters can be found. Typical optimization formulations and algorithms are summarized in Sun et al. (2019).
Regarding gradient-based optimization algorithms, Ruder (2016) presented a classification of algorithms and a description of typical examples. Momentum-based algorithms accumulate a decaying sum of the previous gradients into a momentum vector and use that instead of the true gradients. This has the advantage of accelerating optimization along dimensions where the gradient remains relatively consistent and slowing it along turbulent dimensions where the gradient oscillates significantly. Another approach is norm-based algorithms, which divide a portion of the gradient by the L2 norm of all previous gradients. This has the advantage of slowing down along dimensions that have already changed substantially and accelerating along dimensions that have changed only slightly. In our method, we use adaptive moment estimation (Adam), proposed by Kingma and Ba (2014), which combines the above two approaches.
Our proposed formulation of the function g(θ_i^{n−1}, y^n) for the parameter nudging term in Equation 39 is as follows. First, f̃(z_i^{n−1}) can be regarded as the expected value of z_i^n given z_i^{n−1} and is defined by

E[ z_i^n | z_i^{n−1} ] = f̃(z_i^{n−1}). (43)

Next, we choose the negative log-likelihood of p(y^n | z_i^n) as the aforementioned objective function L_i^n in Equation 38:

L_i^n = −log p(y^n | z_i^n). (44)

Here, Equation 44 can be calculated from the likelihood with respect to the observed value y^n at observation step n and ensemble member i, given z_i^n, as follows:

p(y^n | z_i^n) ∝ exp( −(1/2) (y^n − H(z_i^n))^T R^{−1} (y^n − H(z_i^n)) ). (45)

Then, we define the function g(θ_i^{n−1}, y^n) in Equation 39 by using the gradient of the objective function L_i^n as follows. Following Kingma and Ba (2014), we introduce the moving averages of the gradient and the squared gradient, and denote them m_i^n and v_i^n, respectively. Their update equations are expressed using the gradient of L_i^n as

m_i^n = β_m m_i^{n−1} + (1 − β_m) ∇_θ L_i^n,  v_i^n = β_v v_i^{n−1} + (1 − β_v) (∇_θ L_i^n)^2, (46)

where the hyperparameters β_m and β_v control the decay rates of these moving averages. Note that the gradient ∇_θ L_i^n requires computing the partial derivatives of the likelihood with respect to the parameters in Equation 45, or an approximation thereof. Since these moving averages are initialized as vectors of zeros, the moment estimates are biased toward zero, especially during the initial time steps and especially when the decay rates are low (i.e., β_m and β_v are chosen close to 1). Therefore, m_i^n and v_i^n in Equation 46 are modified as follows to cancel these biases:

m̂_i^n = m_i^n / (1 − β_m^n),  v̂_i^n = v_i^n / (1 − β_v^n). (47)

Finally, the function g(θ_i^{n−1}, y^n) expressed in Equation 39 is given by

g(θ_i^{n−1}, y^n) = m̂_i^n / ( √(v̂_i^n) + ϵ ). (48)

Here, the factor √(v̂_i^n) represents the L2 norm of the past gradients, via the v_i^{n−1} term and the current gradient in Equation 46, and scales the gradient. Note that ϵ is a factor to avoid division by zero and is set to 1.0 × 10^−8 in the following experiments.
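The Adam-based nudging term described above can be sketched directly as a generic implementation of the Kingma and Ba (2014) update with bias correction (variable names are illustrative; the gradient would come from the likelihood in the text):

```python
import numpy as np

def adam_nudge(grad, m, v, n, beta_m=0.9, beta_v=0.999, eps=1e-8):
    """One Adam-style update of the nudging term g.

    grad : gradient of the objective L with respect to the parameters
    m, v : moving averages of the gradient and the squared gradient
    n    : 1-based step count, used for bias correction
    """
    m = beta_m * m + (1 - beta_m) * grad
    v = beta_v * v + (1 - beta_v) * grad**2
    m_hat = m / (1 - beta_m**n)      # bias-corrected first moment
    v_hat = v / (1 - beta_v**n)      # bias-corrected second moment
    g = m_hat / (np.sqrt(v_hat) + eps)
    return g, m, v

# Example: the very first step with a scalar parameter; after bias
# correction the magnitude of g is close to 1 regardless of the gradient scale.
g, m, v = adam_nudge(np.array([2.0]), np.zeros(1), np.zeros(1), n=1)
```

This scale-invariance of the first steps is exactly why the norm factor makes the nudging strength easy to control through the single step size η.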
The proposed method contains two procedures dependent on the observations: (1) state and parameter update by the IEWPF and computation of the likelihood gradient at the observation step, and (2) parameter nudging with the proposal density between observations. The algorithm is summarized as follows: (1) State and parameter update at the observation step • For each particle i, update the augmented state with the IEWPF and compute the nudging term g(θ_i^{n−1}, y^n) from Equation 48, using the hyperparameters β_m, β_v, and the step-size factor η.
(2) Parameter nudging at the forecast step • At time step t + 1, the step immediately after an observation, for each particle i: - Generate the parameter perturbation using the computed parameter nudging term g(θ_i^{t−1}, y^t) from Equation 41.
- Compute the extra weight in Equation 42.
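The two-procedure algorithm can be sketched as a schematic assimilation loop (all function names are placeholders standing in for the model forecast, the IEWPF analysis, and the Adam nudging term; this is not the paper's code):

```python
import numpy as np

def run_filter(particles, thetas, observations, obs_steps, forecast,
               iewpf_update, nudge_term, eta=0.05):
    """Schematic loop: forecast between observations, equal-weight update at
    each observation, then nudge the parameters at the following step."""
    step = 0
    for n, y in zip(obs_steps, observations):
        while step < n:                              # forecast to the observation
            for i in range(len(particles)):
                particles[i], thetas[i] = forecast(particles[i], thetas[i])
            step += 1
        # (1) equal-weight update of the augmented state at the observation
        particles, thetas = iewpf_update(particles, thetas, y)
        # (2) nudge parameters at the next forecast step, using the last observation
        for i in range(len(thetas)):
            thetas[i] = thetas[i] - eta * nudge_term(particles[i], thetas[i], y)
        step += 1
    return particles, thetas

# Toy demonstration with identity stubs for the three placeholder components
particles, thetas = run_filter(
    [np.zeros(3)], [1.0],
    observations=[None, None], obs_steps=[1, 2],
    forecast=lambda x, th: (x, th),
    iewpf_update=lambda ps, ts, y: (ps, ts),
    nudge_term=lambda x, th, y: 1.0,
    eta=0.05,
)
```

With these identity stubs, the parameter is simply reduced by η once per observation, making the structure of the two procedures easy to verify in isolation.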
NUMERICAL EXPERIMENTS
The effectiveness of the proposed method is demonstrated through two synthetic test cases. The first case is a linear model with an additive parameter, where all model states are observed directly at every time step. Although this article focuses on nonlinear systems, we use a linear model to verify that the shape of the posterior pdf is close to the true one. The second case is the Lorenz-96 model (Lorenz, 1996) with parameterized forcing, where only the model states are observed directly at every fourth step.
Linear model with an unknown parameter
In order to compare the estimates of the proposed method with analytically calculated true values, we use the following linear model as the time evolution expressed in Equation 2:

x^n = F_xx x^{n−1} + F_xθ θ^{n−1} + β^n, (49)

where x ∈ R^{N_x} is the model state vector with dimension N_x and θ ∈ R^{N_θ} is the parameter vector with dimension N_θ. β and η are random perturbations drawn from the model-error pdf N(0, Q_x) and the parameter-error pdf N(0, Q_θ), respectively. The matrices F_xx ∈ R^{N_x×N_x} and F_xθ ∈ R^{N_x×N_θ} represent the linear model. Here, we define the matrices F and G as

F = [ F_xx  F_xθ ; 0  I ],  G = [ I  F_xθ ; 0  I ]. (50)

Then, Equation 49 can be rewritten by using Equation 4 as

z^n = F z^{n−1} + G ρ^n. (51)

When the initial prior pdf is Gaussian, the true posterior pdf is also Gaussian. Assuming that the posterior pdf at time n − 1 is Gaussian with covariance matrix P^{n−1|n−1}, the predicted covariance matrix P^{n|n−1} of the prior pdf expressed in Equation 51 can be calculated as

P^{n|n−1} = F P^{n−1|n−1} F^T + Q̃, (52)

where Q̃ = G diag(Q_x, Q_θ) G^T, and this term is equivalent to Equation 11 when using the linear model F defined in Equations 50 and 51.
In the following experiments, we choose the dimension of the model state N_x = 1000 and of the parameter N_θ = 1, in order to consider a simple high-dimensional system with a parameter. Setting F_xx = I and F_xθ = 0.1, the time-evolution model described in Equation 51 and the observation model are expressed as

x_ℓ^n = x_ℓ^{n−1} + 0.1 θ^{n−1} + β_ℓ^n,  y_ℓ^n = x_ℓ^n + ε_ℓ^n, (54)

where the index ℓ = 1, …, N_x indicates the elements of the model states x. Here, the observation model is H = (I 0), assuming that all state variables are observed, and ε is the observation error drawn from the observation-error pdf N(0, R).
Since we assume a time-independent state transition matrix F, the covariance matrix satisfying the linear system defined by Equation 54 converges to the steady-state matrix P such that P^{n|n−1} = P^{n−1|n−2} ≡ P, and satisfies the discrete-time Riccati equation (Wonham, 1968):

P = F [ P − P H^T (H P H^T + R)^{−1} H P ] F^T + Q̃. (55)

Therefore, the shape of the true posterior pdf of Equation 54 can be obtained by solving Equation 55 numerically and compared with the distribution obtained from the proposed IEWPF.
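Equation 55 can be solved by simply iterating the covariance recursion to its fixed point. A sketch for a small illustrative system (identity dynamics and full observation, not the paper's 1000-dimensional setup):

```python
import numpy as np

# Iterate the prediction-covariance recursion
# P <- F (P - P Hᵀ (H P Hᵀ + R)⁻¹ H P) Fᵀ + Q
# until it reaches the steady state of the discrete-time Riccati equation.
Nx = 4
F = np.eye(Nx)
H = np.eye(Nx)            # all states observed
Q = 0.04 * np.eye(Nx)     # model-error covariance (illustrative values)
R = 0.01 * np.eye(Nx)     # observation-error covariance

P = np.eye(Nx)            # initial prediction covariance
for _ in range(500):
    S = H @ P @ H.T + R
    P_analysis = P - P @ H.T @ np.linalg.solve(S, H @ P)
    P_next = F @ P_analysis @ F.T + Q
    if np.max(np.abs(P_next - P)) < 1e-12:
        P = P_next
        break
    P = P_next
```

For this scalar-per-component case the fixed point can also be checked by hand: each diagonal entry solves p² − 0.04 p − 0.0004 = 0, giving p ≈ 0.0483.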
The procedure of the comparison using synthetic data is as follows. Assume the initial ensemble members z_i^0 are sampled from the background-error pdf N(0, B). First, one member of the ensemble generated under the model-error covariance matrix Q and the background-error covariance matrix B is used as the "truth". Observations are then created from this "truth" and the observation error defined by the covariance matrix R. In the following experiments, the true value of the parameter is 0, and the true model-error covariance matrix Q is chosen as a diagonal matrix with main diagonal value 0.04 for the states and 0 for the parameter. The background-error covariance matrix B is a diagonal matrix with main diagonal values of 1 for the states and 0 for the parameter. The observation-error matrix R is diagonal, and its main diagonal value is set to 0.01.
Next, for the assimilation, we choose the same matrices Q and B for the states, and the same R, as when the observations were generated. The matrices Q and B for the parameters are set to be the same as those of the states. The number of particles is set to N_p = 20 to demonstrate the validity of the estimation with few particles. Regarding the observations, we consider the condition that all model state variables x are observed at every step. Note that the step size η in Equation 39 is set to 0 in order to evaluate the parameter-augmentation method of the IEWPF described in Section 2.2. In order to investigate the dependence of the shape of the posterior pdf on the aforementioned α_i, we compare the variance of pdfs estimated with values sampled from the α_i ≥ 1 branch at three sampling percentages: 0%, 50%, and 100%. Note that 50% means sampling from both branches, α_i ≥ 1 and α_i ≤ 1, which is closest to the true pdf according to Zhu et al. (2016). Thus, 0% and 100% mean sampling only from the α_i ≤ 1 branch and only from the α_i ≥ 1 branch, respectively.
Figure 1 shows histograms of the variance accumulated from the 20th to the 1000th step, comparing the two sampling cases with the diagonal value of R = 0.01. The variances of both (a) the states, Var(x), and (b) the parameter, Var(θ), are averaged over the dimension, that is, N_x = 1000 and N_θ = 1 for the states and the parameter, respectively, and over the number of particles N_p for each dimension, as follows:

Var(x) = (1 / (N_x N_p)) Σ_{ℓ=1}^{N_x} Σ_{i=1}^{N_p} (x_{n,i,ℓ} − x̄_{n,ℓ})²,    Var(θ) = (1 / (N_θ N_p)) Σ_{i=1}^{N_p} (θ_{n,i} − θ̄_n)²,

where the index ℓ denotes the elements of the states x, and x̄_n and θ̄_n are the ensemble means. Note that the dimension of the parameter is one. The true variances based on the solution of Equation 55 are shown as "True". From these comparisons, both the state and parameter variances are close to the "True" value when sampling 50% from the α_i ≤ 1 branch. On the other hand, when sampling only from the α_i ≤ 1 branch or only from the α_i ≥ 1 branch, the variance becomes smaller or larger, respectively, with the same trend as in Zhu et al. (2016). Figure 2 compares the posterior pdf obtained in the 50% sampling case with the true pdf for a diagonal value of R of 0.01. Since the ensemble size is too small compared with the number of model dimensions, both estimated pdfs are shown as histograms accumulated over the time evolution from the 20th to the 1000th step for the state and the parameter, respectively. From Figure 2a,b, we see that the obtained pdfs of the state x_1 and the parameter θ are close to the true pdfs.
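A minimal numpy sketch of this dimension- and particle-averaged variance follows; the ensemble values here are synthetic stand-ins for the experiment's particles:

```python
import numpy as np

rng = np.random.default_rng(0)
Nx, Np = 1000, 20                                  # state dimension and particle count, as in the experiment
x = rng.normal(0.0, 2.0, size=(Np, Nx))            # synthetic state particles (assumed spread)
theta = rng.normal(0.0, 0.5, size=(Np, 1))         # synthetic parameter particles (N_theta = 1)

def mean_variance(ensemble):
    """Variance around the ensemble mean, averaged over particles and dimensions."""
    mean = ensemble.mean(axis=0, keepdims=True)
    return np.mean((ensemble - mean) ** 2)

var_x = mean_variance(x)
var_theta = mean_variance(theta)
```

Accumulating these scalars over assimilation steps yields histograms like those in Figure 1, which can then be compared against the true variance from Equation 55.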
These results indicate that the method of extending IEWPF to the proposed augmented state-space model is valid: the variance and shape of the posterior pdf for the parameter are close to those of the true pdf under the condition that the variance and shape of the posterior pdf for the state are also close to those of the true pdf.
Lorenz-96 model with parameterized forcing
The Lorenz 1996 model with parameterized forcing is used as the time evolution expressed in Equation 1 to explore the validity of the proposed method in a nonlinear high-dimensional system.

F I G U R E 2 Posterior pdf represented by the particles using the 50% sampling case compared with the true pdf (full line) for (a) state x_1 of element one and (b) parameter θ, respectively.

The original Lorenz-96 model (Lorenz, 1996) is the dynamical nonlinear model given by

dx_ℓ/dt = (x_{ℓ+1} − x_{ℓ−2}) x_{ℓ−1} − x_ℓ + F_ℓ,

where the index ℓ = 1, …, N_x with cyclic indices, x_ℓ is the state variable of the model at position ℓ, N_x is the total dimension, and F_ℓ is the forcing function parameterized by θ = (θ_0, θ_1, θ_2), for which c_0, c_1, c_2 are true constants and θ_0, θ_1, θ_2 are their scale parameters that have to be estimated. For the evaluation of nonlinearity, this value of F_ℓ, which is typically chosen to be 8 or more to generate chaotic behavior, is set as follows. The values of c_0 and c_1 are set to 8 and 4, respectively, and c_2 is set to the same value as the dimension of the model state, N_x. Then, the scale parameters θ_0, θ_1, θ_2 are estimated, and their true values are 1 each. By introducing this parameterized forcing term F_ℓ(θ_0, θ_1, θ_2), each state variable x_ℓ contains parameter-dependent chaotic behavior. This model is numerically solved by the fourth-order Runge-Kutta scheme with a time step of Δt = 0.05. The procedure for the following experiment is the same as for the previous linear model. The true model-error covariance matrix Q for the states is chosen as a tridiagonal matrix, with main diagonal value 0.10 and both sub- and superdiagonal values 0.025. The background-error covariance matrix B is a diagonal matrix with main diagonal value 1 for the states. In the experiments below, the true observation-error matrix R is diagonal, with main diagonal values of 0.02. For the assimilation, we choose the same matrices Q, B for the states and R as when the observations were generated, that is, the true ones. The matrices Q_θ, B_θ for the parameters are diagonal matrices with main diagonal values 5.0 × 10⁻⁶ and 0.001, respectively. The step size for the Adam method is set to 0.001. The number of particles is set to N_p = 20 to demonstrate the validity of the estimation with few particles. To consider high-dimensional cases, N_x is chosen as 1000, the same as in the linear-model experiment.
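The Lorenz-96 dynamics and the fourth-order Runge-Kutta integration can be sketched as follows. Since the explicit form of the parameterized forcing F_ℓ(θ) is not given in this excerpt, the classic constant forcing F = 8 is used here as a stand-in:

```python
import numpy as np

def lorenz96_tendency(x, forcing):
    """dx_l/dt = (x_{l+1} - x_{l-2}) x_{l-1} - x_l + F_l, with cyclic indices."""
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + forcing

def rk4_step(x, forcing, dt=0.05):
    """One fourth-order Runge-Kutta step with the paper's time step dt = 0.05."""
    k1 = lorenz96_tendency(x, forcing)
    k2 = lorenz96_tendency(x + 0.5 * dt * k1, forcing)
    k3 = lorenz96_tendency(x + 0.5 * dt * k2, forcing)
    k4 = lorenz96_tendency(x + dt * k3, forcing)
    return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

Nx = 1000
rng = np.random.default_rng(1)
x = 8.0 + 0.1 * rng.standard_normal(Nx)   # small perturbation of the unstable rest state
forcing = np.full(Nx, 8.0)                # constant F = 8 stand-in for F_l(theta)
for _ in range(200):                      # spin up onto the chaotic attractor
    x = rk4_step(x, forcing)
```

Replacing the constant `forcing` array with a θ-dependent one recovers the parameterized setup described above.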
In contrast to the previous evaluation using the linear model and a static parameter, this experiment investigates the ability of the proposed methods to estimate time-varying (i.e., dynamic) parameters in nonlinear high-dimensional systems. Regarding observations, consider the condition that all of the model states are observed every fourth step (i.e., the assimilation interval is 4). Moreover, this 1000-dimensional evaluation with only 20 particles can validate the feasibility of applying the method to realistic geophysical, climate, and other problems. First, we compare the methods outlined in Section 2 in terms of the RMSE and the ensemble spread (spread). Next, we compare the impact of the parameter error covariance Q_θ and the step-size factor on the ensemble. The performance indicator of parameter estimation is not only the RMSE but also the ratio of the RMSE to the spread of the ensemble; it is preferable that this ratio be one for Gaussian variables. Note that, for non-Gaussian variables, this is only true for the forecast ensemble (Fortin et al., 2014).
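The RMSE and spread used in this diagnostic can be computed as below; the "well-calibrated" synthetic ensemble here, in which the truth and the members are drawn from the same distribution, is an assumption for illustration, not the experiment's output:

```python
import numpy as np

def rmse_and_spread(ensemble, truth):
    """Ensemble-mean RMSE and ensemble spread, each averaged over the
    state dimension, whose ratio diagnoses ensemble calibration."""
    mean = ensemble.mean(axis=0)
    rmse = np.sqrt(np.mean((mean - truth) ** 2))
    spread = np.sqrt(np.mean(ensemble.var(axis=0)))
    return rmse, spread

rng = np.random.default_rng(2)
truth = rng.normal(0.0, 0.3, size=1000)             # truth drawn from the forecast distribution
ens = rng.normal(0.0, 0.3, size=(20, 1000))         # 20 members from the same distribution
rmse, spread = rmse_and_spread(ens, truth)
ratio = rmse / spread   # near one for a calibrated Gaussian forecast ensemble
```

An over-dispersive ensemble drives this ratio below one and an under-dispersive one above it, which is how the box plots in Figures 5, 7, 9, and 10 are read.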
3.2.1 Comparison of the methods

Figure 3 compares the true values and the particle trajectories of the three methods mentioned above for the state x_1 and the three scale parameters θ_0, θ_1, θ_2. All variables are observed every four steps, setting the main diagonal value of matrix R to 0.02. Each true parameter is increased by 30% at the 200th step, as the dashed red line shows. The figure shows the difference in tracking performance of the three methods for abrupt parameter changes and the advantage of the proposed method. The method shown in Figure 3a (MH1) is the conventional augmented method expressed as Equation 2. There are some steps where the trajectories of the ensemble deviate from the true trajectory of the state, and the ensemble spreads out greatly and cannot track the abrupt changes in any of the three parameters. Both of the methods shown in Figure 3b (MH2) and Figure 3c (MH3) are based on the proposed state-space model expressed as Equation 5 with the covariance matrix Q. The method shown in Figure 3c (MH3) further applies the Adam-method-based nudging described in Section 2.5 with a step-size factor of 0.001.
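The Adam update underlying the MH3 nudging term can be sketched generically as follows. The quadratic surrogate loss driving the gradient here is a hypothetical stand-in; the paper's exact gradient definition, built from the assimilation, is not reproduced in this excerpt:

```python
import numpy as np

class AdamNudge:
    """Generic Adam update used as a nudging increment for the parameters."""

    def __init__(self, dim, step_size=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
        self.alpha, self.b1, self.b2, self.eps = step_size, beta1, beta2, eps
        self.m = np.zeros(dim)   # first-moment (momentum) estimate
        self.v = np.zeros(dim)   # second-moment estimate
        self.t = 0

    def increment(self, grad):
        self.t += 1
        self.m = self.b1 * self.m + (1 - self.b1) * grad
        self.v = self.b2 * self.v + (1 - self.b2) * grad ** 2
        m_hat = self.m / (1 - self.b1 ** self.t)   # bias-corrected moments
        v_hat = self.v / (1 - self.b2 ** self.t)
        return -self.alpha * m_hat / (np.sqrt(v_hat) + self.eps)

# Nudge a 3-parameter vector toward a target under the surrogate loss
# 0.5 * ||theta - target||^2 (an assumption for illustration).
theta = np.zeros(3)
target = np.ones(3)
nudge = AdamNudge(dim=3, step_size=0.05)
for _ in range(500):
    grad = theta - target
    theta = theta + nudge.increment(grad)
```

Because Adam normalizes each component by its own second-moment estimate, parameters with very different gradient magnitudes are nudged at comparable rates, which is consistent with the reduced per-parameter accuracy differences reported below.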
The results for the state show that the trajectories of the ensemble members are close to the true trajectory. Although both methods tend to approach the true values of θ_0 and θ_2, the Adam-method-based nudging is more accurate and responsive to abrupt changes, especially for θ_1. Figure 4a,b shows comparisons of the time-series RMSE for the states and parameters, respectively. The horizontal axis indicates the time steps in the 100th-600th steps, where the difference between methods is significant in Figure 3. For the state, since the assimilation interval is four, each value represents the average over all elements (i.e., 1000) at the third step, which has the largest prediction error after filtering, while for the parameters, the average values over all elements (i.e., 3) are shown for all steps. The results show that the estimation error of the parameters after the abrupt parameter change (200th step) increases the error in the forecast step of the model states, and that the estimation error of the proposed method (MH3) decreases the fastest for both states and parameters.
Figure 5a,b shows the RMSE and spread comparisons for the states and parameters, respectively. Each box plot shows the time-averaged RMSE and spread at the forecast and filtering steps in the 100th-1500th steps shown in Figure 3, including the abrupt change (at the 200th step). Therefore, the interquartile range (IQR) of each box plot indicates the dispersion across the dimensions of the model states (1000) and parameters (3). Note that outliers are not plotted, to exclude estimation errors immediately after the abrupt change at the 200th step. From the result for the states shown in Figure 5a, the proposed methods (i.e., MH2 and MH3) have smaller RMSE values and dispersion than the conventional method (i.e., MH1), especially at the forecast step. The result for the parameters shown in Figure 5b clearly shows that both the RMSE values and the dispersion of MH3 (i.e., with nudging) are smaller than those of the others, and the spread is also smaller. The fact that the RMSE dispersion of MH3 is smaller than that of MH2 means that the difference in RMSE across the three parameters is small. Thus, the proposed nudging method reduces differences in estimation accuracy between parameters, demonstrating the effectiveness of combining IEWPF with Adam.
3.2.2 Dependence of parameter error covariance and step-size factor

In the following, we investigate the impact of the parameter error covariance Q_θ and the step-size factor on estimation accuracy (RMSE) and ensemble spread (spread). Figure 6 shows the true values and the particle trajectories of the scale parameter θ_0 under combinations of different values of Q_θ and the step-size factor. Note that Q_θ is chosen as a diagonal matrix, which we denote as Q_θ = σ²I. The graph shown in Figure 6 as exp2 is the reference condition with σ² = 5.0 × 10⁻⁶ and a step-size factor of 0.001, and is the same graph shown for the scale parameter θ_0 in Figure 3c. The other graphs exp1, exp3, and exp4 in Figure 6 show the cases where σ² is 1.0 × 10⁻⁶, 1.0 × 10⁻⁵, and 5.0 × 10⁻⁵, respectively, under the same step-size factor of 0.001. These graphs show that the larger the parameter covariance, the larger the ensemble spread and the less overshoot after the abrupt parameter change.
Next, we quantitatively evaluate the impact of the parameter error covariance Q_θ on the ensemble. Figure 7 shows the dependence of the RMSE and spread on the parameter error covariance Q_θ for (a) the states and (b) the parameters, respectively. Each box plot shows the time-averaged RMSE and spread at the forecast and filtering steps in the 100th-1500th steps. The forecast RMSE and spread include three cycles of forecast steps, since the filtering interval is four. The four values of σ² shown on the horizontal axis correspond to exp1, exp2, exp3, and exp4 in Figure 6. Note that outliers are not plotted, to exclude estimation errors immediately after the abrupt change at the 200th step.
For the states, we can see from Figure 7a that neither the RMSE nor the spread depends on the diagonal value of the parameter error covariance Q_θ. In addition, the values of forecast RMSE and spread are close, that is, their ratio is close to one. On the other hand, for the parameters, Figure 7b shows that as the diagonal value σ² increases, the spread also increases and the RMSE decreases. Especially in the case of σ² = 5.0 × 10⁻⁵, the values of forecast RMSE and spread are close, that is, their ratio is close to one.
Figure 8 shows the true values and the particle trajectories, as in Figure 6. The graph of exp2 is the same as exp2 in Figure 6, the reference condition with σ² = 5.0 × 10⁻⁶ and a step-size factor of 0.001. The graphs exp5, exp6, and exp7 in Figure 8 show the cases where the step-size factor is 0.0005, 0.002, and 0.004, respectively, under the same value of σ² = 5.0 × 10⁻⁶. These graphs show that the larger the step-size factor, the faster the value approaches the true value after the abrupt change, but the more likely it is to overshoot.
Figure 9 shows the dependence of the RMSE and spread on the step-size factor for (a) the states and (b) the parameters, respectively. Each box plot shows the time-averaged RMSE and spread at the forecast and filtering steps during the 100th-1500th steps, and the forecast RMSE and spread include three cycles of forecast steps, as in Figure 7. The four values of the step-size factor shown on the horizontal axis correspond to exp5, exp2, exp6, and exp7 in Figure 8. Note that outliers are not plotted, as in Figure 7. Similarly to the trend shown in Figure 7, there is almost no dependence of the RMSE and spread on the step-size factor for the states. For the parameters, the spread does not increase as the step-size factor increases, but the RMSE decreases, that is, the ratio of the forecast RMSE to the spread approaches one.
Dependence of observation error and number of observations
In order to evaluate the dependence on the observation error and the number of observations, we compare the large step-size condition, step-size factor 0.004 (exp7), with two additional experiments (exp8 and exp9). The first (exp8) is the case where the main diagonal value of the matrix R is large; in the following, the value is set to 0.08. Note that this experiment (exp8) uses observation data generated at R = 0.08, so the R used for data generation and for assimilation have the same value. The second (exp9) is the case where the state is observed at every other grid point, so that the observation operator H selects every second model variable. In both additional experiments, the step size and the diagonal value of the parameter error covariance are the same as for exp7, that is, a step-size factor of 0.004 and σ² = 5.0 × 10⁻⁶. Figure 10 shows a comparison of the RMSE and spread under the different observation conditions for (a) the state and (b) the parameter. The description of the box plot is the same as in Figure 9. Figure 10 exp7 shows the results of the reference condition, that is, R = 0.02 and all model states observed. From the comparison of the states in Figure 10a between exp7 and exp8, the change in R from 0.02 to 0.08 increases both RMSE and spread, but the spread somewhat more markedly. For the parameters in Figure 10b, the RMSE values and their dispersion tend to increase compared with the spread. From the comparison of the states in Figure 10a between exp7 and exp9, because the number of observed variables was reduced to half, both RMSE and spread increase, except for the filtering value of the observed variables. As for the parameters, both RMSE and spread show a small increase in median values, but an increase in dispersion. The results indicate that increasing the observation error and decreasing the observation density increase the differences in estimation accuracy between parameters. In other words, the decrease in observed information reduces the estimation accuracy of parameters with little impact (i.e., low sensitivity) on the model state. This could potentially be mitigated by adjusting the step size and the parameter error covariance.
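A half-observed configuration like exp9 amounts to a selection observation operator. A minimal sketch, assuming H simply picks every other grid point of the 1000-dimensional state:

```python
import numpy as np

Nx = 1000
obs_idx = np.arange(0, Nx, 2)            # observe every other grid point, as in exp9

def apply_H(x):
    """Selection observation operator: picks the observed components of x."""
    return x[obs_idx]

# Equivalent dense matrix form: each row of H is a unit vector at an observed index.
H = np.zeros((obs_idx.size, Nx))
H[np.arange(obs_idx.size), obs_idx] = 1.0

x = np.random.default_rng(3).standard_normal(Nx)
```

In practice the indexing form `apply_H` is used rather than the dense matrix, since H has only one nonzero per row.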
CONCLUSION
This article proposed a resilient and efficient state and time-varying parameter estimation method for nonlinear high-dimensional systems based on a sequential DA process. First, we introduced an extension of IEWPF to an augmented state-space model with a correlated covariance matrix. We then proposed an IEWPF-based method that incorporates a nudging technique inspired by optimization algorithms in machine learning into the parameter time-evolution model, using the flexibility of the proposal density in particle filtering.
The performance of the method was examined in a 1000-dimensional linear model and in the nonlinear Lorenz-96 model. Experiments using the linear model with the static parameter indicate that the impact of the scalar factor α_i on the variance of the parameter is similar to that on the variance of the state. Numerically, under the condition that the variance and shape of the posterior pdf for the states are close to the true ones, those for the parameter are also close to the true ones.
The experimental results of the nonlinear Lorenz-96 model with time-varying parameters show the following points. First, the proposed state augmentation method successfully estimates states and parameters simultaneously, even when the number of ensemble members is much smaller than the model dimension. This result indicates that filter degeneracy is avoided when extending to an augmented state-space model. Second, the proposed parameter nudging method inspired by optimization algorithms accelerates tracking of abrupt parameter changes and reduces the difference in estimation accuracy between parameters. This result suggests the effectiveness of combining IEWPF with Adam, one such optimization algorithm. Third, the evaluation of the impact of the parameter error covariance and the step-size factor on the time-averaged RMSE and the ensemble spread (spread) shows that the former increases the spread and decreases the RMSE, while the latter decreases the RMSE. Properly determining these values so that the ratio of the RMSE to the spread approaches one will allow for good ensemble generation; however, a systematic method for doing so will be a subject of future research. Finally, the evaluation of the dependence on the observation error and the number of observations shows that a decrease in observed information reduces the estimation accuracy of parameters with little impact (i.e., low sensitivity) on the model state. This could potentially be mitigated by adjusting the step-size factor and the parameter error covariance. Alternatively, it may be beneficial to narrow the parameters to be estimated to those with high sensitivity through a preliminary sensitivity analysis.
In the numerical experiments in this article, the Lorenz-96 model with parameterized forcing was used mainly to evaluate the nonlinearity of the time evolution of the model states, but further investigation of the nonlinearity of the parameters is needed. Adam optimization is a first-order gradient-based method, and it is widely used to learn the weights in deep neural networks, that is, nonlinear functions. Thus, our Adam-based nudging term can work theoretically in nonlinear problems. However, even for nonlinear convex problems, there are conditions and limits to convergence, and new methods have been proposed (Reddi et al., 2018). Furthermore, convergence for nonconvex problems is still an open question, though Chen et al. (2019) developed an analysis framework and a set of sufficient conditions that guarantee convergence. Therefore, the applicability of the proposed method to various nonlinear problems in data assimilation needs to be investigated and is a topic for future research.
In this article, we applied the proposed online parameter estimation scheme to IEWPF as an example of a PF that can avoid filter degeneracy. The method is shown to be capable of resilient and efficient estimation of time-varying parameters. The results lead to the conjecture that the proposed method is applicable to realistic geophysical, climate, and other problems. Since several approaches have been proposed to avoid filter degeneracy (e.g., Skauvold et al., 2019), the evaluation of other combinations will be a subject of future research.
F I G U R E 1 Histogram of cumulative variance for the diagonal value of R = 0.01 for (a) states and (b) parameter, respectively. Three sampling percentages from the α_i ≤ 1 branch (100%, 50%, and 0%) are compared with the true variance (dashed line).
F I G U R E 3 Comparison of estimated state and parameter trajectories between (a) the conventional augmented method (MH1), (b) without nudging, step-size factor 0 (MH2), and (c) with nudging, step-size factor 0.001 (MH3). The solid lines show each of the 20 ensemble members, and the dashed lines show the true parameter value. Only the 1350th-1500th steps are shown for the state, and each true parameter is increased by 30% at the 200th step.
F I G U R E 4 Comparison of time-series RMSE after the abrupt parameter change (200th step) between the augmented method (MH1), without nudging, step-size factor 0 (MH2), and with nudging, step-size factor 0.001 (MH3), as per Figure 3. The third step after filtering is shown for (a) the state, and all steps for (b) the parameter. Each value is averaged over all elements.
F I G U R E 5 Box plot showing the comparisons of RMSE and spread for forecast and filtered ensembles between the augmented method (MH1), without nudging, step-size factor 0 (MH2), and with nudging, step-size factor 0.001 (MH3), as per Figure 3. Each IQR indicates the dispersion of the (a) state and (b) parameter elements averaged over the forecast and filtering steps in the 100th-1500th steps, respectively. Outliers are not plotted.
F I G U R E 6 Comparison of estimated parameter trajectories between different values of σ²: 1.0 × 10⁻⁶ (exp1), 5.0 × 10⁻⁶ (exp2), 1.0 × 10⁻⁵ (exp3), and 5.0 × 10⁻⁵ (exp4) under the same step-size factor of 0.001. The solid lines show each of the 20 ensemble members, and the dashed lines show the true parameter value. Each true parameter is increased by 30% at the 200th step.

F I G U R E 7 Box plot showing the comparison of RMSE and spread for each of the forecast and filtered ensembles between different values of σ² = 1.0 × 10⁻⁶, 5.0 × 10⁻⁶, 1.0 × 10⁻⁵, and 5.0 × 10⁻⁵, as per Figure 6. Each IQR indicates the dispersion of the (a) state and (b) parameter elements averaged over the forecast and filtering steps in the 100th-1500th steps, respectively. Outliers are not plotted.
F I G U R E 8 Comparison of estimated parameter trajectories between different values of the step-size factor: 0.0005 (exp5), 0.001 (exp2), 0.002 (exp6), and 0.004 (exp7) under the same value of σ² = 5.0 × 10⁻⁶. The solid lines show each of the 20 ensemble members, and the dashed lines show the true parameter value. Each true parameter is increased by 30% at the 200th step.

F I G U R E 9 Box plot showing the comparison of RMSE and spread for each of the forecast and filtered ensembles between different values of the step-size factor: 0.0005, 0.001, 0.002, and 0.004, as per Figure 8. Each IQR indicates the dispersion of the (a) state and (b) parameter elements averaged over the forecast and filtering steps in the 100th-1500th steps, respectively. Outliers are not plotted.

F I G U R E 10 Box plot showing the comparisons of RMSE and spread for forecast and filtered ensembles between the large step-size condition (exp7), large observation error, R = 0.08 (exp8), and partially observed (exp9). Each IQR indicates the dispersion of the (a) state and (b) parameter elements averaged over the forecast and filtering steps in the 100th-1500th steps, respectively. Outliers are not plotted. "Ob" and "Uo" represent observed and unobserved states.
• Sample initial particles for the state x_i^0 and the parameter θ_i^0, i = 1, …, N_p.
Induction Chemotherapy Prior to Endoscopic Resection of Alveolar Rhabdomyosarcoma
Head and neck rhabdomyosarcoma (HNRMS) is a rare type of soft tissue tumor that affects both adults and children, with an overall incidence of 0.041 per 100,000 people. Adults make up approximately 31.2% of all HNRMS diagnoses and have an overall survival rate between 20% and 40%. We present a case of a 46-year-old male who initially presented with nasal congestion and vision changes. Maxillofacial computed tomography and magnetic resonance imaging of the brain showed involvement of the orbital apex, abutment of the planum sphenoidale, and extension to the foramen rotundum (FR). Nasal endoscopy with biopsy confirmed the diagnosis of T2aN0M0 parameningeal HNRMS. The patient underwent induction chemotherapy, followed by endoscopic resection, which resulted in negative intraoperative margins. Subsequently, he underwent adjuvant concurrent chemotherapy and proton beam radiation after microscopic positive margins were found on the optic nerve. The patient did not experience any significant complications, and he is currently without radiographic or clinical recurrence 18 months after the treatment. He was able to maintain his vision throughout the treatment. In adults, HNRMS is usually treated with chemoradiotherapy based on pediatric protocols, since there are limited data available for adult treatment protocols and outcomes. Although surgery has been associated with positive outcomes in adult patients, there are no previous reports of its use with either neoadjuvant or adjuvant treatment. This type of treatment protocol has never been described for adult HNRMS. We hope that our report can add more data to the growing body of literature on HNRMS treatment protocols.
Introduction
Rhabdomyosarcoma (RMS) is a malignant soft tissue tumor composed of immature mesenchymal cells with the potential to differentiate into striated muscle tissue. It is a rare tumor that accounts for only 350 cases annually, yet it comprises about half of soft-tissue sarcomas in children and adolescents [1]. Approximately 35-40% of pediatric RMS occurs in the head and neck region, compared with 33% in adults. In total, RMS represents only 2-5% of soft-tissue tumors in adults [2][3][4].

Head and neck RMS (HNRMS) was reported to the Surveillance, Epidemiology, and End Results (SEER) registry 558 times from 1973 to 2007, with an overall incidence of 0.041 cases per 100,000 population [2]. Of all cases, adults make up approximately 31.2% of all diagnoses of HNRMS. Although there is no known gender predilection, it has the propensity to affect white patients more often than other races (77.1% White vs. 13.8% Black vs. 8.4% other) [2]. HNRMS can be divided into three subtypes based on location, with the following frequencies: orbital (25.6%), parameningeal (infratemporal fossa, pterygopalatine fossa, ear, mastoid, nasal cavity, paranasal sinuses) (44.4%), and non-orbital non-parameningeal (tongue, parotid, palate, all other head and neck sites) (29.9%) [2,4]. Three primary histologic subtypes have been identified, namely, pleomorphic, embryonal, and alveolar, with embryonal having a better five-year relative survival of 72.2% compared with alveolar at 44.1% [2].

Here, we present a 46-year-old male with a T2aN0M0 parameningeal HNRMS with involvement of the orbital apex, abutment of the planum sphenoidale, and extension to the foramen rotundum (FR), who underwent induction chemotherapy, followed by endoscopic resection and then adjuvant concurrent chemotherapy and proton beam radiation. The treatment course was modeled after pediatric protocols for similar parameningeal RMS. To the authors' knowledge, there are no previous reports of this type of operative and post-operative treatment regimen for parameningeal RMS in an adult; the patient is currently without radiographic or clinical recurrence 18 months out and has maintained his vision.
Case Presentation
A 46-year-old male presented to an outside hospital emergency department with five days of progressively worsening diplopia and nasal congestion. Ophthalmology evaluation noted right papilledema. Non-contrast maxillofacial CT showed a 4.5 cm × 4.1 cm × 4.0 cm soft tissue mass centered in the right ethmoid sinus involving both ethmoid sinuses, both sphenoid sinuses, and the right sphenopalatine foramen, with thinning of the planum sphenoidale and erosion into the right orbital apex (Figure 1). MRI of the brain/orbits redemonstrated the expansile mass of the right ethmoid sinus with involvement of the pterygopalatine fossa (PPF), extension along the planum sphenoidale, and extension through the orbital apex (Figure 2). Otolaryngology was consulted and recommended an urgent nasal biopsy within the week. The patient underwent an endoscopic image-guided biopsy. The frozen section was not definitive. Permanent histological analysis revealed a small, round, blue-cell tumor, with immunohistochemistry positive for desmin, myoD1, and myogenin, consistent with RMS. Confirmatory testing with reverse transcription polymerase chain reaction (RT-PCR) was positive for the PAX/FOX01 translocation, confirming the alveolar subtype of RMS.

After tissue diagnosis, a positron emission tomography (PET) scan redemonstrated the mass centered in the right ethmoid cavity with a maximum standardized uptake value (SUV) of 8.6. There was also nonspecific borderline activity of bilateral level 2 lymph nodes with an SUV of 2.8, but no clinical cervical lymphadenopathy on physical exam.

The patient was presented at a multidisciplinary tumor board as a T2aN0M0 alveolar RMS of the right ethmoid sinus. The tumor board recommendation was for induction neoadjuvant chemotherapy, followed by endoscopic resection and then adjuvant concurrent chemotherapy and proton beam radiation therapy. The patient was strongly opposed to an orbital exenteration. It was felt that this induction chemotherapy (IC) technique, modeled after pediatric protocols, would help facilitate achieving clear surgical margins, or at least only microscopic disease at the orbital apex, to be followed by adjuvant concurrent chemoradiation therapy. The combination of vincristine, actinomycin D, and cyclophosphamide was chosen according to the D9803 Children's Oncology Group (COG) protocol. Upon completion of four cycles of induction, both post-treatment CT of the sinuses (Figure 3) and MRI of the brain (Figure 4) revealed resolution within the orbital apex, with continued involvement of the right ethmoid sinus, the lateral margin of the right sphenoid sinus, and the superior margin of the maxillary sinus, and persistent involvement of the PPF. One month after completion of IC, image-guided endoscopic resection, including a right medial maxillectomy, bilateral ethmoidectomy, bilateral sphenoidotomy, right frontal sinusotomy, near-total septectomy, extended right PPF dissection, and right orbital/optic nerve decompression, was performed. Final intraoperative frozen section margins of the posterior periorbita, posterior septum, lateral sphenoid wall/optic nerve sheath, ethmoid/sphenoid skull base, and PPF/vidian canal were negative. Vision remained intact after surgery, and no cerebrospinal fluid leak was encountered. Final pathology from the resection again revealed parameningeal alveolar RMS, FOX01 translocation positive. Of note, the lateral sphenoid wall/optic nerve sheath did show evidence of microscopic disease on the permanent section. Four weeks after recovery from surgery, the patient started concurrent chemoradiotherapy (CRT), which included vincristine, cyclophosphamide, actinomycin D, and 23 fractions of proton beam therapy with a total dose of 41.4 Gy.
Three-month post-treatment PET-CT (Figure 5) revealed a mild amount of mucosal thickening of the right maxillary sinus, which was mildly FDG-avid with an SUV of 3.4, compared with his pre-treatment SUV of 8.6, likely representing inflammatory changes rather than residual disease. The most recent MRI (Figure 6) did not show any definite signs of recurrence. The patient is currently 18 months out from completion of CRT and is being monitored at three-month intervals with serial nasal endoscopies and repeat imaging, with no signs of recurrence. Extensive postsurgical changes with no definite signs of residual or recurrent tumor were noted.
Discussion
RMS is a malignant tumor composed of immature mesenchymal cells with the potential to differentiate into striated muscle tissue and typically presents with small blue round cells on histological examination [5].The two most common histologic subtypes of RMS are embryonal and alveolar, with embryonal presenting more commonly in children and alveolar more commonly in adults, and immunohistochemical markers are the most reliable way to differentiate between the two [2,5,6].Myogenin, myoD1, and desmin are more specific to alveolar RMS compared with embryonal, and the presence of PAX/FOX01 translocation favors the alveolar subtype, as seen in our patient [1,[5][6][7].
Overall, RMS treatment relies on the TNM classification, disease site, FOX01 translocation factor presence, and clinical group findings.Due to the rare occurrence of RMS in adults, the prognosis is based on data identified by the Intergroup Rhabdomyosarcoma Study Group Outcomes I and II, which included only those diagnosed in the first two decades of life [8][9][10].The data gathered from these studies incorporated tumor spread at diagnosis and the amount remaining after the initial intervention [4,[8][9][10].Although these prognostic groups were based on pediatric patients, adult RMS has been noted to respond similarly to pediatric RMS.Our patient would be considered Stage II (cT2aN0M0), with tumor extension into an unfavorable site (orbital apex) and FOX01 translocation positive, which would correspond to the intermediate risk group of 50-70% event-free survival [10].
Regarding prognostication and survival of HNRMS, the reported five-year overall survival (OS) ranges from 33% to 45% [8]. Wu et al. [8] reported their single-institution findings of 59 adult HNRMS patients, with a five-year OS rate of 36% and metastasis to cervical lymph nodes in 29% of patients. Worse prognosis was associated with tumor size greater than five centimeters, positive surgical margins, and cervical lymph node involvement. Local recurrence and distant metastasis were the primary causes of treatment failure [8,9]. The extent of disease (localized vs. regional) is a stronger prognostic factor for HNRMS relative survival (RS) than involvement of any particular primary site, with regional disease portending a worse prognosis [1,2].
The parameningeal subsite of HNRMS is also associated with a poorer prognosis and earlier recurrence, likely because these tumors tend to extend into meningeal and intracranial sites, as seen in our patient [7]. In a database review of 186 adult patients with sinonasal/parameningeal RMS, Stepan et al. found alveolar to be the most common histologic type, comprising 66.7% of adult cases [7]. Alveolar RMS was not shown to have a poorer prognosis than embryonal RMS within the sinonasal sites; however, increased age was correlated with a poorer prognosis regardless of type, with five-year survival rates of 31.9% in those aged 18-35 and 24.4% in patients older than 35 years [7].
In general, because only limited data are available for adults, treatment guidelines for adults are similar to those for children and consist primarily of chemotherapy with the addition of radiation and/or surgical resection [10-13].
Although there are limited data for adults, surgical resection of parameningeal RMS in the pediatric population is correlated with a higher five-year survival rate [12]. Pediatric RMS responds well to induction chemotherapy with concurrent chemoradiotherapy, but adult RMS tends to be more aggressive, with a worse prognosis. Although initial surgical resection may be difficult secondary to the anatomic location of the lesion, the use of induction chemotherapy to shrink the tumor can allow for surgical resection in adult patients [14]. Kobayashi et al. reported treatment outcomes of 37 HNRMS patients undergoing either delayed primary excision after induction chemotherapy or concurrent chemoradiotherapy after induction. In patients with a good response to induction chemotherapy, the surgical excision group had better three-year locoregional control than the chemoradiotherapy group [14].
While there was no direct extension into the cranial fossa in our patient, there was involvement of the PPF and orbital apex. Because the patient was opposed to an orbital exenteration, a pterygopalatine fossa dissection was a feasible option. Our patient underwent this procedure without any orbital injury, cranial nerve injury, or CSF leak. Craniofacial approaches have been reported, but these result in transfacial incisions or facial osteotomies, which can be disfiguring and carry an increased risk of neural and vascular injury secondary to poorer visualization [15-17].
Conclusions
We presented a 46-year-old male with a T2aN0M0 parameningeal alveolar RMS with involvement of the orbital apex, abutment of the planum sphenoidale, and extension to the FR, who underwent induction chemotherapy followed by endoscopic resection and adjuvant concurrent chemotherapy with proton beam radiation. The treatment course was modeled after pediatric protocols for similar parameningeal RMS. Currently, the patient is 18 months disease-free without significant comorbidities or side effects from his treatment. To the authors' knowledge, there are no previous reports of this type of operative and post-operative treatment regimen for parameningeal RMS in an adult, specifically endoscopic resection with proton beam therapy.
FIGURE 1: CT maxillofacial without contrast (A), coronal (B), and axial (C) planes in a bone window, showing a 4.5 cm x 4.1 cm x 4.0 cm soft tissue mass (red dot) centered in the right ethmoid air cells involving the right sphenopalatine foramen, right olfactory recess, bilateral sphenoid sinuses, and right maxillary sinus. Noted bony destruction of the medial orbital apex and abutment of the right optic nerve (blue arrow), thinning of the right planum sphenoidale and posterior medial right orbital wall, and enlargement of the right maxillary osteomeatal complex (OMC) and posterior fontanelle.
FIGURE 2: T1 with contrast MRI brain in axial (A) and coronal (B,C) planes showing an enhancing, expansile mass of the right nasal cavity (red dot) extending from the right ethmoid sinus to the bilateral sphenoid sinuses and right anterior skull base, with extension of the mass into the right orbital apex.
FIGURE 3: CT scan of sinuses in the axial (A, B) and coronal (C) planes after completion of induction chemotherapy. Noted extension of the mass to the PPF (blue arrow), with a lack of regression from the right maxillary and sphenoid sinuses (red dot).
FIGURE 4: T1 fat-suppressed MRI brain in the axial (A) and coronal (B, C) planes post-induction chemotherapy. Resolution of the mass from the right orbital apex and continued involvement of the right sphenoid sinus laterally and right maxillary sinus anteriorly (red dot).
FIGURE 5: Three-month post-treatment PET-CT, axial plane (A, B). Interval right frontoethmoidectomy and bilateral sphenoidectomy with tumor resection noted. A mild amount of mucosal thickening of the right maxillary sinus (blue dot), which was mildly FDG avid, with an SUV of 3.4.
Emerging role of the KCNT1 Slack channel in intellectual disability
The sodium-activated potassium (KNa) channels Slack and Slick are encoded by KCNT1 and KCNT2, respectively. These channels are found in neurons throughout the brain, and are responsible for a delayed outward current termed IKNa. These currents contribute to shaping neuronal excitability, as well as to adaptation in response to maintained stimulation. Abnormal Slack channel activity may play a role in Fragile X syndrome, the most common inherited cause of intellectual disability and autism. Slack channels interact directly with the fragile X mental retardation protein (FMRP), and IKNa is reduced in animal models of Fragile X syndrome that lack FMRP. Human Slack mutations that alter channel activity can also lead to intellectual disability, as has been found for several childhood epileptic disorders. Ongoing research is elucidating the relationship between mutant Slack channel activity, the development of early onset epilepsies, and intellectual impairment. This review describes the emerging role of Slack channels in intellectual disability, coupled with an overview of the physiological role of neuronal IKNa currents.
INTRODUCTION
An influx of sodium ions through sodium channels or neurotransmitter receptors triggers a sodium-sensitive potassium current (I KNa), which is found in a diverse range of neuronal cell types. In many cases, I KNa is mediated by the phylogenetically related K Na channel subunits Slack and Slick. Where Slack or Slick is expressed, I KNa contributes to a late afterhyperpolarization that follows repetitive firing. I KNa also regulates neuronal excitability and the rate of adaptation in response to repeated stimulation at high frequencies. Alterations in I KNa have pathophysiological consequences, as suggested by reports of human mutations found in the Slack-encoding gene KCNT1 (Barcia et al., 2012;Heron et al., 2012;Martin et al., 2014). Slack channels are hence associated with several early onset epileptic encephalopathies. Epilepsies associated with each one of the Slack mutations are in turn associated with a severe delay in cognitive development. Importantly, these new findings strengthened an earlier connection between Slack channels and Fragile X syndrome (FXS); Slack channels interact with FMRP (Fragile X Mental Retardation protein; Brown et al., 2010), which is absent in FXS patients. FXS as a condition is also associated with an increased incidence of childhood seizures, and is the most commonly inherited form of intellectual disability and autism. These observations suggest that Slack channels are developmentally important modulators of cell plasticity underlying normal cognitive development.
This review summarizes studies that have focused on the physiological and pathophysiological role of I KNa , with a particular focus on Slack channels, and also discusses implications for future research. The review is divided into the following parts. First, we describe the properties of Slack channels and physiological functions of the I KNa current, drawing from both historical and more recent studies. Next, we compare and contrast some of the features of FXS and three epileptic encephalopathies (malignant migrating partial seizures of infancy, MMPSI; autosomal dominant nocturnal frontal lobe epilepsy, ADNFLE; and Ohtahara syndrome, OS; Barcia et al., 2012;Heron et al., 2012;Martin et al., 2014) that can result from mutations in Slack channels. In the last section, we cover the mechanisms by which Slack channel activity is altered in these conditions. In particular, we focus on the extent to which the development of intellectual disability can be attributed to the occurrence of the seizures themselves vs. alterations in cellular signaling pathways likely to be disrupted by Slack mutations.
PROPERTIES OF KCNT1 SLACK CHANNELS
The KCNT1 gene encodes the sodium-activated potassium channel called Slack (named for Sequence like a calcium-activated K + channel). Slack channels resemble the well-known voltage-gated Kv channels in their topography and assembly. Like the Kv channels, Slack subunits have six hydrophobic, transmembrane segments (S1-S6) along with a pore-lining loop that is found between S5 and S6 (Figure 1). These subunits assemble as tetramers to form a functional channel that is voltage-dependent (Joiner et al., 1998). However, unlike the Kv family of channels, which use a set of positively charged residues along the S4 segment to sense changes in transmembrane voltage (Aggarwal and MacKinnon, 1996;Seoh et al., 1996), Slack channels have no charged residues in S4, and the corresponding mechanism for voltage-sensing in Slack channels is not yet understood. Another distinguishing feature of Slack channels is their very large cytoplasmic C-terminal domain, which is over 900 amino acids in length (Joiner et al., 1998), making Slack channels the largest known potassium channel subunits. In comparison, the C-terminal domain of the eag channel, one of the longest in the Kv family, is only ∼650 amino acids in length (Warmke and Ganetzky, 1994).

FIGURE 1 | A schematic diagram of Slack subunit topography. Slack subunits have six transmembrane domains. These hydrophobic transmembrane segments are labeled as S1-S6, with the pore region between S5 and S6 indicated with the letter P. Four of these subunits assemble into a functional channel. Both the N- and C-terminal ends are cytosolic in Slack, with the C-terminus being one of the longest found among all potassium channels. The C-terminus contains two RCK (regulators of K + conductance) domains that stack on top of each other and form a gating ring underneath the channel pore. Gray circles represent the general locations where human mutations have been found. A total of thirteen distinct mutations have been found to date, and these mutations are discussed further in the text.
The unitary conductance of Slack channels expressed in heterologous systems ranges from 88 to 180 pS in symmetrical potassium solutions (Yuan et al., 2003;Chen et al., 2009;Zhang et al., 2012), while single channel conductances measured for Na + -activated K + channels in native neurons range from 122 to 198 pS (Yang et al., 2007;Tamsett et al., 2009). At least three observations can explain this wide range and the difference between the two expression systems. One confounding factor in measuring channel conductance is that both native I KNa channels and Slack channels in expression systems are known to have multiple subconductance states, which in patch clamp experiments appear as brief, flickering short steps alternating with time spent in the fully open or the closed state (Yuan et al., 2003;Brown et al., 2008). We will revisit this particular property of Slack channels in our discussion of the pathophysiological consequences of aberrant changes in Slack channel activity. Secondly, diversity in the properties of native I KNa can stem from the existence of multiple splice isoforms of Slack channels (Brown et al., 2008), and the fact that some Slack isoforms can form heteromers with related channel subunits such as Slick subunits (Joiner et al., 1998;Yang et al., 2007;Chen et al., 2009). Encoded by KCNT2, Slick subunits are distinct in their channel kinetic behavior and unitary conductances (Bhattacharjee et al., 2003). Heteromeric Slack/Slick channels also have properties that differ from those of either subunit expressed alone, and their response to modulation by protein kinases also differs from that of the homomeric channels (Chen et al., 2009). Evidence that Slack and Slick channels are co-expressed has been provided in auditory brainstem neurons, the olfactory bulb, and a number of other neurons (Chen et al., 2009).
Finally, in addition to the Slack and Slick channels, which are phylogenetically related (Salkoff et al., 2006), the evolutionarily more distant Kir3 inward rectifier potassium channels are also sensitive to cytoplasmic sodium ions, further increasing the diversity of native I KNa channels (Petit-Jacques et al., 1999).
SLACK ENTERS INTO PROTEIN-PROTEIN INTERACTIONS WITH OTHER MEMBRANE PROTEINS AND CYTOPLASMIC SIGNALING PROTEINS
The Slack channel subunit interacts directly with the mRNA-binding protein FMRP, which regulates the probability of Slack channel opening (Brown et al., 2010;Zhang et al., 2012). Evidence for direct Slack channel-FMRP binding was first found in a yeast two-hybrid assay, and confirmed by co-immunoprecipitation from synaptosomal lysates isolated from mouse brainstem and olfactory bulbs (Brown et al., 2010). This interaction appears to be evolutionarily conserved, as the same finding was demonstrated in large bag cell neurons of the marine mollusk Aplysia californica (Zhang et al., 2012). Moreover, messenger RNA targets of FMRP can be co-immunoprecipitated with Slack from wild type mice but not from the fmr −/y mice lacking FMRP (Brown et al., 2010). Addition of an N-terminal fragment of FMRP (FMRP 1-298), which retains the majority of the known FMRP protein-protein interaction domains but lacks the major mRNA-binding sites, to Slack channels in excised inside-out patches substantially increased channel mean open time (Brown et al., 2010). In part, this increase in Slack channel activity occurs by eliminating subconductance states and favoring openings to the fully open state.
Slack channel subunits also interact directly with TMEM16C (ANO3), a transmembrane protein found in non-peptidergic nociceptive neurons (Huang et al., 2013). Though closely related to the Ca 2+ -activated Cl − channels TMEM16A and B, TMEM16C itself alone does not appear to function as an ion channel. Slack and TMEM16C can exist together in a protein complex and are colocalized in nociceptive neurons. Similar to FMRP, the presence of TMEM16C substantially increases the activity of Slack channels. Further discussion of the biological role of this interaction in nociceptive neurons is provided later in this review, but for now we turn to discuss neuronal cell types that express K Na channels.
LOCALIZATION OF SLACK AND SLICK SUBUNITS
Cloning of the K Na Slack and Slick genes, KCNT1 and KCNT2, and the development of specific antibodies have enabled a detailed study of their expression in the brain (Bhattacharjee et al., 2003;Yuan et al., 2003). These studies have confirmed that the highest levels of Slack and Slick channels are found in the brain, with lower levels detected in the heart and the kidney (Joiner et al., 1998;Yuan et al., 2003;Brown et al., 2008).
Frontiers in Cellular Neuroscience
www.frontiersin.org

In situ hybridization and immunohistochemistry were systematically performed in the adult rat brain, and demonstrated that Slack transcripts and protein are abundantly expressed in neurons throughout all regions of the brain, including the brainstem, cerebellum, frontal cortex and the hippocampus (Bhattacharjee et al., 2002;Santi et al., 2006;Brown et al., 2008). Similar results are also reported in the mouse brain, where abundant mRNA expression has been found in the brainstem and the olfactory bulb (Brown et al., 2008).
I KNa IS A MAJOR COMPONENT OF THE DELAYED OUTWARD CURRENT IN NEURONS
The term I KNa was first coined by Bader et al. (1985), who described in avian neurons an outward K + current with dependence on [Na + ] i (Table 1). An independent study concurrently described similar currents in neurons isolated from the crayfish (Hartung, 1985). In both studies, researchers observed changes in the outward K + current in the presence and absence of the Na + channel inhibitor tetrodotoxin (TTX), and concluded that a component of the neuronal outward current was sensitive to Na + influx. Similar reports soon followed in a number of neuronal cell types, leading to the recognition of a previously undescribed outward current sensitive to Na + influx (Haimann et al., 1990;Dryer, 1991;Bischoff et al., 1998). A partial list of such cell types includes medial nucleus of the trapezoid body (MNTB), trigeminal, mitral, vestibular, and dorsal root ganglion (DRG) nociceptive neurons. Importantly, this list demonstrates that Slack channels are involved in the olfactory, auditory, vestibular and pain-sensing systems, all of which are critical to normal development and learning. For a more comprehensive review of Slack channel expression patterns, the reader is referred to Kaczmarek (2013).
SLACK CHANNEL SUBUNITS ARE REQUIRED FOR I KNa
That Slack channel subunits contribute to I KNa currents was demonstrated in later studies, using neonatal neurons isolated from the rat olfactory bulb, as well as in corpus striatum (Budelli et al., 2009;Lu et al., 2010). A component of the outward current similar to the I KNa reported in the earlier studies was suppressed upon knocking down Slack expression using the siRNA technique (Budelli et al., 2009). These studies contributed the surprising discovery that I KNa represents a very major fraction of the total outward current of these neurons. Levels of I KNa channels are particularly high in mitral cells of the olfactory bulb (Egan et al., 1992;Bhattacharjee et al., 2002), in which the other major component of K + current is carried by the voltage-dependent potassium channel subunit Kv1.3 (Kues and Wunder, 1992). The activity of Kv1.3 channels helps determine the firing patterns of mitral cells in response to odorant stimulation and/or glucose presence (Tucker et al., 2013). A very interesting phenotype results when Kv1.3 channels are deleted by homologous recombination in mice (Fadool et al., 2004). Levels of both I KNa current and of Slack channel protein expression are substantially increased in Kv1.3 −/− mice (Lu et al., 2010). This I KNa could be directly attributed to the Slack subunits by again knocking down Slack subunits with the siRNA technique, which suppressed the I KNa currents (Lu et al., 2010). Loss of Kv1.3 channels, together with the upregulation of I KNa currents, altered the kinetics of inactivation of K + currents in the mitral cells, resulting in a decrease in action potential height and an increased adaptation of action potential firing in response to maintained stimulation (Fadool et al., 2004). Remarkably, these changes were associated with the development of increased numbers of olfactory glomeruli in the olfactory bulb and a 10,000-fold increase in the sensitivity of the Kv1.3 −/− mice to odorant stimuli.

Table 1 | Physiological role of Slack-mediated I KNa in specific neuronal cell types (selected references).
Neuronal type | Animal/Age | I KNa contribution | Reference
Ciliary and trigeminal ganglia | E7-8, chick or quail embryos | Sodium-dependent outward current | Bader et al. (1985)
Layer V neurons of sensorimotor cortex | Cats | sAHP, cellular excitability | Schwindt et al. (1989)
Spinal | | |
CONTRIBUTION OF I KNa TO NEURONAL FIRING PATTERNS: REGULATION OF ADAPTATION TO MAINTAINED STIMULATION
In many neurons, I KNa currents contribute to a long-lasting slow afterhyperpolarization (sAHP), which results from a slowly developing outward current evoked during sustained stimulation (Vergara et al., 1998). The period of reduced excitability afforded by sAHP is thought to protect the cell from repetitive, tetanic activity, and has been studied in layer V neurons of the sensorimotor cortex of the cat (Schwindt et al., 1988a,b). It has been shown that whereas the early part of the sAHP is dependent on Ca 2+ influx during stimulation, the late part is Na + -sensitive. Furthermore, this late component of the sAHP is sufficient to reduce cellular excitability in the cat sensorimotor cortex layer V neurons (Schwindt et al., 1989). Performing slice recordings in the absence of Ca 2+ , Schwindt et al. (1989) showed that neuronal firing rate is attenuated for many tens of seconds following stimulation, matching the duration of Na + -dependent sAHP. Similar Na + -dependent sAHPs have also been observed in a number of other neurons, including hippocampal pyramidal cells (Gustafsson and Wigstrom, 1983) and spinal cord neurons (Wallen et al., 2007). In motor neurons from the lamprey spinal cord, stimulation of action potentials at increasingly higher rates (from 2 to 8 Hz) progressively prolongs the time it takes for the membrane potential to return to baseline, an effect that can be attributed to the duration of the evoked sAHP. At lower firing rates, the Ca 2+ -sensitive early phase of the sAHP dominates the rate of recovery to the resting state. However, the contribution of the late I KNa -dependent phase of the sAHP to this effect becomes more significant with increasing firing frequencies. It appears then that the I KNa -mediated sAHP is likely to be a physiological modulator of neuronal excitability during rapid firing (Wallen et al., 2007).
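The Na + sensitivity underlying this late sAHP component is often caricatured in conductance-based models as a Hill function of intracellular Na + concentration. The sketch below illustrates that idea only; every parameter value (maximal conductance, half-activation concentration, Hill coefficient, E K) is a hypothetical placeholder, not a measured Slack dose-response.

```python
# Illustrative model of a sodium-activated K+ current (I_KNa).
# Activation follows a Hill function of intracellular [Na+]; all
# parameter values are placeholders chosen for illustration only.

def i_kna(v_mV, na_mM, g_max_nS=10.0, half_mM=40.0, n=2.8, e_k_mV=-90.0):
    """Outward K+ current in pA (nS * mV = pA), gated by intracellular Na+."""
    activation = na_mM**n / (na_mM**n + half_mM**n)  # Hill-type Na+ sensitivity
    return g_max_nS * activation * (v_mV - e_k_mV)   # ohmic driving force

# Little current at resting [Na+]i; much more after Na+ loading during
# repetitive firing, which drives the late, Na+-dependent phase of the sAHP.
low = i_kna(-60.0, 10.0)
high = i_kna(-60.0, 60.0)
print(low, high)
```

Because the Hill term rises steeply only after substantial Na + entry, the modeled current stays small at rest and grows during sustained firing, matching the slow build-up and decay of the Na + -dependent sAHP described above.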
ROLE OF SLACK CHANNELS IN NOCICEPTION
Two studies focusing on the pain-sensing DRG nociceptors have shed further light on the role of I KNa in neuronal excitability (Nuwer et al., 2010;Huang et al., 2013). In one study, siRNA-mediated technology was utilized to knock down Slack channels in embryonic rat peptidergic nociceptors, demonstrating that these Slack-knockdown neurons were hyperexcitable compared to control neurons (Nuwer et al., 2010). The second study showed that the voltage threshold for action potential generation is significantly reduced in nociceptive neurons isolated from a TMEM16C −/− rat (Gadotti and Zamponi, 2013;Huang et al., 2013). As was described earlier in this review, TMEM16C is a transmembrane protein found in non-peptidergic nociceptive neurons that binds Slack channel subunits and increases their channel activity. Consistent with this, the neurons from TMEM16C −/− rats had reduced I KNa currents. The TMEM16C −/− rats also had increased thermal and mechanical sensitivity, as revealed in behavioral studies. That this increased sensitivity could be directly attributed to the change in Slack I KNa current was confirmed by an in vivo Slack knockdown experiment in animals, which induced the same pattern of heightened sensitivities (Huang et al., 2013).
ROLE OF SLACK CHANNELS IN TEMPORAL ACCURACY OF ACTION POTENTIAL FIRING
Slack/Slick channels are also expressed in high abundance in neurons of the MNTB within the auditory brainstem (Bhattacharjee et al., 2002;Yang et al., 2007). These neurons are capable of firing at rates up to ∼800 Hz with high temporal accuracy, a feature that is required for accurate determination of the location of sounds in space. Current clamp and voltage clamp experiments have demonstrated that activation of I KNa currents increases temporal accuracy in these neurons at high rates of stimulation, in large part by increasing the membrane conductance close to the threshold for action potential generation (Yang et al., 2007). This reduces the time constant of the membrane and allows the timing of action potentials to be closely matched to the pattern of incoming stimuli. Pharmacological activation of Slack channels in these neurons has been shown to further increase timing accuracy in these cells, a finding that is consistent with numerical simulations of the firing patterns of these cells with and without I KNa currents (Yang et al., 2007).
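The timing argument above can be made concrete with the passive membrane time constant, τ = C m /g total: any conductance that opens near threshold adds to g total and shortens τ, letting the membrane voltage track fast inputs more faithfully. A toy calculation follows; the capacitance and conductance values are invented for illustration, not measured MNTB values.

```python
# Toy calculation: membrane time constant tau = C_m / g_total.
# With C in pF and g in nS, the ratio comes out directly in ms.
# Numbers are invented for illustration, not measured MNTB values.

def tau_ms(c_m_pF, g_total_nS):
    """Membrane time constant in milliseconds."""
    return c_m_pF / g_total_nS

resting = tau_ms(20.0, 10.0)           # 20 pF / 10 nS = 2.0 ms
with_kna = tau_ms(20.0, 10.0 + 30.0)   # adding 30 nS of K_Na conductance
print(resting, with_kna)               # 2.0 0.5
```

Cutting τ from 2.0 ms to 0.5 ms in this toy example is the sense in which activation of I KNa near threshold lets MNTB-like neurons lock their spikes to stimuli arriving at hundreds of hertz.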
I KNa currents also shape the neuronal firing in the vestibular system, which consists of four vestibular nuclei that receive input from the vestibular afferent neurons (Fitzpatrick and Day, 2004). The afferent neurons transmit information about head movements to help the organism stabilize gaze and maintain proper balance. Vestibular afferent neurons have characteristic resting discharge rates that adapt upon detecting angular and linear accelerations, thereby relaying vestibular information (Grillner et al., 1995). Cervantes et al. (2013) characterized I KNa currents in rat vestibular ganglion neurons, and found that I KNa currents regulate the phase-locking of action potential firing to a stimulus, as well as the firing regularity and discharge patterns of these neurons.
The summarized studies have demonstrated that I KNa currents are a major physiological component of the outward current in neurons, where these currents help regulate intrinsic electrical excitability, as well as the manner in which neurons respond to patterns of incoming stimulation. These studies have led Budelli et al. (2009) to conclude "in clinical and pharmacological studies, this previously unseen current system that is active during normal physiology represents a new and promising pharmacological target for drugs dealing with seizure and psychotropic disorders," an early prediction that would be realized by the finding of human mutations in Slack channels.
SLACK CHANNELS IN COGNITIVE DISORDERS
Given that Slack channels appear as modulators of neuronal excitability and of neuronal adaptation to stimulation in a wide range of species, it is not surprising that alterations in Slack channel activity may have significant pathophysiological consequences. Furthermore, what is known about these pathologies strongly suggests that Slack channel activity is a critical component that ensures normal cognitive development. The finding that Slack channel activity is increased by direct complex formation with FMRP, the RNA-binding protein that is deleted in FXS, implicates Slack channel function in this syndrome (Brown et al., 2010). More specifically, there may be a clinically significant relationship between Slack channel activity and development of intellectual disability in FXS. Increasing evidence supports this hypothesis: epilepsy patients who have profound intellectual disability carry mutations in the Slack-encoding KCNT1. More than a dozen different KCNT1 mutations have now been reported in the literature, in connection with three different types of seizures that occur in infancy or childhood, MMPSI, ADNFLE, and OS (Barcia et al., 2012;Heron et al., 2012;Martin et al., 2014). These findings strongly indicate a pathophysiological role for Slack channels in the abnormal development of intellectual function.
INTELLECTUAL DISABILITIES
Seizures can have variations in onset and frequency, and may occur during childhood with little or no intellectual impairment [Engel and International League Against Epilepsy (ILAE), 2001]. A case in point is ADNFLE, an epilepsy that can be caused by mutations in either the α-4, α-2 or β-2 subunits of the neuronal nicotinic acetylcholine receptor, encoded by the CHRNA4, CHRNA2, or CHRNB2 genes respectively, or by mutations in the Slack channel. Severe intellectual disability, however, only occurs in those patients who carry Slack mutations (Heron et al., 2012). This implies that the seizure episodes themselves are unlikely to be the prime determinant of intellectual function. Intellectual disability is a salient feature in all patients diagnosed with FXS, and in some patients with epilepsy and/or autism spectrum disorder (ASD). Below, we explore the overlap in clinical manifestation among these three patient groups.

Fragile X syndrome, childhood epilepsies and ASD are notable for their heterogeneity of clinical manifestations in the behavioral and cognitive domains. Different combinations of these three disorders have also occurred together in patients. Numerous studies have reported a range of percentages for the prevalence of such overlapping patient groups, as shown in Figure 2. The co-diagnosis rate of an ASD disorder in male Fragile X patients ranges from 25 to 46% (Muhle et al., 2004;Abrahams and Geschwind, 2008;Bailey et al., 2008;Hernandez et al., 2009). The corresponding rate for epilepsy in male Fragile X patients is lower, ranging from 10 to 18% (Musumeci et al., 1999;Muhle et al., 2004;Bailey et al., 2008), whereas the occurrence of epilepsy in ASD patients varies more widely, from 6.6 to 37% (Amiet et al., 2008;Yasuhara, 2010;Jokiranta et al., 2014). Such wide variation likely reflects methodological differences, as well as heterogeneity in sample populations and in the etiology of the diseases.
Even so, these studies are helpful in demonstrating the overlap among FXS, childhood epilepsy and ASD at the clinical diagnostic level. More pertinent to our discussion in this review, these findings raise the possibility that there could be a molecular link that controls intellectual disability development in each of the three clinical diseases.
FRAGILE X SYNDROME

FXS results from the loss of FMRP due to transcriptional silencing of its gene fmr1, found on the X chromosome (Pieretti et al., 1991). FMRP is highly expressed throughout the brain in neurons, where it is found in both pre- and post-synaptic processes (Christie et al., 2009). One well-characterized function of this RNA-binding protein is to suppress the translation of its target mRNAs. Through mechanisms that are not fully understood, neuronal activity can release the suppression of some mRNAs (Khandjian et al., 2004;Stefani et al., 2004), leading to an activity-dependent increase in protein synthesis in synaptosomal regions (Bassell and Warren, 2008). FMRP binds to polyribosomes and specific mRNAs in neuronal dendrites, leading to the concept that it regulates local translation at these sites. FMRP is required for a number of forms of synaptic plasticity, including mGluR1-mediated long-term depression (LTD; Li et al., 2002).
As described earlier, FMRP can also form complexes with Slack channel protein (Brown et al., 2010;Zhang et al., 2012), and in this manner directly regulate Slack channel activity. I KNa currents were compared in MNTB neurons recorded in brain slices from FMRP-deficient Fmr1 −/y mice vs. those from wild-type mice. As expected, outward I KNa currents were smaller in Fmr1 −/y MNTB neurons, even though Slack subunit levels are not decreased (Brown et al., 2010). Conversely, increases in levels of FMRP can enhance I KNa currents. This was demonstrated by the finding that introduction of the FMRP N-terminal 1-298 fragment into bag cell neurons of Aplysia increases I KNa currents and hyperpolarizes the resting membrane potential (Zhang et al., 2012). These findings suggest a more versatile role for FMRP in both the presynaptic and postsynaptic elements of neurons, in addition to its function in the suppression of translation.
Slack is not the only ion channel that can interact with FMRP. Both the large-conductance calcium-activated BK potassium channel and Ca V 2.2 voltage-dependent calcium channel have recently been shown to interact directly with FMRP, and these interactions regulate action potential width and neurotransmitter release (Deng et al., 2013; Ferron et al., 2014). It is possible that the activation of ion channels that are linked to FMRP serves as a local mechanism to regulate the translation of neuronal mRNAs (Zhang et al., 2012). These new findings collectively suggest that dysregulation of an acute modulation of neuronal excitability and transmission by FMRP may contribute to the intellectual disability associated with FXS.
EPILEPSY
Epilepsy is estimated to affect 50 million people worldwide (World Health Organization, 2012). While seizures, presenting as abnormal patterns of synchronous activity in EEG recordings, can occur in isolation in both children and adults, these are distinguished from epilepsy, in which such abnormal activity is recurrent, and which may have an enduring clinical impact. The impact can manifest as neurobiological, cognitive, psychological, and/or social changes (Fisher et al., 2005). Although over 30 different kinds of epileptic seizures, or syndromes, are recognized as of 2013, each syndrome has considerable variation in etiology and health outcome [Engel and International League Against Epilepsy (ILAE), 2001;Berg et al., 2010]. Some epilepsies are channelopathies, and human mutations in a number of genes encoding ligand-gated receptors or ion channels have been found in epilepsy patients (Steinlein et al., 1995;Scheffer and Berkovic, 1997;Charlier et al., 1998;Zuberi et al., 1999;Escayg et al., 2000;Brenner et al., 2005). The advancement and wider use of sequencing technologies such as whole exome sequencing, which can identify de novo mutations in single probands, are reshaping the genomics approach to understanding epileptogenesis, and rapidly expanding the list of proteins mutated in epilepsy patients. In this next section, we consider in particular the three types of seizures associated with mutations in the Slack-encoding KCNT1 gene, accompanied by a summary table of selected clinical reports from the literature.
MALIGNANT MIGRATING PARTIAL SEIZURES IN INFANCY
Malignant migrating partial seizures in infancy was first described by Coppola et al. (1995) as a new distinct early onset (<6 months) seizure type with a characteristic random pattern of electrical discharges recorded on the electroencephalogram (EEG). Since then, numerous other groups have also identified patients who fit these original criteria, selected references of which are reviewed and summarized in Table 2 (Coppola et al., 1995, 2006, 2007; Okuda et al., 2000; Veneselli et al., 2001; Gross-Tsur et al., 2004; Marsh et al., 2005; Hmaimess et al., 2006; Caraballo et al., 2008; Carranza Rojo et al., 2011; Sharma et al., 2011; Barcia et al., 2012; Lee et al., 2012; Ishii et al., 2013; McTague et al., 2013; Milh et al., 2013). Ongoing analyses of these patients using EEGs, brain imaging, and DNA sequence analysis continue to shape the field's understanding of this focal seizure of infancy.
Patients diagnosed with MMPSI are unlikely to achieve intellectual growth, learning, and other developmental milestones. Following an early onset, MMPSI seizures increase in frequency to the point of halting normal development; patients also lose any developmental progress they had previously accomplished (Coppola et al., 1995). Even when the seizures diminish in frequency, very few patients resume neurodevelopmental growth. The end results are severe delays in development and profound intellectual disability. Not surprisingly, absence of language and hypotonia are also commonly noted in these patients. In capturing the bleak prognosis for MMPSI patients, a study of 14 patients concluded, "the highest developmental level maintained beyond 1 year of age in all patients was partial head control, rolling and visual fixation" (McTague et al., 2013). Out of the 96 patients considered in Table 2, 20 were reported as deceased.
A possible cause for seizure development in MMPSI patients remained elusive until recently. Neurometabolic, blood gas, and serum tests are typically normal, and brain lesions are rarely observed in affected patients (Nabbout and Dulac, 2008). Apart from microcephaly (abnormally small head size), which progressively appeared in 57 out of 91 examined patients reported in the literature (Table 2), the brain appears to be without any other structural lesions at presentation.
Genetic etiologies for MMPSI were first identified in 2011, with the discovery of SCN1A (Nav1.1) mutations (Carranza Rojo et al., 2011).
AUTOSOMAL DOMINANT NOCTURNAL FRONTAL LOBE EPILEPSY
Autosomal dominant nocturnal frontal lobe epilepsy is a focal seizure that occurs predominantly during sleep with a typical onset in late childhood. The mean age among more than 110 patients reported in six different case reports was 10.9 years ( Table 3; Scheffer et al., 1995;Oldani et al., 1998;Phillips et al., 1998;Nakken et al., 1999;Derry et al., 2008;Heron et al., 2012). ADNFLE patients are sometimes misdiagnosed as having sleep disorders rather than suffering a seizure attack, because the seizure attacks often disrupt sound sleep (Oldani et al., 1998). That a genetic mutation can result in ADNFLE in an affected family was first suggested by chromosome linkage and confirmed later by sequencing of the gene for the nicotinic α4 acetylcholine receptor subunit (CHRNA4; Steinlein et al., 1995). For this reason, ADNFLE is most commonly associated with mutations in acetylcholine receptor subunits (Steinlein et al., 1995;De Fusco et al., 2000;Aridon et al., 2006). Statistically, however, only 20% of ADNFLE patients with a family history of seizures, and 5% of those without, have a mutation in one of these genes (Kurahashi and Hirose, 2002).
More recently, mutations in KCNT1 (Slack) have been identified as a novel genetic etiology for ADNFLE, but these too seem to be a cause in a minority of affected families (Heron et al., 2012). Nevertheless, it is interesting that several observations distinguish the families harboring a KCNT1 mutation from those with a different mutation. As a notable example, the occurrence of intellectual disability and other psychiatric illnesses appears to be greatly increased in those families with a KCNT1 mutation (Heron et al., 2012). This is in contrast to ADNFLE patients without mutations in Slack, in whom intelligence and other neurologic functions are largely unimpaired (Phillips et al., 1998). Penetrance of the mutation is also increased to 100% in the families with Slack mutations, whereas that of acetylcholine receptor mutations has been estimated to be only 70% (Heron et al., 2012). These results further implicate Slack channels in intellectual development.
OHTAHARA SYNDROME
A majority of OS patients show severe developmental delay, including intellectual disability. Greater than 80% of OS patients reported in the literature have a developmental delay, while only 10% are described as showing normal development (Table 4). OS patients also appear to have increased vulnerability to other ailments such as pneumonia and virus infections (Krasemann Quan et al., 2001), and these complications have been a cause of death in more than 20% of patients. It remains a challenge to reverse or overcome these prognoses, since these seizures have pronounced pharmacological resistance (Beal et al., 2012). Nevertheless, surgical intervention may hold some promise for patients in whom brain abnormalities can be identified as the basis of the seizures (Malik et al., 2013). Macroscopic and microscopic brain abnormalities are the predominant causes of seizure development in OS patients, and common defects are enumerated in Table 4 (Robain and Dulac, 1992; Trinka et al., 2001; Low et al., 2007; Saitsu et al., 2008; Nakamura et al., 2013). In more than one-fifth of the patients, however, no brain abnormalities can be detected. Genetic etiologies have also been identified in a subset of OS patients. To date, alterations in five different genes have been found in patients: the ion channels KCNQ2 (Kv7.2; Saitsu et al., 2012a; Kato et al., 2013), SCN2A (Nav1.2; Nakamura et al., 2013; Touma et al., 2013), and KCNT1 (Slack; Martin et al., 2014); the transcription factor ARX (Kato et al., 2007; Absoud et al., 2010; Giordano et al., 2010; Fullston et al., 2010; Eksioglu et al., 2011); and the synaptic binding protein STXBP1 (Saitsu et al., 2008, 2010, 2011; Mignot et al., 2011; Milh et al., 2011). Interestingly, mutations in SCN2A and KCNT1 have also been found in patients diagnosed with MMPSI.
Many earlier studies (prior to 2011) were selective in their approach, sequencing only one or a few selected genes of interest. A growing number of researchers are now utilizing whole exomic or genomic sequencing for such patients (Majewski et al., 2011), however, and it is foreseeable that a more comprehensive estimate of the prevalence of these epileptogenic alleles will emerge within the next decade.
MECHANISMS UNDERLYING CHANGES IN HUMAN SLACK MUTANTS
Slack mutants have been tested for changes in channel activity in Xenopus laevis oocytes and HEK 293 cells using voltage-clamp recordings. These studies have shown that, surprisingly, currents generated by the Slack mutants are greatly increased over those of wild type channels. Peak current amplitudes of mutant Slack currents are increased by 3- to 12-fold, with no change in levels of Slack protein (Barcia et al., 2012; Martin et al., 2014; Milligan et al., 2014).
One alteration in the biophysical properties of the mutant Slack channels is that the occurrence of subconductance states is greatly reduced compared to that in wild type channels. As we described earlier, subconductances appear as brief, flickering short steps alternating with time spent in the fully open or the closed state in single channel patch clamp experiments. The wild type channel spends most of its time transitioning between the closed or subconductance states (Joiner et al., 1998). However, mutant channels are more likely to open immediately to a fully open state rather than to a subconductance state, resulting in an overall increase in current during depolarization of the membrane (Barcia et al., 2012). A similar reduction in occurrence of subconductance states was also seen in FMRP-mediated positive regulation of Slack channel activity (Brown et al., 2010).
A second mechanism for increased current, in at least two of the mutant channels, is that the mutations leave the channels in a state that mimics constitutive channel phosphorylation by protein kinase C (Barcia et al., 2012). Wild type Slack channels undergo phosphorylation by this enzyme at a site (Serine 407) in their large C-terminal cytoplasmic domain, leading to an ∼3-fold increase in peak current amplitude. When protein kinase C was pharmacologically activated in Xenopus oocytes expressing Slack channels and peak current amplitudes were compared by two-electrode voltage clamping, wild type channel activity increased, whereas that of the mutant channels remained unchanged (Barcia et al., 2012). Thus, in these channels, the mutations both mimicked and occluded the effects of activation by protein kinase C.
Other mechanisms for the enhanced currents in the Slack mutants are under investigation. These channels are sensitive to cytoplasmic levels of Na + . In patch clamp experiments it has been found, however, that the Na + -sensitivity of the mutant channels is not different from that of the wild type channels. Nevertheless, other potential mechanisms, such as shifts in voltage-dependence, may also contribute to the enhanced currents in mutant channels.
The unexpected finding that a gain-of-function change in a K + channel can induce a hyperexcitable state of the brain has a precedent in the BK channel, a mutation of which can lead to generalized epilepsy and paroxysmal dyskinesia (GEPD; Yang et al., 2010). BK channels are activated by [Ca 2+ ] i , and can contribute to the rapid hyperpolarizations that follow action potentials, thereby regulating cellular excitability. An electrophysiological study of the mutant BK channel in Xenopus oocytes showed that the mutant channel has increased Ca 2+ sensitivity, resulting in an overall increase in BK channel activity (Yang et al., 2010).
Several possible changes at the cellular/neuronal network level could account for how aberrant electrical activity of the brain may arise from increased K + channel activity, some of which have been suggested by others (Du et al., 2005). First, an increase in K + current could cause more rapid neuronal repolarization, shortening the duration of action potentials. A more rapid repolarization can indirectly increase cell excitability by increasing the rate at which voltage-dependent Na + channels recover from inactivation. Next, more pronounced hyperpolarizations resulting from BK or Slack channel hyperactivity may also potentiate hyperpolarization-activated cation channel (I h ) currents, aberrantly triggering network excitability. It is also possible that the enhancement of K + current may occur selectively in inhibitory neurons. This could lead to a selective suppression of the activity of inhibitory interneurons, thereby producing an imbalance of excitation to inhibition (Du et al., 2005). Finally, increases in K + current early in development could alter the formation of normal patterns of synaptic connections, predisposing the nervous system to develop circuits that generate epileptiform discharges. After close to a decade, however, these hypotheses have yet to be tested experimentally, perhaps in part because of the lack of a specific knock-in mouse or other animal model.
Malignant migrating partial seizures of infancy, OS, and ADNFLE have traditionally been regarded as distinct seizures with considerable heterogeneity in etiology and prognosis. OS and MMPSI are two of the first epilepsies known to affect newborns, and both produce devastating changes in neurodevelopment. ADNFLE, on the other hand, is typically less disruptive of normal development and life, and its seizures are often successfully controlled with antiepileptic drugs. The recent discoveries that Slack mutations have been uncovered in patients diagnosed with each of these three seizure types, and that all three share severe intellectual disability and other developmental delays, together point to an emerging role for Slack channels in intellectual disability. The evidence suggests that a key physiological role of Slack may be its control over cellular or network excitability in regions of the brain involved in intellectual development.
CONCLUSION
Slack channels are physiologically important regulators of neuronal excitability and adaptability to changing patterns of sensory stimulation. In this review, we have considered how alterations in Slack channel activity can have pathophysiological ramifications, in conditions such as FXS and early onset epileptic encephalopathies. In addition to FXS, which has a well-established genetic link to the development of intellectual disability, the three seizure disorders related to Slack mutations (OS, MMPSI, and ADNFLE) also notably share a common manifestation of intellectual disability in their patients. These new findings make a strong argument that Slack channels may be a common link that can explain the occurrence of intellectual disability in these patients, suggesting that Slack channels could be critical modulators of cognitive development.
Esophagogastroduodenoscopy Screening Intentions During the COVID-19 Pandemic in Japan: Web-Based Survey
Background: The number of people undergoing cancer screening decreased during the COVID-19 pandemic. The pandemic may have affected the willingness and motivation to undergo cancer screening among those eligible for it.

Objective: This study aims to clarify the effect of the COVID-19 pandemic on the intention to undergo cancer and esophagogastroduodenoscopy (EGD) screening.

Methods: We performed a web-based survey on the intention to undergo screening among 1236 men and women aged 20-79 years. The numbers of participants by sex and 10-year age group were equal. The survey was conducted in January 2021, during which the government declared a state of emergency because of the third wave of the COVID-19 pandemic in Japan. Emergency declarations were issued in 11 of the 47 prefectures in Japan.

Results: In total, 66.1% (817/1236) of the participants felt anxious about undergoing screening due to COVID-19. More women than men were anxious about undergoing screening. By modality, EGD had the highest percentage of participants with anxiety due to COVID-19. Regarding the intention to change the participants' appointment for screening, the most common strategies were to book an appointment for a time during nonpeak hours, postpone the appointment to a later date, and change the mode of transportation. In addition, 35.8% (442/1236) of the participants were willing to cancel this year's screening appointment. Among the 1236 participants, 757 (61.2%) were scheduled for screening in 2020. Of the 757 participants in this subgroup, 68% (n=515) did not change the schedule, 6.1% (n=46) cancelled, and 26% (n=197) made some changes, including changing the appointment date, hospital, or mode of transportation. Among the 296 participants scheduled for EGD screening, 18.9% (n=56) made some changes, 5.7% (n=17) cancelled on their own, and 2.7% (n=8) cancelled on the hospital's order. Based on the previous screening results, the percentage of participants who felt anxious about EGD due to the COVID-19 pandemic was highest among those who had not undergone screening, followed by those who were judged to be in need of further examination in screening but did not visit a hospital for it. In the logistic regression analysis, the factors associated with anxiety about EGD screening due to the COVID-19 pandemic were "viral infection prevention measures," "waiting time," "fees (medical expenses)," "mode of transportation," "worry about my social position if I contracted COVID-19," and "perceived risk of gastric cancer." However, "residence in a declared emergency area" was not associated with EGD anxiety due to COVID-19.

Conclusions: Excessive anxiety about COVID-19 may lead to serious outcomes, such as a decreasing intention to undergo EGD screening, and it is necessary to thoroughly implement infection prevention measures and provide correct information to examinees.
Introduction
The COVID-19 pandemic has led to severe restrictions in almost all countries and has affected many health care services worldwide. It disrupted the use of preventive health care services. In the United States, the American College of Radiology supported the postponement and rescheduling of nonurgent care, including cancer screening [1]. Screening for cancer is a proven and recommended approach to prevent deaths owing to cancer. The number of people undergoing cancer screening decreased during the COVID-19 pandemic [2]. Although there was an increase in the number of cancer screening tests beginning in late 2020, screening rates remained between 29% and 36% lower than those in the prepandemic era [3]. Coma et al [4] reported that during the pandemic, the number of malignant neoplasms decreased in all age groups, and the number of colonoscopies and mammograms also decreased. However, the number of chest radiographies increased. Another study conducted in north-eastern United States during the COVID-19 pandemic revealed a significant decrease in the number of patients undergoing screening tests for cancer and in the number of ensuing diagnoses of cancerous and precancerous lesions [5]. According to a survey conducted by the Japan Cancer Society, the number of people undergoing cancer screening in 2020 decreased by 30.5% compared with the number of screenings in the previous year. Consequently, the COVID-19 pandemic could disrupt oncology care by delaying the diagnosis and surgical treatment of cancer owing to reduced screening, thereby leading to the long-term consequence of projected increases in cancer-related deaths [6]. The reduction in the number of cancer screenings has also been attributed to health care provider constraints, which included restrictions on elective procedures and shortages of health care staff owing to redeployment to help with pandemic-related care [7].
At the start of the pandemic, elective medical procedures, including cancer screening, were put on hold to conserve medical resources and reduce the risk of spreading COVID-19 in health care settings. However, health systems are now back to scheduling cancer screening tests and examinations. Even when health care providers have increased the availability of preventive care and cancer screenings, many patients face constraints such as loss of income and employer-based insurance coverage [2] and fear of contracting COVID-19 during in-person health care visits [8]. To increase the number of people who receive screening while the COVID-19 pandemic continues, it is necessary to survey screening intentions. However, to our knowledge, no studies have investigated why eligible individuals refrain from undergoing cancer screening because of the COVID-19 pandemic.
This study aimed to examine the predictors of anxiety around cancer screening owing to the COVID-19 pandemic, with a focus on esophagogastroduodenoscopy (EGD).
Survey Method and Participants
All participants were recruited using an internet panel survey company, as we have previously reported [9][10][11]. All participants were registered as panel members with the company. The participants of this study included registered panel members aged between 20 and 79 years. First, to recruit participants, the survey company created a list using random sampling across all registers. Next, an email that gauges interest in survey participation was sent to all the individuals on this list. Registration ended when the number of participants in each group reached the target sample size, to ensure that the number of participants by sex and 10-year age group was similar. Participants completed the survey and submitted their responses online. After completing the survey, participants received a small cash reward. This study comprised 1236 participants aged 20-79 years. Each group was balanced for age and sex. Assuming a confidence level of 95%, a margin of error of 5%, and an expected response rate of 50%, the required sample size was calculated to be 384. When the margin of error was assumed to be 3%, the required sample size was calculated to be 1067. Therefore, the sample size of 1236 was considered sufficient for the analysis. The survey was conducted in January 2021, during the third wave of the COVID-19 pandemic in Japan, when the Japanese government declared a state of emergency. Emergency declarations were issued in the following 11 prefectures among all the 47 prefectures in Japan: Tochigi, Saitama, Chiba, Tokyo, Kanagawa, Gifu, Aichi, Kyoto, Osaka, Hyogo, and Fukuoka.
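The sample-size figures quoted above follow the standard formula for estimating a proportion, n = z²·p(1−p)/e²; a quick check reproduces both reported values:

```python
def required_sample_size(margin_of_error, z=1.96, expected_p=0.5):
    """Minimum sample size to estimate a proportion within a given
    margin of error: n = z^2 * p * (1 - p) / e^2 (infinite population)."""
    return z ** 2 * expected_p * (1 - expected_p) / margin_of_error ** 2

# 95% confidence (z = 1.96), expected response rate 50%, as in the Methods
print(int(required_sample_size(0.05)))  # 384
print(int(required_sample_size(0.03)))  # 1067
```

Both figures match the Methods, confirming that the study used the infinite-population approximation (no finite-population correction).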
Survey of Intention to Undergo Screening During the COVID-19 Pandemic
We conducted an internet survey to assess selected measures of interest, that is, sex, age, place of residence, plans to undergo screening or EGD screening in 2020, results of previous screening, anxiety about undergoing screening due to the COVID-19 pandemic, concerns about undergoing EGD screening due to the COVID-19 pandemic, things to be concerned about if you have COVID-19, and whether you feel you are at risk of having gastric cancer (Multimedia Appendix 1).
Statistical Analyses
Continuous variables were compared between study groups using the t test (2-tailed). Categorical variables were compared using a chi-squared test. Logistic regression analysis was performed with anxiety regarding EGD screening due to the COVID-19 pandemic as the dependent variable. The independent variables included anxiety about viral infection control measures, waiting times, fees (medical expenses), mode of transportation, crowdedness, worry about own social position in case of contracting COVID-19, worry about own health in case of contracting COVID-19, worry about family member's social position in case of contracting COVID-19, worry about health risk to family members in case of contracting COVID-19, perceived risk of contracting gastric cancer, and residence in a declared emergency area.
All statistical analyses were performed using SPSS version 27.0 (IBM Corp). Statistical significance was set at P<.05.
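The chi-squared comparisons described above have a closed form for a 2×2 table and need no statistics package. The counts below are illustrative only (chosen to be consistent with the study's design of 618 men and 618 women and 817 anxious participants overall), not taken from the study's tables:

```python
def chi2_2x2(a, b, c, d):
    """Pearson chi-squared statistic (df = 1, no continuity correction) for a
    2x2 table [[a, b], [c, d]]:
    chi2 = n * (a*d - b*c)^2 / ((a+b) * (c+d) * (a+c) * (b+d))."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Hypothetical counts (not from the study's tables):
# rows = women / men, columns = anxious / not anxious about screening
stat = chi2_2x2(450, 168, 367, 251)
CRITICAL_05_DF1 = 3.841  # chi-squared critical value at alpha = .05, df = 1
print(round(stat, 2), stat > CRITICAL_05_DF1)
```

With these illustrative counts the statistic far exceeds the critical value, i.e. the sex difference would be significant at P < .05, in line with the reported finding that more women than men were anxious.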
Ethics Approval
This study was approved by the Ethics Committee of the National Institute of Public Health, Japan (NIPH-IBRA#12302, approval date: November 17, 2020). All participants provided informed consent for data collection and storage. Written informed consent for participation in the study was obtained at the time of registration.
Patient and Public Involvement Statement
Patients or the public were not involved in the design, conduct, reporting, or dissemination plans of our research.
Baseline Characteristics of Participants Concerning Anxiety About Screening Due to COVID-19
The background characteristics of the participants are shown in Multimedia Appendix 2. The average age of the participants was 49.4 (SD 16.5) years, with equal numbers in each 10-year age group and both sexes. Moreover, of the 1236 participants, 63.3% (n=783) resided in a declared emergency area. Furthermore, 66.1% (n=817) responded that they were anxious about undergoing screening due to the COVID-19 pandemic. There were more women than men in the group who were anxious about undergoing screening, but there were no significant differences in age or the percentage of people who resided in a declared emergency area (Table 1).
Participants who were anxious about receiving screening due to COVID-19 were significantly more likely to worry about their own health, the health risk of their family members, their own social position, or the social position of their family members if they had COVID-19 compared with those who were not anxious ( Figure 1).
Regarding the intention to change the screening appointment, the most common strategies were to book an appointment for a time during nonpeak hours, postpone the appointment to a later date, and change the mode of transportation. In addition, 35.8% (442/1236) of the participants were willing to cancel this year's checkup (Table 4).
Among the 1236 participants, 757 (61.2%) were scheduled for screening in 2020. In this subgroup of 757 participants, 68% (n=515) did not change the schedule, 6.1% (n=46) cancelled, and 26% (n=197) made some changes, such as booking an appointment for a time during the nonpeak hours, postponing the appointment to a later date, or changing the hospital or mode of transportation (Table 5).
Percentage of Anxiety Stratified by Previous Screening Result
The proportion of "anxiety about EGD due to the COVID-19 pandemic" responses was analyzed according to the results of the previous screening. Based on previous screening results, participants who had not undergone prior screening had the highest amount of anxiety about EGD screening due to the COVID-19 pandemic (52%). Participants who were judged as needing extended examination but did not go for it had the second highest rate of anxiety about EGD (44%) (Figure 2, section A). Participants who were judged as needing extended examination but did not go for further screening had the highest amount of anxiety about visiting the hospital due to the COVID-19 pandemic (84%). Participants who had not undergone prior screening had the second highest rate of anxiety about visiting the hospital (73%) (Figure 2, section B).
Feeling at Risk of Developing Gastric Cancer and Anxiety About EGD Screening Due to the COVID-19 Pandemic
We compared "anxiety about EGD screening due to the COVID-19 pandemic" between participant subgroups classified based on whether or not they felt at risk of contracting gastric cancer. There were 385 participants who felt that they were at risk of contracting gastric cancer, of whom 195 (50.6%) were anxious about EGD screening due to the COVID-19 pandemic. There were 851 participants who did not feel at risk of gastric cancer, of whom 315 (37.0%) were anxious about EGD due to the COVID-19 pandemic. The percentage of "anxiety about EGD screening" was significantly higher in the "feel the risk of contracting gastric cancer" group than in the "do not feel the risk of contracting gastric cancer" group (Figure 3).
Figure 3: Percentages of respondents with "anxiety about EGD screening" in the "feel the risk of contracting gastric cancer" and the "do not feel the risk of contracting gastric cancer" groups. EGD: esophagogastroduodenoscopy.
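The group difference above can be quantified from the counts given in the text (195 of 385 vs. 315 of 851). A minimal sketch computing the odds ratio with a Wald 95% confidence interval (this calculation is ours, not one reported by the authors):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio for a 2x2 table [[a, b], [c, d]] with a Wald 95% CI
    computed on the log scale: SE = sqrt(1/a + 1/b + 1/c + 1/d)."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Counts reported in the text: anxious / not anxious about EGD screening,
# split by perceived gastric cancer risk (385 vs. 851 participants)
or_, lo, hi = odds_ratio_ci(195, 385 - 195, 315, 851 - 315)
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")  # OR = 1.75 (95% CI 1.37-2.23)
```

Since the confidence interval excludes 1, perceiving a risk of gastric cancer is associated with higher odds of anxiety about EGD screening, consistent with the significant difference the authors report.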
Factors Associated With Anxiety About EGD Screening Due to the COVID-19 Pandemic
The factors associated with anxiety concerning EGD screening due to COVID-19 were examined using logistic regression analysis (Table 7). The following factors were related to anxiety regarding EGD screening due to the COVID-19 pandemic: "viral infection prevention measures," "waiting time," "fees (medical expenses)," "mode of transportation," "worry about my social position if I contracted COVID-19," and "perceived risk of gastric cancer."
Principal Findings
In this study, we conducted a web-based survey on the intention to undergo cancer and EGD screening. In total, 66.1% of participants responded that they felt anxious about undergoing screening owing to the pandemic. With respect to modality, the percentage of participants who felt anxious about screening was the highest for EGD. Factors associated with anxiety around EGD owing to the COVID-19 pandemic were "viral infection prevention measures," "waiting time," "fees (medical expenses)," "mode of transportation," "worry about my social position if I contracted COVID-19," and "perceived the risk of gastric cancer." However, residing in a declared emergency area was not associated with anxiety around EGD screening owing to the COVID-19 pandemic. According to a previous screening result, the percentage of "concerned about EGD due to the COVID-19 pandemic" was higher in the groups who had not undergone screening or who needed extended examination but did not undergo it.
The World Health Organization declared the COVID-19 pandemic on March 11, 2020. Plans were put in place to reserve capacity for the surge in COVID-19 clinical care, including the suspension of elective care. In Japan, the Ministry of Health, Labour and Welfare issued a notification, stating that in areas where a state of emergency has been declared, only mass screenings should be postponed during the period the emergency declaration is in effect, and that those who are unable to receive screenings due to postponement will be given another opportunity to receive screening. Hospitals and clinics reduced appointments for cancer screening and nonemergency care to prepare for the diagnosis and treatment of patients with COVID-19 and to prevent the spread of the infection during the periods of emergency declaration, that is, from April to May 2020, and again from January to March 2021. The Japan Cancer Society reported that the number of people receiving cancer screenings in 2020 decreased by 30.5% compared to 2019, and that the number of cancer diagnoses in 2020 was 9.2% lower compared to the previous year (2019). This suggests that the decrease in the number of cancer diagnoses can be attributed to the temporary suspension of cancer screening due to the COVID-19 pandemic and the decrease in the number of people receiving screening due to refraining from visiting hospitals and going outside. In Taiwan, the number of mammography screening examinations decreased in 2020, although the medical system was not disrupted due to the COVID-19 pandemic, likely due to the influence of the population's perceived risk on their willingness to attend screening [12]. In our survey, 66.1% of the participants felt anxious about undergoing cancer screening regardless of whether they resided in a prefecture where a state of emergency was declared. With the spread of COVID-19, the deterioration of public mental health has become a major global and social problem. 
A web survey conducted in August 2020 among Japanese participants revealed that 73.2% of the respondents experienced perceived stress related to the COVID-19 pandemic, 34.9% felt intense stress associated with COVID-19, 17.1% were depressed, and 13.5% had severe anxiety symptoms [13]. Therefore, the psychological burden caused by COVID-19 could have affected the intention to undergo screening.
Various factors such as sex, age, marital status, education, occupation and income, place of residence, contact history with patients with COVID-19, and comorbidities were associated with mental health problems such as stress, depression, and anxiety [14][15][16]. During the COVID-19 pandemic, psychiatric disorders such as depression and anxiety were more prevalent in women than in men [13,17,18]. In this study, more women than men were anxious about undergoing screening. Epidemiological sex differences in anxiety disorders and major depression are well characterized. Anxiety and major depressive disorders are more common in women than in men [19,20]. Besides psychological and cultural factors, biological factors contribute to these sex differences [21]. Therefore, it is likely that there are sex differences in anxiety about undergoing screening owing to COVID-19.
By modality, the percentage of participants who felt anxious due to the COVID-19 pandemic was highest for EGD, followed by colonoscopy. Malignant neoplasms are the leading cause of death in Japan. Colorectal cancer was the most common cancer type in 2018, followed by gastric cancer. In 2019, colorectal cancer was the second most common cause of cancer-related mortality, followed by gastric cancer. Delays in screening will increase the number of advanced cancers and deaths in the near future.
In a study investigating the effect of the COVID-19 pandemic on EGD screening in France, 98.7% of endoscopists had cancelled endoscopies, and 73.6% of them had closed the endoscopy outpatient clinic [22]. COVID-19 spreads primarily through droplets of saliva, although airborne transmission and fecal excretion have been documented [23,24]. Severe acute respiratory syndrome coronavirus 2 can survive in the air for several hours [25]. Health care professionals in endoscopy are exposed to COVID-19 through contact with saliva droplets on their face and in airways, via touch contamination, and through contact with a patient's stool [26,27]. Aerosol infections around endoscopes have also been reported, making EGD one of the major aerosol-generating procedures [28,29]. In EGD, where the risk of droplet diffusion and aerosol generation is high, careful measures, such as patient triage and thorough infection protection, are required [30]. Guidelines for endoscopy during the COVID-19 pandemic have been developed [31]. The Japan Gastroenterological Endoscopy Society has published a proposal on its website regarding gastrointestinal endoscopic care for COVID-19. In this survey, we did not ask about the risk for COVID-19 infection from aerosol in EGD, but it is hypothesized that the participants felt anxious about EGD because it is a face-to-face examination compared to other modalities.
In a Japanese study, the Comprehensive Survey of Living Conditions reported a 39% participation rate in gastric cancer screening in 2019. Cancer screening rates in Japan are lower than those in other countries, such as the United Kingdom and Korea. In this study, based on previous screening results, the percentage of participants concerned about EGD due to the COVID-19 pandemic was higher among those who had not undergone screening and those who needed further examination but did not go for it. It is a concern that those who do not undergo screening or visit hospitals for further examination will become increasingly reluctant to do so. In addition, one of the factors associated with EGD screening anxiety due to the COVID-19 pandemic was "perceived the risk of gastric cancer." These results suggest a decrease in the number of gastric cancer screenings and a delay in the detection of gastric cancer. Other factors associated with anxiety around EGD screening due to the COVID-19 pandemic were "viral infection prevention measures," "waiting time," "fees (medical expenses)," and "mode of transportation." Medical institutions and the government must reassure citizens by informing them that appropriate infection prevention measures are being taken during cancer screening.
This study had several limitations. First, we used an internet panel survey company to collect data. While we could obtain responses regarding a wide range of demographic factors such as age, occupation, and income, these groups were not representative of the general population in Japan. However, web surveys have recently become a common method for conducting studies [32,33]. Second, the spread of infection changes daily and varies across regions; however, the survey did not consider this effect. Third, because we did not ask respondents whether they ever had COVID-19, we do not know the effect of the respondents' personal experiences with previous infection on their anxiety. Finally, the cross-sectional design of this study made it difficult to assess causality.
Conclusions
This is the first survey-based study to examine the effects of the COVID-19 pandemic on the intention to undergo cancer screening. Most participants were anxious about undergoing screening owing to COVID-19 regardless of whether they resided in a prefecture where a state of emergency was declared, and the percentage of anxiety was higher for EGD than for other modalities. "Viral infection prevention measures," "waiting time," "fees (medical expenses)," "mode of transportation," "worry about my social position if I contracted COVID-19," and "perceived the risk of gastric cancer" were associated with anxiety about EGD screening owing to the COVID-19 pandemic. Excessive anxiety about COVID-19 leads to serious outcomes such as delayed detection of cancer and increased cancer-related deaths. Thus, it is necessary to thoroughly implement infection prevention measures and provide correct information to examinees.
|
v3-fos-license
|
2018-04-03T00:21:39.797Z
|
2017-07-19T00:00:00.000
|
3716706
|
{
"extfieldsofstudy": [
"Geography",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://academic.oup.com/jpubhealth/article-pdf/40/2/389/25175171/fdx074.pdf",
"pdf_hash": "44e1df2ff8ce0135c3fc45a758e5b3558888dc1b",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2338",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "44e1df2ff8ce0135c3fc45a758e5b3558888dc1b",
"year": 2017
}
|
pes2o/s2orc
|
Distribution of optometric practices relative to deprivation index in Scotland
Abstract Background The UK National Health Service aims to provide universal availability of healthcare, and eye-care availability was a primary driver in the development of the Scottish General Ophthalmic Services (GOS) model. Accordingly, a relatively equal distribution of optometry practices across socio-economic areas is required. We examined practice distribution relative to deprivation. Methods 672 practices were sampled from nine Health Boards within Scotland. Practices were assigned a deprivation ranking by referencing their postcode with the Scottish Index of Multiple Deprivation (SIMD) tool (Scottish Executive National Statistics: General Report. 2016). Results Averaged across Health Boards, the share of practices for the five deprivation quintiles was 25, 33, 18, 14 and 11% from most to least deprived area, respectively. Although there was some variation of relative practice distribution in individual Health Boards, 17 of the 45 regions (nine Health Boards, five quintiles) had a close balance between population and share of practices. There was no clear pattern of practice distribution as a function of deprivation rank. Analysis revealed good correlation between practice and population share for each Health Board, and for the combined data (R2 = 0.898, P < 0.01). Conclusion Distribution of optometry practices is relatively balanced across socio-economic areas, suggesting that differences in eye-examination uptake across social strata are unrelated to service availability.
Introduction
Suggested by Hart in 1971 and referred to as 'conventional wisdom' 1 in the 21st century, the inverse care law 2 states that those who need healthcare the most are the least likely to receive it. In deciding where to practice, a general practitioner (GP) will take into account their expected income and the availability of local amenities (cultural and otherwise). Consequently, GPs in England gravitate toward areas with higher health needs, but also 'lower deprivation levels, a more pleasant environment and higher levels of amenities'. 3 Recent evidence indicates that a similar proclivity for practice in less deprived areas may exist amongst optometrists based in England. 4 Based on this one might expect an under-provision of optometric practices in areas of higher deprivation in Scotland, particularly when one considers that the inverse care law is thought to apply most rigorously in situations exposed to market forces 2 (in this case, spectacle sales). Alternatively, the higher eye-examination fee available to practitioners in Scotland (compared to England) may result in a more even distribution of optometry practices across socio-economic groups. The current study tested these two hypotheses by analysing the provision of optometric practices across different deprivation strata in Scotland.
In 2006, the Scottish Government introduced free National Health Service (NHS) eye examinations to the Scottish population. The motivation was to make more efficient use of resources by shifting aspects of eye-care from general practice and hospitals to community optometrists, whilst improving uptake for eye examinations. 5 These changes have been successful in generating increased uptake in Scotland, 6 however, concerns have been raised about levels of uptake within deprived socio-economic groups. Reports suggest that individuals attending for eye examinations increased disproportionately amongst higher income groups and the most educated. 6 The General Ophthalmic Services (NHS-funded) eye-examination should present equitable benefit across socio-economic strata but inequality in uptake could result from a number of factors, including a distorted distribution of optometry practices across Scotland. As the business model of optometric practice often incorporates spectacle sales, it has been suggested that a high number of practices are situated within affluent areas with under-representation at the other end of the socio-economic spectrum. 7 Although such a trend has been observed in Northern England, 4 the higher examination fee available in Scotland should foster a more equitable distribution of optometry practices across socio-economic groups, as the business model is less reliant on private eye-care and spectacle sales. 8 To determine if uneven uptake relates to a bias favouring practices in less deprived areas, we analysed the distribution of practices across deprivation levels in nine Scottish Health Boards.
Methods
Addresses of all optometric practices were obtained from nine Health Boards in Scotland representing 91% of its overall population (collated January 2015). Scottish Government estimates 9 of population and geographic extent (area in hectares) of each Health Board can be found in Table 1. Businesses that solely provide domiciliary services were excluded, as location is not representative of the geographical scope of their service provision.
Practice postcodes were converted into deprivation scores. The SIMD 10 is the Scottish Government's tool used to identify areas subject to deprivation, enabling a deprivation score to be assigned to any postcode. Ranking is defined by employment, income, health, education, geographic-access to services, crime and housing. The lower the score, the more deprived the area. We used the tool to assign every practice to a quintile from 1 to 5, with Quintile '1' representing the most deprived postcodes in Scotland; for reasons of clarity, Quintiles 1 and 2 are referred to as the 'most deprived' and 'second most deprived', whereas Quintiles 4 and 5 are referred to as the 'second least deprived' and 'least deprived' in this study. Unlike a pre-existing study of practice distribution in Tayside, 7 the use of mean deprivation scores was avoided. The use of mean deprivation scores is problematic since they do not rank on a linear scale; a Data Zone with a score of 50, for example, is not twice as deprived as a zone with a score of 100.
The distribution of practices was analysed using the percentage of practices within a given quintile and Health Board relative to the total number of practices in that quintile at a macro level (i.e. encompassing the nine Health Boards).
To estimate the population within different socio-economic areas, we calculated the respective number of Data Zones. Data Zones (as defined by the Scottish Government) are population-based areas, each containing around 750 residents. In urban areas, they can contain only a handful of streets, whereas more rural Data Zones can describe areas many square miles in size. As with optometric practices, we expressed the distribution of the population as the percentage of Data Zones in a quintile of a given Health Board relative to the total number of Data Zones in that quintile. This allowed us to compare each Health Board's 'share' of practices with that Health Board's share of population within individual quintiles, e.g. comparing Lothian's share of practices based in the most deprived quintile with Lothian's share of population (Data Zones) residing in that quintile. Data are presented subsequently as the ratio between the two shares: the percentage of practices divided by the percentage of Data Zones, for each deprivation score and Health Board. A quotient of one represents equality, where the share of practices matches the population share. A value below one suggests a relative lack of practices, a value above one an oversupply of practices. This shows any inter-locality (in)equality with regards to ophthalmic service provision in Scotland.
The concept of a percentage share of geographical areas follows Government guidelines. 10 These state that, in an area comprising 300 Data Zones, if 30 zones are defined as belonging to the most deprived quintile, 10% of that area can be considered to fall within this quintile. Scottish Government deprivation ranking does not take accessibility of optometric practices into account: this means that our 'share of practices' variable will not co-vary with ranking, bolstering the validity of our conclusions.
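The share ratio described in the Methods reduces to a few lines of arithmetic. The following sketch uses invented practice and Data Zone counts purely for illustration; the board names and figures are hypothetical, not the study's data:

```python
# Sketch of the practice-share / population-share ratio described above.
# All counts are hypothetical, for illustration only.

def pct_share(counts, board):
    """Percentage share of one Health Board's count out of the quintile total."""
    return 100 * counts[board] / sum(counts.values())

# Hypothetical counts for one deprivation quintile, keyed by Health Board.
practices = {"Board A": 30, "Board B": 20, "Board C": 50}
data_zones = {"Board A": 40, "Board B": 20, "Board C": 40}

for board in practices:
    ratio = pct_share(practices, board) / pct_share(data_zones, board)
    # ratio near 1: balanced; below 1: relative lack of practices; above 1: oversupply
    print(f"{board}: {ratio:.2f}")
```

With these made-up figures, Board A comes out at 0.75 (a relative shortfall of practices) and Board C at 1.25 (an oversupply), mirroring the interpretation of the quotient described above.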
Results
Part 1: Distribution of optometric practices across Scotland

Figure 1 illustrates the percentage of practices and the percentage of population split into each of the five quintiles (1 = most deprived; 5 = least deprived). The data are averaged across the nine Health Boards. As expected, the proportion of population falling into each quintile is close to 20%. The percentage of practices shows an oversupply for the two most deprived quintiles (25 and 33%, respectively) and an under-supply for the two least deprived quintiles (14 and 11%, respectively). There are more practices available to the lower end of the socio-economic spectrum in Scotland than would be the case if practice provision was distributed exactly equally.

Figure 2 gives an indication of how many individuals are provided for by each practice within any given quintile within a Health Board. The population within the five quintiles of each Health Board was approximated by multiplying the number of Data Zones by the average population per zone (750 residents). This figure is then divided by the number of practices found in the same quintile of said Health Board to provide a guide to the number of people each practice is providing for.
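The per-practice population estimate just described is a single multiplication and division; a minimal sketch (hypothetical counts, not the study's figures) might look like:

```python
AVG_RESIDENTS_PER_ZONE = 750  # the Scottish Government's nominal Data Zone size

def population_per_practice(n_data_zones, n_practices):
    """Approximate residents served per practice in one quintile of a Health Board."""
    if n_practices == 0:
        # Some least-deprived rural quintiles contain no practices at all.
        return float("inf")
    return n_data_zones * AVG_RESIDENTS_PER_ZONE / n_practices

# Hypothetical quintile: 215 Data Zones served by 20 practices
print(population_per_practice(215, 20))  # 8062.5
```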
In Fig. 2, the median number of patients supported by a practice in the nine Health Boards is 8063 (mean ± SD = 9618 ± 7333). As shown by the obvious outliers in the two least deprived quintiles (Dumfries in Quintile 4; Forth Valley and Tayside in Quintile 5), rural areas are more likely to be home to optometry practices which serve greater patient volumes (since there are very few practices in these areas).

Figure 3 illustrates the percentage share of each quintile's practices and Data Zones at a national level contained within individual Health Boards. These figures enable one to assess inequalities between Scottish Health Boards in terms of provision within a given deprivation quintile, for example the most deprived quintile (Quintile 1) across Scotland. Presenting data in this way allows a direct comparison of (in)equalities of eye-care provision across Health Boards separated by deprivation. For example, Fife shows an under-provision of optometric practices (black bar smaller than grey bar) in the most deprived quintile whereas Ayrshire shows an over-provision.

A correlation analysis assessed the relationship between practice share and population share in each quintile (all Health Boards combined). This examines if, for example, a higher share of Data Zones (population) within a quintile corresponds with a higher share of practices in that quintile. Table 2 shows a significant relationship between practice and population share (P < 0.05; R2 between 0.791 and 0.981). For each quintile, the percentage share of practices is highly correlated with the percentage share of Data Zones. The relationship between practice share and Data Zone share holds across the five quintiles (R2 = 0.898, P < 0.001). This shows that for each quintile, areas with a higher population enjoy a higher share of practices.
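The quintile-level correlations reported in Table 2 are ordinary Pearson correlations between the two share variables. A sketch with invented share percentages (not the study's data) shows the computation:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical percentage shares for five Health Boards within one quintile
# (invented numbers, not the study's data).
practice_share = [18.0, 11.0, 25.0, 30.0, 16.0]
zone_share = [11.0, 12.0, 27.0, 33.0, 17.0]

r = pearson_r(practice_share, zone_share)
print(f"R2 = {r ** 2:.3f}")  # a high R2 means practice share tracks population share
```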
Part 2: Distribution of optometric practices within Health Boards

Table 3 shows data for deprivation quintiles within Health Boards, presenting the ratio between the percentage share of practices and the percentage share of population in that quintile relative to the entire Health Board. It should be noted that the data used in Table 3 are not the same as those in Fig. 3. Figure 3 illustrates the percentage share of each quintile's practices and Data Zones at a macro level (across all nine Health Boards). For example, of all the practices/zones in Scotland which fall within the most deprived quintile, the data show the percentage that are found in e.g. Ayrshire (18 and 11%, respectively). Here, Table 3 illustrates the share within Health Boards. For example, of all the practices/Data Zones in Ayrshire, the table considers the percentage found in the most deprived quintile. These percentages are then used to derive the practice/Data Zone ratio shown in the table. In contrast to the results presented in Part 1, data in the analysis here will not be affected by national trends in the distribution of practices or population across quintiles.
A quotient of '1' represents a 1 to 1 correspondence between the two factors; i.e. a balanced distribution of practices. Values greater than '1' indicate that there are more practices than Data Zones (population) in terms of percentage share, with values below '1' indicating the opposite. In Tayside, for example, there is an ~30% under-representation of optometric practices within the most deprived quintile (quotient 0.69) but a preponderance of practices within the second most deprived quintile (quotient 3.44, i.e. ~3.4 times more practices than equality). Individual cells were coloured to reflect this: areas with an essentially balanced share of practices and population (0.8 ≤ ratio ≤ 1.2) are presented on a grey background. Those with a lower practice than population share are set on a hot-coloured background (red < 0.50, yellow 0.50-0.79), those with an over-representation of practices on a cold background (green 1.21-1.50, blue > 1.50). The font size of the values in each cell reflects population size within that area, with larger font sizes denoting Health Boards with larger populations (for example, Greater Glasgow and Clyde is home to >240 000 people living in the most deprived quintile and this is indicated by using the largest font size in the pertinent cell of Table 3).
It is evident from the Table that a large number (N = 17) of areas fall close to the value that would be expected from a balanced practice distribution (between 0.8 and 1.2). However, many areas are either substantially under- or over-represented. Values range from 3.44 in the most deprived quintile in Dumfries and Galloway (around 3.4 times more practice share than population share) to 0 (no practices) in the least deprived areas of Ayrshire & Arran and Dumfries & Galloway.
Although not all Health Boards follow the same pattern, the most frequent trend is over-representation of practices in the two most deprived quintiles and under-representation in the least deprived quintiles. Seven out of nine Health Boards have the highest concentration of practices in one of the two most deprived quintiles (exceptions being Forth Valley and Lothian) and all nine have the lowest concentration in one of the two least deprived quintiles. Average ratios (across Health Boards, bottom row in Table 3) reflect this pattern, with ~50% (37-64%) more practice share than population share in the two most deprived quintiles and ~42% (29-55%) less practice share than population share in the two least deprived quintiles.
Discussion
Main findings of this study

While there is inevitable variation across Health Boards in Scotland, our results show a largely equitable distribution of optometry practices across strata of deprivation (Fig. 1), with the share of practices in each quintile correlating highly with the share of Data Zones when analysed at a national level (Table 2). This evidence suggests that the eye-care funding model in Scotland enables optometry practices to function in all socio-economic areas. These findings contrast with reports that optometry practices in Leeds, England are concentrated within the least deprived areas. 8 At a local level, individual Health Board results (Table 3) show that practice distribution is not entirely even but is not linked to socio-economic scale: for example, within the most deprived quintile, Fife, Grampian, Lothian and Tayside have a density of practice distribution that falls below their respective population shares. In contrast, Ayrshire and Arran, Dumfries and Galloway, Forth Valley, Greater Glasgow and Clyde, and Lanarkshire all have an over-representation of practices in the most deprived quintile. An under-representation in this quintile does not appear to be indicative of a general trend in more deprived areas: all four Health Boards which have an under-representation in the most deprived quintile have an over-representation in the second most deprived quintile. The largest urban area in Scotland (Greater Glasgow and Clyde) has the highest ratios (greatest numbers of practices relative to population) in these two quintiles. This may result from a disproportionate number of practices in city-centre locations coupled with low socio-economic ranking of associated postcodes. Importantly, Table 3 indicates that optometric practice distribution is not skewed away from the most deprived quintiles. The average ratio for the two most deprived quintiles for each Health Board is above 1 (range 1.0-2.29; mean ± SD = 1.50 ± 0.41).
For the two least deprived quintiles, only one cell contains a ratio ≥1. Our results for Tayside and an earlier study 7 agree that a low practice density is found in the most deprived quintile. However, this information does not provide the full picture of optometry practice distribution in Tayside, or in Scotland in general. Table 3 shows that the greatest optometry practice density is in the second most deprived quintile, whilst the two least deprived quintiles possess a lower practice density than the most deprived. This suggests that practice distribution in Tayside is not concentrated in the least deprived areas.
What is already known on this topic

A recent study examining the location of optometric practices in the Tayside area found an inequality in terms of optometric provision, concluding that the most deprived areas are home to the lowest numbers of practices. 7 Similar results have been reported in the large metropolitan area of Leeds, England. People aged over 60 or under 16 from the least deprived quintile were more likely to attend for an eye-examination than persons in the same groups from the most deprived quintile (71 and 23%, respectively). 8 As eye examinations are not generally free in England (unless the patient is over 60 or under 16), these age groups are apt for comparison to the Scottish data. Geographical distance to an optometric practice was suggested as one of the reasons for the lack of uptake by the most deprived quintile, and previous work in Leeds has shown a mismatch between the most deprived areas and the locations of optometry practices. 4 Since one would expect lower rates of rent in deprived areas to present some form of commercial incentive to any prospective practice-owner, evidence such as this suggests that other drivers incentivise the lean toward a preponderance of optometric provision in less deprived areas.
What this study adds
Prior to this study, the only evidence describing the distribution of optometric practices in Scotland relative to deprivation was limited to a small-scale study 7 of a single quintile within a single Health Board (Tayside). Since the exclusive scope of the Tayside study presented a snapshot of information, which is not representative of wider practice distribution trends, it is important to be aware of the bigger picture in Scotland. Although our data agree that there is a low practice density in the most deprived quintile in Tayside, our broader findings suggest that any inequality in eye-care uptake between socio-economic groups in Scotland is unlikely to result from availability of services.
The finding that population share and optometric practice share correlate across quintiles suggests that NHS funding for sight tests in Scotland may be helping to facilitate the on-going commercial viability of practices in more deprived areas, despite the likely shortfall in sales of more profitable optical appliances. Lower rates for commercial property rental in deprived areas may also help such facilitation, although previous evidence from England 4 (where the NHS only funds sight tests for those under 16 or over 60 years of age) suggests that this factor alone is not a sufficiently compelling driver to incentivise an equitable distribution of practices.
Contrary to some earlier reports on imbalances affecting individual parts of Scotland, a wider view shows a largely balanced provision of optometric practices across different socio-economic groups. Any difference in the uptake of eye examinations across social strata can, therefore, not be explained on the basis of optometric practice availability.
Limitations of this study
The distribution of optometry practices is only one metric to inform on uptake and our conclusions are based on the assumption that individuals accessing services reside in the Data Zone where they attend for an eye-examination. However, there is evidence to suggest that patients are more likely to attend a practice in proximity to their home. Previous work in Tower Hamlets (London) found that eye-examination attendance drops sharply with a domicile-to-practice distance as short as 0.8 km. 11 It should, however, be noted that this example may not be representative of trends in less deprived and rural areas. Undoubtedly some patients who live outside the city-centre will access eye-care services there, either due to proximity to their place of work or proximity to retail outlets. In Greater Glasgow and Clyde, city-centre postcodes (e.g. G1 or G2 postcodes) are not all defined by the same level of deprivation, with postcodes commonly belonging to Quintiles 2, 3 or 4. Such examples of city-centre variance in deprivation levels complicate the interpretation of the data presented here.
Two other reasons for a shortfall in uptake in eye-care between socio-economic groups have been proposed: the first is a social bias whereby individuals from more deprived social strata have a lower propensity to attend for eye examinations, perhaps explained by the cost of optical appliances, 12 and a perceived pressure to buy glasses. 13 The second is a lack of awareness regarding the availability of free eye examinations in some groups. 12 Both reasons suggest that any inequality between socio-economic groups may be related to the utilization rather than the provision of the eye-care system. Therefore, it is important that the eye-care sector works together to improve awareness of the system and encourage uptake in all socio-economic groups. From a Scottish perspective this goal does not appear to require the establishment or re-distribution of optometric practices.
Funding
This work was supported by the Visual Research Trust (Scotland) and Optometry Scotland. The funders had no role in study design, collection, analysis and interpretation of data, writing of the report or decision to submit the paper for publication.
|
v3-fos-license
|
2022-05-10T15:18:56.953Z
|
2022-04-30T00:00:00.000
|
248606855
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://doi.org/10.1080/02705060.2022.2072008",
"pdf_hash": "8d99b397f8c946ccbac4fda128029c5f7bb5453b",
"pdf_src": "TaylorAndFrancis",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2339",
"s2fieldsofstudy": [
"Environmental Science"
],
"sha1": "7599c607a78466810c1743ee5cc77b19d9a667c6",
"year": 2022
}
|
pes2o/s2orc
|
Channel catfish and freshwater drum population demographics across four large Midwestern rivers
Abstract Channel catfish (Ictalurus punctatus) and freshwater drum (Aplodinotus grunniens) are two commercially and recreationally important species in large rivers of the Midwestern United States. Understanding their population demographics is essential to managing sustainable populations. In this study, we determined and compared the size structure, individual growth, and mortality estimates of channel catfish and freshwater drum among the Illinois River and sections of the Mississippi, Ohio, and Wabash rivers to provide a current baseline for managing these populations. Results suggest that both fishes differed in size structure among rivers. Compared to all other rivers, the Mississippi River freshwater drum growth rate was the highest and the theoretical maximum length was the lowest, and the Ohio River annual mortality was lowest. Channel catfish growth did not differ among rivers, but annual mortality was significantly higher in the Mississippi River compared to the Wabash River. Given the importance of these two fishes, better understanding of their population demographics in these systems is essential to improving current and future fisheries management programs.
Introduction
Channel catfish (Ictalurus punctatus) and freshwater drum (Aplodinotus grunniens) are two species commonly found in North American rivers, including in the Mississippi and Ohio river basins of the Midwestern United States (Michaletz and Dillard 1999). They are long-lived, benthic generalists and are recognized as recreationally and commercially important species (Pitlo 1997; Michaletz and Dillard 1999; Maher 2019). Channel catfish is a highly sought-after game fish comprising about one third of the United States' commercial freshwater fish harvest (Quinn 2011), and recreational angling and interest in management of channel catfish populations continues to increase throughout the United States (Arterburn et al. 2002; USFWS 2011, 2016). Freshwater drum is listed as a non-game fish in most states but is frequently harvested recreationally and commercially because of its abundance and potential to reach a large body size (Becker 1983; Maher 2019). Management of channel catfish in large river systems is typically governed by length-based harvest regulations, but freshwater drum is largely unregulated (Michaletz and Dillard 1999). Despite the popularity and prevalence of these two fishes, limited population demographic data (i.e. recruitment, growth, and mortality), essential to the characterization and management of fish populations (Guy and Brown 2007), exist for some populations of channel catfish (Michaletz and Dillard 1999) and freshwater drum inhabiting large river systems (Blackwell et al. 1995).
The literature on population demographics of channel catfish in large floodplain rivers of the Midwestern USA and neighboring states is more abundant and has increased considerably in the second half of the twentieth century and early twenty-first century (Kwak et al. 2011; Porath et al. 2021) compared to freshwater drum (Blackwell et al. 1995; Jacquemin and Pyron 2013). Multiple articles have documented the capture efficiency and size selectivity of sampling gears as well as age estimation techniques used to calculate catfish (order Siluriformes) population demographics in both lotic and lentic habitats (see Kwak et al. 2011). In general, channel catfish growth has been found to be variable within systems (Barada and Pegg 2011; Sindt 2021) and across different systems and regions (Hubert 1999; Rypel 2011; Stewart et al. 2016), but growth does follow a latitudinal gradient, with northern populations having faster growth rates (Rypel 2011). Similar to growth, channel catfish mortality estimates for large rivers can be highly variable, affected by multiple factors such as harvest regulations and pressure (e.g. Oliver et al. 2021). For example, three articles on the Wabash River estimated that channel catfish annual mortality ranged from 18-35% (Colombo et al. 2008; Donabauer 2009; Colombo et al. 2010), which was lower than the commercially harvested Middle Mississippi River at 43% (Bueltmann and Phelps 2015) and sections of the non-commercially harvested Missouri River ranging from 46-56% (Eder et al. 2016; Hamel et al. 2021). The size structure of channel catfish populations, which reflects the interaction of population demographics, is highly specific to sampling gear and habitat, but length distributions or mean lengths are often reported in association with other population demographics and in examinations of harvest regulations (e.g. Barada and Pegg 2011; Eder et al. 2016).
Population demographics of freshwater drum in large rivers are understudied (especially relative to channel catfish), with published descriptions of population demographics existing only for growth (Butler 1965; Rypel et al. 2006; Rypel 2007; 1992-1996 data, Abner and Phelps 2018) and length distribution (Rypel 2007).
Within four large river systems that dominate the Midwestern landscape (i.e. the Illinois, Mississippi, Ohio, and Wabash rivers), there are limited recent (2000s) population demographic studies that can be used to inform harvest regulations and monitoring decisions for both channel catfish and freshwater drum (see Colombo et al. 2008, 2010; Abner and Phelps 2018; DeBoer et al. 2021; Gainer et al. 2021; Oliver et al. 2021). These large rivers also span multiple jurisdictions, leading to differences in recreational and commercial harvest regulations for channel catfish; there are no harvest regulations for freshwater drum (Table 1). Therefore, there is a need to establish a baseline for understudied populations and provide more recent assessments of population demographics, including channel catfish size structure in the Illinois and Ohio rivers, growth in the Mississippi River, and mortality in the Illinois and Mississippi rivers. For freshwater drum, estimates of growth and mortality are needed for all four rivers and size structure for the Illinois, Ohio, and Wabash rivers. Additionally, the determination of current channel catfish and freshwater drum population demographics for these four large rivers would add to the growing body of literature enabling inter-river comparisons for evaluating and applying various fishery management options (e.g. harvest regulations, Oliver et al. 2021) and river conservation approaches (e.g. varying hydrological patterns, Erickson et al. 2021; Sindt 2021) to improve and conserve these fishes.
Long-term monitoring programs are effective at providing ecological system descriptions and temporal trends over decadal timescales (Carpenter 1998; Lindenmayer et al. 2012), but importantly, the existing data collection framework of long-term monitoring programs can be leveraged to gain insight into additional ecological questions within or across basins (Lindenmayer et al. 2012; Counihan et al. 2018). Long-term data sets have recently become more accessible in many countries worldwide; however, few standardized studies (i.e. long-term monitoring and assessment programs) are conducted on large floodplain river systems. The Long-term Survey and Assessment of Large River Fishes in Illinois (LTEF, historically known as the Long-Term Electrofishing project) is a standardized electrofishing sampling program that began in 1957 on the Illinois River, USA, and expanded to include pools of the Upper Mississippi River (UMR) in 2009 and pools of the Ohio River and portions of the lower Wabash River in 2010. The existing data collection format of the LTEF program provided an opportunity to address knowledge gaps related to channel catfish and freshwater drum population demographics across these four large Midwestern river systems.
In this paper, we used data collected by the LTEF program during a two-year period to quantify population demographics (i.e. size structure, individual growth, and mortality) of channel catfish and freshwater drum from the Illinois River and portions of the Mississippi, Ohio, and Wabash rivers. Our objectives were to: 1) document and assess population demographics over a large geographical area; 2) identify whether differences exist in freshwater drum and channel catfish population demographics among four large Midwestern rivers; and 3) examine channel catfish population demographics in relation to commercial and recreational fishing regulations.
Study area
A total of twenty-two reaches of four large floodplain rivers, the Illinois, Mississippi, Ohio, and Wabash, were sampled in this study (Brown et al. 2005; Delong 2005; Pyron et al. 2020; White et al. 2005; Figure 1; Table 2). The Illinois River flows approximately 439 river kilometers (rkm) through five lock and dam structures (six navigation pools) from the confluence of the Des Plaines and Kankakee rivers, 77 km southwest of Chicago, Illinois, to Grafton, Illinois, where it meets the UMR (Delong 2005). The UMR is defined as the 1,374 rkm section of river between St. Anthony Falls, Minnesota and Cairo, Illinois and consists of impounded reaches and open-river reaches. In this study, the impounded UMR reaches bordering the state of Illinois from Rock Island, Illinois to Winfield, Missouri were sampled (navigation pools 16-17, 19-21, and 25), along with two open-river reaches that flow from the Missouri-Mississippi river confluence to the Ohio-Mississippi river confluence. These large rivers are impacted by aquatic invasive species in all river sections sampled, such as bigheaded carps (silver carp Hypophthalmichtys molitrix and bighead carp H. nobilis), common carp (Cyprinus carpio), and zebra mussels (Dreissena polymorpha), although in differing densities (Angradi et al. 2011). Additionally, each river has a history of modifications at varying levels, such as land-use practices, urbanization, hydrological alterations, and restoration and conservation (Brown et al. 2005; Delong 2005; White et al. 2005; Pyron et al. 2020). Midwestern large river ecosystems, including these rivers, have had complex responses to both anthropogenic impacts and restoration and conservation efforts (DeBoer et al. 2019, 2022; Pyron et al. 2020).
Data collection
Channel catfish and freshwater drum were surveyed in 2017-2018 through collaboration between Eastern Illinois University, the Illinois Natural History Survey's Illinois River Biological Station and Great Rivers Field Station, Southern Illinois University, and Western Illinois University under the LTEF program. This program follows a standardized direct-current (DC) electrofishing protocol sampling across three, six-week periods from June 15-October 31 (see Fritts et al. 2017). Effort per river is allocated randomly: one sample site (one 15-min electrofishing run) per five river miles (see DeBoer et al. 2017; Moody-Carpenter et al. 2020). All fishes were measured for total length (mm) in the field. For ageing, freshwater drum sagittal otoliths (hereafter 'otoliths') were removed and one pectoral spine per channel catfish was disarticulated in the field before releasing the channel catfish alive (Colombo et al. 2010). Channel catfish <250 mm and freshwater drum <200 mm were not collected for this study due to electrofishing size selectivity (Buckmeier and Warren Schlechte 2009; Reynolds and Kolz 2012).

[Table 2 caption: Pools and reaches of the Illinois, Mississippi, Ohio, and Wabash rivers sampled by LTEF pulsed-DC electrofishing surveys during 2017-2018 with river kilometers (RKM), the number of sampling locations within each sample region (N), whether each pool/reach is impounded by a lock and dam (Y/N), and mean (± SE) annual discharge with gauging station.]
In the laboratory, all freshwater drum otoliths and channel catfish spines were fixed in Envirotex Lite epoxy (Environmental Technology, Inc., Fields Landing, CA). Freshwater drum otoliths and channel catfish spines were sectioned with a Buehler Isomet low-speed saw (Illinois Tool Works (ITW), Lake Bluff, IL) into 800-1,000 μm sections. Otoliths were sectioned along the transverse plane through the nucleus, and spines were sectioned at the basal processes. Sectioned otoliths and spines were then coated with pure glycerin and photographed for ageing with a Leica DMC2900 camera mounted on a Leica S8AP0 microscope (Leica Biosystems Inc., Buffalo Grove, IL). Otolith and spine images were independently aged by counting annuli for each fish by two experienced readers to reach a consensus age, with a third reader consulted for any disagreements.
Statistical analyses
Data from individual rivers in 2017 and 2018 were pooled across years for all analyses for each species to increase sample sizes. Size structure (i.e. distribution of lengths) differences were assessed using length-frequency histograms for each fish species and river combination (Neumann and Allen 2007). Kolmogorov-Smirnov tests (K-S test) were used to determine whether the distribution of lengths differed between river combinations within species, and p-values were adjusted with a Bonferroni correction for multiple comparisons (Neumann and Allen 2007).
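The two-sample K-S statistic used in these comparisons is simply the maximum vertical distance between the two empirical length CDFs, and the Bonferroni adjustment multiplies each raw p-value by the number of comparisons (capped at 1). The study performed these tests in R; the Python sketch below, with invented toy length data, only illustrates the underlying calculations:

```python
import bisect

def ks_statistic(sample_a, sample_b):
    """Two-sample Kolmogorov-Smirnov D: the maximum absolute distance
    between the empirical CDFs of the two samples."""
    a, b = sorted(sample_a), sorted(sample_b)
    points = sorted(set(a) | set(b))
    ecdf = lambda s, v: bisect.bisect_right(s, v) / len(s)
    return max(abs(ecdf(a, v) - ecdf(b, v)) for v in points)

def bonferroni(p_value, n_comparisons):
    """Bonferroni adjustment: multiply by the number of comparisons, cap at 1."""
    return min(1.0, p_value * n_comparisons)

# Toy total-length samples (mm): the second sample is shifted toward larger fish
d = ks_statistic([300, 320, 340, 360], [340, 360, 380, 400])
```

With six pairwise river comparisons per species, a raw p-value of 0.02 would become 0.12 after adjustment and no longer be significant at 0.05.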
The von Bertalanffy growth function was used to model fish growth for each species by river, Lt = L∞(1 − e^(−K(t − t0))), where Lt is total length (mm) at age t (years), L∞ is the theoretical maximum total length of an individual within the population, K is the growth coefficient for the population, and t0 is the theoretical age when total length is zero (von Bertalanffy 1938; Isely and Grabowski 2007). Mean growth coefficient estimates (i.e. L∞, K, and t0) and their 95% confidence intervals were used to identify differences in these parameters among rivers for each species, based on non-overlapping 95% confidence intervals.
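In code, the growth function is a simple mapping from age to predicted length given the three coefficients. The study fit the model in R (FSA and nls()); the sketch below just evaluates the function with made-up coefficients for illustration:

```python
import math

def vbgf(t, l_inf, k, t0):
    """von Bertalanffy growth function: predicted total length (mm)
    at age t (years), given L-infinity, K, and t0."""
    return l_inf * (1.0 - math.exp(-k * (t - t0)))

# Hypothetical coefficients (not the study's estimates)
L_INF, K, T0 = 900.0, 0.15, -0.5
length_at_age_5 = vbgf(5.0, L_INF, K, T0)  # length approaches L_INF as t grows
```

Note that predicted length is zero at t = t0 and asymptotically approaches L∞, which is why non-overlapping confidence intervals on L∞ and K indicate genuinely different growth trajectories.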
Instantaneous total annual mortality rate (Z; 1/year) and annual mortality (A) were calculated for each river using the weighted catch-curve regression method (Miranda and Bettoli 2007; Ogle 2016), where the slope of a regression line is fitted to the descending limb of a weighted catch-curve using age-frequency data obtained from aged samples for each river. The ascending left limb and dome of the catch curve represent age-groups that are under-sampled by the sampling gear and were not used in each analysis. The channel catfish catch-curve analyses included age-5 and older for the Illinois and Mississippi rivers and age-4 and older for the Ohio and Wabash rivers. For the freshwater drum catch-curve analyses, age-1 and older were used for the Illinois and Mississippi rivers and age-2 and older were used for the Wabash and Ohio rivers. A one-way analysis of covariance (ANCOVA) was used to identify differences in the instantaneous total mortality rate, derived from the negative slope of the descending limb of the catch-curve analyses, between two rivers at a time for each species (Pope and Kruse 2007; Ogle 2016). To compensate for multiple comparisons, the Bonferroni correction method was applied to the p-values (Ogle 2016). Both channel catfish and freshwater drum data used in mortality analyses met assumptions of normality and homogeneity of variances based on residual plots.
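The catch-curve calculation reduces to regressing the natural log of catch-at-age on age over the descending limb; Z is the negative of the slope and annual mortality is A = 1 − e^(−Z). A rough Python sketch of the weighted variant (the study used R's FSA package; the age-frequency data below are invented):

```python
import math

def weighted_catch_curve_slope(ages, counts):
    """Weighted catch-curve: regress ln(catch) on age, weighting each point
    by its predicted ln(catch) from an initial unweighted fit."""
    y = [math.log(c) for c in counts]
    x = list(ages)
    n = len(x)
    # Unweighted OLS fit to get predicted values used as weights
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    w = [a + b * xi for xi in x]
    # Weighted least squares for the final slope
    sw = sum(w)
    mxw = sum(wi * xi for wi, xi in zip(w, x)) / sw
    myw = sum(wi * yi for wi, yi in zip(w, y)) / sw
    return sum(wi * (xi - mxw) * (yi - myw) for wi, xi, yi in zip(w, x, y)) / \
           sum(wi * (xi - mxw) ** 2 for wi, xi in zip(w, x))

# Hypothetical descending-limb age-frequency data (ages 4 and older)
ages = [4, 5, 6, 7, 8, 9, 10]
counts = [120, 80, 52, 35, 22, 15, 10]
Z = -weighted_catch_curve_slope(ages, counts)  # instantaneous total mortality
A = 1.0 - math.exp(-Z)                         # annual mortality
```

For these toy counts the fitted Z is roughly 0.4, giving an annual mortality around 34%, in the same general range as the estimates discussed below.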
All statistical analyses were performed using the R statistical environment (version 4.0.3; R Core Team 2020). Size structure, individual growth, and mortality analyses were conducted using the FSA: Fisheries Stock Analysis package (version 0.3.32; Ogle et al. 2021). von Bertalanffy growth model start values were generated using the vbStarts() function, parameters were estimated using the nls() function, and 95% confidence intervals were generated using the nlsBoot() function. Mortality analyses were completed with the anova() function for each comparison. Significance was determined at α = 0.05 with p-values adjusted using p.adjust(), where the p-value is multiplied by the number of comparisons.
Channel catfish
Overall, 720 channel catfish were collected during 2017-2018 from the Illinois, Mississippi, Ohio, and Wabash rivers. After removing fish without an assigned age and channel catfish <250 mm total length due to under-sampling of small fish, 635 channel catfish were used for the size structure analyses and 706 channel catfish for the individual growth and mortality analyses. Mean lengths of channel catfish from the four rivers ranged from 430-454 mm (Figure 2). Fewer (total catch) but longer channel catfish between 250-490 mm were observed in both the Ohio and Mississippi rivers compared to the Illinois and Wabash rivers. The Ohio and Mississippi rivers contained larger fish overall, with 50% of channel catfish shorter than 450 mm, compared with 50% shorter than 420 mm and 430 mm in the Illinois and Wabash rivers, respectively. The size structure of channel catfish differed between the Illinois and Ohio rivers (K-S test; D = 0.200, p < 0.05) and the Illinois and Mississippi rivers (D = 0.188, p < 0.05), with a higher proportion of smaller channel catfish collected in the Illinois River. The channel catfish size structure also differed between the Mississippi and Wabash rivers (K-S test; D = 0.204, p < 0.05), with the Wabash River having a higher proportion of smaller fish.
Freshwater drum
A total of 890 freshwater drum were collected in the four rivers sampled through the LTEF program. We used 784 freshwater drum in the size structure analyses and 885 fish in the individual growth and mortality analyses. Freshwater drum mean lengths ranged from 308-399 mm (Figure 5). Fifty percent of freshwater drum were shorter than 410 mm in the Ohio River, 300 mm in both the Illinois and Mississippi rivers, and 310 mm in the Wabash River. The size structure differed between the Ohio River and all other rivers (K-S test, all p < 0.001), with the Ohio River having fewer (total catch) and longer fish overall (280-650 mm), although the maximum length of freshwater drum was similar among the Illinois, Ohio, and Wabash rivers. The Mississippi River also had a different size structure than the Wabash River, with a higher proportion of smaller freshwater drum collected in the Wabash River (D = 0.139, p < 0.001).
Discussion
We observed variability in size structure, individual growth, and annual mortality of channel catfish and freshwater drum populations from the Illinois River and sections of the Mississippi, Ohio, and Wabash rivers. Channel catfish were generally longer in the Ohio and Mississippi rivers compared to the Illinois and Wabash rivers, while freshwater drum from the Ohio River were fewer but longer compared to the other three rivers. Growth did not significantly differ among rivers for channel catfish, but the Ohio River had the highest K and the lowest L∞, and inversely, the Illinois River had the lowest K and the highest L∞. Freshwater drum K was significantly higher and L∞ lower in the Mississippi River compared to all other rivers. Channel catfish annual mortality was significantly higher in the Mississippi River compared to the Wabash River, and freshwater drum annual mortality in the Ohio River was lower than in all other rivers. These findings provide a current snapshot of channel catfish and freshwater drum population dynamics in these four large rivers and serve as a baseline or updated reference for evaluating current management regulations and identifying future management and research needs.
Management of channel catfish using length-based recreational and commercial harvest regulations has been shown to decrease fishing mortality, prevent or slow growth overfishing and recruitment overfishing (Pitlo 1997; Slipke et al. 2002; Stewart et al. 2016), and alter population demographics (e.g. Pitlo 1997; Eder et al. 2016). Harvest regulations for channel catfish differ among our four study rivers; state harvest regulations for the unimpounded Wabash River are reported at the river scale, while regulations in impounded rivers are reported by pool (e.g. UMR; Table 1). Recreational and commercial harvest occurs in all sampled pools and reaches of our four study rivers, except that commercial harvest is not permitted in the upper portion (pools 3-5) of the Illinois River (Table 1). The Ohio River is managed for a trophy fishery by placing greater restrictions on the harvest of large channel catfish (vs. smaller channel catfish) with a length-based harvest slot limit (≥330.2 mm to 711.2 mm); these regulations permit the harvest of one fish ≥711.2 mm (Table 1). The other three rivers are managed by minimum length-based restrictions (Table 1). Under these harvest regulations, our data indicate that the Ohio and Mississippi rivers have larger channel catfish overall, but the Ohio River has a lower maximum length than the other three rivers (von Bertalanffy growth model; Figure 3) and a lower proportion of larger fish (Figure 5). Similarly, Gainer et al. (2021) reported a larger proportion of UMR channel catfish total catch in length bins between 350-500 mm, but the proportion of fish identified at each length was higher and the mean length lower in their study.

[Table 3 caption: von Bertalanffy growth coefficients with 95% confidence intervals by river location for channel catfish and freshwater drum collected in 2017 and 2018 (L∞ = maximum length (mm), K = growth coefficient (1/year), and t0 = theoretical age at length 0).]
The Wabash and Ohio rivers have the same channel catfish harvest slot limit, but unlike the Ohio River, the Wabash River is comprised of smaller channel catfish overall (it does not have a low proportion of larger channel catfish) and has a size structure that significantly differed from the Mississippi River. These results suggest that although the Ohio and Wabash rivers are regulated with the same harvest slot limit, they have different size structures and potentially could be managed more effectively through population-specific harvest regulations. Recently, modeling by Oliver et al. (2021) showed that the Ohio River harvest slot limit, implemented in 2013-2014, was unlikely to impact the channel catfish population demographics or fishery yield, despite the fishery being previously unregulated. The Wabash River harvest regulations also changed in 2015 from commercial (254 mm in Indiana; 381 mm in Illinois) and recreational (254 mm in Indiana; no regulation in Illinois) minimum length limits to the same Ohio River slot limit, citing concerns over increased fishing mortality and harvest of immature channel catfish (Colombo 2007; Donabauer 2009). Our data show that the Wabash River channel catfish theoretical maximum length, growth, and mortality have remained relatively unchanged under the harvest slot limit regulation when compared to the AC electrofishing data of Colombo et al. (2008), but electrofishing is known to under-sample small channel catfish (Colombo et al. 2008) and may lack the resolution to show a change in the population (Table 2).
Channel catfish exhibit varying growth potential among fisheries and lack general geographical patterns, making statewide harvest regulations ineffective at managing all channel catfish populations within a state (Hubert 1999; Slipke et al. 2002; Stewart et al. 2016); however, we did not find conclusive evidence of growth differences among the four sampled rivers. These results are similar to recent findings in the Illinois River channel catfish population, where limited spatial variability in growth was observed (DeBoer et al. 2021), but our mean coefficient estimates for L∞ and t0 were lower and K was higher for all four rivers compared to the North American river average (Jackson et al. 2008). Channel catfish is a slow-growing, long-lived fish but can mature quickly (ages 2-4; Helms 1975; Graham and Deisanti 1999; Hubert 1999; Slipke et al. 2002) and reach 300-375 mm at maturity (Hubert 1999; Shephard and Jackson 2005). In relation to current harvest regulations and general age and length at maturation, our data suggest channel catfish would become legal to harvest before reaching maturity in the Ohio and Wabash rivers but would reach maturity and become eligible for harvest at the same time in the Illinois and Mississippi rivers, thus potentially protecting the latter populations more from recruitment overfishing. Additionally, length-based slot limit regulations may be more effective in systems with rapid fish growth (Stewart et al. 2016), where fish outgrow the upper end of the slot limit and become protected. With a long-lived, slow-growing fish like channel catfish (Hubert 1999), the Ohio River might be a good candidate for slot regulations aimed at increasing fish growth, with its current growth rate higher than the other three rivers (Table 3).
Estimated channel catfish annual mortality in our four sampled rivers differed from some Midwestern rivers, although our estimates are constrained by a narrow channel catfish age range and a single sampling gear. Channel catfish ages ranged from 0-12 years in most of our sampled rivers, and ages used to calculate annual mortality were between 4-12 years (Table 4). In comparison, Oliver et al. (2021) estimated 27% (Bayesian-derived 95% credible intervals; 11-37) annual mortality of channel catfish for the Ohio River collected using multiple gear types, with ages ranging from 0-23 years, which was lower than our mortality estimate of 47.4%. Similarly, annual mortality estimates calculated in two sections of the non-commercially harvested Minnesota River used ages ranging from 5-17 years and 7-20 years collected in hoop nets (Sindt 2021). However, our Wabash River channel catfish mortality estimate (30.6%) was similar to estimates by Colombo et al. (2008 [1-15 years; A = 31%], 2010 [2-17 years; A = 33%]) calculated using only electrofishing data, and by Donabauer (2009 [18-35%]) calculated from hoop net data. Current comparative data for our selected pools of the UMR and the Illinois River are lacking, and our data could serve as a baseline to build upon. The UMR channel catfish had documented overexploitation before harvest regulations were changed in 1984-1985 from a minimum length limit of 330.2 mm to 381 mm; at that time, annual mortality estimates ranged from 61-91% (Pitlo 1997), which is higher than our current estimate (54.4%) and the Middle Mississippi River estimate of Bueltmann and Phelps (2015) (43%).
Despite freshwater drum lacking harvest regulations, they are a key contributor to many recreational and commercial fisheries, and more states are beginning to re-examine the importance of non-game fish species, increasing the value of baseline population demographics for individual systems. For example, in the UMR and its tributaries (including the Illinois River), freshwater drum comprised 5.4% of the total commercial fishing value, and harvest averaged 1.29 million pounds annually from 2001-2005 (U.S. Army Corps of Engineers 2012). Freshwater drum are also harvested through bowfishing (harvesting fish with a bow and arrow or crossbow), a growing sport, although freshwater drum is not heavily targeted like other non-game fish such as common carp (Cyprinus carpio) and buffalofishes (Ictiobus spp.; Scarnecchia and Schooley 2020). Even with this commercial harvest and recreational interest, mortality rates of freshwater drum are relatively low in Midwestern rivers. The annual mortality estimates for this study ranged from 7.6-29.4%, with the Ohio River having a significantly lower mortality rate than the other three study rivers. Among the study rivers, the Mississippi River freshwater drum had the highest mortality rate (29.4%) and significantly lower L∞ and higher K, but our estimate might have been impacted by a truncated age range in our sample (Table 3). Our Mississippi River mortality estimate of 29.4% was higher than the estimate by Abner and Phelps (2018; pools 13-16/open river, 1992-1996 data) at 15%-25% but similar to HDR Engineering (2021), which reported pool 14 annual mortality at 33.8% for 2020.
Freshwater drum have complex age-at-length relationship patterns across their North American range, highlighting the need to establish population dynamics for individual systems to inform potential future harvest regulations (Rypel 2007; Jacquemin and Pyron 2013). For example, Butler and Smith (1950) documented freshwater drum age-at-maturity for males between 3-6 years of age and 279.4-355.6 mm and for females between 5-6 years of age and 330.2-381 mm in the UMR. These ages and lengths are older and larger than those reported for mature males (3-4 years, 61-480 mm) and mature females (3-4 years, 156-584 mm) in rivers in Alabama, USA (Rypel 2007). Additionally, freshwater drum exhibit sexual dimorphism in growth in older fish (>5 years; females reach a larger body size than males; Rypel 2007), potentially exposing females to preferential harvest resulting in differential mortality. Freshwater drum in our study ranged in age from 0-29 years, with the majority of fish being 1-3 years of age. Additional research should incorporate sex-specific growth differences in population demographics of freshwater drum for our four study rivers.
Biases (i.e. unequal probability of sampling individuals of a population) associated with individual sampling gears have been documented to influence population metrics (e.g. size structure, growth, and mortality; Colombo et al. 2008; Reynolds and Kolz 2012), and comparisons of population demographics made among different gear types should be interpreted with caution. Pulsed-DC electrofishing is size selective, effectively sampling catfishes 250-850 mm but under-sampling catfishes <200 mm, and is considered less effective at capturing catfish than other large river sampling gears (e.g. hoop nets; Buckmeier and Warren Schlechte 2009; Gainer et al. 2021). Additionally, it was necessary to combine both sexes of freshwater drum to achieve suitable sample sizes in this study, so comparisons to our growth and mortality analyses for freshwater drum do not account for known differences due to sexual dimorphism (Rypel et al. 2006; Rypel 2007). Lastly, fish were pooled across each river in our study to increase sample size, so we assumed that growth and mortality rates were the same within multiple pools of each river. Future demographic studies should strive to account for sex-specific growth differences and reduce sampling bias by using multiple gear types to provide the best available data for managers to analyze the impact of fishery management and environmental health regulations.
Conclusion
This study provides a snapshot of current channel catfish and freshwater drum population demographics across four large Midwestern rivers where relatively limited population demographic research exists. Outcomes of this study increase our understanding of channel catfish and freshwater drum populations in these rivers and identify potential knowledge gaps for future research needed to monitor these populations. We learned that channel catfish population demographics in the Ohio and Wabash rivers vary among systems under the same harvest regulations, and the Wabash River channel catfish population demographics are similar to those documented by Colombo et al. (2008) before the implementation of the current harvest regulations. Additionally, although freshwater drum harvest is unregulated and their population demographics vary among rivers, their mortality is relatively low. Thus, future studies should focus on determining channel catfish reach-specific population demographics, as variation within or between systems could have ramifications for statewide harvest regulations, as well as freshwater drum sex-specific population demographics and the biological factors driving population demographics in these large Midwestern rivers, to inform future management strategies.

Animal care and handling procedures were followed in accordance with the University of Illinois Institutional Animal Care and Use Committee protocols (EIU, 16-003; INHS, 17018; SIU, 16-008; WIU, 16-09).
Rethinking Motion Representation: Residual Frames with 3D ConvNets for Better Action Recognition
Recently, 3D convolutional networks have yielded good performance in action recognition. However, an optical flow stream is still needed to ensure better performance, the cost of which is very high. In this paper, we propose a fast but effective way to extract motion features from videos, utilizing residual frames as the input data in 3D ConvNets. By replacing traditional stacked RGB frames with residual ones, improvements of 20.5 and 12.5 percentage points in top-1 accuracy can be achieved on the UCF101 and HMDB51 datasets when trained from scratch. Because residual frames contain little information about object appearance, we further use a 2D convolutional network to extract appearance features and combine them with the results from residual frames to form a two-path solution. On three benchmark datasets, our two-path solution achieved better or comparable performance than methods using additional optical flow, and in particular outperformed the state-of-the-art models on the Mini-kinetics dataset. Further analysis indicates that better motion features can be extracted using residual frames with 3D ConvNets, and that our residual-frame-input path is a good supplement for existing RGB-frame-input models.
Introduction
For action recognition, motion representation is an important challenge, requiring the extraction of motion features among multiple frames. Various methods have been designed to capture movement. 2D ConvNet based methods use interactions along the temporal axis to include temporal information [12,28,15,16,29]. 3D ConvNet based methods improved recognition performance by extending the 2D convolution kernel to 3D, and computations along the temporal axis in each convolutional layer are believed to handle movement [24,18,32,1,9,26]. State-of-the-art methods showed further improvements by increasing the number of used frames and the size of the input data, as well as by using deeper backbone networks [6,2,25]. In a typical implementation of 3D ConvNets, these methods use stacked RGB frames as the input data. However, this kind of input is considered insufficient for motion representation because the features captured from stacked RGB frames may pay more attention to appearance features, including background and objects, rather than to the movement itself, as shown in the top example in Fig. 1. Thus, combining with an optical flow stream is necessary to further represent the movement and improve performance, as in two-stream models [8,7,21]. However, the processing of optical flow greatly increases computation time. Besides, two-stream results can be obtained only after the optical flow data are extracted, which delays activation of the optical flow stream and causes high latency.
In this paper, we propose an effective strategy based on 3D convolutional networks that pre-processes RGB frames to generate replacement input data. Our method retains what we call residual frames, which contain more motion-specific features: still objects and background information are removed, leaving mainly the changes between frames. Through this, the movement can be extracted more clearly and recognition performance can be improved compared to just using stacked RGB inputs, as shown in the bottom sample in Fig. 1. Our experiments reveal that our approach can yield significant improvements in top-1 accuracy when these ConvNets are trained from scratch on the UCF101 [22] and HMDB51 [14] datasets. One may think that our approach is naive and therefore cannot be applied to videos with global motion, but this concern is addressed in Section 5.1.
For larger action recognition datasets such as Mini-kinetics [32] and Kinetics [13], the definitions of the actions become more complex, such as yoga, which contains various combinations of simple actions. These datasets also have a large number of compound labels, such as playing guitar and playing ukulele, where the movement is almost the same and the difference lies mainly in the objects. In this case, it is difficult to distinguish categories by motion representation alone without sufficient appearance features. Therefore, we propose a two-path solution, which combines the residual-input path with a simple 2D ConvNet that extracts appearance features from a single frame. Experiments show that our proposed two-path method obtains better performance than some two-stream models on the UCF101 / HMDB51 / Mini-kinetics datasets when using the same input shapes and similar or even shallower network architectures.
Our contributions are summarized as follows:
• We propose a simple, fast, but effective way for 3D ConvNets to better extract motion features by using stacked residual frames as the model input.
• The proposed two-path solution, including a 3D ConvNet with residual input as the motion path and a 2D ConvNet as the appearance path, can achieve better performance than other methods using similar settings.
• Our proposal avoids the requirement of high-cost computation for optical flow while ensuring high performance. Our analysis also suggests potential limitations in the current action recognition task.
We would like to clarify that we are proposing a new way of motion representation. For this purpose, we do not primarily aim for better performance than approaches based on very deep and complex DNN architectures or different training / parameter settings. Instead, we discuss why, and to what extent, our approach is reasonable compared to optical-flow-based and RGB-only approaches. We will release our code if the paper is accepted.
Related works
In this section, traditional action recognition networks are introduced. Although temporal modeling is usually part of those networks, we discuss it in detail in a separate subsection because temporal information is a key feature. Model combination is also given its own subsection to clearly show the solution routes toward high accuracies.
Deep action recognition
2D solution. 2D ConvNet based methods mainly consist of frame-level feature representation and temporal modeling to fuse these features. When each frame of a video is treated as a single image, 2D ConvNets that are effective for image classification can be directly applied to video recognition. Karpathy et al. [12] tried different ways to fuse features from a 2D ConvNet and then used the fused features to classify videos. Temporal Segment Networks (TSN) [28] were designed to extract average features from stride-sampled frames. Two-stream ConvNets [21,8,7] used an additional optical flow stream, with 2D ConvNets for both the RGB stream and the optical flow stream. Recent works such as Temporal Bilinear Networks (TBN) [15] and the Temporal Shift Module (TSM) [16] are variants of 2D ConvNets. Compared to their 3D counterparts, 2D methods are more efficient because fewer parameters are used, and their performance depends heavily on the temporal modeling. Our method uses a 2D network to extract appearance features because of the high efficiency of 2D models, and the proposed appearance path uses less input data than existing 2D ConvNets, which is more efficient.
3D solution. 3D ConvNet based methods directly use 3D convolution kernels to process the input video frames. Computation between frames is carried out when the temporal kernel size is 2 or larger, and spatial-temporal features can be learned automatically by network optimization. Tran et al. [24] proposed C3D, which consists of 8 directly-connected convolutional layers and 2 fully-connected layers. Hara et al. [9] conducted many experiments on 3D versions of residual networks, including different depths and variants such as ResNeXt [31]. Carreira et al. [1] proposed I3D based on the Inception network. SlowFast [6] used two ResNet pathways to capture multi-scale information along the temporal axis. Besides different network architectures, the 3D convolutional kernel itself also has variants: one k × k × k kernel can be separated into two parts, k × 1 × 1 and 1 × k × k, and based on this, P3D [18], R(2+1)D [26], and S3D [32] were proposed. The backbones of mainstream networks are ResNets [10] and the Inception network [23]. Neural architecture search (NAS) is used in [17] to obtain efficient network architectures. However, because the number of parameters is larger than in 2D counterparts, 3D models are prone to overfitting when trained from scratch on small datasets such as UCF101 [22] and HMDB51 [14]. Fine-tuning models pre-trained on a very large dataset such as Kinetics [13] is one way to obtain good performance on these small datasets. From another point of view, our proposed method focuses more on the movement itself and obtains a 3D ConvNet with higher motion representation ability by using residual frames as input. In this way, the tendency to overfit on small datasets is reduced compared to normal RGB inputs when using the same network architectures.
Temporal modeling
For 2D ConvNets, some models [12,28] have been proposed that simply average frame features to represent videos. Donahue et al. [11] used 2D models to extract features and aggregated them with long short-term memory (LSTM) [5]. Zhou et al. [33] proposed the Temporal Relation Network to learn temporal dependencies. Temporal Bilinear Networks [15] use temporal bilinear modeling to embed temporal information, and the Temporal Shift Module [16] shifts 2D feature maps along the temporal dimension.
For 3D ConvNets, temporal modeling is automatically performed by learning kernels along the temporal axis. Because 3D ConvNets use stacked RGB frames as input, the computation among frames is believed to learn motion features, while the spatial computation embeds spatial features. Therefore, existing 3D models do not pay much attention to this part, trusting the capability of the network. Recently, Crasto et al. [2] trained a student network with RGB-frame input to mimic the feature representation of a teacher network that had been trained on optical flow data, thereby enhancing temporal modeling.
Our proposed two-path method consists of an appearance path, which uses only a 2D ConvNet to extract appearance features, and a motion path, which uses a 3D ConvNet to compute motion features. Temporal modeling exists only in the motion path. Our use of residual frames differs from these temporal modeling methods because motion exists not only in the temporal dimension of stacked residual frames but also in the spatial dimension, since each residual frame is generated from two adjacent frames.
Two-stream model
Two-stream models usually refer to methods that combine 2D features / results from an RGB stream with an optical flow stream [21,8,7]. Some researchers extended the concept by combining an RGB-frame-input path with another path that uses pre-computed motion features, such as trajectories [27] or SIFT-3D [19], as well as optical flow. Many existing methods can then be extended by adding a motion feature stream to further improve their performance [2,1,26]. To distinguish our proposal from the aforementioned two-stream methods, we refer to our method as 'two-path' rather than 'two-stream' because we do not use any pre-computed motion features.
Proposed method
In this section, we first introduce our proposal to use residual frames as a new form of input data for 3D ConvNets. Because residual frames lack sufficient information about objects, which is necessary for the compound phrases used in label definitions in most video recognition datasets, we further propose a two-path solution that utilizes appearance features as an effective complement to the motion features learned from residual inputs.
Residual frames
For 3D ConvNets, stacked frames are set as input, and the input shape for one batch item is T × H × W × C, where T frames are stacked together with height H and width W, and the channel number C is 3 for RGB images. We denote this data as THW for simplicity. The convolutional kernel of each 3D convolutional layer is also three-dimensional, k_T × k_H × k_W. Thus, in each 3D convolutional layer, data are processed along three dimensions simultaneously. However, this rests on a strong assumption that motion features and spatial features can be learned perfectly at the same time. To improve performance, many existing 3D models expand weights from 2D ConvNets to initialize the 3D ConvNets, which has been shown to yield higher accuracies. Pre-training on larger datasets also enhances performance when fine-tuning on small datasets.
When subtracting adjacent frames to get a residual frame, only the frame differences are kept. In a single residual frame, movement exists along the spatial axes. Using residual frames with 2D ConvNets has been attempted and shown to be somewhat effective [30]. However, because actions and activities are complex and have much longer durations, stacked frames are still necessary. In stacked residual frames, the movement exists not only in the spatial axes but also along the temporal axis, which is more suitable for 3D ConvNets because 3D convolution kernels process data in both spatial and temporal axes. Using stacked residual frames helps the 3D convolutional kernels concentrate on capturing motion features, because the network does not need to consider the appearance of objects or backgrounds in videos.
Here we use frame_i to represent the i-th frame data, and Frame_{i∼j} denotes the stacked frames from the i-th frame to the j-th frame. The process to obtain stacked residual frames can be formulated as follows:

Frame^{res}_{i∼j} = Frame_{i+1∼j+1} − Frame_{i∼j}

that is, each residual frame is the difference between two adjacent frames, frame^{res}_k = frame_{k+1} − frame_k. The computational cost is cheap and can even be ignored when compared with the network itself or with optical flow calculation.
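Concretely, the subtraction above can be implemented in a few lines of NumPy (a minimal sketch; the helper name is ours, not from the paper, and casting to a signed type avoids uint8 wrap-around):

```python
import numpy as np

def residual_clip(clip):
    """Stacked residual frames from a clip of T+1 RGB frames.

    clip: array of shape (T + 1, H, W, C).
    Returns an array of shape (T, H, W, C) whose entry k is
    frame_{k+1} - frame_k, matching the formulation in the text.
    """
    clip = clip.astype(np.int16)   # signed type: differences can be negative
    return clip[1:] - clip[:-1]

# Toy example: 17 frames of size 4x4x3 yield a 16-frame residual clip.
frames = np.random.randint(0, 256, size=(17, 4, 4, 3), dtype=np.uint8)
res = residual_clip(frames)
print(res.shape)  # (16, 4, 4, 3)
```

The residual clip then replaces the RGB clip as the 3D ConvNet input with no other change to the pipeline.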
With this change, a 3D ConvNet can extract motion features by focusing on the movements in videos alone. However, by ignoring objects and backgrounds, some movements in similar actions become indistinguishable. For example, in the actions Apply Eye Makeup and Apply Lipstick, the main difference lies in the location of the movement, around the eyes or around the mouth, rather than in the movement itself. In this example, 3D ConvNets may be able to distinguish them to some extent, but the loss of information does increase the difficulty. Therefore, we use a 2D ConvNet to process the lost appearance information and combine it with a 3D ConvNet using residual frames as input to form a two-path network.

Figure 2. Framework of our two-path network. The motion path and the appearance path are trained separately using cross-entropy loss. Action recognition is carried out within each path. In the inference period, the output probabilities from the two paths are averaged. In this way, both motion features and appearance features are utilized for the final classification.
Two-path network
Our two-path network is formed by a motion path and an appearance path, as illustrated in Fig. 2.

Motion path. Because residual frames are used in this path, movements exist in both the spatial and the temporal axes. Therefore, 3D convolution layers are used. Because many existing 3D convolution based network architectures have been proved effective on action recognition datasets, we do not focus on designing a new network architecture in this paper. To verify the robustness and versatility of our proposal, we conduct experiments on various models, and discuss ResNet-18-3D in particular because of its good performance. In the original ResNet-18-3D [9], convolution with stride is used at the beginning of several residual blocks to perform downsampling. We also try another version of the residual blocks, which uses max-pooling layers at the end of each corresponding block. These two versions have almost the same number of network parameters.

Appearance path. By using residual frames with 3D ConvNets, motion features can be better extracted, while background features, which contain object appearances, are lost. The lost part can be extracted by a 2D ConvNet, which uses one RGB frame as input. The goal of our appearance path is to embed object and background appearances, which are mostly lost in the motion path. Therefore, in contrast to TSN or other complex models, a simple 2D ConvNet is sufficient. This naive 2D ConvNet treats action recognition as a simple image classification problem. During training, only one frame per video is randomly selected in each epoch.
For the combination of the two paths, we average their predictions for the same video sample. Early fusion methods may be more effective; we leave them as future work.
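This late fusion can be sketched as follows (a hedged illustration with hypothetical logits; the function names are ours, not the authors' code):

```python
import numpy as np

def softmax(logits):
    """Row-wise softmax over class logits."""
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def two_path_predict(motion_logits, appearance_logits):
    """Average the class probabilities from the two paths, then argmax."""
    p = 0.5 * (softmax(motion_logits) + softmax(appearance_logits))
    return p.argmax(axis=-1)

# Hypothetical logits for 2 videos over 5 classes.
motion = np.array([[2.0, 0.1, 0.0, 0.0, 0.0],
                   [0.0, 0.0, 3.0, 0.0, 0.0]])
appear = np.array([[1.5, 0.2, 0.0, 0.0, 0.0],
                   [0.0, 0.0, 0.5, 2.5, 0.0]])
print(two_path_predict(motion, appear))
```

For the second video the two paths disagree (class 2 vs. class 3), and averaging the probabilities rather than the argmaxes lets the more confident path dominate.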
Datasets and metrics
Datasets. There are several commonly used datasets for video recognition tasks. Thanks to the large number of videos and labels in these datasets, deep learning methods can detect a high number of actions. We mainly focus on the following benchmarks: UCF101 [22], HMDB51 [14], and Kinetics400 [13]. UCF101 consists of 13,320 videos in 101 action categories. HMDB51 comprises 7,000 videos across 51 action classes. Kinetics400 is much larger, consisting of 400 action classes and containing around 240k training videos, 20k validation videos, and 40k testing videos. Because Kinetics400 is very large, we mainly perform our experiments on its subset, Mini-kinetics [32], which consists of 200 action classes with 80,000 videos for training and 5,000 videos for validation. The actual data used in our experiments may be slightly smaller because some videos were unavailable. Metrics. We report top-1 and top-5 accuracies for all experiments. The performance on Mini-kinetics was evaluated on the validation split. We also use the correlation coefficient index for deeper analysis between different models, which may indicate relationships between the knowledge learned by existing models.
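For reference, top-1 / top-5 accuracy can be computed with a few lines of NumPy (a minimal sketch; the helper name is ours):

```python
import numpy as np

def topk_accuracy(logits, labels, k=1):
    """Fraction of samples whose true label is among the k highest scores."""
    topk = np.argsort(-logits, axis=1)[:, :k]
    return (topk == labels[:, None]).any(axis=1).mean()

# Toy scores for 3 samples over 3 classes.
logits = np.array([[0.1, 0.7, 0.2],
                   [0.8, 0.1, 0.1],
                   [0.4, 0.35, 0.25]])
labels = np.array([1, 0, 1])
print(topk_accuracy(logits, labels, k=1))  # 2/3: the last sample misses
print(topk_accuracy(logits, labels, k=2))  # 1.0: its label is ranked second
```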
Scratch training and fine-tuning
A network can be trained in two ways: from scratch or by fine-tuning a pre-trained model, and there is an obvious gap between these two training routes. Thanks to the proposal of the Kinetics datasets, several 3D convolution based models have achieved better performance using pre-trained models. Therefore, many recent works report results with fine-tuned models on small datasets such as HMDB51 and UCF101, and train from scratch on larger datasets such as Kinetics400 and its subset, Mini-kinetics.
Models can benefit from larger datasets, but training on larger datasets significantly increases computation time, so repeatedly increasing dataset size to improve performance is not always a solution. In this paper, in addition to the default settings discussed above, we also examine the situation in which no additional knowledge is available. Specifically, we want to explore the limits of 3D ConvNets on UCF101 and HMDB51 without any additional datasets.
Implementation details
Motion path. In this path, stacked residual frames are set as the network input data and are used identically to traditional RGB frame clips. For 3D ConvNets in action recognition, there are several choices of input setting. 3D ConvNets started from [24], which used a clip of 16 consecutive frames with a 112 × 112 slice cropped in the spatial axes. To achieve state-of-the-art results, clips of size 64 × 224 × 224 were used in many recent works. With such a large input size, improvements can be achieved but are limited, while longer training time and larger memory occupation are required. Therefore, for all of our motion-path experiments, following [24], frames are resized to 170 × 128 and 16 consecutive frames are stacked to form one clip. Then, random spatial cropping is conducted to generate input data of size 16 × 112 × 112. Before a clip is fed into the network, random horizontal flipping is performed, and jittering along the temporal axis is applied during training. The backbone in most of our experiments is ResNet-18-3D. We tried two variants of ResNet-18-3D, which differ in whether convolution with stride is used at the beginning of some residual blocks or max-pooling is used at the end of the corresponding blocks instead. R(2+1)D, I3D, and S3D are also tested to verify the robustness of our proposal. The batch size is set to 32. When models are trained from scratch, the initial learning rate is set to 0.1. We trained models for 100 epochs on UCF101 and HMDB51, and for 200 epochs on Mini-kinetics. When fine-tuning on UCF101 and HMDB51 with Kinetics400 pre-trained models, the model weights are from [9] and the network architecture remains the same as in [9]. The initial learning rate becomes 0.001, and 50 epochs are sufficient.

Appearance path. In contrast to TSN, our appearance path uses a simpler model that treats action recognition as image classification, because appearances change little across consecutive frames and the goal of this path is to capture appearance features of background and objects. Frames are first resized to 256 × 256, and random spatial cropping and random horizontal flipping are applied in sequence to generate input data of size 224 × 224. This process is standard in image classification and enables the use of many pre-trained models. ResNet-18, ResNet-34, ResNet-50, and ResNeXt-101 were used to test the impact of model depth. In addition, models were also trained from scratch to see the performance when no additional knowledge is provided.

Training recipes. We noticed that few works pay attention to training recipes in video recognition. We used several recipes to train our models; specifically, we tried different activation functions and different learning rate decay methods. Experiments for this part are mainly carried out on the UCF101 dataset. For larger datasets, we find that some settings still work, and we think they can be called a 'bag of tricks' for video recognition tasks.

Testing and Results Accumulation. There are two means of testing for action recognition with 3D ConvNets. One is to uniformly sample video clips from one video, so that a fixed number of clips is generated and fed to the model regardless of the video length; the predictions are averaged over all clips to generate the final result. The other uses non-overlapping video clips, so longer videos produce more clips; the final result for one video is again generated by averaging over clips. We performed a small test of these two methods and found the difference negligible because all clip results are averaged in both cases. Thus, we used the uniform method in our experiments, and our appearance path used a fixed number of frames sampled from all video frames to match the motion path.
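The uniform clip sampling used at test time can be sketched as follows (the function name and defaults are our own illustration, assuming 16-frame clips):

```python
import numpy as np

def uniform_clip_starts(num_frames, clip_len=16, num_clips=10):
    """Start indices of num_clips clips spread evenly over a video.

    Every video yields the same number of clips regardless of its length;
    clip-level predictions are then averaged for the video-level result.
    """
    last_start = max(num_frames - clip_len, 0)
    return np.linspace(0, last_start, num_clips).astype(int)

starts = uniform_clip_starts(num_frames=300)
print(starts.tolist())  # 10 start frames from 0 up to 284
```

Short videos simply yield overlapping (or identical) clips, which is why the uniform and non-overlapping schemes give near-identical averaged results.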
Results and discussion
In this section, results from the single paths are first introduced. The motion path is used to investigate the effectiveness of stacked residual frames; then, results from the appearance path are reported. Further analysis is conducted to explore the connections between models, especially between the RGB 2D model and the RGB / residual 3D models. Finally, we compare the performance of our proposed two-path network with various existing models.
Single path
Motion path. For the motion path, different training recipes were investigated first. Different activation functions were tried, and we found that, in contrast to existing 3D convolution based methods [9,26,32,24,18,1], which use ReLU as the default activation function, replacing ReLU with ELU improved top-1 accuracy by 2.6 percentage points (from 51.9% to 54.5%) and 3.3 percentage points (from 58.0% to 61.3%) for the two experimental versions (convolution with stride and max-pooling) of ResNet-18 (Table 1). Similar results were found for Mini-kinetics. To get better performance, we use ResNet-18 with max-pooling layers as our default model version.
Compared to RGB clips, stacked residual frames maintain movements in both the spatial and temporal axes, which takes greater advantage of 3D convolution. Results are shown in Table 1, and the following discussion is based on this table. By simply replacing RGB clips with our proposed residual clips, the ResNet-18-3D result improves from 51.9% to 72.4%. To the best of our knowledge, this outperforms the current state-of-the-art results when models are trained from scratch on UCF101. In addition to directly using residual frames, feature differences were also tried: in the model ResNet-18 (fea diff), we used RGB clips as input while computing feature differences along the temporal axis after the first convolutional layer, and fed the results into the rest of the network. However, this produced lower accuracies than directly using residual frames as the network input. R(2+1)D, I3D, and S3D were also tested, and improvements of more than 10 percentage points were achieved when replacing the original RGB input with our residual frames.

Table 2. Top-1 results for the motion path on three benchmark datasets. Training on Kinetics400 costs too much time; therefore, for fine-tuning, we used the pre-trained models provided in [9], the only difference being that we use our residual input. The reported results are on UCF101 split 1 and HMDB51 split 1.

Figure 3. Visualization using Grad-CAM [20]. The number is the corresponding prediction probability for each sample. The residual-input model focused more on the moving entity and the moving area, while the RGB-input model included more background information.
Summing up, our residual input is robust across different model architectures. Because ResNet-18 is light-weight and performs well, we used ResNet-18 as the default backbone in our motion path.
We also tested the performance on HMDB51 and Mini-kinetics; results are shown in Table 2. On HMDB51 split 1, the result improves from 22.2% to 34.7% when replacing the original input with residual frames. However, the improvement cannot be observed on Mini-kinetics because its labels are more related to objects than to actions, which is the main reason for introducing our appearance path. The residual-input model can also benefit from pre-trained models when fine-tuning, yielding 89.0% on UCF101 split 1. The results on HMDB51 are not as good as those of the RGB model because, on this dataset, the variation within one action category is larger. For example, the category Dive includes both bungee jumping and a movement by a scorekeeper on the ground. Many movements are inconsistent within one category while the samples are few, which greatly increases the difficulty for residual inputs.
For deeper analysis, we further use Grad-CAM [20] for visualization. As shown in Fig. 3, the residual-input model pays attention to the acting entity, while the RGB-input model focuses more on the background. The prediction probability is low for BreastStroke because the RGB model gives a higher probability to another swimming style, FrontCrawl.
The first 16 of the 64 convolutional filters in the conv1 layers of the RGB-input model and the residual-input model are illustrated in Fig. 4. Both models were trained from scratch on Mini-kinetics. We can see that the filters in the RGB-input model are similar across the temporal axis. For this reason, using ImageNet [3] pre-trained models can achieve good performance even with naive 2D models, while with our residual inputs the performance is a little lower. The filters in the residual-input model differ from each other along the temporal axis, indicating that this model is more sensitive to changes over time. The per-class accuracy differences between our residual-input model and the RGB-input model are illustrated in Fig. 5, where we show the best-5 and worst-5 classes. The positive peak belongs to the class playing bagpipes; in this category there are global movements caused by lens shake and other irrelevant movements by bystanders, which our residual-input model can handle. Movements in throwing discus and hula hooping are highly consistent. In contrast, movements in yoga vary considerably, and appearance information plays a more important role.
Based on our analysis, the ability of 3D ConvNets may be limited by the ambiguity in action labels. Additionally, RGB 3D models pay more attention to appearance than to movements.

Appearance path. For the appearance path, four ResNet architectures were used: ResNet-18, ResNet-34, ResNet-50, and ResNeXt-101. Both training from scratch and fine-tuning from ImageNet pre-trained models were tried. The results are shown in Table 3.

Table 3. Top-1 accuracies of the appearance path on UCF101 split 1, HMDB51 split 1, and Mini-kinetics. Models are trained either from scratch or by fine-tuning.
We can clearly see that the gap between the two training regimes is large for 2D ConvNets, which is consistent with previous work on image classification tasks. However, pre-training also takes much time if no pre-trained models are available. In general, deeper networks provide higher scores.
Regarding Mini-kinetics, ImageNet pre-trained models were used directly and high accuracies could be achieved. Among these 2D ConvNets, the best top-1 accuracy was 70.5%, which is remarkably high. However, in this case the action recognition task is treated as simple image classification, which does not benefit from any temporal information.
The performance of ResNet-18-2D with pre-trained weights is 79.6%, which is close to the 72.4% achieved by training ResNet-18-3D from scratch in Table 1, though it may be unfair to compare these two models because the 2D version utilizes image classification knowledge to initialize its parameters while the 3D version does not. Duplicating ImageNet pre-trained model parameters in 3D ConvNets could be a good solution; however, it is still prone to relying mainly on appearance features.

Analysis among models. The difference between 2D and 3D convolution is that 3D convolution has an additional dimension intended to process temporal information. For continuous frames, especially in the trimmed videos provided by video recognition datasets, the difference between frames is limited. Therefore, 3D convolution may not process temporal information efficiently. Duplicating ImageNet pre-trained model parameters as the initial parameters does provide improvements, but the spatial-temporal convolution might then be lazy during the fine-tuning process, because even for models trained from scratch, model weights tend to be similar along the temporal axis (Fig. 4).
Here we introduce the correlation coefficient index to measure the relationships between different models. Both 2D and 3D models were tested; for 2D ConvNets, we also used an optical flow stream as a comparative model. Correlation coefficient indexes for per-category accuracies between pairs of models are reported in Table 4. The backbone networks are ResNeXt-101-2D and ResNet-18-3D, and all models were fine-tuned to ensure classification performance. From the table, we can see that the correlation coefficient index between the RGB 2D and 3D models is high, which indicates that these two approaches may make judgements in a similar way, while the optical flow stream differs significantly. Our residual-input model has a high correlation with the RGB 3D model because of the shared network architecture. However, its correlation with the RGB 2D model is lower, because using residual frames means that more motion, rather than appearance, is used for classification.
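The per-category comparison can be reproduced with a few lines of NumPy; the accuracy vectors below are hypothetical, and only the computation mirrors the paper's correlation coefficient index:

```python
import numpy as np

def model_correlation(acc_a, acc_b):
    """Pearson correlation between two per-category accuracy vectors."""
    return np.corrcoef(acc_a, acc_b)[0, 1]

# Hypothetical per-class accuracies for three models over 5 classes.
rgb_2d   = np.array([0.90, 0.80, 0.60, 0.70, 0.50])
rgb_3d   = np.array([0.92, 0.78, 0.65, 0.72, 0.48])
residual = np.array([0.60, 0.90, 0.80, 0.50, 0.70])

print(model_correlation(rgb_2d, rgb_3d))    # near 1: similar behaviour
print(model_correlation(rgb_2d, residual))  # much lower: different cues
```

A high correlation suggests two models succeed and fail on the same categories, i.e. they rely on similar cues.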
Two-path network
By combining the motion path with the appearance path, both appearances and motions can be used to obtain predictions. Because we have several models for each path, we tried different combinations among them; for example, on the UCF101 dataset, we tried combinations obtained by selecting different models for each path, as shown in Table 5.

Table 5. Results from different combinations of models on UCF101 split 1. Our combination yielded the best performances.

In our implementation, the optical flow path used a ResNeXt-101 backbone, which is the same as our appearance path. However, the combination of optical flow and other RGB models produces side effects on the accuracies. The identical values in Table 5 are the result of rounding: the accuracies happen to be close enough to coincide at the precision used in the table.

Table 6. Two-path results on UCF101 and HMDB51. Accuracies are calculated by averaging results from 3 splits.
Method | UCF101 top-1 | UCF101 top-5 | HMDB51 top-1 | HMDB51 top-5
Two-stream [21] | 86.9 | – | 58.0 | –
Two-stream (+SVM) [21] | 88.0 | – | 59.0 | –
I3D [1] | 98. | – | – | –
The size of the input clips for the state-of-the-art method I3D is 8× our motion path input and its network parameters are around 2× larger. Our two-path network is even better than the basic two-stream model, which requires optical flow features.
Here, we do not focus on developing a new network architecture; therefore, we only compare our method with corresponding methods, as shown in Table 6. Our single motion path outperforms TSN [28] and I3D-RGB [1], which use only RGB input data. Without any additional computation for optical flow, and using only ResNet-18, we obtain better performance than the original two-stream model [21], which uses optical flow. On the other hand, our model is not better than the state of the art such as [1], but this is outside the scope of our paper because many settings, including the input size and network architectures, are totally different.
For Mini-kinetics, results are shown in Table 7. We mainly compare our method with TBN [15] and MARS [2], which do not use optical flow at inference yet achieve good performance. TBN uses temporal bilinear modeling to process temporal information, which is insufficient for extracting motion features compared with ours. The backbone network of MARS is ResNeXt-101-3D. To obtain results with distillation methods, their networks must first be trained on optical flow inputs, and then another network is built to learn features from the optical flow stream; this process is complex and much more expensive than our proposed two-path method. The backbone network of our motion path is ResNet-18-3D, which is shallower than that used in MARS. There is much room for our proposed solution to improve by using deeper networks and other feature fusion methods.
Conclusion
In this paper, we focused on extracting motion features without optical flow. 3D ConvNets are believed to be capable of capturing motion features when RGB frames are set as input, but we demonstrated that this is not always true. We improved the use of 3D convolution by feeding stacked residual frames as the network input; the overhead of this computation is negligibly small. With residual frames, the results of 3D ConvNets improve significantly when trained from scratch on the UCF101 and HMDB51 datasets. Beyond the residual-frame input, we proposed a two-path network that uses the motion path to extract motion features while the appearance path uses RGB frames to capture the corresponding appearance. By combining the results from the two paths, the state of the art could be achieved on the Mini-kinetics dataset, and better or comparable results were achieved on the UCF101 and HMDB51 datasets compared with the corresponding two-stream methods. Our results and analysis imply that residual frames are a fast but effective way for a network to capture motion features and a good choice for avoiding the complex computation of optical flow. In future work, we will focus on performance improvement by investigating better combination methods for our two-path network.
Design and Implementation of an Extensible Learner-Adaptive Environment
This paper describes the design and implementation of a flexible architecture that is capable of extending the functions of a learner-adaptive self-learning environment. A "courseware object", which is a program module that is used to implement various educational functionalities, has been newly introduced to ensure both function extensibility and content reusability. A prototype system was designed and implemented to investigate the feasibility of the proposed architecture and to identify the core behavior and interaction schema of courseware objects. The results from this trial indicated that several learner-adaptive functionalities, including the SCORM 2004 standard specifications, can be successfully implemented in the proposed architecture.
Yosuke Morimoto is an associate professor at the Open University of Japan. He graduated and received his Ph.D. in Engineering from Tokyo Institute of Technology in 2005. He has specialized in educational technologies. He is currently mainly engaged in designing and developing retrieval/sharing systems for learning content.
Introduction
It is widely known that the interoperability and reusability of learning content is a critical issue that needs to be addressed to provide high-quality e-learning services with rich learning experiences. Enormous effort has been expended to confront this issue by establishing and disseminating e-learning content specifications (Fallon & Brown, 2003; Nakabayashi, 2004), including the Aviation Industry CBT Committee (AICC) Computer Managed Instruction (CMI) specifications (Aviation Industry CBT Committee, 2004), the Advanced Distributed Learning (ADL) Sharable Content Object Reference Model (SCORM) (Advanced Distributed Learning, 2006), and the IMS Global Learning Consortium Common Cartridge (CC) (IMS Global Learning Consortium, 2008). Some of these attempts have successfully achieved interoperability between e-learning content and learning-management systems (Kazi, 2004; Nakabayashi et al., 2006; Nakabayashi et al., 2007; Shih et al., 2005; Yang et al., 2004). On the other hand, learner-adaptive techniques have been regarded as an effective means of enhancing the learning experience by providing suitable learning content and resources that match the learner's current status. There have been numerous proposals and studies on learner-adaptive techniques (Fletcher, 1975; Murray, Blessing & Ainsworth, 2003; Wenger, 1987) based on the traditional overlay model (Carr & Goldstein, 1977) and the bug model (Brown & Burton, 1978), as well as a Web-based training system (Nakabayashi et al., 1995), sophisticated adaptive hypermedia (Brusilovsky, 2003; De Bra & Ruiter, 2001), and a system using domain ontology (Sosnovsky et al., 2007).
However, little consideration has been given to the interoperability and reusability of content in the field of learner-adaptive systems. Most existing learner-adaptive systems have been designed to implement a single learner-adaptive strategy, without any consideration of supporting multiple learner-adaptive strategies or even of extending the single implemented strategy. Without such a framework for extending functions, it would be difficult to add new functions that could improve the effectiveness of learning, because newly added functions may conflict with those used to execute existing learning content, damaging the reliable behavior of that content. In addition, it would take too long for standardization organizations to authorize extensions of functions to existing standard specifications. It is thus very difficult to achieve both content-system interoperability and system-function extensibility in conventional learner-adaptive systems.
To overcome this problem, the authors have proposed a new learning-system architecture that aims at achieving the goals of both extending learner-adaptive functions and making learning content interoperable (Nakabayashi, Morimoto & Hada, 2008;Nakabayashi, Morimoto & Hada, 2009). To achieve this goal, the proposed architecture introduces the concept of a "courseware object", which is a program module that is used to implement various educational functionalities. This architecture allows for the incremental extensions of functions by adding new courseware objects. Since the existing functions are not affected, this ensures that existing content will always work properly. Following these earlier investigations, the authors designed and implemented a prototype system to investigate the feasibility of the proposed architecture and to identify the core behavior and interaction schema of courseware objects. The results from a trial showed that several learner-adaptive functionalities including the SCORM 2004 standard specifications and their extensions could be successfully implemented on the proposed architecture.
Issues with Conventional Learner-Adaptive Systems
It was common to employ a system architecture, as shown in Figure 1, that separated the content from the platform in the past evolution of learner-adaptive systems (Nakabayashi et al., 1996; Wenger, 1987). The content in this configuration consisted of learning material that was specific to a particular learning subject with a particular learning goal, and the platform implemented common learner-adaptive functionalities, which were independent of the specific learning subject or learning goal. By separating content from the platform, this configuration was intended to make it much easier to design learner-adaptive content. This was because the designer could concentrate on creating content to fulfill the learning objectives or goals without having to worry about how to implement learner-adaptive functionalities in detail.
Figure 1. Configuration for conventional learner-adaptive system
The drawback to this configuration was the lack of a framework for extending functions. Once the platform was designed and implemented, it was difficult to extend it by adding new functionalities, because existing learning content that had been designed before the platform was extended may not work properly on the extended system. Moreover, these extensions needed to be authorized as new standard specifications to achieve system interoperability, but this authorization process took a long time. It was also necessary to update existing platforms to meet the new specifications, which was also a time-consuming process. Thus, it was almost impossible to achieve both content interoperability and function extensibility in conventional learner-adaptive systems. A representative standard with specifications for learner-adaptive systems, SCORM 2004, employed the same configuration and suffered from the same lack of function extensibility.
The Proposed Architecture
To overcome the problems described in the previous section, the authors propose a new learner-adaptive system architecture that is capable of both function extensibility and system interoperability (Nakabayashi, Morimoto & Hada, 2008; Nakabayashi, Morimoto & Hada, 2009). To accomplish this, the proposed architecture introduces the concept of a "courseware object", which is a program module used to implement various educational functionalities such as learner adaptation to choose the most suitable learning material for the learner, material presentation to tailor the way the learning material is presented, and learner tracking to record the status of the learner's progress, i.e., functions usually embedded in the platform in a conventional configuration. For example, the courseware object can implement simple linear, branch, and remedial sequencing taking into account the test results, or much more sophisticated strategies such as scenario-based sequencing using a state-transition machine.
As shown in Figure 2, in the proposed architecture, the courseware object is clearly separated from the platform. It is possible to incrementally extend functions with this configuration by adding new courseware objects. Since this addition does not affect functions previously implemented with existing courseware objects, existing content always works properly. Moreover, courseware objects can be distributed with content, thus enabling existing platforms to be immediately updated for newly developed functionalities. This eliminates the long time lags that result from conducting standard authorization processes and installing platform updates.
Figure 2. Configuration of the proposed learner-adaptive system
Similar to the conventional configuration, the content consists of learning materials specific to a particular learning subject in this architecture. In addition, the content has a link to the courseware objects used to implement the learner-adaptive behavior that the content designer requires. The content designer may reuse existing courseware objects to implement his/her new content, or may ask an IT engineer to develop new ones if there is none suitable to meet his/her purpose for content design. The courseware objects may be delivered and reused with the content to allow for both system interoperability with content and function extensibility.
The role of the platform is completely different from that in the conventional configuration. Instead of implementing a particular learner-adaptive behavior, the platform coordinates the communication between courseware objects. When the learner launches the content, the platform reads it and instantiates the required courseware objects. When the learner interacts with the system, the platform forwards the information from the learner to the proper courseware objects to carry out certain learner-adaptive behaviors.
Design Issues with the Proposed Architecture
To achieve the goal of the proposed architecture, courseware objects developed by various designers at various times should be combined to work together. To meet these requirements, it is necessary to define some standards or make agreements on a communication scheme between courseware objects, the information courseware objects manage and update, and the responsibility of courseware objects.
To investigate these issues, the authors designed the system based on the following principles and assumptions. Firstly, it was assumed that the content was structured hierarchically, like a tree, because content with a hierarchical structure is widely adopted in learning materials by various standards, including AICC CMI.

Figure 3. Configuration of the proposed system treating hierarchical content

Secondly, it was assumed that courseware objects were assigned to each hierarchical node of the content, as outlined in Figure 3. A courseware object assigned to a content node is responsible for managing the learner-adaptation behavior of the sub-tree under the assigned node. In particular, according to the pedagogical strategy implemented in it, the courseware object sequences its child nodes, taking into account their learner-progress information. This makes it possible to implement different pedagogical strategies in different sub-trees. It was also assumed that communication between courseware objects was limited to parents and children. Based on this assumption, the authors attempted to define the required communication patterns between courseware objects and the interface courseware objects should provide for one another.
Implementation of the Prototype System
Based on the design principles discussed in the previous section, the authors implemented several learner-adaptive functions to further investigate the feasibility of the proposed architecture and to identify the core behavior and interaction scheme of courseware objects. One of the functions implemented was a subset of SCORM 2004 behaviors, including: continue, previous, choice, start, suspend, and resume navigation requests; default rollup behavior; the skip precondition rule; and the retry, continue, and previous post-condition rules.
Another function implemented was a sequencing function based on the state-transition machine. The following sections give details on the implementation of the prototype system.
Command execution
In SCORM 2004, the learner interacts with the system using navigation commands such as "continue" (meaning move to the next page) or "choice" (meaning jump to the specified page). In the command-processing schema that has been designed, the command from the learner is sent to the current object, or the courseware object associated with the content page currently presented to the learner, to deal with the command. If the object cannot process the command, then it forwards, or escalates, the command to its parent object in the content tree. The parent also tries to deal with the command, then it escalates the command to its parent object if it cannot process it. This is repeated until it encounters a parent node capable of dealing with the command. Figure 4 illustrates the process to execute the command. First of all, the current object receives the command. It then escalates the command to its parent to select the candidate next page from its children. If the parent cannot find a suitable child, then it escalates the command to the grandparent. The grandparent makes its children select a suitable node from their children. This recursive behavior is repeated until a suitable candidate for the next page is found. This results in a behavior that gradually expands the search space for the candidate in the content tree from the local (the smallest sub-tree containing the current object) to the global (the entire content tree). The identified node for the next page will be presented to the learner, and its associated courseware object will be the new current object.
To implement the SCORM 2004 specifications, the control modes, limit conditions, and precondition rules that affect the selection of the candidate child node are evaluated when the parent node selects the candidate child. It needs to be noted that the criteria or the strategy for selecting the child node may differ from object-to-object allowing different learner-adaptation functionalities to be implemented in different nodes of the single content tree.
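The escalation behavior described above can be sketched as a small parent-child tree in which each node either handles a command or forwards it upward. This is an illustrative sketch, not the prototype's actual code; the class and method names are ours:

```python
class CoursewareObject:
    """Tree node that handles navigation commands or escalates them."""

    def __init__(self, name, parent=None, handles=()):
        self.name = name
        self.parent = parent
        self.handles = set(handles)  # commands this object can process
        self.children = []
        if parent is not None:
            parent.children.append(self)

    def execute(self, command):
        """Process the command here, or escalate it toward the root."""
        if command in self.handles:
            return f"{self.name} handled '{command}'"
        if self.parent is None:
            return f"no handler for '{command}'"
        return self.parent.execute(command)  # escalate to the parent node

# The mid node handles 'continue'; only the root handles 'choice'.
root = CoursewareObject("root", handles=("choice",))
mid = CoursewareObject("mid", parent=root, handles=("continue",))
leaf = CoursewareObject("leaf", parent=mid)

print(leaf.execute("continue"))  # mid handled 'continue'
print(leaf.execute("choice"))    # root handled 'choice'
```

The recursion mirrors the paper's gradual expansion of the search space from the smallest sub-tree containing the current object toward the entire content tree.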
Rollup
To update the learner-progress status associated with each tree node, rollup from the current object to the root node is conducted before a command is executed. During the rollup process, the courseware object assigned to each tree node updates its learner-progress status from the learner-progress status of its child nodes. Although this is similar to the rollup behavior in SCORM 2004, all courseware objects may implement their own rollup criteria.
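One possible rollup criterion can be sketched as a bottom-up recursion over a tree of status dicts. For brevity this sketch rolls up a whole subtree rather than only the path from the current object to the root, and "completed iff all children completed" is just one example criterion, standing in for the object-specific criteria the paper describes:

```python
def rollup(node):
    """Recompute a node's progress status from its children, bottom-up.

    Each node is a dict with 'completed' (bool) and 'children' (list).
    Leaf status is taken as-is; an inner node is marked completed only
    when all of its children are — one possible rollup criterion.
    """
    if node["children"]:
        node["completed"] = all(rollup(c) for c in node["children"])
    return node["completed"]

tree = {
    "completed": False,
    "children": [
        {"completed": True, "children": []},
        {"completed": True, "children": []},
    ],
}
print(rollup(tree))  # True — both leaves are complete, so the root rolls up
```

Because each courseware object supplies its own rollup logic, the recursion's body would in practice dispatch to the object assigned to that node rather than using one fixed rule.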
Evaluation of the post-condition rules
To implement the SCORM 2004 specifications, the post-condition rules associated with each tree node, which may result in the command changing to another, are evaluated after rollup and before a command is executed. This process is similar to the evaluation behavior of post-condition rules in SCORM 2004; however, again all courseware objects may implement their own rule-evaluation criteria.
Generation of the command list
Since a courseware object may have its own unique commands, and since a command from a learner is escalated from the current object toward the root node of the content tree until a node that can handle it is encountered, the commands defined in each courseware object from the current object to the root node are collected into a list of commands that is presented to the learner. This command list is generated after the previous command has been executed.
Courseware object for learning objectives
In addition to the tree nodes, the SCORM 2004 content structure may have learning objectives, which can be created independently of the tree structure. A learning objective is an entity that holds the learner's success status as global information. In the prototype system, a learning objective is implemented as a kind of courseware object. The learner's success-status information is stored from a tree-node courseware object to a learning-objective courseware object. The stored success-status information may be read later by the other tree-node courseware objects.

Table 1 outlines the current status of the SCORM 2004 functions implemented with the SCORM 2004 courseware objects of the prototype system. Almost all the main functions of the SCORM 2004 specifications have been implemented. Functions not implemented in the prototype system include references to additional objectives other than primary objectives in the sequencing rules, rollup conditions and rollup controls, and delivery controls. These functions can rather easily be implemented later, not by modifying the communication schema described above but by modifying the SCORM 2004 courseware objects themselves. For example, complicated rollup conditions and rollup controls can be implemented within the SCORM 2004 courseware objects by adding a mechanism to interpret the condition part of the rollup rules and rollup controls in addition to the default rollup behavior that has already been implemented. This does not require any modifications to the communication schema for the rollup behavior described above. The same discussion applies to references to the additional objectives in the sequencing rules and to delivery controls. The former can be implemented by enhancing the rule-condition interpretation logic of the SCORM 2004 courseware objects, which is currently only capable of handling primary objectives.
The latter can be achieved by adding a function to check delivery control flags in the SCORM 2004 courseware objects for leaf nodes.
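The learning-objective mechanism described above amounts to a shared success-status holder that one tree node writes and others later read. A tiny sketch (the class and method names are illustrative, not the prototype's API):

```python
class LearningObjective:
    """Global success-status holder shared across tree nodes."""

    def __init__(self):
        self.satisfied = None  # unknown until some node records a result

    def write(self, satisfied: bool):
        self.satisfied = satisfied

    def read(self):
        return self.satisfied

# One node records the learner's test result; a different node reads it
# later, e.g. to decide whether a remedial branch should be skipped.
objective = LearningObjective()
objective.write(True)    # written by one tree-node courseware object
print(objective.read())  # True — read by another tree-node object
```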
Evaluation of the implementation of SCORM 2004
The prototype system was evaluated with several types of sample content to check whether the communication schema for the prototype system could correctly implement the basic SCORM 2004 sequencing functions. The most complicated sample content is given in Figure 5, with the test procedure in Table 2. The behavior of handling the post-condition rule was evaluated in Step 5, where the retry rule of node 12 was activated so that traversal from node 123 to node 121 took place despite the continue navigation command. The behavior of the command execution schema described in Subsection 5.1.1 is highlighted in Steps 6, 8, 10, and 11. In these steps, the leaf nodes receiving navigation commands such as continue or previous escalate the command to their parents. Each parent tries to find a candidate node among its descendants. If there is no proper candidate, the parent again escalates the navigation command to its own parent. This behavior works correctly in the operation steps above, resulting in successful traversal beyond the sub-trees. This indicates that the communication schema for the prototype system can mimic the behavior of the original SCORM 2004 specifications described with the complicated procedural pseudo-code.
The state-transition machine
Within the framework of communication patterns described in Subsection 5.1, a pedagogical strategy based on the state-transition machine has been implemented, in which a courseware object holds a state-transition definition that governs its sequencing behavior.
Further Issues
There are several open issues related to the design and implementation of the proposed architecture. Short-term issues are to confirm the feasibility of implementing full SCORM 2004 functions and other commonly required easy-to-understand functions such as "hint" or "remedial". Assuring interoperability with existing SCORM 2004 content as well as installing functionalities that are familiar to content designers are important steps towards the dissemination of the proposed architecture. Other issues include extending the manifest-file format defining the courseware structure. It is necessary to extend the current SCORM 2004 manifest-file format so that it is capable of assigning a courseware object to each content node.
It is also important to consider the programming and execution environment. The environment to implement the proposed architecture must have capabilities to deal with courseware objects, especially dynamic combinations of courseware objects at run time. A naive implementation is placing an execution environment in a learning management system (LMS) constructed by using a certain object-oriented language. In this case, the communication schema described in the previous section will be implemented as the method call of an object. However, since the abstract communication schema between courseware objects is standardized, it is not necessary to place these objects in one LMS. For example, a courseware object can be implemented as a Web service on a separate server. If there are courseware objects implementing large-scale simulations or adaptive testing (Wainer, 2000) with huge item pools placed on an external Web server, these courseware objects can be reused as parts of various learning content. In this case, the communication schema between the courseware objects will be implemented using a Web-service protocol. Another interesting possibility would be to implement courseware objects as widgets. A widget is a small application module running on a client terminal communicating with the Web server. It can easily be implemented with a widely used script language such as JavaScript. Developer's Toolkits are also helpful for implementing widgets equipped with certain learner-adaptive functionalities associated with a specific user interface.
In addition to the above, the framework should be discussed to deal with a common vocabulary for commands, learner progress status, and events to generalize communication between courseware objects. It will also be necessary to consider contentauthoring environments in the future using courseware objects and a repository of courseware objects.
Conclusion
The authors discussed the design and implementation of a flexible learner-adaptive architecture that is capable of extending functions. By introducing the concept of a "courseware object", which is a program module that implements various educational functionalities, the proposed architecture is capable of incrementally extending functions while maintaining the existing functionalities. A trial implementation was carried out to investigate the basic behavior and communication schema of courseware objects that implemented the basic functions of SCORM 2004 and other learner-adaptive functions. Future work includes further investigations into communication schemata between courseware objects, manifest file extensions, and execution environments.
Research on Recognition of Coal and Gangue Based on Laser Speckle Images
Coal gangue image recognition is a critical technology for achieving automatic separation in coal processing, characterized by its rapid, environmentally friendly, and energy-saving nature. However, the response characteristics of coal and gangue vary greatly under different illuminance conditions, which poses challenges to the stability of feature extraction and recognition, especially when strict illuminance requirements are necessary. This leads to fluctuating coal gangue recognition accuracy in industrial environments. To address these issues and improve the accuracy and stability of image recognition under variable illuminance conditions, we propose a novel coal gangue recognition method based on laser speckle images. Firstly, we studied the inter-class separability and intra-class compactness of the collected laser speckle images of coal and gangue by extracting gray and texture features from the laser speckle images, and analyzed the performance of laser speckle images in representing the differences between coal and gangue minerals. Subsequently, coal gangue recognition was achieved using an SVM classifier based on the extracted features from the laser speckle images. The fusion feature approach achieved a recognition accuracy of 94.4%, providing further evidence of the feasibility of this method. Lastly, we conducted a comparative experiment between natural images and laser speckle images for coal gangue recognition using the same features. The average accuracy of coal gangue laser speckle image recognition under various lighting conditions is 96.7%, with a standard deviation of 1.7%. This significantly surpasses the recognition accuracy obtained from natural coal and gangue images. The results showed that the proposed laser speckle image features facilitate more stable coal gangue recognition under varying illumination, providing a new, reliable method for achieving accurate classification of coal and gangue in the industrial environment of mines.
Introduction
Coal is an important primary energy source that plays a significant role in supporting the steady and rapid development of the global economy and society. During coal mining, separating coal from gangue (the waste mineral) is a critical processing step that promotes the efficient and comprehensive utilization of coal. Traditional methods for coal and gangue separation involve manual sorting, where operators rely on their subjective judgment of the appearance, density, and other characteristics of coal and gangue for identification and separation. This method heavily relies on the subjective judgment of operators, which introduces variability and potential errors into the separation process. Moreover, manual sorting is labor-intensive and time-consuming, resulting in increased costs and reduced efficiency. Mechanical separation techniques, such as the jigging process and heavy-medium separation, are also used in industrial production [1,2]. These methods achieve efficient separation of coal and gangue through mechanical devices and physical principles. They are suitable for large-scale coal processing and mining production, meeting the requirements of high output and fast processing. However, mechanical separation techniques typically involve heavy machinery, which can be costly to install, operate, and maintain. Additionally, the operation of such equipment can lead to environmental concerns, including noise pollution, water contamination, and potential environmental damage [3]. These issues indicate obvious shortcomings and limitations in traditional coal gangue separation technology.
With the rapid development of sensing and computer technologies, exploring more intelligent and automated technical solutions for coal gangue separation has become a hot topic in the field of coal mining [4,5]. In recent years, many scholars have conducted extensive research on coal gangue sorting based on image recognition technology [6][7][8][9][10][11]. Early image recognition methods were based on differences in the grayscale and texture of surface natural images of coal and gangue, which were collected using CCD cameras. These methods used thresholds to distinguish coal and gangue by extracting statistical information about grayscale and texture as image features, achieving high accuracy in coal gangue recognition in laboratory settings [12][13][14][15][16][17]. However, it has gradually been found difficult to generalize these methods to practical industrial production, partly due to the stability issues associated with these surface image features. Wang et al. [18] investigated the impact of illumination levels in the image-capture environment on the accuracy of coal gangue recognition in natural images. Further research into image recognition revealed that the feature differences between coal and gangue in natural images change significantly with variation in illumination, and that the pattern of these changes also differs markedly between the two. Li et al. [19] extracted four grayscale parameters and four texture parameters from natural pictures of coal and gangue collected under illuminance conditions ranging from 2,000 to 7,500 lx. Through normalization and comparison of these parameters, the influence of external lighting factors on the natural image features of coal and gangue was revealed. These studies indicate that natural image features are highly sensitive to changes in lighting conditions. Consequently, frequent retraining and adjustment of the recognition model are required at different mining sites or under varying environmental lighting to ensure accurate coal gangue recognition. This significantly increases the complexity and cost of implementing coal gangue recognition in the field, resulting in reduced system stability in practical applications.
To improve the reliability of feature extraction while accounting for illumination, an increasing number of advanced sorting methods using different light sources have been proposed and tested for automatic or intelligent recognition of coal and gangue, including infrared detection [20,21], X-ray detection [22,23], laser scanning [24], spectroscopic detection [25,26], and others. These technologies have, to varying degrees, improved some of the problems associated with current image recognition methods, but new shortcomings also exist. While infrared detection methods can address the influence of lighting conditions on image features, they are highly sensitive to environmental temperature and humidity, which may lead to inaccuracies in identification. X-ray detection methods have been widely applied in coal and gangue identification, but they require radiation sources, which may pose radiation safety concerns. Laser scanning methods emit laser beams to scan and measure coal and gangue, extracting shape and surface features for rapid identification. However, laser scanning is sensitive to factors such as occlusion, scattering, and reflection, which can introduce noise and incompleteness into the data, requiring complex algorithms for data processing and restoration. Spectral detection methods use the reflective and absorption characteristics of coal and gangue in different wavelength bands for identification, offering significant advantages in distinguishing coal and gangue with large compositional differences. However, spectral detection equipment is costly and not suitable for large-scale applications. To enhance coal and gangue image recognition in practical coal mining and processing operations, further research and technological improvements are needed to overcome the limitations of these methods and achieve more accurate, reliable, and economically viable coal and gangue identification techniques.
Since the advent of the laser technique, speckle generated by coherent lasers has received great attention. Many scholars have studied the statistical characteristics of speckle patterns and proposed techniques for measuring surface roughness [27][28][29][30][31][32]. Due to the differences in physicochemical structure between coal and gangue, significant differences exist in the roughness and reflective properties of their mineral surfaces [33]. Laser speckle images capture abundant information on the surface characteristics of coal and gangue minerals. Compared to other light sources, lasers possess strong anti-interference, directivity, and coherence. Thus, laser speckle images are less affected by illumination factors under complex industrial conditions, and the obtained image information is more reliable, without radiation hazards. Therefore, using laser speckle imaging to identify coal and gangue has great potential.
This article proposes a novel method for coal gangue recognition based on laser speckle imaging and explores its feasibility and stability for distinguishing coal from gangue. First, the principle of laser speckle is analyzed, and specific feature extraction techniques, covering grayscale and texture analysis, are introduced. The inter-class separableness and intra-class compactness of the collected laser speckle images of coal and gangue are evaluated using boxplots of the extracted features. The feasibility of the approach is confirmed by an SVM classifier that recognizes coal and gangue from the features extracted from their laser speckle images. Furthermore, to assess the stability of the proposed method under varying illuminance levels, comparative experiments were conducted against a recognition method based on natural-image features; these experiments simulate the weak and unstable lighting conditions typically encountered at production sites. Six sets of coal gangue natural images and coal gangue laser speckle images were collected at illuminance levels of 100, 288, 442, 672, 770, and 912 lux, and the recognition accuracy of the two image types was compared at each level to evaluate the stability of the new approach with respect to lighting conditions during image acquisition.
Laser Speckle Theory and Surface Characteristic Analysis of Coal Gangue
When the surface of coal or gangue is irradiated by single-wavelength light, molecular orientation, density fluctuations, and interactions with matter in the surface medium cause the light to deviate from its original propagation direction; this is light scattering. When coherent light is incident on the surface of a scattering medium, the scattering direction and intensity are influenced by factors such as particle size, shape, composition, and surface roughness, resulting in random scattering patterns. The mutual interference of these scattered light waves gives rise to a randomly distributed pattern of speckles, known as laser speckle [34]. The observed scattering pattern carries detailed information about the surface roughness and morphology of the scattering medium, providing valuable insight into the distinguishing characteristics of coal and gangue particles. The laser speckle imaging principle is shown in Figure 1.
The microscopic surface of coal and gangue is composed of a large number of mutually unrelated scattering points, and each point area is a diffraction surface element. When the laser beam irradiates a diffraction surface element, a complex amplitude superposition is formed on the observation plane. According to the Fresnel principle [35], the complex amplitude at a point Q on the observation plane is

U(Q) = \iint_{\varepsilon} U_0(P) \, h(P, Q) \, ds    (1)

where \varepsilon is the coherent illumination region, U_0(P) is the complex amplitude at any point P on the \varepsilon plane, s is a diffraction surface element, and h(P, Q) is the impulse response from the diffraction plane to the observation plane.

If the aperture plane is the x_0 y_0 plane and the observation point lies in the x y plane, Equation (1) can be expressed as follows:

U(x, y) = \iint U_0(x_0, y_0) \, h(x_0, y_0; x, y) \, dx_0 \, dy_0    (2)

With the Fresnel approximation, the impulse response can be approximately expressed as:

h(x_0, y_0; x, y) = \frac{\exp(jkz)}{j\lambda z} \exp\left\{ \frac{jk}{2z} \left[ (x - x_0)^2 + (y - y_0)^2 \right] \right\}    (3)

where z is the distance from the diffraction surface element to the observation plane. Substituting Equation (3) into Equation (2) gives

U(x, y) = \frac{\exp(jkz)}{j\lambda z} \iint U_0(x_0, y_0) \exp\left\{ \frac{jk}{2z} \left[ (x - x_0)^2 + (y - y_0)^2 \right] \right\} dx_0 \, dy_0    (4)

The imaging system from the diffraction surface element to the observation plane is regarded as a linear space-invariant system, and the light intensity at a point on the observation plane is

I(x, y) = |U(x, y)|^2    (5)

It can be seen from the above that the speckle intensity distribution at each point of the observation plane is related to the distance z from the diffraction surface element to the observation plane, and z is affected by the surface roughness. Samples with different surface roughness therefore form laser speckle images with different speckle distributions.
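The propagation model above can be sketched numerically. The following is an illustrative simulation (not from the paper): the rough surface is modeled as a uniform random phase screen (the fully developed speckle assumption), and a free-space angular-spectrum transfer function, which reduces to the Fresnel kernel of Equation (3) for small angles, stands in for the impulse response h. All parameter values besides the 638 nm wavelength and the 460 mm working distance are assumptions.

```python
import numpy as np

def simulate_speckle(n=256, wavelength=638e-9, z=0.46, pixel=5e-6, seed=0):
    """Propagate a unit-amplitude field with a random phase screen
    (a crude model of a rough coal/gangue surface) over distance z and
    return the observed speckle intensity I(x, y) = |U(x, y)|**2."""
    rng = np.random.default_rng(seed)
    # Random phase delays imposed by surface height fluctuations.
    u0 = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, (n, n)))

    # Free-space transfer function (angular spectrum form; equivalent
    # to the Fresnel impulse response of Eq. (3) for small angles).
    fx = np.fft.fftfreq(n, d=pixel)
    fx2, fy2 = np.meshgrid(fx**2, fx**2)
    arg = np.maximum(0.0, 1.0 - wavelength**2 * (fx2 + fy2))
    h_f = np.exp(1j * 2.0 * np.pi / wavelength * z * np.sqrt(arg))

    u = np.fft.ifft2(np.fft.fft2(u0) * h_f)  # Eq. (2) as a convolution
    return np.abs(u) ** 2                    # Eq. (5)

if __name__ == "__main__":
    intensity = simulate_speckle()
    # Fully developed speckle has contrast (std/mean) close to 1.
    print(f"speckle contrast = {intensity.std() / intensity.mean():.2f}")
```

A smoother surface would be modeled by a phase screen with smaller variance, which lowers the speckle contrast — consistent with the flatter, dimmer gangue speckle described below.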
Coal and gangue have different surface morphologies; scanning electron microscope (SEM) images of their microscopic surfaces are shown in Figure 2. Coal is a porous rock with a rough, glossy surface [36,37]; the molecular structure of its micro-surface is large and the molecular packing is relatively loose, as shown in Figure 2a. Gangue, with a glossy but dim surface, has a stable spatial structure; its minerals are closely intergrown, the micro-surface is fine-grained, and it shows a flat structure in the SEM image, as shown in Figure 2b.
The laser speckle image of coal gangue is formed by irradiating the surface of coal and gangue with coherent light, as shown in Figure 3. Because the molecular structure of coal is large and loosely arranged, the coal surface is rougher and its profile scatters coherent light more strongly; consequently, as shown in Figure 3a, the laser speckle contrast increases, the edges of the speckle intensity distribution are sharp, and the distribution of flares is more obvious. The surface of gangue is flat, with finer particles and small height fluctuations. In Figure 3b, the laser speckle images of gangue show a dense distribution with fewer bright spots, and the image brightness is weaker than that of the coal speckle image. Statistical parameters of laser speckle images are used to quantitatively evaluate the distribution of light intensity in speckle imaging. Based on the statistical law of light intensity variation, the gray histogram is used to extract gray features of the speckle images, and texture features are extracted by typical texture feature methods, such as the gray level co-occurrence matrix [17,38], wavelet transform [39], Tamura [40], and fractal dimension [41]. According to a large number of previous experiments, the gray level co-occurrence matrix performs best in describing the texture variation of coal gangue laser speckle.
Feature Extraction

Grayscale Features
The gray histogram [11] of an image describes the overall statistical characteristics of the gray level distribution, namely the speckle distribution characteristics in the coal gangue laser speckle image. The gray histogram is expressed as Equation (6):

p(b) = \frac{n(b)}{M \times N}    (6)

where b is a gray level in a coal gangue laser speckle image, ranging from 0 to 255; n(b) is the number of pixels with gray level b; and M \times N is the pixel area of the coal gangue laser speckle image.
Four statistical characteristics are extracted from the distribution of the gray histogram, namely the gray mean (m), gray standard deviation (σ), skewness (s), and kurtosis (k). The calculation formulas are as follows:

m = \sum_{b=0}^{255} b \, p(b)    (7)

\sigma = \left[ \sum_{b=0}^{255} (b - m)^2 \, p(b) \right]^{1/2}    (8)

s = \frac{1}{\sigma^3} \sum_{b=0}^{255} (b - m)^3 \, p(b)    (9)

k = \frac{1}{\sigma^4} \sum_{b=0}^{255} (b - m)^4 \, p(b)    (10)
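The histogram statistics of Equations (6)-(10) can be sketched as follows. This is an illustrative implementation; the function name is my own, and it assumes an 8-bit grayscale input.

```python
import numpy as np

def gray_features(img):
    """Gray histogram statistics of Eqs. (6)-(10): mean, standard
    deviation, skewness, and kurtosis of the gray-level distribution."""
    img = np.asarray(img, dtype=np.uint8)
    levels = np.arange(256)
    # Eq. (6): normalized histogram p(b) = n(b) / (M * N)
    p = np.bincount(img.ravel(), minlength=256) / img.size
    m = np.sum(levels * p)                              # Eq. (7)
    sigma = np.sqrt(np.sum((levels - m) ** 2 * p))      # Eq. (8)
    s = np.sum((levels - m) ** 3 * p) / sigma ** 3      # Eq. (9)
    k = np.sum((levels - m) ** 4 * p) / sigma ** 4      # Eq. (10)
    return m, sigma, s, k
```

On a uniformly random 8-bit image these return a mean near 127.5, near-zero skewness, and kurtosis near 1.8, which makes a convenient sanity check.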
Texture Features
The gray level co-occurrence matrix (GLCM) is defined by the joint probability density of pixels at two positions; it reflects not only the brightness distribution but also the spatial distribution of positions between pixels of the same brightness. In this paper, four independent texture features based on the GLCM are extracted: angular second moment (ASM), entropy (ENT), contrast (CON), and inverse difference moment (IDM).
ASM reflects the uniformity of an image's gray distribution and its texture fineness; a higher value indicates a more uniform and coarser texture. In the identification of coal and gangue, their roughness can be differentiated by comparing angular second moments. ASM is calculated as Equation (11):

ASM = \sum_{j} \sum_{k} P(j, k)^2    (11)

ENT reflects the randomness of image information and is maximal when all values in the co-occurrence matrix are equal, i.e., when pixel values exhibit maximum randomness. By comparing the entropy of coal and gangue images, their textural complexity can be differentiated. ENT is defined as Equation (12):

ENT = -\sum_{j} \sum_{k} P(j, k) \log P(j, k)    (12)

CON reflects the clarity of the image and the depth of its texture; in coal gangue identification, it captures the clarity of mineral textures. CON is defined as Equation (13):

CON = \sum_{j} \sum_{k} (j - k)^2 \, P(j, k)    (13)

IDM reflects the degree of local variation of the image texture and is defined as Equation (14):

IDM = \sum_{j} \sum_{k} \frac{P(j, k)}{1 + (j - k)^2}    (14)
where the matrix P(j, k | d, α) gives the number of pixel pairs (j, k) at adjacent interval d in the α direction. In this experiment, α is 45°.
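The four GLCM features can be sketched for the 45° direction used here. This is an illustrative implementation, not the paper's code; the quantization to 16 gray levels and the distance d = 1 default are my assumptions, since the paper does not state them.

```python
import numpy as np

def glcm_features(img, d=1, levels=16):
    """GLCM texture features of Eqs. (11)-(14), computed for the
    45-degree direction (pixel pairs offset one step up-right)."""
    img = np.asarray(img)
    # Quantize gray values to shrink the co-occurrence matrix.
    q = np.clip((img.astype(np.float64) * levels / 256).astype(np.intp),
                0, levels - 1)
    # Pixel pairs along the 45-degree direction at distance d.
    a = q[d:, :-d].ravel()
    b = q[:-d, d:].ravel()
    p = np.zeros((levels, levels))
    np.add.at(p, (a, b), 1.0)
    p /= p.sum()                                   # joint probability P(j, k)
    j, k = np.indices(p.shape)
    asm = np.sum(p ** 2)                           # Eq. (11)
    ent = -np.sum(p[p > 0] * np.log(p[p > 0]))     # Eq. (12)
    con = np.sum((j - k) ** 2 * p)                 # Eq. (13)
    idm = np.sum(p / (1.0 + (j - k) ** 2))         # Eq. (14)
    return asm, ent, con, idm
```

A perfectly flat image gives ASM = IDM = 1 and ENT = CON = 0, while a noisy image raises ENT and CON and lowers ASM and IDM, matching the coal-versus-gangue contrast described above.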
Experimental Instrument
A coal gangue laser speckle observation instrument was set up, as depicted in Figure 4. The instrument consists of six main parts: a semiconductor laser, a collimating and beam expanding system, an industrial camera, a carrier platform, a computer, and an adjustable LED light source. The semiconductor laser source (LSR638NL), with a wavelength of 638 nm and a power of 15.5 mW, provides stable coherent light at a wavelength suitable for the scattering experiments. To focus and direct the laser beam onto the sample surface, a collimating and beam expanding system (GCO-2503) was employed; it allows continuous expansion of the spot diameter on the surface of the coal gangue samples up to 36 mm and ensures proper illumination and collection of the scattered light for subsequent analysis. The scattered light was captured by an industrial camera (Aca2440-75uc) with a resolution of 2448 × 2048 pixels and converted into a digital signal for further processing and analysis.
During the acquisition of mineral laser speckle images, the industrial camera was fixed on the carrier platform perpendicular to the surface of the coal gangue samples at 460 mm. To ensure high contrast in the scattered speckle images, the exposure time of the industrial camera was set to 3000 µs. The laser source was positioned at an angle of 15° relative to the direction of the industrial camera, with the collimating and beam expanding system placed directly behind it. The coal gangue samples were placed on the carrier platform, and the coherent beam was irradiated onto the sample surface through the beam system. The industrial camera captured the laser speckle patterns with varying intensity distributions on the surface of the coal gangue samples. To control the illuminance environment, an adjustable LED light source was used to collect images under different lighting conditions.
The ROI Extraction of the Coal Gangue Laser Speckle Images
The background areas of the original coal gangue laser speckle images collected in the experiment are large. The speckle regions of interest were therefore extracted from the laser speckle images to improve the speed and accuracy of feature extraction.
The propagation of coherent beams in air is affected by suspended particles, which introduce shot noise. In this study, Gaussian filtering was used to remove such noise from the laser speckle images during acquisition. The Gaussian filter convolves the image with a Gaussian kernel, which assigns weights to neighboring pixels based on their distance from the center pixel; these weights are determined by a Gaussian distribution with a specified standard deviation. By adjusting the standard deviation, the denoising effect can be tuned to the specific noise characteristics and the desired image quality. Gaussian filtering effectively balanced noise reduction and the preservation of important image details, improving the quality of the collected laser speckle images. After graying and Gaussian filtering, the maximum between-class variance (Otsu) [42] adaptive threshold algorithm was used to segment the target area of the coal gangue laser speckle, and a region of interest (ROI) of 250 × 250 pixels was extracted by center clipping. The process of speckle region of interest extraction is illustrated in Figure 5.
Feature Analysis of Coal Gangue Laser Speckle Images
In the experiment, 120 laser speckle images of coal and gangue were collected under natural illuminance, and their gray and texture features were extracted as described in Sections 2.2.1 and 2.2.2. To assess how well different features distinguish coal from gangue, we analyzed their inter-class separableness and intra-class compactness. Inter-class separableness measures the differentiation between classes: the larger the feature differences between classes, the greater the separableness. Intra-class compactness represents the similarity of samples within a class: the higher the similarity among samples of the same class, the greater the compactness. We generated boxplots of these laser speckle image features of coal gangue; the advantage of boxplots is that they visually illustrate the spatial distribution of the features.
The boxplots of the four gray feature parameters of the coal gangue laser speckle images are shown in Figure 6. Figure 6a shows that the gray mean values of coal and gangue are roughly the same, because the gray distributions of the images were basically the same after the coal and gangue were irradiated by the laser. As shown in Figure 6b, the gray standard deviation distribution of coal is more dispersed than that of gangue, and its average level is higher; owing to the large surface roughness of coal, the flares in its speckle images are more numerous and more unevenly distributed, so the gray dispersion is large. As shown in Figure 6c, the average gray skewness of coal is higher than that of gangue, and the speckle distribution of gangue laser speckle images is smoother and more uniform. As shown in Figure 6d, the average gray kurtosis of coal is higher than that of gangue, but some data ranges overlap, indicating that the gray distribution of coal gangue laser speckle images is concentrated near the average value while the kurtosis of coal is widely dispersed. Some feature components show individual outliers in the boxplots, such as the gray mean of coal; since the causes of these outliers are unknown, their validity cannot be ruled out, and they were retained. The same applies to the texture features below.
of coal is higher than that of gangue, but some data ranges overlap, indicating that the gray distribution of coal gangue laser speckle images is concentrated near the average value, but the dispersion of coal kurtosis is large, and the dispersion of the gray data distribution is large.Some feature components have individual abnormal values in the boxplots, such as the gray mean of coal, and the rationality of their existence is not ruled out without knowing the reasons for the abnormal values.This is also applicable to the following texture feature.The box diagram of the four texture feature parameters of the coal gangue laser speckle images is shown in Figure 7.As shown in Figure 7a, the average value of the angular second moment of gangue is higher than that of coal, which reflects that the scattering degree of coal and gangue to a laser is different, and the difference between the samples' angular second moment feature classes is large.After laser speckle irradiation, there were regular speckle distributions on the surface of gangue, and the large value of the angular second moment reflects the regular texture change of the gangue surface.The angular second moment value of the coal laser speckle image is low, and the distribution range is small.After laser speckle irradiation, an obvious uneven distribution of bright spots appears on the coal, the local gray change on the surface is obvious, the speckle distribution is dispersed, and the texture change is irregular.As shown in Figure 7b, the laser speckle image entropy of coal is large, the laser speckle image entropy of gangue is small, and the distribution range of the two is stable.The bright spots on the coal surface are randomly distributed.After laser irradiation, the randomly distributed high bright spots will appear on the speckle image, so the surface of the coal block is more complex, and the surface of the gangue is simpler [43].As shown in Figure 7c, the average contrast of coal is higher than that 
of gangue, and the data dispersion is high, because the surface gully of coal is deeper than that of gangue, and the surface contour of different coal blocks has strong laser scattering, which enhances the contrast of the laser speckle.As shown in Figure 7d, the inverse difference moment value of the laser speckle images of gangue is larger, and the local change in gangue surface is slower.After laser irradiation, the images' speckle distribution was more regular, and the speckle change rule was strong.However, the inverse difference moment of the coal laser speckle image is small, the gray change is uneven in different regions, and the local change of the surface is more irregular.The box diagram of the four texture feature parameters of the coal gangue laser speckle images is shown in Figure 7.As shown in Figure 7a, the average value of the angular second moment of gangue is higher than that of coal, which reflects that the scattering degree of coal and gangue to a laser is different, and the difference between the samples' angular second moment feature classes is large.After laser speckle irradiation, there were regular speckle distributions on the surface of gangue, and the large value of the angular second moment reflects the regular texture change of the gangue surface.The angular second moment value of the coal laser speckle image is low, and the distribution range is small.After laser speckle irradiation, an obvious uneven distribution of bright spots appears on the coal, the local gray change on the surface is obvious, the speckle distribution is dispersed, and the texture change is irregular.As shown in Figure 7b, the laser speckle image entropy of coal is large, the laser speckle image entropy of gangue is small, and the distribution range of the two is stable.The bright spots on the coal surface are randomly distributed.After laser irradiation, the randomly distributed high bright spots will appear on the speckle image, so the surface of the coal block 
is more complex, and the surface of the gangue is simpler [43]. As shown in Figure 7c, the average contrast of coal is higher than that of gangue, and the data dispersion is high, because the surface gully of coal is deeper than that of gangue, and the surface contour of different coal blocks has strong laser scattering, which enhances the contrast of the laser speckle. As shown in Figure 7d, the inverse difference moment value of the laser speckle images of gangue is larger, and the local change in gangue surface is slower. After laser irradiation, the images' speckle distribution was more regular, and the speckle change rule was strong. However, the inverse difference moment of the coal laser speckle image is small, the gray change is uneven in different regions, and the local change of the surface is more irregular. The median lines of coal laser speckle image features (except for gray mean) are observed to lie outside the box of corresponding gangue features. This observation suggests a significant difference in the central tendencies of these features between coal and gangue samples. Such findings provide evidence for the strong inter-class separability of the two categories based on the analyzed features. Furthermore, the texture features outperformed the gray features, as evidenced by the more pronounced spacing between the feature boxes of the two minerals. Moreover, the box lengths representing the interquartile ranges can indicate the degree to which a feature distribution is stretched or compressed. Since the image features vary significantly in magnitude, we conducted scale standardization on the interquartile ranges to facilitate the comparative analysis of all features in capturing the intra-class compactness of minerals. The standardized outcomes are depicted in Figure 8. The gray mean, gray deviation, angular second moment, entropy, and inverse difference moment demonstrate smaller scale ranges, indicating their significant capability in reflecting the intra-class compactness of minerals. Conversely, gray kurtosis exhibits the largest scale range, implying a weaker representation of internal similarities of the two mineral samples. Additionally, the blue curve encompasses a smaller area, indicating that the extracted laser speckle image features possess a superior ability to reflect the internal similarity within the gangue samples compared to the coal samples.
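The gray and texture features discussed above can be computed from a gray-level co-occurrence matrix (GLCM). The following is a minimal NumPy sketch, not the authors' implementation; the quantization level and pixel offset are assumptions, since the paper does not specify them:

```python
import numpy as np

def glcm(img, levels=8, dx=1, dy=0):
    """Normalized gray-level co-occurrence matrix for a single pixel offset."""
    h, w = img.shape
    m = np.zeros((levels, levels), dtype=float)
    for y in range(h - dy):
        for x in range(w - dx):
            m[img[y, x], img[y + dy, x + dx]] += 1
    return m / m.sum()

def texture_features(img, levels=8):
    """Gray features (mean, std) plus GLCM texture features used in the paper."""
    p = glcm(img, levels)
    i, j = np.indices(p.shape)
    asm = np.sum(p ** 2)                          # angular second moment
    ent = -np.sum(p[p > 0] * np.log2(p[p > 0]))   # entropy (ENT)
    idm = np.sum(p / (1.0 + (i - j) ** 2))        # inverse difference moment (IDM)
    con = np.sum(p * (i - j) ** 2)                # contrast
    return {"mean": img.mean(), "std": img.std(),
            "ASM": asm, "ENT": ent, "IDM": idm, "CON": con}

# Sanity check: a perfectly uniform patch has maximal ASM/IDM and zero entropy/contrast.
flat = np.zeros((8, 8), dtype=int)
f = texture_features(flat)
```

A rougher surface (like coal's) spreads the GLCM mass off the diagonal, raising contrast and entropy and lowering IDM, which matches the trends described for Figure 7.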
Verification of Coal Gangue Recognition Based on Laser Speckle Image
Using the aforementioned features of coal gangue laser speckle images as input, coal gangue recognition was realized based on the SVM recognition model. SVM [44,45], which constructs an optimal hyper-plane in high-dimensional or infinite-dimensional space, is a supervised learning model that improves the generalization ability of a learning machine by minimizing structural risk and empirical risk while maximizing the confidence range. This allows for the derivation of effective statistical rules even with limited statistical samples. SVM simplifies classification and regression problems, particularly in small-sample scenarios. In this study, the SVM classifier is employed to categorize the 240 coal gangue laser speckle images collected from the aforementioned experiments. The classification is based on features derived from the gray-level co-occurrence matrix. For the SVM classifier, a linear kernel function was used, and the penalty factor C was set to 1. The database was divided into training sets (comprising 168 samples) and test sets (comprising 72 samples). The recognition accuracy of the coal gangue laser speckle images is presented in Table 1. Table 1 illustrates significant variations in the accuracy of SVM recognition models constructed using individual features. Specifically, the ENT (entropy) and IDM (inverse difference moment) features demonstrate higher accuracy rates of 93.1% and 87.5%, respectively. These findings underscore the importance of the entropy and inverse difference moment features in the recognition of coal gangue images. Conversely, the m (mean) and k (kurtosis) features exhibit lower accuracy rates of 52.8% and 68.6%, respectively. This can be attributed to the weak performance of the m feature in terms of inter-class separability and the k feature in terms of intra-class compactness. Overall, the texture features of laser speckle images outperformed the grayscale features, with the former exhibiting an average accuracy rate 14.8% higher than the latter. By utilizing the fusion of
all features extracted from coal and gangue laser speckle images, the SVM recognition model achieved a recognition accuracy of 94.4%, surpassing the accuracy obtained by using any individual feature alone. These results highlight the complementary information provided by different features and the improved classification results achieved through feature fusion.
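The recognition pipeline above — a linear-kernel SVM with C = 1 and a 168/72 train/test split over 240 feature vectors — can be sketched with scikit-learn. The feature values below are synthetic stand-ins for the four gray plus four texture features, not the actual coal/gangue measurements:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic 8-dimensional "feature vectors" for two well-separated classes,
# standing in for the gray + GLCM texture features of each image.
coal = rng.normal(loc=1.0, scale=0.2, size=(120, 8))
gangue = rng.normal(loc=-1.0, scale=0.2, size=(120, 8))
X = np.vstack([coal, gangue])
y = np.array([1] * 120 + [0] * 120)

# 240 samples split into 168 training and 72 test samples, as in the paper.
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=72, random_state=0, stratify=y)

clf = SVC(kernel="linear", C=1.0).fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
```

On these trivially separable synthetic data the accuracy is near perfect; the paper's 94.4% fused-feature result reflects the harder real distributions.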
Identification Stability of Coal Gangue Laser Speckle Images
The identification feasibility of coal gangue laser speckle images was explored in this study. However, only one kind of illuminance was studied, whereas the illuminance environment of the actual coal caving site is weak and unstable, which seriously affects feature extraction and coal gangue recognition accuracy. Therefore, in order to verify the role of the research method in this paper in solving this problem, it will be necessary in the future to collect images of different minerals under more kinds of illuminance to assess whether the proposed research method is universal. In order to further discuss and verify the stability of coal gangue laser speckle image recognition to illuminance, an image acquisition system with controllable illuminance was established to collect the coal gangue natural images and coal gangue laser speckle images, and each group was illuminated with an illuminance of 100, 288, 442, 672, 770, and 912 lx. The gray and texture features of the coal gangue natural images and coal gangue laser speckle images under different illuminances were extracted, and the training set and test set were divided according to 7:3; the recognition accuracy of the two different images with different illuminances is shown in Figure 9. It can be observed from Figure 9 that the accuracy of coal gangue laser speckle image recognition based on gray features and texture features remains relatively stable under different lighting conditions. The average accuracy of coal gangue laser speckle image recognition under various lighting conditions is 96.7%, with a standard deviation of the recognition accuracy of 1.7%. In comparison, the average accuracy of coal gangue recognition based on natural images is 78.7%, with a standard deviation of the recognition accuracy of 2.6%, and the highest recognition rate achieved is 81.9%. This indicates that natural images exhibit larger fluctuations in recognition accuracy with changes in lighting conditions. The experimental results demonstrate that the proposed coal gangue recognition method based on laser speckle images achieves higher and more stable recognition accuracy under varying lighting conditions to a certain extent.
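The stability comparison in Figure 9 reduces to the mean and standard deviation of accuracy across the six illuminance levels. A minimal sketch follows; the per-illuminance accuracies are illustrative placeholders chosen to be consistent with the reported summary statistics, not the actual measurements:

```python
# Illustrative per-illuminance accuracies (%) at 100, 288, 442, 672, 770, 912 lx.
speckle = [98.7, 97.2, 96.5, 95.1, 96.0, 96.7]   # laser speckle images (assumed values)
natural = [81.9, 80.1, 78.0, 75.4, 77.2, 79.6]   # natural images (assumed values)

def mean_std(xs):
    """Population mean and standard deviation of a list of accuracies."""
    m = sum(xs) / len(xs)
    var = sum((x - m) ** 2 for x in xs) / len(xs)
    return m, var ** 0.5

m_speckle, s_speckle = mean_std(speckle)
m_natural, s_natural = mean_std(natural)
# A higher mean and smaller standard deviation indicate more accurate and
# more stable recognition across lighting conditions.
```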
Conclusions
This paper presents a novel method for identifying coal and gangue under different lighting conditions using laser speckle imaging technology. The analysis of four gray features and four texture features confirms that laser speckle images effectively demonstrate the differences in the distribution of these minerals. The extracted texture features show better inter-class separability and intra-class compactness compared to the gray features. Using a single texture feature, the SVM model achieves an average recognition accuracy of 86.1% for coal and gangue, outperforming the accuracy of 71.3% achieved by a single gray feature. By fusing all the extracted features, the model's recognition capability is positively optimized, resulting in an increased accuracy of 94.4%. Furthermore, the proposed method achieves an average recognition accuracy of 96.7% under different lighting conditions, with a maximum accuracy of 98.7%. These results significantly surpass the recognition accuracy obtained from natural coal and gangue images using the same features. The method also exhibits good stability in coal and gangue recognition with a smaller standard deviation of recognition accuracy.
Based on the above aspects, the proposed method shows great promise for coal and gangue identification in practical production applications, particularly under challenging lighting conditions. It can also be extended to address other image-related tasks in coal mining processes conducted in low-light and unstable environments. For instance, it can be employed for identifying equipment failures in underground mines and detecting and measuring deformations in roadway structures. Furthermore, this paper only focuses on training models using the eight typical coal and gangue image features. Future research can explore feature design methods to enhance the performance of the proposed method.
For example, researchers can incorporate morphological features to further explore the performance of local texture information in laser speckle for reflecting mineral categories. Additionally, considering the impact of model parameters and structure on recognition accuracy, research on model optimization methods is necessary to improve recognition accuracy and stability.
Figure 8.
Figure 8. The standardized interquartile ranges of coal gangue laser speckle image features.
Figure 9 .
Figure 9. The recognition accuracy with different illuminances.
Table 1 .
Recognition accuracy of coal gangue laser speckle images.
The Estonian H 1 N 1 influenza 2009 outbreak was highly underestimated
The H1N1 influenza strain Mexico 2009 (H1N1pandemic09) led to mild symptoms (with no or low fever) in Estonia during the 2009–2010 outbreak. Due to the lack of clinical signs, it was difficult to estimate the real spreading of this influenza virus in Estonia and no cases of H1N1 influenza were officially registered in animals either. We used an ELISA method to screen blood sample collections for the presence of anti-H1N1 and anti-H3N2 antibodies. All sera were also tested with the hemagglutination inhibition (HI) assay. Out of the 123 samples from human patients, 23 (i.e. 18.7%) were seropositive for the H1N1pandemic09 virus. In addition, blood samples from six persons were positive for both H1N1 and H3N2 viruses, while according to the data from the Estonian Health Board, people aged 15–65 had a general disease rate of around 3.9%. Almost all of the tested animals from two herds (out of four studied) were seropositive for H1N1pandemic09. The seven HA protein sequences isolated from Estonia were aligned with a consensus sequence of the pandemic H1N1 HA sequences from Mexico using ClustalW, and 12 amino acids substitutions were found.
In Europe, the H1N1 influenza strain Mexico 2009 (H1N1pandemic09) [1], also known as 'swine flu', appeared to be a mild to moderate disease affecting preferentially school-age children. Elderly adults were underrepresented in severe cases [2][3][4][5][6]. However, the true proportion of infected persons could not be well assessed due to the lack of serological evidence of asymptomatic cases. Asymptomatic and mild cases are missed by current reporting techniques of influenza and only a few studies provide assessment of seroprevalence during an epidemic [3,7].
Also the H1N1pandemic09 generally led to only mild symptoms in Estonia during the 2009-2010 outbreak. Hence, due to the lack of clinical signs, it was difficult to estimate the real spreading of this influenza virus in Estonia. No cases of H1N1 influenza were officially registered in animals either during this period. However, even a mild H1N1 influenza infection would afford a good level of protection [8,9]. The H1N1 vaccine became available only at the end of the outbreak. Consequently, only 13.4% of the persons belonging to high-risk groups and about 2.7% of the whole Estonian population were vaccinated [10].
The aim of this study was to assess how well the spread of H1N1pandemic09 during the 2009-2010 outbreak in Estonia was estimated. Our hypothesis was that the outbreak was highly underestimated, which would deserve a definitive epidemiologic demonstration. The potential consequences of underestimation are hereby discussed.
MATERIALS AND METHODS
To provide an easy way to test the H1N1pandemic09 influenza virus (IV) seropositivity, we designed an ELISA test that identifies the specific reactivity to two different H1N1 viruses and to one H3N2 virus. To get the first insight into the real spread of the H1N1 virus in the Estonian population during this outbreak, we used this ELISA test on serum samples collected from 123 patients, mainly voluntary blood donors, during spring 2010. The same test was also carried out with samples from 95 pigs collected in four different herds. All samples were tested also with the hemagglutination inhibition (HI) assay.
Blood samples
Human blood samples were collected from 27 volunteers (10 pregnant women among them) and from 96 blood donors. Two millilitres of venous blood was coagulated during 2 h at room temperature and centrifuged for 10 min at 800 g, after which the serum was isolated. For both human and animal samples, data on age and sex were collected. Information about vaccination was available for human samples Nos 1-19 only. Among those, volunteers No.
ELISA
Two commercially available vaccines - inactivated influenza virus either produced in cell culture on Vero cells (Celvapan, H1N1 IV Pandemic09 developed by Baxter) or propagated in chicken eggs (Pandemrix, H1N1 IV Pandemic09 by GlaxoSmithKline) - were diluted in 50 µL of the Coating Buffer per well (pH 9.6) and used to coat Nunc Maxi-Sorp Immuno Plates. The final concentration of antigens was 3.75 µg/mL. The plates were incubated overnight at 4 °C and washed three times with 0.05% Tween 20 in ddH2O between every step. Non-specific binding sites were blocked with 200 µL of 2% casein/PBS and incubated for 1 h at room temperature. Both serum samples and controls were diluted by two-fold dilution series up to 1 : 64 000. Of the dilutions, 50 µL was added to plates with negative samples (PBS without antigen) and incubated overnight at 4 °C. Then 50 µL of secondary antibody (DAKO Rabbit Anti-Human IgA, IgG, IgM, Kappa, Lambda/HRP; DAKO Rabbit anti-pig antibody 162.5 pg/mL) was added to each well and incubated for 1 h at room temperature. Freshly mixed peroxidase substrate reagent (1 mM tetramethylbenzidine and 2.3 mM H2O2 in 0.1 M potassium citrate buffer, pH 4.5) was added to the plates and incubated for 20 min at room temperature. To stop the reaction, 1 M H2SO4 was added to each well. Optical densities (OD values) were detected by reading the plates on an ELISA plate reader (Labsystems Multiskan MCC/340) at the wavelength of 450 nm. Samples with an absorbance value twice as high as the average absorbance of the negative controls were considered positive.
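The positivity rule stated above — a sample is positive when its OD is at least twice the mean OD of the negative controls — can be sketched as follows. The OD readings are hypothetical, not values from the study:

```python
def elisa_positive(od_sample, od_negative_controls):
    """Positive if the sample OD is at least twice the mean negative-control OD."""
    cutoff = 2 * sum(od_negative_controls) / len(od_negative_controls)
    return od_sample >= cutoff

# Hypothetical OD450 readings for a 1 : 16 000 serum dilution.
negatives = [0.08, 0.10, 0.09]        # negative-control wells (mean 0.09, cutoff 0.18)
strong = elisa_positive(0.45, negatives)
weak = elisa_positive(0.12, negatives)
```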
All samples were also tested for trivalent influenza vaccine Vaxigrip (3.75 µg/mL), which is composed of H1N1-A-Brisbane 07, H3N2-A-Brisbane 07, and B-Brisbane 08 virus subtypes to exclude the prevalence of antibodies to those subtypes.
Hemagglutination inhibition assay
Human and swine sera were inactivated by incubation at 56 °C for 30 min and HI assays of human and swine sera were conducted following a standard protocol [11]. Briefly, two-fold serial dilutions of human or swine sera were mixed and pre-incubated in 96-well plates for 30 min at room temperature with 8 HA units of the virus antigen per well. Chicken red blood cells were added at a final concentration of 0.25%, and the plate was incubated at room temperature for 30 min. HI titres were determined at the highest dilution that displayed hemagglutination activity. Specific HI activity of sera was calculated as the lowest concentration of sera that displayed hemagglutination activity. Samples with an HI titre higher than 40 were considered positive.
RESULTS
In spring 2009, of the 123 human samples 23, that is 18.7% (samples Nos 1, 5, 7, 14, 17, 25, 28, 31, 34, 36, 45, 49, 54, 60, 63, 76, 82, 83, 88, 91, 107, 108, and 123), were seropositive for the H1N1 virus (Fig. 1). During sampling, patient 1 had acute viremia and high temperature, and the antibody titre was still rather low. Additionally, blood samples from six persons were highly positive for both H1N1 and H3N2 (Figs 1 and 3) viruses, indicating that these patients had been most probably infected by both viruses (Fig. 1, patients Nos 28, 36, 60, 76, 82, and 83). The titres measured in these double seropositive patients were significantly higher than those observed after vaccination against seasonal influenza with Vaxigrip trivalent influenza vaccine (see samples Nos 4 and 6 from vaccinated patients aged 51 and 63, respectively). Human sera Nos 12, 15, 39, 40, 44, 69, and 77 showed positive values to Vaxigrip and not to H1N1pandemic09 (Fig. 3). Among the 123 tested patients, only 2 persons (Nos 5 and 17, aged 40 and 36, respectively) had been infected with the H1N1pandemic09 virus (confirmed cases) and had exhibited high fever and other typical acute clinical signs. Patients Nos 5 and 17 suffered the infection one month and four months before blood sampling, respectively. Both of them had a similarly high level of specific anti-H1N1 antibodies. It is noteworthy that two other patients who had not shown any signs of acute respiratory disease during the previous year had even higher titres of anti-H1N1 antibodies (patients 7 and 14). Patient No. 14 could only indicate that she had had a bit of a sore throat a month before blood sampling.
We adapted the test to measure the titres of anti-H1N1pandemic09 in pigs because these animals were regularly infected worldwide during the H1N1 epidemic in 2009-2010 [12,13]. To get a first insight into H1N1pandemic09 seropositivity in Estonian pig herds, blood samples from four different locations were analysed. Farms Nos 1 and 4 were located in the vicinity of Tartu in south-east Estonia and farms 2 and 3 were located in north-east Estonia. The results were homogeneous within each farm, suggesting that the virus was well spread in a herd when present at a given site (Fig. 2). Animals from farm No. 1 had no anti-H1N1 antibodies and most probably never met the virus. This farm is a closed breeding farm, which does not take in animals from abroad. In contrast, the animals from farms 2 and 4 were almost all seropositive, most individuals having high titres of antibodies directed against the H1N1 virus. A few animals from farm 3 were seropositive, with especially high titres against Celvapan, indicating that the virus H1N1pandemic09 was present at this site. The results obtained with the two antigens used (Celvapan from Vero cells and Pandemrix produced in eggs) were fully consistent. The observations indicate that the H1N1 virus spread in these three farms, and that most of the animals had been infected. Remarkably, most of the tested humans and animals from the four herds showed high titres of anti-H3N2 antibodies (Fig. 4).
All results from the ELISA were confirmed with the hemagglutination inhibition assay. The results of the assay were also consistent with the ELISA. Serum dilutions from 32 up to 512 showed specific inhibition of hemagglutination (HI titres ≥ 128; Figs 5 and 6).
The seven Estonian H1N1 HA protein sequences (Appendix 1) and a consensus sequence of the pandemic H1N1 HA sequences from Mexico (Appendix 2) were aligned using ClustalW [14]. As a result, 12 sites were found where some of the sequences differed from one another (Table 1, Appendices 1 and 2). In addition, 5216 H1N1 sequences were obtained from the GenBank and aligned using the built-in ClustalW algorithm of MEGA4. Analysis of the obtained alignment showed that while some of the changes in the amino acid composition of the H1N1 HA in Estonia had been commonly detected worldwide, others were quite rare (Table 1). Position 179, where two of the Estonian sequences had an amino acid substitution from serine (S) to asparagine (N), belongs to the immunodominant (S179N) epitope (pos. 168-182) [15]. A similar change was found in porcine sequence ACH69547.1. Position 154 with amino acid substitution P → S belongs to another immunogenic region, where the change of amino acid alters the antigenic properties of the protein [16].
Table 1. Amino acid differences between seven Estonian 2009 pandemic H1N1 HA sequences and a consensus of the pandemic H1N109 HA sequences from Mexico (H1N1pan09). For comparison, amino acids in these positions of an H1N1 strain isolated in 2007 from swine in Europe (GenBank ID ACH69547.1) and in a strain isolated in 1977 from humans in the USSR (GenBank ID ABD60933.
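The substitution analysis above — comparing each aligned Estonian HA sequence to the Mexico consensus position by position — can be sketched as follows. The short sequences are toy stand-ins, not the actual HA sequences:

```python
def substitutions(consensus, seq):
    """List (position, consensus_aa, variant_aa) for two aligned sequences.

    Positions are 1-based, matching the numbering style used for e.g. S179N.
    Gap characters ('-') introduced by the alignment are ignored.
    """
    return [(i + 1, c, s)
            for i, (c, s) in enumerate(zip(consensus, seq))
            if c != s and c != "-" and s != "-"]

# Toy example: a single S -> N change at position 4.
cons = "MKTSLV"
var = "MKTNLV"
subs = substitutions(cons, var)
```

Running this across all seven Estonian sequences against the consensus would yield the kind of position/substitution table summarized in Table 1.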
DISCUSSION
In this study, we used an ELISA method to screen blood sample collections for the presence of anti-H1N1 and anti-H3N2 antibodies. We established that in 2009-2010 both humans and pigs had been infected with the H1N1 virus in Estonia at surprisingly high rates. Considering that the majority of the seropositive patients had not exhibited typical signs of influenza, the virus that spread in Estonia was probably attenuated compared to the strains that were characteristic of the 2009-2010 pandemic in many other countries. It is well known that the seasonal H1N1 influenza spreads easily within the human population and may induce a partial protection via cross-reaction. As the 1977 influenza outbreak in the former USSR was due to an H1N1 virus [17], the elevated number of cases observed in Estonia that year [18] was most likely due to H1N1 viruses. Several younger blood donors had probably not been exposed to this seasonal H1N1.
No mortality or abnormal respiratory diseases were reported in the studied swine herds during the first half of 2010. Since the health status of the tested pigs was followed by a veterinarian in the studied farms, the fact that no signs of influenza were observed indicates that the H1N1 virus present was apparently also attenuated. In fact, the last laboratory-confirmed case together with influenza virus isolation from a pig in Estonia was registered in 1957 [19]. Interestingly, almost all the animals analysed in 2010 were seropositive for the H3N2 influenza virus, but with antibody titres lower than against the H1N1pandemic09 virus, suggesting that they had recently encountered the H1N1pandemic09 virus. Considering that the sampling was carried out in May 2010 and the serum antibody level is typically high for four months after infection [8], the high rates observed in many animals probably indicate an infection during spring 2010. Reports from Scandinavia, Poland, and even New Caledonia suggest that in 2009-2010 several previously clean farms turned positive for the H1N1pandemic09 virus [20][21][22][23][24]. However, with the increasing numbers of human infections, a spillover of this virus to pigs during the 2009-2010 pandemic was quite plausible also in Estonia, as in the case of the above-mentioned countries.
According to the Estonian Health Board, 124 000 cases of H1N1pandemic09 were estimated in Estonia in 2009-2010 and 21 deaths were associated with the H1N1 influenza virus during that period. The first case of H1N1 was confirmed on 29 May 2009. Until September, only occasional cases were registered, mostly linked to travellers. The influenza epidemic in Estonia started in October-November 2009, with twice as many patients with signs of respiratory disease compared to the previous years in the same period. The virus did not appear to be more virulent than in the previous autumns. From December, the number of patients decreased, but the number of respiratory diseases recorded was still high in March. At the beginning of 2010, several confirmed cases of H1N1 in adults were associated with low fever and mild clinical signs. The highest disease rate (19.5%) was among children aged 0-14, while people aged 15-65 had a much lower disease rate of around 3.9%, and those over 65 years a still lower rate (0.95%), probably due to the protection afforded by the previous encounter with H1N1 viruses [10,25].
Considering that the analysed patients were 15-65 years old, 23 seropositive persons among 123 appears to be an unexpectedly high proportion. However, it is well compatible with the spread of an attenuated virus causing no clinical disease. Our results deserve to be confirmed by an epidemiologic study to assess the true rate of infection of the Estonian population by the H1N1 virus. We could find only seven sequences of H1N1 hemagglutinin from Estonia in the databases, which do not provide a comprehensive description of the viral diversity present in the population in the period. However, analysis of the obtained alignment showed that while some of the changes in the amino acid composition of the H1N1 HA in Estonia had been commonly detected worldwide, others were quite rare (Table 1). For example, a study from Finland shows that one or two amino acid changes (N125D and/or N156K) in the major antigenic site of the hemagglutinin of the influenza A(H1N1)2009 virus may lead to significant reduction in the ability of the patient and vaccine sera to recognize influenza A(H1N1)2009 viruses [16]. Such studies are important to find out whether or not the H1N1 seropositivity and protection rate are indeed significantly underestimated in Estonia due to frequent asymptomatic H1N1 infections. This knowledge has significant practical importance because it determines the planning of the vaccination against the H1N1 influenza virus, with social and financial consequences.
CONCLUSIONS
Our results indicate the need for a better characterization of the status of the influenza circulating in Estonia and other countries during epidemic episodes. Better links with clinics and coordination with comprehensive surveys of the disease among farms would also bring a lot more understanding of the basic mechanism of the spreading of the viruses. ELISA and HI methods were used to screen both Estonian human and porcine blood samples of different origin for the presence of anti-H1N1 antibodies. We could establish that both humans and pigs had been infected with the H1N1 virus, often with no clinical signs, which may indicate that an attenuated virus spread in the Estonian population during that period. We also show here for the first time that several herds in Estonia were infected with the pandemic H1N1 virus in spring 2009. We could establish that in Estonia, like in other countries, both humans and pigs were infected with the H1N1 virus at surprisingly high rates in 2009-2010.
ACKNOWLEDGEMENTS
This work was supported by Estonian Research Council Targeted Financing Project No. SF0140066s09. This work was also supported by the Estonian Science Foundation under grant ETF8914 and the Competence Centre for Cancer Research. We thank Dr Eero Merilind and Irina Tutkina from Nõmme Family Doctors' Centre for interesting discussions and human blood samples, Dr Riin Kullaste, Director of Blood Centre, North Estonia Medical Centre, and Dr Erna Saarniit for donor blood samples. We thank Marianne Metsaoru, Agnes Sambrek, Marju Kuusik, and all voluntary participants for their collaboration.
APPENDIX 1
MOLECULAR PHYLOGENETIC ANALYSIS BY THE MAXIMUM LIKELIHOOD METHOD
The evolutionary history was inferred by using the Maximum Likelihood method based on the JTT matrix-based model [26]. The bootstrap consensus tree inferred from 1000 replicates is taken to represent the evolutionary history of the taxa analysed [27]. The percentages of replicate trees in which the associated taxa clustered together in the bootstrap test (1000 replicates) are shown next to the branches [27]. Initial tree(s) for the heuristic search were obtained automatically as follows. When the number of the common sites was < 100 or less than one fourth of the total number of the sites, the maximum parsimony method was used; otherwise the BIONJ method with MCL distance matrix was used. The tree is drawn to scale, with branch lengths measured in the number of substitutions per site. The analysis involved 10 amino acid sequences. Evolutionary analyses were conducted in MEGA5 [28].
4 and No. 6 had been vaccinated against seasonal influenza (H1N1, H3N2, and influenza B). None of the participants had been vaccinated against H1N1pandemic09, since human sera were collected in October 2009-January 2010. Blood samples were taken from 96 swine from four herds in May 2010. Studies involving humans comply with the principles of the Helsinki Declaration of 1975 and with due subsequent amendments by the World Medical Assembly. Ethics Review Committee (ERC) on Human Research of the University of Tartu permit No. 181/T-1 was issued 20.04.2009 to Sirje Rüütel Boudinot.
Fig. 1 .
Fig. 1. Optical densities (OD) of 1 : 16 000 serum dilutions from 123 human patients showing reaction to Celvapan (grey bars) and Pandemrix (black bars) H1N1pandemic09 antigens. Human sera Nos 124 and 125 from spring 2008 were used as negative references. Sera Nos 5 and 17 from patients with confirmed cases of H1N1pandemic09 were used as positive references. Sera Nos 5 and 17 were taken one month and four months after confirmed infection, respectively. A human serum was considered as positive for H1N1pandemic09 when the 1 : 16 000 dilution gave twice higher OD values than negative controls. The cutoff values for Celvapan and Pandemrix ELISA are represented as solid and dashed lines, respectively.
Fig. 2 .
Fig. 2. Optical densities (OD) of 1 : 16 000 serum dilutions from 95 swine showing reaction to Celvapan (grey bars) or Pandemrix (black bars) H1N1pandemic09 antigens. Swine sera Nos 96 and 97 from the year 2007 and swine sera Nos 98 and 99 from the year 2008 were used as negative references. A swine serum was considered as positive to H1N1pandemic09 when the 1 : 16 000 dilution gave twice higher OD values than negative control porcine sera Nos 96-99, which were taken before the H1N1 pandemic 2009/2010 (OD 100 or higher). The cutoff values for Celvapan and Pandemrix ELISA are represented as solid and dashed lines, respectively.
Fig. 5. Highest human serum dilutions that gave hemagglutination activity in the presence of H1N1pandemic09 antigens from Celvapan (grey bars) and Pandemrix (black bars). Sera Nos 124 and 125 from spring 2008 were used as negative controls. The cutoff value for the hemagglutination assay is represented as a solid line.
Fig. 6. Highest swine serum dilutions that gave hemagglutination activity in the presence of H1N1pandemic09 antigens from Celvapan (grey bars) and Pandemrix (black bars). Sera Nos 96-99 from the year 2008 were used as negative controls. The cutoff value for the hemagglutination assay is represented as a solid line.
Graphene Oxide (GO)-Based Bioink with Enhanced 3D Printability and Mechanical Properties for Tissue Engineering Applications
Currently, a major challenge in material engineering is to develop a cell-safe biomaterial with significant utility in processing technology such as 3D bioprinting. The main goal of this work was to optimize the composition of a new graphene oxide (GO)-based bioink containing additional extracellular matrix (ECM) with unique properties that may find application in 3D bioprinting of biomimetic scaffolds. The experimental work evaluated functional properties such as viscosity and complex modulus, printability, mechanical strength, elasticity, degradation and absorbability, as well as biological properties such as cytotoxicity and cell response after exposure to a biomaterial. The findings demonstrated that the inclusion of GO had no substantial impact on the rheological properties and printability, but it did enhance the mechanical properties. This enhancement is crucial for the advancement of 3D scaffolds that are resilient to deformation and promote their utilization in tissue engineering investigations. Furthermore, GO-based hydrogels exhibited much greater swelling, absorbability and degradation compared to non-GO-based bioink. Additionally, these biomaterials showed lower cytotoxicity. Due to its properties, it is recommended to use bioink containing GO for bioprinting functional tissue models with the vascular system, e.g., for testing drugs or hard tissue models.
Introduction
Modern tissue engineering has made tremendous progress in developing hybrid cell culture scaffolds that replicate the extracellular matrix (ECM). The design and manufacture of functional tissues using biomaterials is at the same time the greatest challenge and limitation in the processing and clinical application of artificial constructs. In regenerative medicine, the main goal is not to create a fully functional artificial tissue but to develop an artificial scaffold for culturing cells that mimics their native environment and stimulates their regeneration. The ECM matrix is usually the most important part of artificial constructs. It is made up of a complex network of structural and regulatory proteins arranged in a fibrous matrix that has specific biological functions [1][2][3][4]. The designed scaffold that can replace the native ECM structure should maintain the 3D structure of the cells and allow the diffusion of nutrients, metabolites and soluble factors to ensure tissue regeneration [4][5][6]. During the process of designing and manufacturing artificial scaffolds, it is necessary to take into consideration many features, such as their morphology, porosity, mechanical parameters, surface topography, stability, degradation and biocompatibility [7][8][9][10][11]. The development of new biomaterials is of growing interest to many research groups. Among them, hydrogels containing alginate, gelatin or hyaluronic acid have great potential in bioprinting [12][13][14], regenerative medicine [15,16] and drug delivery systems [17,18]. Bioprinting is an additive manufacturing method using hydrogel biocomposites [19]. Hydrogels, as hydrophilic polymeric materials, are able to efficiently retain water, which does not have a negative impact on their structural and physicochemical properties [20][21][22].
An important element in the development of a new biomaterial composition is to give it suitable characteristics that meet technical and biological requirements. It is worth paying attention to such material properties as printability, viscosity, degradability and functionality, as well as biocompatibility, cytotoxicity and bioactivity [23,24]. The composition of an ideal biomaterial should replicate the natural environment for cells as closely as possible. The selection of a material biocomposition with appropriate properties is crucial for the formation of scaffold structures that can promote cell growth. The selected materials should exhibit properties suitable for use in the chosen bioprinting technique, i.e., a material with good viscosity, adequate stiffness and a degree of cross-linking to achieve stable fibers in precision printing [25][26][27][28][29][30]. Due to hydrogels' unique characteristics, these biomaterials are suitable for the development of 3D scaffolds [31]. Additionally, to enhance their functional qualities and optimize their usefulness in tissue engineering, the final formulations can be enriched with additional functional ingredients or polymer composites [32]. The most common substances added to organic hydrogels are silica [33], hydroxyapatite [34], and gold and silver nanoparticles [35]. Despite the many valuable advantages of using inorganic particles in the bioprinting process, many disadvantages have also been identified, including (i) an increase in shear stress during printing, which can inhibit cell proliferation and viability and affect cell functions; (ii) a decrease in the diffusion of cross-linking agents and (iii) the low homogeneity of the material that prevents the cross-linking process [29,36]. One of the common additives used in modern tissue engineering is graphene oxide (GO) [37], which is an atomically thin sheet with a large surface area and numerous hydrophilic functional groups (e.g., hydroxyl, epoxy and carbonyl). These functional groups make it possible to carry out a wide range of chemical modifications [38]. Its unique properties lead to a number of common applications, including in the design of optical devices [39], photoelectric devices [40] and metadevices [41]. Recently, biomaterials for tissue engineering have frequently used graphene oxide as a component. The results of studies conducted on 2D cultures determined that GO improved (i) the adhesion and proliferation of human neural stem cells (hNSCs) [42], (ii) cell growth [43][44][45] and (iii) osteogenic differentiation [45]. Additionally, in 3D studies, GO was incorporated into alginate- [32,46], collagen- [47] and gelatin methacrylate (GelMA)-based [48] hydrogel scaffolds, resulting in the formation of functional hybrid scaffolds with significant potential for cell differentiation and proliferation.
The application of GO as an additive to biomaterials in biomedicine has not yet been tested and its influence on the immune system is not fully understood. Regardless of the type of biomaterial, any new material that is introduced into the body has an impact on the immune system. The immune response is a complex process that aims to protect against foreign molecules, pathogens or other substances [49]. The incorporation of a biomaterial into a living organism can induce an immune response, as the immune system can recognize this new material as foreign and potentially damaging. This might result in a variety of effects, including inflammation at the site of implantation or incubation with the materials, activation of various types of immune cells and cytokine production [50].
Similarly, the GO-based biomaterial may induce various biological consequences, which depend on a wide range of factors, such as GO properties, concentration, additional GO surface modification, exposure time and individual cell characteristics [51,52]. It is crucial to highlight that research on nanoparticle-based biomaterials is still at an early stage. To determine the clinical utility of new biomaterials and understand their influence on living organisms, systematic studies are crucial in order to avoid and/or inhibit potential adverse immune reactions. The impact of GO-based biomaterials on the cellular immune system also requires detailed investigation [49,51].
The aim of this work was to optimize the composition of a new GO-based biomaterial with unique properties suitable for the bioprinting of functional cell culture scaffolds. The experimental work included the evaluation of properties such as rheology, printability, mechanical strength, elasticity, degradation, soaking and swelling, as well as biological properties, such as the cytotoxicity of the biomaterials, cell proliferation on the biomaterials, the assessment of lactate dehydrogenase (LDH) and the expression of immune response-associated genes (responsible for the expression of cell surface receptors; stress response; oxidoreductases; proteases; transcription factors; signal transduction; cytokines and cytokine receptors; chemokines and chemokine receptors; and cell cycle and protein kinases). For these purposes, we employed several analytical and biological techniques and characterized the selected GO- and non-GO-based biomaterials.
A paste-like dECM biomaterial was obtained as follows: dECM was digested in a pepsin solution (1 mg/mL in 0.01 M HCl; Sigma-Aldrich, St. Louis, MO, USA; Merck Millipore, Burlington, MA, USA) and then neutralized using 0.1 M NaOH (Sigma-Aldrich, USA). After neutralization, an additional amount of dECM powder was added to prepare a paste-like dECM biomaterial.
The GO-based hydrogel was prepared by mixing dissolved GelMa, HaMa, glycerol, LAP and GO in PBS. To prepare the final composition of the bioink, the particular GO-based hydrogel was mixed with a paste-like dECM biomaterial (volume ratio 1:1).
Rheology
Rheological analysis of the hydrogels and bioinks was performed using the Anton Paar MCR 72 rheometer (Anton Paar, Graz, Austria) with a plate and conical sensor system. A small volume of biomaterial was placed on the sample table and the viscosity was tested at a shear rate of 100 s⁻¹ at 25 °C. The storage modulus and the loss modulus of the biomaterials were tested under 1-100% deformation conditions at 14 °C (for hydrogels) and 20 °C (for bioinks), i.e., below the experimentally determined gelation temperature of the materials. All the tests performed for bioinks were carried out using a plate (d = 25 mm) and the table with the samples was set at a distance of 1 mm from the plate, while for hydrogels the process was carried out with a cone (d = 50 mm) and the table with the samples was set at a distance of 0.102 mm from the cone.
Printability
The printability of the developed biomaterials was tested using a three-stage assessment system: a fusion test of fibers printed in the form of a template, a collapse test of a fiber printed on a three-dimensional platform and an assessment of fiber continuity during continuous bioink printing in a volume of 3 mL. Prints were made using a BIO X™ printer (Cellink, Göteborg, Sweden). Figure 1 shows the template and platform model.
Fiber Splicing Test
In order to carry out the fiber splicing test, a g-code was designed, according to which two layers were printed one after the other using the particular material [53][54][55]. The prepared print follows a 0°-90° pattern, which captures the 2D effect and increases the distance between fibers (FD). The distance between fibers was in the range of 1-5 mm with increments of 1 mm. The printing speed, needle diameter and printing distance used in the test were as follows: 20 mm/s, 0.609 mm or 0.437 mm and 0.8 mm. During the test, the material was dispensed at the appropriate range of pressures and temperatures. The printed construct was cross-linked with an external UV-Vis 405 nm lamp for 15 s at 13 W/cm² (Polbionica Sp. z o.o., Poland). Images of the printouts were taken immediately after their fabrication. The images were developed using Carl Zeiss Vision AxioVision Viewer 4.8 software (Carl Zeiss Vision GmbH, Warszawa, Poland). Two parameters described by Equations (1) and (2) were determined from the results, i.e., the percentage diffusion rate (spreading rate) (Dfr) and the printability (Pr). The pore diffusion rate without material spreading is 0 (i.e., At = Aa), and for an ideal reproduction of the model the printability equals 1.
where At is the theoretical pore surface area, Aa is the actual surface area of the pore and L is the perimeter of the pore.
Fiber Collapse Test
The deflection at the mid-span of the suspended fiber was analyzed to determine the collapse of the material. For the experiment, a special platform consisting of seven pillars was designed and printed. The particular pillars are spaced from each other by 1, 2, 3, 4, 5 and 6 mm. The dimensions of the two corner pillars are 5 × 10 × 6 mm³, while the other five pillars are 2 × 10 × 6 mm³. A single fiber of the test material was deposited on the platform, and immediately after that, a picture of the fiber was taken. During the printing process, temperature and pressure conditions were adjusted to the given material, and the print was performed at a speed of 20 mm/s with a 21 G needle (0.609 mm). The collapse area factor (Cf), which is the percentage of the actual area after the deflection of the suspended fiber in relation to the theoretical area, was calculated using Equation (3): Cf = (Aca/Act) × 100%, where Aca is the actual area under the curve and Act is the theoretical area under the curve. If the material is too viscous and is unable to form a "bridge" between two pillars, the actual area is zero and the collapse factor equals 0. On the other hand, if the fiber does not collapse and forms a straight link between the pillars, then Act = Aca and the factor is 100%.
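The three printability metrics above can be sketched in code. The exact forms of Equations (1) and (2) were lost in extraction; the functions below therefore use the conventional definitions (Dfr as the relative pore-area loss; Pr = L²/(16·Aa), as in the widely used printability index), which are consistent with the boundary conditions stated in the text (Dfr = 0 when At = Aa; Pr = 1 for an ideal square pore). Cf follows Equation (3) as described. All input values are illustrative, not measurements from this study.

```python
def diffusion_rate(a_theoretical: float, a_actual: float) -> float:
    """Percentage diffusion (spreading) rate: 0 when no spreading (At == Aa)."""
    return (a_theoretical - a_actual) / a_theoretical * 100.0

def printability(perimeter: float, a_actual: float) -> float:
    """Printability index Pr = L^2 / (16 * Aa); equals 1 for an ideal square pore."""
    return perimeter ** 2 / (16.0 * a_actual)

def collapse_factor(a_actual: float, a_theoretical: float) -> float:
    """Collapse area factor: actual vs. theoretical area under the fiber, in %."""
    if a_theoretical <= 0:
        raise ValueError("theoretical area must be positive")
    return a_actual / a_theoretical * 100.0

# Illustrative values (areas in mm^2, perimeter in mm):
print(diffusion_rate(4.0, 3.0))   # 25.0 -> material spread into 25% of the pore
print(printability(8.0, 4.0))     # 1.0  -> ideal 2 mm x 2 mm square pore
print(collapse_factor(4.5, 6.0))  # 75.0 -> mild sagging of the suspended fiber
```

A pore that stays perfectly square keeps Pr at 1; values below 1 indicate over-gelation (jagged fibers) and values above 1 indicate spreading, which is why the fusion test reports Dfr and Pr together.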
Smoothness and Fiber Continuity
The continuity of the fiber when printing 2-3 mL of the tested bioink was determined using a 0/1 system, where 0 means the fiber is broken and 1 means the fiber is continuous.
Mechanical Testing
The mechanical compressive strength of the printed constructs was investigated using a static compression test. For the experiment, cylindrical specimens (d = 10 mm and h = 5 mm; 100% filling; cross-linking with an external UV-Vis lamp after each layer) were printed using a BIO X™ printer (Cellink, Sweden). Each printed construct was subjected to an external force between 0 and 0.05 N and compressed at a constant rate of 10 mm/min at room temperature until 80% strain was achieved. Measuring points were collected every 0.025 s. After the measurement, a picture of the deformed sample was taken using a camera. Based on the results, the mechanical strength of the samples was calculated as the maximum stress (the ratio of the force to the area of the printed sample) and Young's modulus as the slope of the linear stress-strain relationship of the sample in the strain range of 0.1-0.5. An important parameter is also the conventional elastic limit, which is the stress required to deform the samples by 10%.
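The Young's modulus evaluation described above (slope of the stress-strain curve over the 0.1-0.5 strain window) can be illustrated with a minimal least-squares sketch; the data points below are invented for illustration, not measurements from the study.

```python
def youngs_modulus(strain, stress, lo=0.1, hi=0.5):
    """Least-squares slope of stress vs. strain, restricted to [lo, hi] strain."""
    pts = [(e, s) for e, s in zip(strain, stress) if lo <= e <= hi]
    n = len(pts)
    mean_e = sum(e for e, _ in pts) / n
    mean_s = sum(s for _, s in pts) / n
    num = sum((e - mean_e) * (s - mean_s) for e, s in pts)
    den = sum((e - mean_e) ** 2 for e, _ in pts)
    return num / den  # same units as stress (e.g. kPa), since strain is unitless

# Illustrative readings: linear up to 0.5 strain, then stiffening near densification
strain = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6]
stress = [0.0, 5.0, 10.0, 15.0, 20.0, 25.0, 32.0]  # kPa
print(youngs_modulus(strain, stress))  # ~50 kPa
```

Restricting the fit to the 0.1-0.5 range avoids both the toe region at small strains and the densification regime near 80% compression.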
Degradation
Degradation studies were performed in simulated body fluid (SBF) with or without an enzyme (0.1 mg/mL collagenase). Collagenase breaks peptide bonds in collagen, which remains the main component of the hydrogels and bioinks. Bioink samples (300 µL) were placed on Petri dishes and then cross-linked with UV light (λ = 365 nm, 13 mW/cm² for 15 s). The samples were flooded with the appropriate SBF solution and then incubated at 37 °C for 21 days. At the given time points, the weight loss or gain of the sample was monitored by fluid withdrawal and lyophilization. Time 0 consisted of freeze-dried and weighed samples immediately after cross-linking. The test for each variant was performed in 3 replicates.
The degree of biodegradation was calculated as the relative dry-mass loss, ((w0 − w1)/w0) × 100%, where w0 is the dry weight of the sample immediately after pouring and w1 is the dry weight of the sample after degradation time t.
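A minimal sketch of the biodegradation calculation, with illustrative masses (not data from the study):

```python
def degradation_degree(w0: float, w1: float) -> float:
    """Degree of biodegradation in %: relative dry-mass loss after time t."""
    return (w0 - w1) / w0 * 100.0

# Illustrative: a 40 mg freeze-dried sample reduced to 8 mg after 21 days
print(degradation_degree(40.0, 8.0))  # 80.0 -> 80% degraded
```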
Effective Swelling and Absorbability
Absorbability: Hydrogels were cross-linked as mentioned in Section 2.6 and incubated in deionized water at 37 °C for 24, 48 and 72 h. At the given time points, water was collected and the sample was weighed (Ws). The test for each variant was performed in 3 replicates. The water absorption coefficient of the swollen gel was calculated from W0, the weight of the sample at time 0 h, and Ws, the weight of the sample after time t. Swelling ratio: Samples were cross-linked under the same conditions as mentioned in Section 2.6, placed in an aqueous environment (deionized water) and stored at room temperature for 24 and 48 h. After this time, water was collected and the samples were lyophilized and weighed. At time 0 h, a sample was subjected to lyophilization immediately after cross-linking. The test for each variant was performed in 3 replicates. The degree of swelling was determined from M0, the dry weight of the sample at time 0 h, and Ms, the dry weight of the sample after time t.
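The explicit absorption equation did not survive extraction, so the sketch below assumes the common convention of relative mass gain computed from the symbols defined above; this is an assumption, not the paper's confirmed formula.

```python
def water_absorption(w0: float, ws: float) -> float:
    """Assumed form of the water absorption coefficient: relative mass gain
    of the swollen gel in %, from initial weight w0 and swollen weight ws."""
    return (ws - w0) / w0 * 100.0

# Illustrative masses in mg (not measurements from the paper):
print(water_absorption(100.0, 260.0))  # 160.0 -> gel took up 1.6x its own mass
```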
Biological Tests
Viability Assay
The viability of the L-929 cell line (ATCC®, Manassas, VA, USA) was assessed using an indirect MTT (3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide) colorimetric assay (MedChemExpress, Monmouth Junction, NJ, USA). Extracts of fragmented biomaterials (HGO1, HGO2, BGO1 and BGO2) were prepared by depositing them on inserts for 24 h in a DMEM culture medium. Subsequently, these extracts, along with a positive control (CP, 0.1% Triton/DMEM), were added to cells seeded in 96-well plates at densities of 1 × 10⁴/well, 5 × 10³/well and 2.5 × 10³/well. The cells were then incubated at 37 °C and 5% CO₂ in the DMEM medium for 24, 48 and 72 h, respectively. At the end of the incubation period, the medium was removed and 1 mg/mL MTT (Sigma Aldrich®, USA) was added. The absorbance of formazan was measured at 570 nm using a microplate scanning spectrophotometer (BioTek, Winooski, VT, USA). Cell viability was calculated in comparison to the untreated cell culture (negative control, CN) as the ratio ODsample/ODnegative control expressed as a percentage, where ODsample is the absorbance of the sample at λ = 570 nm (average of 5 replicates) and ODnegative control is the absorbance of the negative control at λ = 570 nm (average of 6 replicates).
According to the current ISO 10993-5:2009(E) norm [56], a material that does not show cytotoxicity is one for which the cell viability is at least 70%.
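The viability calculation and the ISO 10993-5 threshold can be sketched as follows; the OD readings are assumed values for illustration, not data from the study.

```python
def viability_percent(od_sample: float, od_negative_control: float) -> float:
    """MTT viability relative to the untreated (negative) control, in %."""
    return od_sample / od_negative_control * 100.0

def is_cytotoxic(od_sample: float, od_negative_control: float) -> bool:
    """ISO 10993-5 criterion: a material is cytotoxic if viability < 70%."""
    return viability_percent(od_sample, od_negative_control) < 70.0

# Illustrative OD570 readings (assumed):
print(viability_percent(0.45, 0.50))  # 90.0 -> non-cytotoxic
print(is_cytotoxic(0.30, 0.50))       # True -> viability 60%, below the threshold
```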
Cytotoxicity Assay
To investigate the direct impact of the biomaterial on the L-929 cell line, cells were seeded into 6-well plates pre-coated with GO-supplemented biomaterials (HGO1, HGO2, BGO1 and BGO2) at densities of 2.5 × 10⁵/well, 6.3 × 10⁴/well and 4 × 10³/well. The cells were subsequently incubated at 37 °C in a humidified 5% CO₂ atmosphere in a DMEM medium for 1, 3 and 7 days.
To carry out the assay, the culture medium was collected and diluted at a ratio of 1:100 in the LDH Storage Buffer. The samples were stored at −20 °C until testing. The entire procedure was carried out according to the manufacturer's protocol. The luminescence signal was measured using a microplate reader (BioTek Synergy H1 Plate Reader) after a one-hour incubation at room temperature.
The Evaluation of Cell Proliferation Utilizing the Alamar Blue Assay
The proliferation of the L-929 cell line (ATCC®, CCL-1™, Manassas, VA, USA) was measured at 1, 3 and 7 days after being directly cultivated on the surface of the biomaterials (HGO1, HGO2, BGO1 and BGO2) using the Alamar Blue assay (Invitrogen™, Waltham, MA, USA). According to our modified protocol, cells were incubated for 24 h with the Alamar Blue reagent in a ratio of 1:10. After incubation, 100 µL of the medium was transferred to black plates to absorb light and reduce background and crosstalk. The absorbance of each sample was then measured at 530 nm and 590 nm using a plate reader.
The Assessment of the Expression of Immune Response Genes in the L-929 Cell Line Following Its Interaction with BGO1 Biomaterial
Real-time PCR gene expression analysis was used to assess the fold change in the expression of immune response-related genes. The L-929 cell line required to assess the expression level of immune function genes was seeded at the density mentioned above, directly on the biomaterial BGO1, which was characterized by improved printability, a lack of variation in rheological properties, increased mechanical strength in comparison to the reference sample (without GO) and a lack of cytotoxicity. Total RNA from the L-929 cells was isolated using the PureLink™ RNA Mini Kit (Invitrogen™, Waltham, MA, USA). The purity of the extracted RNA was assessed using the NanoDrop™ One/OneC Microvolume UV-Vis Spectrophotometer (Thermo Scientific™, Waltham, MA, USA) by measuring absorbance at 260 nm and 280 nm wavelengths. The reverse transcription reaction was performed using the High-Capacity RNA-to-cDNA kit (Applied Biosystems™, Waltham, MA, USA) according to the manufacturer's instructions. Gene expression was investigated with TaqMan® Array 96-WELL FAST Plates (Applied Biosystems™): mouse immune response (cat. no. 4418856; Waltham, MA, USA). All samples were analyzed in duplicate using 50 ng total RNA/sample. Real-time PCR was performed using a CFX96 Touch Real-Time PCR Detection System instrument (Bio-Rad, Hercules, CA, USA). The results were normalized to the Hprt1 reference gene. The relative gene expression was calculated using the 2^(−ΔΔCq) method [57]. The evaluation of immune response gene expression was conducted in L-929 cells cultured on the BGO1 biomaterial, known for its favorable physicochemical and biological characteristics. The control sample was cells cultured at 37 °C in a humidified 5% CO₂ atmosphere. The experiments were performed at two time points: after 1 and 7 days of incubation. In this study, 92 assays for immune response-associated genes and 4 assays for candidate endogenous control genes were evaluated.
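The 2^(−ΔΔCq) (Livak) calculation can be sketched as follows; the Cq values are invented for illustration, with Hprt1 as the reference gene as in the study.

```python
def fold_change(cq_target_treated: float, cq_ref_treated: float,
                cq_target_control: float, cq_ref_control: float) -> float:
    """Relative expression by the 2^(-ddCq) (Livak) method: the target gene is
    normalized to a reference gene (e.g. Hprt1) in both the treated and the
    control sample, then the treated dCq is compared to the control dCq."""
    d_cq_treated = cq_target_treated - cq_ref_treated
    d_cq_control = cq_target_control - cq_ref_control
    return 2.0 ** -(d_cq_treated - d_cq_control)

# Illustrative Cq values (assumed): target crosses threshold 2 cycles earlier
# relative to Hprt1 on the biomaterial than in the control culture.
print(fold_change(24.0, 20.0, 26.0, 20.0))  # 4.0 -> 4-fold up-regulation
```

A fold change of 1 means no change relative to the control; values below 1 indicate down-regulation.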
The STRING database was used to prepare the chain graphs depicting the physical and/or functional interactions between the proteins encoded by the investigated transcripts [58].
Statistical Analysis
The results are presented as mean and standard deviation. The Shapiro-Wilk test was used to assess the normality of the distribution. For comparisons among samples in the Alamar Blue assay and the LDH assay, a one-way analysis of variance (ANOVA) was employed. Subsequently, the Tukey test, as a post hoc analysis, was conducted to determine statistically significant differences between samples at specific time points. The significance thresholds were * p < 0.05, ** p < 0.01 and *** p < 0.001. The statistical analysis was carried out using Statistica 13.1 software (StatSoft Polska Sp. z o.o., Cracow, Poland).
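For illustration, the one-way ANOVA F statistic used for these comparisons can be computed as below. This is a pure-Python sketch with invented readings; the study itself used Statistica, and the Tukey post hoc step is omitted here.

```python
def one_way_anova_f(*groups):
    """F statistic for a one-way ANOVA: ratio of between-group to
    within-group mean squares over k groups."""
    k = len(groups)
    n_total = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n_total
    # Between-group sum of squares, weighted by group size
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares around each group mean
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    df_between, df_within = k - 1, n_total - k
    return (ss_between / df_between) / (ss_within / df_within)

# Illustrative absorbance readings for three samples (assumed values):
print(one_way_anova_f([1.0, 2.0, 3.0], [2.0, 3.0, 4.0], [3.0, 4.0, 5.0]))  # 3.0
```

The resulting F value is then compared against the F distribution with (k − 1, N − k) degrees of freedom to obtain the p-value.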
Rheology
Due to its utility in the 3D-bioprinting process, the biomaterial must be characterized by rheological properties such as the storage modulus (G′), which refers to the elastic properties and serves as a measure of retaining the elastic shape, as well as the loss modulus (G″), which represents the viscous part, or the amount of energy dissipated in the sample [59]. The results revealed that the BGO1, BGO2 and BREF bioinks showed a higher storage modulus (G′) (higher energy storage capability of the material) than the loss modulus (G″) in the tested range of oscillation amplitude (see Figure 2B), which suggests that they can be regarded as mainly elastic biomaterials. This feature is especially important in the context of the use of such biomaterials in the process of bioprinting, as the greater elasticity of the biomaterial allows it to be extruded more easily [60]. The material should exhibit viscoelastic features, so it must show viscous-like behavior for better extrusion and to eliminate issues related to nozzle clogging, and elastic-like behavior to maintain the stability of the printed fiber. Since the storage modulus is a measure of the energy that has to be put into the sample to distort it, it has to be emphasized that for bioprinting purposes the biomaterial's resistance to deformation cannot be too high, as it may prevent its extrusion. As a consequence, the liquid-like properties of the biomaterial cannot outbalance the elastic ones and vice versa; therefore, the appropriate biomaterial should exhibit a balance between the storage and loss moduli [61]. The value of the storage modulus obtained for the reference bioink (BREF, a non-GO BGO1 variant) is slightly lower than the values determined for the bioinks supplemented with GO, which may suggest that the addition of this nanomaterial has an impact on the biomaterial's elasticity and increases its solid-like behavior. Additionally, differences in the ratios of GelMa and HaMa (BGO1 and BGO2) did not introduce significant changes in the component values of the complex modulus. Chen et al. [62] and Li et al. [60] have proven that low GO concentrations in bioink enable the connection of polymer chains via hydrogen bonding, which increases the elastic energy storage capacity (G′). Higher GO concentrations prevent the formation of hydrogen bonds and reduce the biomaterial's elasticity but do not have any influence on its viscous behavior (G″). However, the lack of a paste-like dECM significantly decreases the complex modulus values: 2 times for HGO2 and 30 times for HGO1, and since the complex modulus is a measure of the material's overall resistance to deformation, the addition of a paste-like dECM results in an increase in the biomaterial's solid-like behavior.

Figure 2C shows the viscosity values measured for all the variants (both bioinks and hydrogels). Hydrogels have viscosity values 100 times lower than the corresponding bioinks, while the comparison between the particular variants HGO1/BGO1 vs. HGO2/BGO2 revealed that the highest values were determined for HGO1/BGO1. It might be due to the fact that the HGO1/BGO1 variants contain a lower concentration of both HaMa and GelMa. The highest viscosity was determined for BGO1 (approx. 450 mPa·s), while the lowest for HGO2 (approx. 37 mPa·s). It has to be emphasized that a suitable bioink with potential for application in bioprinting should rather exhibit lower viscosity in order not to clog the printhead [63]. However, the bioink viscosity must be adapted to its potential use, since this feature affects not only the printability but also its compatibility with cells. It is commonly known that more viscous bioinks provide stronger mechanical support and are more deformation-resistant, which increases the printing fidelity; on the other hand, excessive viscosity significantly reduces the survival of cells. Additionally, higher viscosity requires applying higher printing pressure, which has a negative impact on the survival and functionality of cells. In turn, lower viscosity is more likely to provide suitable conditions for cells, but it also negatively affects printing fidelity and resolution [64].
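The storage and loss moduli discussed above combine into the complex modulus. The sketch below uses the standard rheological relations (not code from the study), with assumed modulus values for illustration:

```python
import math

def complex_modulus(g_storage: float, g_loss: float) -> float:
    """|G*| = sqrt(G'^2 + G''^2): the material's overall resistance to deformation."""
    return math.hypot(g_storage, g_loss)

def loss_tangent(g_storage: float, g_loss: float) -> float:
    """tan(delta) = G''/G': values < 1 mean elastic (solid-like) behavior dominates."""
    return g_loss / g_storage

# Illustrative moduli in Pa (assumed, not the measured data):
g_prime, g_double_prime = 400.0, 300.0
print(complex_modulus(g_prime, g_double_prime))  # 500.0
print(loss_tangent(g_prime, g_double_prime))     # 0.75 -> predominantly elastic
```

This is why a G′ that exceeds G″ across the tested amplitude range (tan δ < 1) marks the bioinks as mainly elastic, while a dECM addition that raises |G*| signals increased solid-like behavior.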
Degradation, Absorbability and Swelling Ratio
Based on the results (see Figure 3A), it can be seen that significantly more water is absorbed by the hydrogels (HGO1 and HGO2) compared to the bioinks (BGO1 and BGO2). In general, the biomaterials absorb most of the water during the first 48 h, and after this time water absorption decreases, which is a consequence of biomaterial saturation.
The analysis of water absorption per mg of sample (Figure 3B) revealed that after 24 h the hydrogels (HGO1 and HGO2) exhibit the lowest water absorption per mg of sample, but after 48 h this trend is reversed. For the bioinks (BGO1 and BGO2), it was determined that the highest absorption occurs after 24 h and it decreases in the following days. It can be concluded that bioinks containing dECM can be easily saturated with water on the first day after immersion in the liquid.
The degree of hydrogel swelling was 6.5 (HGO2) and 8.5 (HGO1) times higher than for the bioinks. This means that the addition of dECM increases the cross-linking density, which causes a reduced water absorption capacity. The high values of the hydrogel swelling ratio may be due to the content of GelMa, whose swelling ratio for a concentration of 7.5% w/v is 1273% [65].
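The swelling ratio and the per-milligram water absorption compared above follow directly from gravimetric measurements. A minimal sketch, assuming the conventional definition (swollen mass minus dry mass, normalized to dry mass; the example masses are illustrative, not the study's raw data):

```python
def swelling_ratio(dry_mg: float, swollen_mg: float) -> float:
    """Gravimetric swelling ratio: absorbed water normalized to dry mass."""
    return (swollen_mg - dry_mg) / dry_mg

def swelling_percent(dry_mg: float, swollen_mg: float) -> float:
    """Swelling ratio expressed as a percentage (e.g., ~1273% reported for 7.5% w/v GelMa)."""
    return 100.0 * swelling_ratio(dry_mg, swollen_mg)

# Illustrative: a 10 mg dry sample swelling to 95 mg corresponds to a swelling ratio of 8.5,
# the factor by which HGO1 swelling exceeded that of the bioinks.
print(swelling_ratio(10.0, 95.0))                 # 8.5
print(round(swelling_percent(10.0, 137.3), 6))    # 1273.0
```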
The degree of degradation after 21 days was 50-90% for non-enzymatic degradation and 80-90% for enzymatic degradation. The bioinks were more resistant to non-enzymatic degradation than the hydrogels (BGO1 < HGO1 and BGO2 < HGO2), while the results of enzymatic degradation revealed that both bioinks and hydrogels had similar degrees of degradation (HGO1 and BGO1, as well as HGO2 and BGO2). Since the addition of dECM increases the cross-linking density, it is believed that it also increases the resistance of the biomaterial to non-enzymatic degradation.

The degree of degradation is very important, especially for biomaterials employed in bioprinting. Degradation should occur at a well-defined rate so that cells can create their own network of extracellular matrix. Both GelMa and HaMa degrade easily. A 10% GelMa (degree of substitution above 70%) degrades completely within 6-8 h in a Hanks' Balanced Salt Solution environment with the addition of collagenase [66]. On the other hand, a 1% HaMa degrades completely within 5 days in the PBS environment [67]. The obtained results have proven that the higher the cross-linking density (the addition of GO and dECM), the more stable the network and the slower the degradation (Table 2).
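The degree of degradation reported above is typically the fractional mass loss relative to the initial dry mass. A hedged sketch of that calculation, with illustrative masses (not the study's raw data):

```python
def degradation_degree(initial_mg: float, remaining_mg: float) -> float:
    """Percent mass loss relative to the initial dry mass."""
    return 100.0 * (initial_mg - remaining_mg) / initial_mg

# Illustrative: a sample dropping from 20 mg to 2 mg after 21 days is 90% degraded,
# the upper bound reported here for enzymatic degradation.
print(degradation_degree(20.0, 2.0))  # 90.0
```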
Printability
In order to determine the optimal printing parameters, several fibers were printed using the BREF, BGO1 and BGO2 bioinks. The use of GO-based bioinks required an increase in printing pressure compared to the reference sample (BREF): for BGO1 and BGO2, the applied pressure was about 20 and 30 kPa higher, respectively, than the value determined for the reference bioink. To evaluate the optimal printing parameters, one should take into consideration not only the way of manufacturing a 3D object with the desired physicochemical characteristics but also the influence of printing parameters on cell survival. Many experiments are currently being conducted to determine the effect of applied extrusion pressure on cell survival and model mapping [29,68,69]. Since a higher concentration of the additive requires the application of higher printing pressure [70], which may have a negative impact on cell survival, such a bioink will not find application in bioprinting even if it exhibits distinctive physicochemical properties and improved printability. The influence of the additive on printing pressure was confirmed by Chor et al. [32]: printability tests of an alginate-based material supplemented with GO required the application of much higher pressures. Similar results were obtained for a GO-based alginate-gelatin material by Li et al. [60].
The application of extrusion bioprinting requires considering such a parameter as the magnitude of shear stress. This feature depends on the extrusion pressure, nozzle diameter, printing speed and viscosity of the printed material [36]. Shear stress can have a significant impact on cellular processes and can alter cell signaling and protein expression [71]. In general, bioinks should have low viscosities in order to pass through printing nozzles [72]. However, using high-viscosity bioinks and narrow nozzles, it is possible to print 3D objects with higher resolution. Yet these two features (high material viscosity and small nozzle diameter) generate excessive shear stresses, which has a negative impact on cells that are loaded in the bioink. Additionally, the printing of high-resolution 3D objects takes longer, so it is essential to employ dedicated syringes and advanced motor controls to reduce the time needed [73]. Therefore, it is crucial to maintain a delicate balance between the applied shear stress and the resolution of the 3D object. This is important because excessive shear stress can harm cell membranes and induce cell apoptosis [36].
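For a rough feel of how viscosity, flow rate and nozzle radius combine, the wall shear stress of a Newtonian fluid in a cylindrical nozzle can be estimated from Poiseuille flow as tau_w = 4*eta*Q / (pi*R^3). Real bioinks are shear-thinning, so this is only an order-of-magnitude sketch with hypothetical values, not the study's conditions:

```python
import math

def wall_shear_stress(viscosity_pa_s: float, flow_rate_m3_s: float, radius_m: float) -> float:
    """Newtonian wall shear stress (Pa) in a cylindrical nozzle, Poiseuille flow."""
    return 4.0 * viscosity_pa_s * flow_rate_m3_s / (math.pi * radius_m ** 3)

# Hypothetical values: a 450 mPa*s bioink (BGO1-like) extruded at 1 uL/s
# through a 200 um-radius nozzle.
tau = wall_shear_stress(0.45, 1e-9, 200e-6)
print(round(tau, 1))  # ~71.6 Pa
```

Halving the nozzle radius raises the wall shear stress eightfold, which is why narrow high-resolution nozzles are so punishing for embedded cells.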
Another important feature that has to be evaluated is the printing temperature. In our study, the bioink composition did not affect the printing temperature; in each case, it was possible to print a continuous fiber in the temperature range of 24 to 25 °C. It is commonly known that high temperatures are harmful to living cells, so methods such as thermal inkjet printing are not suitable for tissue engineering [74]. However, this method can be utilized to print 3D scaffolds using acellular materials. For the BREF material, a smooth and continuous fiber was obtained at a temperature of 24-25 °C and an extrusion pressure of 30-35 kPa, while for the BGO1 material, the optimal printing parameters were a temperature of 25 °C and a pressure of 50-60 kPa, and for BGO2, a temperature of 25 °C and a pressure of 70-75 kPa.
In this study, we also determined the optimal extrusion rate for our biomaterials (20 mm/s). It has to be emphasized that this value depends on the biomaterial's properties and its composition. In 2022, Zhu et al. [75] found that the bioprinting of an artificial ear using a biomaterial supplemented with other nanoparticles (Cu-doped bioglass nanoparticles) required maintaining the 3D printer extruder's flow rate at 10 mm/s. This suggests that for each kind of biomaterial, it is essential to first optimize the bioprinting conditions, as they might be affected by several factors, such as the properties of the components and/or the method of sample preparation.
Figure 4A shows the results of the confluence of printed fibers. A photographic image of the printed model is shown, along with the printing parameters and the dependence of the material spread rate and printability coefficient on the size of the printed pore. In our study, for all materials, as the pore size increases, the diffusion rate decreases, and printability increases for pores larger than or equal to 4 mm². The lowest diffusion rate and highest printability for a 4 mm² pore were determined for the BGO2 material. Figure 4C shows the results of the collapse test of a fiber printed on a special platform. All materials enabled the printing of a stable and continuous fiber; however, a more satisfactory result was obtained when using the BGO2 bioink. In each case, printability close to 0.8 was achieved. It has to be emphasized that a printability of ca. 1 and a relatively low diffusion rate are important parameters that determine the material's usefulness in bioprinting [54,76].
In this research area, it is very common to present photographic images of printed models with pore geometries highlighted. Habib et al. [54] determined parameters characterizing printability, such as Pr and Dfr (diffusion rate), for an alginate-based material containing carboxymethylcellulose (CMC). Materials containing a higher concentration of CMC (3 and 4%) showed a lower diffusion rate and a higher printability than the CMC-free material. In turn, Im et al. [77] determined the confluence of fibers and the printability parameter (Pr) for an alginate material supplemented with cellulose nanofibers and polydopamine nanoparticles. A printability value of approximately 1 suggests that a material is very similar to the ideal one, and any deviation from this value indicates non-ideal printability. It was also determined that the addition of the nanomaterial to plain alginate leads to an increase in the Pr value, which confirms that this material has properties closer to the ideal state (value 1) [76].
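The printability parameter Pr discussed here is commonly defined from the printed pore geometry as Pr = L² / (16A), where L is the pore perimeter and A its area, so a perfectly square pore gives Pr = 1 and rounded pores give Pr < 1. A sketch assuming that standard definition:

```python
import math

def printability(perimeter: float, area: float) -> float:
    """Pr = L^2 / (16 A): 1 for a perfectly square pore, < 1 for rounded pores."""
    return perimeter ** 2 / (16.0 * area)

# A perfectly square 2 mm x 2 mm pore (perimeter 8 mm, area 4 mm^2):
print(printability(8.0, 4.0))  # 1.0

# A circular pore of the same area gives Pr = pi/4 ~ 0.785,
# close to the ~0.8 measured for the fibers in this study.
r = math.sqrt(4.0 / math.pi)
print(round(printability(2.0 * math.pi * r, 4.0), 3))  # 0.785
```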
In this study, we have determined that the use of GO has a positive impact on the printability of the biomaterial. However, one should keep in mind that additives may also alter the rheological and biological properties of the biomaterial, which may exclude its application in bioprinting processes even if its printing characteristics are similar to the parameters of the ideal material [78].
Mechanical Properties
The mechanical parameters of 3D-printed objects were determined using the static compression test to evaluate the sample's response to crushing. The elastic limit is the stress value required to deform the specimen by 10% of its height, while Young's modulus represents the stiffness of the material and was determined as the slope of the most rectilinear segment of the stress-strain curve, in the strain range of 0.1-0.5. The results of the static compression test are shown in Figure 5.
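Extracting Young's modulus as the slope of the linear segment of the stress-strain curve amounts to a least-squares fit restricted to the stated strain window (0.1-0.5). A sketch on synthetic linear-elastic data (not the study's measurements):

```python
def youngs_modulus(strain, stress, lo=0.1, hi=0.5):
    """Slope of a linear least-squares fit of stress vs. strain within [lo, hi]."""
    pts = [(e, s) for e, s in zip(strain, stress) if lo <= e <= hi]
    n = len(pts)
    sx = sum(e for e, _ in pts)
    sy = sum(s for _, s in pts)
    sxx = sum(e * e for e, _ in pts)
    sxy = sum(e * s for e, s in pts)
    return (n * sxy - sx * sy) / (n * sxx - sx * sx)

# Synthetic data obeying stress = 50 kPa * strain, sampled up to strain 0.6:
eps = [i / 100 for i in range(61)]
sigma = [50.0 * e for e in eps]
print(round(youngs_modulus(eps, sigma), 9))  # 50.0 (kPa)
```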
In general, nanomaterial additives should increase the mechanical strength of the printed object [79,80]. In our study, the applied force in each case led to the deformation of the 3D structure, and the GO-based bioinks were characterized by higher values of Young's modulus and conventional yield strength. However, it has to be emphasized that these bioinks exhibited lower crush resistance than the reference sample. Yet, one should keep in mind that this test is suitable for determining the resistance of the 3D object and not of the biomaterials that were used to produce it [81].
The Assessment of GO-Based Biomaterial on Cell Viability of the L-929 Cell Line
The indirect MTT assay was performed at three time points: 24, 48 and 72 h after the application of extracts derived from the biomaterials. Based on the results, no cytotoxic effect of the tested biomaterials was demonstrated. Viability was calculated relative to cells that were not treated and were cultured under standard conditions. The lack of cytotoxicity was observed for all tested biomaterials after 24 h of exposure: HGO1 88.6% ± 1, HGO2 89.5% ± 4, BGO1 84.4% ± 4 and BGO2 77.3% ± 2; the value for the reference material at this time point was 74.4% ± 2. During the second day of incubation, cell viability gradually began to decrease: HGO1 79.8% ± 9, HGO2 81.8% ± 9 and BGO1 73.9% ± 7. The cytotoxic effect increased on the second and third day for BGO2 (61.6% ± 8) and BGO1 (67.1% ± 6), respectively (Figure 6).
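The viability percentages above follow the usual MTT convention: blank-corrected absorbance of treated cells expressed relative to the untreated control. A minimal sketch of that normalization (the optical densities are illustrative, not the study's raw data):

```python
def viability_percent(sample_od: float, control_od: float, blank_od: float = 0.0) -> float:
    """MTT viability: blank-corrected absorbance relative to the untreated control."""
    return 100.0 * (sample_od - blank_od) / (control_od - blank_od)

# Illustrative: sample OD 0.443 vs. control OD 0.500, blank OD 0.050 -> 87.3% viability,
# comparable to the ~88% measured for HGO1 after 24 h.
print(round(viability_percent(0.443, 0.500, 0.050), 1))  # 87.3
```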
The Assessment of the L-929 Cell Line after Exposure to the GO-Based Biomaterials
It was observed that as the cells were incubated on the biomaterial, the level of luminescence decreased, which indicates the release of lactate dehydrogenase (LDH) (Figure 7A). When the cells were exposed to the HGO1 extract, the luminescence level was 59,168 ± 7386 on the first day; this value decreased to 14,357 ± 594 on the seventh day (** p < 0.01). Figure 7 shows that the LDH release for control cells remained relatively constant at the level of 17,000-20,000. The HGO2 biomaterial exhibited luminescence levels comparable to the control cells throughout the incubation period, amounting on average to 15,000. The relative luminescence unit (RLU) values for BGO1 and BGO2 on the first day of incubation were 40,351 ± 3277 and 49,486 ± 6815, respectively, and decreased over time (Figure 7) until the last day of incubation, when they reached values equivalent to those of control cells. On the first day of the LDH assay, no statistically significant differences were noted between the control and HGO2, HGO1 and BGO2, or BGO1 and BGO2. Only the control and HGO2 showed statistically significant differences on the third day. On the seventh day, statistically significant differences were observed solely between the control and BGO1, BGO2, HGO1 and HGO2, respectively (Figure 7B).

The results of the proliferation rate evaluation, as depicted in Figure 8, indicate that the level of fluorescence remained close to the control cells' relative fluorescence units (RFU, ca. 21,000) and was ca. 19,000 for the HGO2 biomaterial across all three time points. For BGO1, the RFU was ca. 20,000 on the first and third day of incubation but decreased on the seventh day (to ca. 9000). For HGO1 and BGO2, the RFU levels were similar to the control cells on both the first and third day of the experiment (17,000 and 16,000, respectively). However, for these two samples, we detected a significant decrease in fluorescence on the seventh day of incubation (BGO1 at 9000 and BGO2 at 1000). All the samples tested with the Alamar Blue assay showed statistically significant differences at a particular time point (Figure 8).

Immune Response Genes Expression Levels after L-929 Cell Line Exposure to the BGO1 Biomaterial

Genes with a Cq value below 35 were selected for inclusion in the analysis. In the initial phase, both control (non-exposed) and exposed L-929 cells were analyzed on the first and seventh day of incubation on the surface of the BGO1 biomaterial (Figure 9A,B). Noteworthy differences in gene expression fold change were observed for several genes (Figure 9A-C). On the first day of incubation, a negative fold change in the expression of most genes compared to control cells was determined (values < 1) (Figure 9A). Particularly noteworthy were the lowest values (~0, Figure 9A) determined for genes such as Agtr2, C3, Ccl19, Ccl2, Cd40, Cxcl10, Cyp1a2, Fas, Il3, Nos2, Ptgs2 and Tbx21. Similarly, genes such as Ccl5, Cd80, Hmox1, Il6, Selp, Tnf and Lif exhibited a consistent fold change decrease of approximately 0.07. The Cyp7a1 gene had an expression similar to the control sample (Figure 9A). The remaining genes ranged from 0.2 to 0.8-fold change in expression (Figure 9A).
After seven days of incubation on the surface of the BGO1 biomaterial, significant differences in gene expression were also observed compared to control cells (non-exposed, Figure 9B). Most genes exhibited a fold change increase during this incubation period. The Socs2 gene displayed the highest expression fold change (19; see Figure 9). Genes such as Cd80, Il5, Smad3, Socs1, Tgfb1, Vcam1 and Nfatc3 also showed increased levels, with values of approximately 4-5 (Figure 9B). Conversely, significant negative fold changes (near 0) were observed for the genes C3, Ccl19, Icos, Il3, Tbx21 and Nfatc4. Additionally, the Ptgs2 (0.41) and Lif (0.47) genes exhibited decreased expression. Moreover, for the genes Ccl2, Cd40 and Ikbkb, we determined expression levels similar to those of control cells (non-exposed), at 1.55, 1.29 and 1.11, respectively (Figure 9B).
After seven days of incubation on the surface of the BGO1 biomaterial, significant differences in gene expression were also observed, compared to control cells (non-exposed, Figure 9B).Most genes exhibited a fold change increase during this incubation period.The Socs2 gene displayed the highest expression fold change (19-see Figure 9).Genes such as Cd80, Il5, Smad3, Socs1, Tgfb1, Vcam1 and Nfatc3 also showed increased levels, with values of approximately 4-5 (Figure 9B).Conversely, significant negative fold changes (near 0) were observed for genes C3, Ccl19, Icos, Il3, Tbx21 and Nfatc4.Additionally, the Ptgs2 (0.41) and Lif (0.47) genes exhibited a decreased expression.Moreover, for genes Ccl2, Cd40 and Ikbkb we determined similar levels of expression to those of control cells (non-exposed) at 1.55, 1.29 and 1.11, respectively (Figure 9B).An analysis based on the fold changes of immune response-associated genes in incubated for a given time on the surface of the biomaterial was also performed.In case, cells cultured on the surface of the BGO1 biomaterial on the first day of incuba served as the control (Figure 9C).The highest fold increase in expression was observed the Cyp1a2 gene, as depicted in Figure 9C.The second-highest increase in expression observed for the Agtr2 gene.Notable positive fold changes were also observed for Cxcl10, Cd80, Hmox1, Cd40, Nos2 and Selp genes, respectively (Figure 9C).The Cyp7a1 g showed a change in expression similar to the results obtained for cells incubated on BGO1 surface on the first day (Figure 9C).A negative fold change was determined fo Icos, Tbx2 and Nfatc4 genes (bright blue color, Figure 9C).The remaining genes exhib a positive fold change in comparison to the control samples on the first day of incuba In this part of research, we conducted comprehensive assessments of the cytotox of selected biomaterials, and we evaluated the proliferation of cells exposed to a g biomaterial, as illustrated in Figures 7-9.Moreover, we 
determined the expression pro An analysis based on the fold changes of immune response-associated genes in cells incubated for a given time on the surface of the biomaterial was also performed.In this case, cells cultured on the surface of the BGO1 biomaterial on the first day of incubation served as the control (Figure 9C).The highest fold increase in expression was observed for the Cyp1a2 gene, as depicted in Figure 9C.The second-highest increase in expression was observed for the Agtr2 gene.Notable positive fold changes were also observed for the Cxcl10, Cd80, Hmox1, Cd40, Nos2 and Selp genes, respectively (Figure 9C).The Cyp7a1 gene showed a change in expression similar to the results obtained for cells incubated on the BGO1 surface on the first day (Figure 9C).A negative fold change was determined for the Icos, Tbx2 and Nfatc4 genes (bright blue color, Figure 9C).The remaining genes exhibited a positive fold change in comparison to the control samples on the first day of incubation.
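The fold changes reported above follow the standard relative-quantification scheme: genes failing the Cq < 35 cutoff are excluded, and expression relative to the control is computed as 2^(-ddCq), the conventional Livak method. A sketch with hypothetical Cq values (the study's raw Cq data are not given here):

```python
def fold_change(cq_target: float, cq_ref: float,
                cq_target_ctrl: float, cq_ref_ctrl: float,
                cutoff: float = 35.0):
    """Livak 2^(-ddCq) fold change; returns None if a target Cq fails the cutoff."""
    if cq_target >= cutoff or cq_target_ctrl >= cutoff:
        return None  # gene excluded from the analysis
    d_cq_sample = cq_target - cq_ref            # normalize to a reference gene
    d_cq_control = cq_target_ctrl - cq_ref_ctrl
    return 2.0 ** -(d_cq_sample - d_cq_control)

# Hypothetical: target Cq shifts from 24 (control) to 26 (exposed) while the
# reference gene stays at 20 -> fold change 0.25, i.e. down-regulated (< 1).
print(fold_change(26.0, 20.0, 24.0, 20.0))  # 0.25
print(fold_change(36.0, 20.0, 24.0, 20.0))  # None (Cq >= 35, excluded)
```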
In this part of the research, we conducted comprehensive assessments of the cytotoxicity of selected biomaterials, and we evaluated the proliferation of cells exposed to a given biomaterial, as illustrated in Figures 7-9. Moreover, we determined the expression profiles of immune response-associated genes in mouse fibroblast cells, as depicted in Figure 9. These investigations required the analysis of 92 genes that play key roles in orchestrating the immune response, a highly intricate defense mechanism in which cells and proteins provide protection against pathogens. When an antigen enters the body, it is first recognized, and its presence induces the activation of a signaling cascade to overcome infection. This process recruits various components, such as surface receptors, signaling molecules, cytokines and chemokines, to remove deleterious stimuli [82]. The biomaterial that was analyzed contained GO, a single-atomic-layer material made by the oxidation of graphite. Based on the findings reported by Mukherjee SP et al., various GO derivatives may cause different responses of the immune system [49]. This can be assessed using fold-change analysis of the expression levels of genes involved in the immune response. In GO-based biomaterials, a critical factor is the oxygen level of GO (the atomic ratio of carbon to oxygen) [83]. The reduction of oxygen-containing functionalities reduces its hydrophilicity. The presence of GO may have an impact on the hydrophobic/hydrophilic properties of the biomaterial surface; additionally, the distribution of hydrophilic GO within the biomaterial may also alter its hydrophilicity. Therefore, we assumed that the alterations in gene expression profiles that we observed might have been induced by the presence of hydrophilic GO. Indeed, these alterations were observed during the incubation of cells on the biomaterial, as illustrated in Figure 9A,B. In some cases, we determined the reduced expression of certain genes, which may confirm that GO does not rapidly induce an immune system response.
The above-mentioned alterations may lead to a reduced secretion of cytokines and diminished activation of immune cells. These changes were also investigated by other research groups: Lategan K. et al. [84], Yang Z. et al. [85] and Cebadero-Dominguez Ó. et al. [86]. As can be seen in Figure 9A, after 1 day of incubation, a decrease in the expression of the following genes was observed: Agtr2, C3, Ccl19, Ccl2, Cd40, Cxcl10, Cyp1a2, Fas, Il3, Nos2, Ptgs2, Tbx21, Ccl5, Cd80, Hmox1, Il6, Selp, Tnf and Lif. These genes are associated with various biological processes and can mutually influence one another under diverse physiological and pathological conditions. The intricacies of their interactions are context-dependent and influenced by factors such as tissue type, external stimuli and other variables. For example, Tnf, a tumor necrosis factor, plays a role in the induction of inflammation and can impact the expression of Ccl2, Ccl5 and Il6, thereby intensifying the inflammatory response [87], as illustrated in Figure 9A. Similarly, Il6 can modulate Hmox1 expression in response to oxidative stress and influence TBX21 activation in Th17 T cells [88]. On the other hand, CCL2 and CCL5 participate in the recruitment of inflammatory cells [89] and can be regulated by Il6, Tnf and other inflammatory mediators. PTGS2 is involved in metabolism and can be influenced by various factors, including inflammatory cytokines. CD40 regulates the expression of Cd80 on antigen-presenting cells, affecting the immune response. Meanwhile, AGTR2 and LIF serve distinct roles in regulating developmental processes and immune responses [90]. After the first day of cell incubation with BGO1, we determined a negative fold change value, which indicates a decrease in the expression of immune-related genes.
In turn, after 7 days of cell incubation with the BGO1 biomaterial, we determined a positive fold change value, which indicates an increase in the expression of immune-related genes (see Figure 10B). One of the most significant differences in fold change was observed for the Socs2 gene, which encodes regulatory proteins known as suppressors of cytokine signaling. These proteins play a crucial role in the negative feedback loops that regulate cytokine signaling pathways, such as IL-2, IL-3, IL-6 and IL-7 [91]. Upon activation, SOCS2 inhibits further signaling, which results in the dampening of the activation of signaling pathways. Additionally, this protein also impacts the regulation of cell growth and differentiation, which explains the observed increase in cell proliferation over time and the reduced secretion of LDH (see Figures 7 and 8).

(Figure 10: panels A, C and E show protein functional dependence networks, in which nodes represent proteins and edges represent functional protein-protein associations; panels B, D and F show co-expression matrices, in which the intensity of color indicates the level of confidence that two proteins are functionally associated, given the overall expression data in the mouse organism.)
Figure 10C,D depicts the functional relationships between proteins exhibiting a significant fold increase in expression on the seventh day of cell incubation with the BGO1 biomaterial. The experiment resulted in notable increases in the expression of the Cd80 and Il5 genes. These genes are closely linked to the production of proteins responsible for the activation of T lymphocytes [92,93]. On the other hand, the increased expression of the Socs1 and Socs2 genes (Figure 10B) can influence the regulation of these genes, helping to counteract inflammation in the targeted tissue, as these proteins directly interact with one another (Figure 10C). Additionally, the elevated expression of genes such as Smad3 and Tgfb1 (Figure 10B) has a positive impact on cell proliferation [94], as illustrated in Figures 7, 8 and 10B. This increase in the expression of the Smad3 and Tgfb1 genes is likely associated with the presence of graphene, which forms the surface on which the cells are cultured. Other studies, including those by Shim NY et al. [95], Yang Y et al. [96] and Park S et al. [97], have demonstrated that the addition of graphene, a biocompatible material that serves as an excellent substrate for culturing stem cells, influences cell proliferation and Smad3-regulated pathways. Graphene addition also enhances interactions between cells and the extracellular matrix, as well as intercellular junctions, through signaling pathways involving TGFB1. These findings offer insights into the observed differences in fold change that determine the increased expression of the Smad3 and Tgfb1 genes over time when cells are exposed to BGO1.
The analysis of gene expression changes in cells cultured on the BGO1 biomaterial over time revealed significant positive fold changes and increased expression of several genes: Agtr2, Cxcl10, Cd80, Hmox1, Cd40, Nos2 and Selp, as shown in Figure 10C. The expression of these genes significantly influences the activation of the immune system within cells, primarily by stimulating T lymphocytes. Figure 10E,F depict the interrelationships and the final gene expression outcomes. Most studies on immune responses have historically focused on the cells of the human immune system, where graphene has been shown to impact the activation of specific pathways involved in these processes. For instance, Cebadero-Dominguez et al. [86] investigated the immune response to reduced GO in monocytes and human T lymphocytes. They observed an increase in the levels of IL-6 as early as 4 h after exposing cells to rGO. In our study, an increase in the expression of IL-6 was observed on the seventh day of incubation of L-929 cells (Figure 10A-C). Moreover, a decrease in Tnf expression was observed on the first day of incubation (Figure 10A). This research group also assessed the levels of BAX and BCL-2, which exhibited reduced expression after one day of incubation, which is consistent with our experiments (Figure 10A). Additionally, the researchers determined that the first day of incubation of human immune system cells failed to induce a significant release of cytokines, a finding that also aligns with our observations (Figure 10A). Additionally, Yang Z et al. [85] conducted experiments to assess the immunotoxicity of GO in dendritic cells. The results revealed that the expression of selected genes responsible for immune system function, such as Fas, was reduced on the first day of incubation, with diminishing activity observed on the seventh day (Figure 10A,B). It is worth emphasizing that all these genes are known to play crucial roles in immune processes, inflammation and the regulation of homeostasis [49][50][51].
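Fold change values such as those discussed above are commonly obtained from RT-qPCR data with the 2^-ΔΔCt (Livak) method; as an illustrative sketch only (the method and the Ct values below are assumptions for demonstration, not taken from this study):

```python
def fold_change_ddct(ct_gene_sample, ct_ref_sample, ct_gene_control, ct_ref_control):
    """Relative expression by the 2^-(ddCt) rule: normalize the gene of
    interest to a reference gene in both conditions, then compare."""
    ddct = (ct_gene_sample - ct_ref_sample) - (ct_gene_control - ct_ref_control)
    return 2 ** (-ddct)

# A gene whose Ct drops by 2 cycles relative to control (with a stable
# reference gene) is four-fold up-regulated:
fc = fold_change_ddct(24.0, 18.0, 26.0, 18.0)
print(fc)  # 4.0
```

A fold change above 1 then reads as up-regulation relative to the control, matching the sign convention used in the heat maps of Figure 10.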
Conclusions
Recently, there has been a growing interest in the use of nanomaterials in biomedical engineering. In particular, the use of GO as an additive has generated significant attention due to its possible applications in the field of biomedicine. To fully exploit the potential of available nanomaterials in biomedical engineering, it is essential to determine the validity of their use, which usually requires extensive biological and physicochemical testing. The biggest challenge is to develop a biomaterial that is recognized by the immune system as harmless or self-like. In general, biomedical engineering research is focused on the development of a cell-safe biomaterial with significant utility in processing technologies such as 3D bioprinting.
Based on the work carried out, it can be shown that:
1. GO-enriched bioinks are characterized by higher viscosity than the corresponding hydrogels. The storage modulus of the studied bioinks is greater than the loss modulus, indicating that these materials are highly useful in extrusion bioprinting technology.
2. Hydrogels containing GO absorb significantly more water than the bioink, and the greatest water absorption takes place during the first 48 h. The tested materials show significant biodegradability over 21 days.
3. The tested materials show better print resolution and fiber stability compared to the reference sample.
4. The tested materials enriched with GO show significant elasticity of structure.
5. Based on the bioassays, it was concluded that biomaterials with 1% graphene oxide additives do not exhibit cytotoxicity against L-929 cells.
6. In addition, based on the analysis of LDH release and the Alamar Blue assay, it can be concluded that cells cultured on the graphene oxide biomaterial are not damaged; as a result, they do not produce lactate dehydrogenase and show an unimpaired degree of proliferation, except for the culture of cells on the BGO2 biomaterial on the seventh day of the experiment.
7. When analyzing the fold changes in the expression of genes responsible for immune processes, it was found that on the first day of the experiment there was a decrease in the expression of genes related to the immune system. During the experiment, the expression profile changed: the bioinks with 1% graphene oxide caused an increase in the expression of some genes, especially those responsible for the proliferation and activation of T lymphocytes.
8. The obtained characteristics of the tested materials prove their high utility in 3D bioprinting technology for applications in biotechnology and regenerative medicine.
Figure 1. The models of printable structures to evaluate printability in (A) fiber fusion test and (B) fiber collapse test.
Figure 2. Results of rheology testing, where (A) shows gelation point, (B) shows complex modulus and (C) shows viscosity of biomaterials.
Figure 3. The results for water absorbability: the weight of absorbed water (A) and the weight of absorbed water per mg of biomaterial (B).
Figure 4. The printability of graphene bioinks. (A) The fiber fusion test results along with printing parameters; (B) the photographs of prints from the fiber fusion test; (C) the fiber collapse test results on the platform (fiber collapse rate); (D) the photos of fibers printed on the platform.
Figure 5. The results from the static compression test. (A) Stress-strain behavior. (B) Mechanical parameters: I, mechanical strength; II, Young's modulus; III, elastic limit.
Figure 6. The effect of GO-enhanced biomaterial extracts on the L-929 cell line in an indirect MTT assay. The blue line indicates the guideline established by ISO 10993-5:2009(E), according to which >70% viability of the L-929 cell line determines the absence of cytotoxicity of the test biomaterial.
Figure 7. The influence of GO-based biomaterials on the viability of L-929 cell lines. The effect was measured using an LDH assay. (A) LDH assay, (B) statistical analysis; red: statistically significant differences, black: no statistically significant differences.
Figure 8. The influence of GO-based biomaterials on the proliferation of L-929 cell lines. The effect was measured using the Alamar Blue assay.
Figure 9. The relative expression of immune response-associated genes in mouse fibroblast cell lines after exposure to the BGO1 biomaterial. This analysis focuses on measuring the gene expression on the first day of incubation (A) and after seven days of incubation (B). Additionally, we assess the fold change in gene expression between the first and seventh day of incubation (C), comparing these to the control cells from the first day of incubation. The image was generated using software available at www.software.broadinstitute.org/morpheus/ (accessed on 14 August 2023). Here, lower values correspond to a reduction in gene expression (bright blue color), while higher values indicate an increase in gene expression (dark blue color).
Figure 10. The correlation of proteins between immune response-associated genes. The protein-protein interaction network diagram for immune response-associated genes was created with the STRING program. (A,C,E) illustrate the protein functional dependence network. Network nodes represent proteins: splice isoforms or post-translational modifications are collapsed, i.e., each node represents all the proteins produced by a single protein-coding gene locus. Node color: colored nodes represent query proteins and the first shell of interactors; white nodes: the second shell of interactors. Empty nodes: proteins of unknown 3D structure; filled nodes: some 3D structure is known or predicted. The edges represent protein-protein associations: associations are meant to be specific and meaningful, i.e., proteins jointly contribute to a shared function; this does not necessarily mean they are physically binding to each other. Known interactions: blue, from curated databases; pink, experimentally determined. Predicted interactions: green, gene neighborhood; red, gene fusions; dark blue, gene co-occurrence. Others: yellow, text mining; black, co-expression; violet, protein homology. (B,D,F) illustrate that co-expression predicts functional association. In the triangle matrices, the intensity of color indicates the level of confidence that two proteins are functionally associated, given the overall expression data in the mouse organism.
Impact on quality of life in teachers after educational actions for prevention of voice disorders: a longitudinal study
Background Voice problems are more common in teachers due to intensive voice use during their work routine. There is evidence that occupational dysphonia prevention programs are important in improving voice quality and, consequently, subjects' quality of life. Aim To investigate the impact of educational voice interventions for teachers on quality of life and voice. Methods A longitudinal interventional study involving 70 teachers randomly selected from 11 public schools: 30 received an educational intervention with vocal training exercises and vocal hygiene habits (experimental group) and 40 received guidance on vocal hygiene habits only (control group). Before the educational activities, the Voice-Related Quality of Life instrument (V-RQOL) was applied, and 3 months after conclusion of the activities, the subjects were interviewed again using the same instrument. For data analysis, PROC MIXED models were applied, with a level of significance of α = 0.05. Results Teachers showed significantly higher domain and overall V-RQOL scores after the preventive intervention, in both the control and experimental groups. Nevertheless, there was no statistical difference in scores between the groups. Conclusion Educational actions for vocal health had a positive impact on the quality of life of the participants, and the incorporation of permanent educational actions at the institutional level is suggested.
Background
Voice problems in professionals who use their voice as an instrument for work may directly affect the quality of the individual's voice, interfering in social, emotional and physical aspects of day-to-day life [1].
Studies on vocal health and its impact on teachers' quality of life have been of interest to researchers during the last decade because, among other professions, teachers are considered to present the greatest risk of developing voice disturbances [1][2][3]. Symptoms such as hoarseness, vocal breaks, vocal fatigue, burning in the throat, and temporary aphonia are frequent manifestations in the health of these professionals [4], and these problems may interfere in the performance of their work and social relationships, causing frustration and low self-esteem [2,5,6].
Therefore, educational programs directed towards the prevention of occupational dysphonia have been recommended for the control of vocal alterations and improvement in the quality of life of professionals who frequently use their voice [7][8][9][10]. Objective and clinical tests are commonly used in evaluating the effectiveness of vocal health programs, such as acoustic and perceptual voice analyses, which are ways of analyzing and quantifying changes in voice quality [8,10,11]. Nevertheless, objective evaluations do not show the individual's point of view on his/her psycho-emotional, social and professional problems that may be related to the changes in health [11].
The majority of studies in the literature have evaluated vocal educational programs by means of objective measurement instruments focused on vocal characteristics [8,10,11]. Few studies have evaluated the biopsychosocial quality of subjects' voices after participation in educational programs [7,10], and these assessed subjects' self-perception only in a quantitative manner, by means of scores, without presenting an exploratory analysis of the responses.
In the field of vocal health, instruments to verify the inter-relationship between vocal problems and quality of life have been tested, such as the Voice-Related Quality of Life (V-RQOL) [12]. A Brazilian version of the V-RQOL was developed by Gasparini and Behlau [13]. The V-RQOL has been used by various researchers in the area of phonoaudiology to investigate the relationships between quality of life and voice in teachers and in subjects with and without vocal alterations, in addition to being pointed out as an important instrument for evaluating the impact of dysphonia on subjects' lives.
Analysis of quality of life with regard to vocal health has been the focus of research conducted in cross-sectional and clinical studies [2,3,5]. However, there is a need for studies that evaluate, in a longitudinal design, the impact on subjects' quality of life of vocal health programs that are collective in scope.
Evaluating the effectiveness of vocal health programs by instruments after an intervention may be considered an important factor in planning public health policies.
The aim of this study was to make a longitudinal evaluation of the impact of voice educational activities on teachers' quality of life, by means of a quality of life and voice questionnaire, and to analyze the results in an exploratory manner.
Sample
The population of the present study was composed of teachers from the public school network in the municipality of Piracicaba, SP, Brazil. All public schools (66) were divided into administrative regions (5), and eleven schools were randomly selected taking into consideration the number of schools per region. The randomization process, using the school as the sampling unit, was chosen for two reasons: a) teachers had very similar socioeconomic and professional variables (socioeconomic status, workload in hours/week, number of years taught), which did not differ significantly between schools (p > 0.05); b) teachers typically had a very high workload (most over 32 hours/week) and only one 2-hour meeting per week, which was used in part for the activities of the preventive voice program.
All teachers of the 11 selected schools were invited to participate, and the following inclusion criteria were used: participants should be non-smokers, present no organic pathology of the larynx previously diagnosed by a doctor, report no complaint of persistent hoarseness lasting longer than 2 weeks, not be undergoing speech therapy, and not be over the age of 55 years. The age limit was established to prevent the characteristics of voice aging from being a study bias [14]. Seventy teachers who met the inclusion criteria signed a term of free and informed consent approved by the Research Ethics Committee of the Piracicaba School of Dentistry (Protocol nº 041/2009). The sample was composed of 30 subjects in the experimental group (26 women and 4 men, with a mean age of 41.53 ± 7.01) and 40 in the control group (31 women and 9 men, with a mean age of 42.42 ± 7.71). The researchers decided a priori to randomize schools and allocate them to the control and experimental groups. The teachers of each school in the control and experimental groups participated in the program in the school in which they worked.
For the sample size calculation of the control and experimental groups, a minimum number of degrees of freedom was considered for the residual of the analysis of variance, with the minimum estimated size for each group being 13, which provided a test power of 0.8 at a significance level of 0.05.
Questionnaire
All participants completed a short demographic questionnaire at the beginning of the study to enable the researchers to gain information about signs and symptoms of dysphonia and vocal use patterns at work. The questionnaire items were closed-ended, rated on a Likert scale with the categories: never, rarely, sometimes, always, do not know. Responses were dichotomized into yes (sometimes and always) and no (never, rarely and do not know).
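The dichotomization rule can be stated as a one-line mapping (an illustrative sketch; the helper name is ours, the category labels follow the text):

```python
# Collapse the five Likert categories into yes/no as described:
# yes = sometimes or always; no = never, rarely or "do not know".
YES_CATEGORIES = {"sometimes", "always"}

def dichotomize(response: str) -> str:
    """Map a Likert category label to the dichotomized yes/no answer."""
    return "yes" if response.strip().lower() in YES_CATEGORIES else "no"
```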
Procedures
The subjects in the control group participated in 2 lectures covering guidance on vocal hygiene habits. For the participants in the experimental group, 1 lecture was held on vocal hygiene habits plus 4 meetings with training exercises specifically for the voice. The guidance sessions had a 30-minute duration. The meetings were held at 15-day intervals.
Educational activities: vocal hygiene
Initially, the participants were informed about how the voice is produced and which pathologies may affect the vocal tract, harming the voice. Afterwards, the subjects were instructed regarding the practice of healthy voice habits.
The guidance on vocal hygiene habits focused on the importance of drinking water during professional activity, the beneficial effect of eating apples, which act as an astringent on the vocal tract mucosa, and vocal rest as a practice to be adopted in the interval after work [11,15,16]. Participants were instructed to avoid habits that are harmful to the voice, such as: speaking loudly, shouting, throat clearing, speaking over background noise, speaking in uncomfortable positions that overload the vocal tract (such as speaking while leaning down), the use of sprays and lozenges, constant ingestion of cold drinks, exposing the body to abrupt temperature changes without self-protection (wearing a coat or clothing suitable for the situation), practicing physical or other activities that involve vocal use, and poor sleep quality [15,16].
The speech therapist discussed with the participants strategies to obtain students' attention in the classroom. Among the strategies indicated was replacing the habit of shouting with other means, such as clapping or blowing a whistle, to draw students' attention in the classroom. In addition, they were instructed about the importance of facing the students when giving explanatory lessons, avoiding speaking while writing on the blackboard, and reducing the overload on the vocal tract caused by tension in the cervical region [4].
At the beginning of each meeting, a discussion was held among the participants of the group, with the purpose of reflecting on the subject that would be approached. Multimedia resources were used to present the content of the lectures. The participants received a folder containing explanatory material on the subject and a plastic bottle, to encourage the habit of drinking water while teaching.
Educational activities: training exercises
In the experimental group, 4 voice training exercise sessions were applied. The sessions separately approached the following topics: a. posture and cervical relaxation; b. respiration; c. phonation, frequency and intensity; d. resonance and articulation. In each session, 15 minutes were devoted to a theoretical approach to the subject and 15 minutes to training the exercise. Three series of exercises were performed with intervals of 30 seconds, each series with 10 repetitions, totaling 30 repetitions of each exercise.
a. Body Posture and Cervical Relaxation Exercises
Correct body posture, related to the vertical axis of the spine and head during work activities, was taught by showing pictures of various teachers in classrooms, and participants were asked which of the characters was in a posture without an overload of tension during the activity of teaching. After reflecting on adequate posture in the work environment, training exercises were performed for relaxation of the cervical region and larynx, with the aim of diminishing local tension and favoring looser voice production. Cervical relaxation exercises involved a sequence of rotating the shoulders backwards and forwards, head flexion and extension, and rotating the head to the left and right and vice versa. Subjects were instructed to make rotary self-massaging movements in the region of the larynx, accompanied by descending movements sliding down the vertical axis of the neck [7].

b. Breathing Exercises

It was explained to the participants that voice emission demands the coordination of various muscles, particularly the respiratory muscles and the diaphragm, so that balanced breathing is fundamental for voice production without tension and with control of speech intonation. The participants were instructed, step by step, to breathe moving the diaphragm and abdominal region, feeling the entry and exit of air from this region. To increase the air flow, individuals were asked to inspire normally, hold their breath for 5 seconds, and then slowly breathe out through their mouths. The same procedure was performed while continuously emitting the fricative phoneme /s/ and feeling the movement of the diaphragm [7,10].

c. Phonation, Frequency and Intensity Exercises

With the aim of improving the vibration and amplitude of the vocal folds, favoring balanced vocal production, it was proposed that participants make vibrant tongue sounds at the habitual frequency of speech, in ascending and descending scales, and at both weak and strong intensity. Vibrant sounds of the tongue and/or lips allow greater flexibility of the vocal folds, increase the wave-like movement of the mucosa and favor sound projection without effort and tension [7,10,17].
d. Resonance and Articulation Exercises
The work on resonance was performed to favor the adequate use of some of the bone and supraglottal cavities, such as the larynx and facial sinuses. The aim of the articulation work was to favor an improvement in the articulatory precision of words in speech and good harmony in vocal production. Sequences of resonance exercises with nasal sounds (/m/, /n/ and /nh/) were applied, involving association with all the vowels. The teachers were encouraged to feel the sensation of the paranasal resonators when emitting the sound, and were instructed to practice these exercises in the morning in order to improve the balance of the voice resonators. Emission of a humming sound was also practiced with the same objective of achieving resonance equilibrium. To improve articulatory precision, the emission of each consonant with each vowel was requested, exploring the projected articulatory point, and ample masticatory movements associated with the nasal sound [7,10,18].
The teachers were instructed to practice the vocal exercises and healthy habits on a day-to-day basis and, to do so, were guided by a weekly time schedule containing the quantity and frequency of activities to be practiced. A partnership was established with the schools, with posters fixed at the entrance to teachers' meeting rooms containing instructions on the practice of the program activities in the professional routine. In addition, in schools where there was no water filter in a place easily accessible to the teachers, the speech therapist spoke to the coordinators about providing a receptacle for this purpose.
Quality of life and voice evaluation
The Brazilian version (adapted and translated) of the Voice-Related Quality of Life instrument (V-RQOL, Hogikyan and Sethuraman) was applied in both groups at baseline and three months after conclusion of the educational program. This instrument evaluates subjects' perception of the impact of voice on their quality of life and may be used to follow up development in the clinical area and in planning vocal health promotion actions [5]. The V-RQOL involves 10 questions relating quality of life and voice, covering the physical (questions 1, 2, 3, 6, 7 and 9), socio-emotional (questions 4, 5, 8 and 10) and global (questions 1 to 10) domains. Each response is judged on a Likert scale ranging from the least to the greatest severity of the problem. The scale corresponds to: 1 = never happens and is not a problem; 2 = hardly ever happens and is rarely a problem; 3 = sometimes happens and is a moderate problem; 4 = often happens and is almost always a problem; 5 = always happens and really is a serious problem.
To calculate the final V-RQOL score, the rules generally applied in the majority of quality of life instruments were used. The standard score is calculated from the gross score, with a higher value indicating a better voice-related quality of life. The maximum score is 100 (best quality of life) and the minimum is zero, for the physical and socio-emotional domains as well as the global domain. The scores are obtained with the standard algorithm for this type of questionnaire:

Total score = 100 − [(raw total score − 10) × 100/40]

Physical functioning score = 100 − [(raw physical score − 6) × 100/24]

Socio-emotional score = 100 − [(raw socio-emotional score − 4) × 100/16]
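As a worked illustration of this scoring algorithm (a sketch: the function name and dict-based item keys are ours; the domain item lists follow the instrument description above):

```python
def vrqol_scores(answers):
    """Compute the V-RQOL standard scores (0-100, higher = better quality
    of life) from the ten Likert answers (1-5), keyed 1..10 as in the
    instrument."""
    physical_items = [1, 2, 3, 6, 7, 9]
    socio_items = [4, 5, 8, 10]
    raw_total = sum(answers.values())
    raw_physical = sum(answers[i] for i in physical_items)
    raw_socio = sum(answers[i] for i in socio_items)
    return {
        "total": 100 - (raw_total - 10) * 100 / 40,
        "physical": 100 - (raw_physical - 6) * 100 / 24,
        "socio_emotional": 100 - (raw_socio - 4) * 100 / 16,
    }

# A respondent answering 1 ("never happens, not a problem") to every item
# scores 100 in all domains; answering 5 to every item scores 0.
print(vrqol_scores({i: 1 for i in range(1, 11)}))
```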
Statistical analysis
After exploratory analysis of the data and selection of the best covariance structure, the V-RQOL score data (total, socio-emotional and physical) were analyzed by the methodology of mixed models for repeated measures (PROC MIXED), with a level of significance of α = 0.05. Chi-square and Fisher's exact tests, at the same significance level, were applied to analyze the questionnaire responses.
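The repeated-measures analysis was run in SAS PROC MIXED; a roughly analogous sketch in Python (the data below are simulated, and statsmodels' MixedLM is a stand-in for illustration, not the authors' code) could look like:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one V-RQOL total score per teacher per
# evaluation time (the real study data are not reproduced here).
rng = np.random.default_rng(42)
n_teachers = 70
df = pd.DataFrame({
    "teacher": np.repeat(np.arange(n_teachers), 2),
    "time": np.tile(["baseline", "followup"], n_teachers),
    "group": np.repeat(rng.choice(["control", "experimental"], n_teachers), 2),
})
# Simulate a post-intervention improvement of about 5 points.
df["vrqol"] = 80 + 5 * (df["time"] == "followup") + rng.normal(0, 5, len(df))

# A random intercept per teacher handles the repeated measures, roughly
# analogous to PROC MIXED with a SUBJECT=teacher random effect.
model = smf.mixedlm("vrqol ~ time * group", df, groups=df["teacher"])
result = model.fit()
print(result.params)
```

The `time * group` term tests both the pre/post change and whether it differs between control and experimental groups, mirroring the comparisons reported in Table 3.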
For the statistical analysis, SAS version 9.1 (2008) software was used.

Results

Table 1 shows that a high percentage of teachers complained of signs and symptoms of dysphonia, such as hoarseness in the last 6 months (51.42%), vocal fatigue (60%), frequent clearing of the throat (52.85%), deep voice (37.14%) and weak voice (38.57%). Regarding voice use in the act of teaching, 68.57% used the voice intensely and continuously and had the habit of shouting in the classroom. Table 2 shows the descriptive and percentage analysis of the answers to the questions of the V-RQOL instrument. There was no significant difference between groups (control and experimental) for any question. However, as shown in Table 3, all V-RQOL scores differed significantly between the initial and final evaluations in both the control and experimental groups, with no statistical difference between groups.
Discussion
In the present study, the mean scores ranged between 75.6 and 92.5 in the control and experimental groups before the educational program. In the study conducted by Spina et al. [19], in which quality of life and voice were correlated with levels of dysphonia and professional activity, V-RQOL scores from 71 to 100 points were found for individuals with better quality of life, and from 0 to 35 points for the group with worse quality. In the V-RQOL validation study, with dysphonic individuals, means of 53.5 for the total score, 55.9 for the socio-emotional domain and 51.9 for the physical domain were found, whereas for individuals with a normal voice all the scores were over 70 [12]. The mean scores of the present study suggest that the subjects' quality of life was not being impaired by dysphonia, since the V-RQOL scores were relatively high and close to 100. Although subjects in both the control and experimental groups reported signs and symptoms of dysphonia, they did not associate these symptoms with a negative impact on quality of life. The results corroborate the findings of Grillo and Penteado (2005), who studied the impact of voice on the quality of life of primary school teachers. This leads one to reflect on teachers' need for self-perception regarding use of the voice in the day-to-day routine, as well as the impact that vocal alterations and health problems may have on their quality of life.
Possibly there is a greater need for these professionals to identify their respective voice problems, which may interfere in their day-to-day activities. Although the focus of the educational program did not contemplate training for auditory self-perception of the voice and vocal psychodynamics, these aspects may be suggested for application in future vocal health programs for teachers.
After the educational activities, teachers in both the control and experimental groups showed significantly higher domain and overall V-RQOL scores, showing that these activities had a positive impact on the participants' lives. This indicates that both the activities providing guidance on vocal hygiene and those including practice of the exercises reflected positively on the subjects' quality of life.
It is important to highlight the value of educational actions on teachers' vocal health and their reflection on individuals' quality of life. It is known that instructions such as taking care of hydration, together with perceptive measures such as not shouting in the classroom and not speaking with strong intensity in the presence of noise, may improve the teacher's vocal quality [4]. Hydration promotes and maintains healthy functioning of the larynx, especially in individuals who use the voice professionally [15]. On the other hand, dehydration may increase phonatory effort, contributing to the manifestation of vocal fatigue, particularly for professionals who use the voice as an instrument of work [20]. Instructions about the habit of drinking water during the professional routine were worked on in this study in both groups, by means of lectures, discussions on the subject among the participants, explanatory folders and a 30 ml bottle offered to each participant to use for drinking water during day-to-day work. Other educational measures were also transmitted, such as not shouting but instead drawing the pupils' attention by other resources, such as clapping the hands or using a whistle. Emphasis on changing to healthy behaviors for the voice in both groups favored the quality of life of participants, as observed in the overall V-RQOL score. The present study differs from other educational programs, which verified improvements in the quality of life of participants but evaluated them only in situations of vocal training exercises [7,15,21].
The fact that two lecture sessions on vocal hygiene habits were held in the control group, in addition to the resource of offering a bottle of water, differs from the methodology of other studies, which approached the subject of vocal hygiene habits in a single session only for the control group [7,9,10].
Various authors have mentioned the biopsychosocial impact of voice problems that affect teachers [6,21]. Studies evaluating the impact of educational programs for voice professionals by means of protocols with qualitative and quantitative measures have observed significant improvement in physical and emotional aspects in general [7,21]. These effects were better observed in intervention programs that combined voice training exercises with vocal hygiene habits [7,9,22]. In the present study there was improvement in vocal health, enhancing physical and psychological well-being in both the control and experimental groups.
Studies have shown that educational actions of a preventive nature, when developed in groups and in the work environment, may improve the quality of life of workers, particularly their physical and psychological well-being [4,22-24]. Researchers have indicated that participating in group educational activities with persons who have similar problems and difficulties favors improvement in psychological well-being, reducing stress and anxiety at work and improving communication [25]. A hypothesis for the reduction in anxiety and frustration of individuals faced with voice difficulties, observed in the present study, is that the dynamics of discussing the subjects raised in groups provided a support network among the teachers. Timmermans et al. (2004) [26] observed significant change in the emotional aspects of voice professionals who participated in an educational program with instructions about vocal hygiene and vocal training exercises, and verified an improvement after 18 months in psycho-emotional aspects, both in the group given vocal hygiene instructions and in the group with training exercises, concluding that this improvement in both groups reflected maturation in self-perception and better control of feelings over the course of time.
In the present study, a statistically significant difference was observed in the physical score of the V-RQOL for both the control and experimental groups. The findings differ from those of Duan et al. (2010), who evaluated the quality of life of subjects who participated in a vocal health program and reported improvement in the physical and functional aspects of the voice only in the experimental group. Those authors provided a lecture on vocal hygiene for the control and experimental groups, in addition to 4 sessions of training exercises for the latter group. Although the number of intervention sessions applied to the control and experimental groups in the study of Duan et al. (2010) is comparable to that of the present study, here both the control and experimental groups obtained statistically significant results in the final evaluation of the physical score. It is suggested that, in the present study, the physical improvement reported by the control group is due to the emphasis on healthy practices for the voice, reinforced in two lectures. Instructing teachers to drink water while giving lessons may have beneficial effects once the habit is incorporated by the subjects, owing to the reduction in friction between the vocal folds and in the effort to speak. The same educational practices were discussed in the experimental group, and these teachers were encouraged to practice them together with the training exercises. It is suggested that the instructions transmitted were largely assimilated by the teachers in both groups, which may result in positive effects on the physical and functional aspects of the voice.
In general terms, although we found no difference within or between groups for each individual question of the V-RQOL, we observed a pattern of change toward higher percentages for the categories "never" and "hardly ever" in both groups, suggesting that educational actions can improve the subjects' quality of life in relation to biopsychosocial aspects, such as improvement in psychological aspects, in communication and in work-related activities.
One of the interesting points of this study was the randomization process. Initially, schools were randomly selected and a general questionnaire, concerning socioeconomic and professional information, was applied to all teachers. Based on the results, we found no differences between the characteristics of teachers in the different schools. Thus, there was a final draw of the schools, dividing them into experimental and control groups. This option was due to the fact that teachers have very long workloads (most over 32 hours/week) and only 2 hours/week available for meetings, part of which was used to carry out the study activities. In practical terms, it would have been almost impossible to divide the sample into two study groups within each school, for ethical and logistical reasons. This form of randomization was similar to that of Pasa et al. (2007), who randomized schools to compose the groups, and differs from other studies in which the total sample of selected individuals was randomized between control and experimental groups [7,8,21].
A limitation of this study is the small number of males included in the experimental group (n = 7), explained by the low number of men teaching in public schools (less than 15% of the total) and the low adherence by men in the selected schools. However, since men are less exposed to vocal problems than women, owing to the conformation of the larynx and vocal folds, this should have little impact on the results. Another potential limitation was the limited number of vocal exercise sessions (4), possibly hindering the identification of longitudinal gains in voice quality. The vocal health program sessions could only be held separately at each school, where it was possible to gather all members of the same group at a single time and place.
Thus, it is important that future vocal health programs for teachers include both educational activities with vocal hygiene instructions and specific training exercises to obtain an improvement in the subjects' quality of life. This shows that there is a need for partnership between the public health and educational areas, so that inter-sectorial actions promote quality of life at work, specifically for teachers.
Conclusion
The vocal health educational actions had a positive effect on teachers' quality of life and voice, both in psycho-emotional aspects and in improvement of the functional aspects of the voice.
Establishment of the microstructure of porous materials and its relationship with effective mechanical properties
In this study, a porous structure for a porous liquid storage medium is generated, and the homogenization theory based on displacement boundary conditions is used to predict the effective mechanical properties. The relationship between the porous material’s macroscopic mechanical properties and microstructure is next analyzed. In order to establish the relationship between the microstructure of porous materials and their macroscopic mechanical properties, assuming that the pores grow along the z direction, a method is proposed to generate 3D open-cell porous materials based on six design parameters (i.e., the number of pores, porosity, irregularity of pore distribution, the randomness of pore growth in the x and y directions, and randomness of pore size). Since the porosity of oil-bearing materials ranges from 20 to 30%, the porosity of the RVE (Representative Volume Element) was kept under control at about 25%, and the effect of the six design factors on the mechanical properties of the RVE was investigated. Utilizing SLA 3D printing technology, specimens were produced, and compression tests were used to show how useful the results of the numerical analysis were. The results demonstrated that after the number of RVE pores reaches 9, the numerical results have good repeatability. The irregularity of the initial pore distribution has little effect on the effective mechanical properties of the RVE. At the same time, the increase in the randomness of pore growth and the randomness of pore size increases the degree of weakening of the mechanical properties in the z-direction, while reducing the degree of weakening in the x and y directions, but the latter has a smaller impact. Furthermore, there is a superimposition effect of design parameters on the RVE.
Kangni Chen, Hongling Qin* & Zhiying Ren
The porous fluid storage medium is a solid-liquid biphasic complex inspired by biological articular cartilage, which can self-compensate for lubrication by releasing fluid through its pores under the synergistic effects of external loads, frictional heat and siphoning. It is widely employed in the domains of oil bearings, porous bionic bones and self-lubricating ball linear guides due to its low manufacturing cost and self-circulating lubrication properties 1. The solid phase of porous fluid storage media is typically made using techniques such as cold pressing, hot sintering and 3D printing 2, and this part is also known as the porous material. Studies have shown that increasing the porosity of porous materials improves their ability to store fluid, which improves their lubrication performance 3, but weakens the mechanical properties of the material, such as compressive strength 4-6. As the requirements for equipment service life and reliability increase, oil bearings need an optimized internal pore microstructure to balance the contradiction between load bearing and lubrication performance 7, and self-lubricating ball linear guides need control of the direction of pore openings to improve lubrication performance and lengthen service life 8. Therefore, the solution lies in determining the mapping relationship between the pore microstructure and the macroscopic mechanical properties, and designing the microstructure of porous materials based on the service conditions of porous fluid storage media.
This line of research has its roots in the investigation of the mechanical properties of typical porous structures found in nature. Representative examples are the elastic strut network model created by Gent and Thomas in their investigation of the elastic deformation of foams 9 and the orthogonal cube constitutive model developed by Gibson and Ashby 10. Later, Gibson and Ashby also developed the model's elastic bending deformation and plastic yielding mechanisms based on the hexagonal honeycomb structure's elastic modulus and Poisson's ratio under two mutually perpendicular loads. For open-cell aluminum foam materials, other researchers have developed constitutive models and proposed a tetrakaidecahedral model 11-13. These models are better suited to determining how the mechanical properties of uniformly regular thin-walled porous materials with high porosity relate to their microstructure. The effective mechanical properties of porous materials can also be calculated using a variety of numerical methods 14-16. Typical examples include the Mori-Tanaka model (M-T), which calculates the effective mechanical properties of a material based on the pore slenderness ratio 17, and the Three-Phase Model (TPM), which calculates the effective shear modulus of a material based on porosity 18. All these numerical models take the irregularity of porous materials into account to some extent, but most studies only characterize the pore features in terms of porosity and constants associated with pore shape. The relationship between the microstructural features of porous materials and their macroscopic mechanical properties cannot be adequately described by these parameters. The microstructure can, however, be accurately represented by the finite element method (FEM), which is based on a model of the structure.
Finite element analysis based on RVEs (representative volume elements) is an effective method for predicting the effective mechanical properties of porous materials, and how to generate the RVE of a porous material becomes the key to the problem. Due to its ability to generate random and irregular polygons, the Voronoi diagram method is frequently used to generate random geometric models of porous materials. This method was used by Silva et al. 19 to generate a 2D Voronoi random model with uniform wall thickness, with pore shape irregularity and porosity as design parameters; they discovered that the mechanical properties of high-porosity porous materials were less dependent on pore shape irregularity. They then investigated the model more thoroughly and discovered that the removal of some cell walls, which had little impact on porosity, resulted in a sharp decrease in the material's effective mechanical properties 20. Chen et al. 21 considered random cell (pore particle) defects based on the 2D Voronoi model, namely cell size variation, cell wall fracture, cell wall misalignment and cell absence, and found that cell edge fracture had the greatest effect on the yield strength of 2D foam materials. The random geometric models used in the above studies are based on the stochastic nature of one type of feature (e.g., irregularities in pore shape or wall thickness), whereas the microstructure of a porous material typically involves two or more random elements. Li et al. 22, using porosity, cell shape irregularity and wall thickness inhomogeneity as design parameters based on the Voronoi diagram method, discovered that the effective elastic modulus of 2D foams was affected by the irregularity of pore shape and wall thickness, and that this effect increased as the porosity decreased. Guo et al. investigated the degree of anisotropy of 2D porous materials based on the 2D Voronoi model and a 2D randomly distributed circular-pore model, using porosity and pore number as design parameters; according to their findings, the degree of anisotropy of 2D porous materials decreases as the number of pores increases and increases as the porosity increases 5.
However, a 2D RVE is clearly insufficient to adequately describe the intricate microstructure of porous materials, so it is still necessary to develop a 3D RVE that considers their microscopic random characteristics. Shen et al. 23 investigated the dependence of random open-cell foam models on relative density by using the Voronoi tessellation technique to generate 3D random porous models with porosity and cell shape irregularity as design parameters. Li et al. used porosity, cell shape irregularity, strut cross-sectional area and strut cross-sectional shape as design parameters to generate a 3D Voronoi porous model based on the Voronoi tessellation technique, and then examined the effect of these parameters on the effective mechanical properties of the open-cell foam material 24. Unlike the previous approach of generating 3D RVEs by assigning a cross-sectional area to each edge of a Voronoi polygon, Yang et al. 4 proposed reducing the volume of Voronoi polygons, using porosity as a design parameter, and combining this with Boolean operations to generate 3D random porous RVEs. These studies show that the microstructure of porous materials has a significant effect on their mechanical properties, and that with specific parameter settings the Voronoi diagram method can reflect the microscopic random structure of porous materials. However, the irregularity of pore shape is always correlated with the pore distribution in 3D RVEs generated by the traditional Voronoi diagram. This problem can be resolved by generating pore shapes first and then using a random technique to locate each pore's center point. Li et al. generated a 2D square random porous model using porosity and pore size irregularity as design parameters and a double-normal distribution algorithm to control pore distribution and pore size, respectively; according to their findings, the effective Young's modulus of porous materials rises as the average distance between pores increases, while the randomness of pore size has little bearing on the Young's modulus 25. Generating randomly distributed closed-cell spherical-pore RVEs of random size using the RSA (Random Sequential Adsorption) algorithm, Tarantino et al. discovered that the model is isotropic 26. Anoukou et al. 27, who improved the RSA algorithm to generate randomly distributed non-overlapping ellipsoidal-pore RVEs with random shapes and sizes, found the degree of anisotropy of the RVEs to be correlated with the pore aspect ratio. There are numerous similar studies 28-30, but this type of modeling primarily generates pores with regular shapes (spherical, square, ellipsoidal, etc.) and regulates the random distribution of pore particles by controlling the minimum distance between them. It is difficult to impose constraints on a randomly distributed collection of points based on pore shape and size in order to create an open-cell random porous model with low porosity. Therefore, this method is mostly used to generate RVEs with a closed-cell structure; but for porous reservoir self-lubricating media such as oil bearings, open porosity is particularly important to enhance the lubricating properties of the material 5,31, and a closed-cell structure is not suitable to describe the solid-phase structure of oil bearings 3. Additionally, it has been demonstrated that the irregularity of cell shape has a negligible impact on the mechanical properties of porous materials at low porosity 4,24. The majority of studies have only looked at how porosity, pore morphology (shape, size and orientation) and cell wall morphology (wall thickness and cross-sectional area) affect the mechanical properties of porous materials, without investigating the randomness of the pore distribution.
To that end, this study proposes a new modeling method for the porous solid-phase structure used in oil-bearing porous fluid storage self-lubricating media, based on the Voronoi diagram method. Six design parameters, including porosity, number of pores and pore morphology, are defined to generate fully open-cell 3D porous materials with random pore distribution. The study divides pore morphology into pore size and pore distribution. To establish the relationship between the microstructure and macroscopic mechanical properties of porous materials, and to discuss the effects of microstructural features such as pore size and pore distribution on the mechanical properties, the effective mechanical properties of RVEs with different design parameters were predicted using a computational homogenization method. Finally, specimens were produced using SLA 3D printing technology, and uniaxial compression tests were run to verify the accuracy of the numerical calculations.
RVE generation
The RVE is the smallest volume of a material at the microscopic level; it must contain enough microstructural information to accurately represent the material's properties at the macroscopic level, while being sufficiently smaller in size than the macroscopic structure. There are two common definitions of an RVE: (a) a single cell in a periodic microstructure; and (b) a volume with enough microscopic components to satisfy statistical homogeneity and ergodicity 32. This study assumes that the material has a periodic structure. The steps involved in generating an RVE and the corresponding control parameters are as follows.
The initial pore distribution
This study generates 2D Voronoi diagrams using the Voronoi irregularity α and the number of pores N. The Voronoi diagrams that meet the requirements are then screened using the number of pores N. Finally, the Voronoi polygons are scaled using the porosity ρ to generate the initial pore distribution. The specific steps are as follows. First, the model is generated based on the Voronoi diagram. Voronoi diagrams are generated by setting a specified number of random points in the plane, i.e., nucleation points, then taking the perpendicular bisector of the line connecting each pair of adjacent points and trimming the resulting bisectors according to the principle of non-intersection of lines, thus dividing the plane into a series of convex polygons. In this case, the randomness of the distribution and shape of the Voronoi polygons is controlled by the minimum permissible distance d0 between the nucleation points. Defining the distance between adjacent nucleation points of a perfectly regular two-dimensional Voronoi tessellation as ds, the irregularity of a 2D Voronoi diagram is defined in Eq. (1). At the same time, to reduce the number of mesh elements and their singularity, the minimum side length of the Voronoi polygons is controlled to be 0.375 d0.
As shown in Fig. 1, the smaller α is, the more regular the 2D Voronoi diagram. Second, in order to reduce the restriction of the plane edge on the shape of the Voronoi polygons at the boundary, the plane area set when generating the Voronoi diagram was expanded and the number of nucleation points was increased proportionally. 2D Voronoi diagrams were continuously generated until the number of nucleation points within the desired range was equal to the required number of pores. The Voronoi polygons whose nucleation points lie within the expected range are then taken and used to generate the RVE. For instance, if 25 nucleation points are required and the expected plane size is 1 × 1, then 100 nucleation points are set up in a 2 × 2 plane to create the Voronoi diagram, from which the Voronoi polygons whose nucleation points fall within the central 1 × 1 range are extracted.
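The dart-throwing step that enforces the minimum allowable distance d0 between nucleation points can be sketched as follows. Since the text of Eq. (1) is not reproduced here, the relation d0 = (1 - α)·ds, where α = 0 yields near-regular spacing and α near 1 yields a fully random point set, is an assumed form; the function name `nucleation_points` and all defaults are illustrative.

```python
import random, math

def nucleation_points(n, alpha, size=1.0, seed=0):
    """Sample n nucleation points in a size x size square such that every
    pair is at least d0 apart, with d0 shrinking as irregularity alpha grows.
    d0 = (1 - alpha) * ds is an assumed reading of Eq. (1)."""
    ds = size / math.sqrt(n)           # spacing of a perfectly regular grid
    d0 = (1.0 - alpha) * ds
    rng = random.Random(seed)
    pts = []
    while len(pts) < n:                # rejection sampling (dart throwing)
        p = (rng.uniform(0, size), rng.uniform(0, size))
        if all(math.dist(p, q) >= d0 for q in pts):
            pts.append(p)
    return pts

pts = nucleation_points(25, alpha=0.5)
print(len(pts))  # 25
```

In practice the points would then be handed to a Voronoi tessellation routine (e.g., scipy.spatial.Voronoi) to produce the polygons.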
Finally, the Voronoi polygons are scaled with the nucleation points as centers according to the given porosity, as shown in Eq. (2), so that the pore volume meets the porosity requirement. Considering that the porosity of oil bearings is generally 20-30%, the porosity of the RVEs generated in this paper is controlled to be around 25%.
where √ρ is the porosity design parameter and ρ denotes the material porosity; i = x, y denotes the reference axes of the Cartesian coordinate system; j = 1, 2, …, n, where n denotes the number of vertices of the Voronoi polygon; χ_ij represents the coordinates of the vertices of the Voronoi polygon that is ultimately used to generate the RVE; p_i0 and p_ij denote the coordinates of the nucleation point and the vertices of the Voronoi polygon, respectively, with p_z0 = χ_zj = 0 specified.
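The scaling of Eq. (2) moves every polygon vertex toward its nucleation point by a factor of √ρ, so that the polygon's area (and hence the pore volume of the extruded particle) shrinks by exactly ρ. A minimal sketch, with a hypothetical `scale_polygon` helper and the shoelace formula used only to check the area:

```python
def scale_polygon(nucleus, vertices, rho):
    """Shrink a Voronoi polygon about its nucleation point so that its
    area becomes rho times the original (Eq. 2): each vertex moves to
    p0 + sqrt(rho) * (p - p0)."""
    s = rho ** 0.5
    x0, y0 = nucleus
    return [(x0 + s * (x - x0), y0 + s * (y - y0)) for x, y in vertices]

def polygon_area(verts):
    # Shoelace formula for the area of a simple polygon
    n = len(verts)
    return 0.5 * abs(sum(verts[i][0] * verts[(i + 1) % n][1]
                         - verts[(i + 1) % n][0] * verts[i][1]
                         for i in range(n)))

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
shrunk = scale_polygon((0.5, 0.5), square, rho=0.25)
print(polygon_area(square), polygon_area(shrunk))  # 1.0 0.25
```

The linear factor √ρ is what makes the area scale by ρ, consistent with the role of √ρ as the design parameter in Eq. (2).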
Design parameters in the x, y direction
The pores of porous materials used to make porous fluid storage self-lubricating bearings can be visualized as a collection of overlapping, irregularly shaped curving pipelines, each composed of a number of stacked pore particles. In this investigation, the pores were divided into separate pipes. Assuming that each pipe is made up of a stack of ten irregular prismatic pore particles and that the porous material grows linearly along the positive z-axis starting from the initial pore shape, the governing equation for its growth trajectory is as stated in Eq. (3).
where ℓ = 0, 1, …, 9, and χ^(ℓ+1), p^(ℓ+1) denote the coordinates of the vertices and nucleation point of the (ℓ + 1)th pore particle (from the bottom to the top of the RVE); l_z and l_z^e denote the size of the RVE in the z direction and the height of the pore particles, respectively; ϖ_i denotes the design parameters in the x and y directions, where a larger ϖ_i means greater randomness of the spatial distribution of the pore; a_i is a set of random numbers uniformly distributed on (-1, 1); for ℓ = 0, the coordinates of the pipe coincide with the initial pore distribution. The entire pore pipe model can be generated by connecting the corresponding points once each pore particle's top surface coordinates have been generated, as illustrated in Fig. 3.
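The stacking described above can be sketched as a per-layer random walk of the pore cross-section. Because the full text of Eq. (3) is garbled in the source, the step form dx = ϖ_x·a_x·l_z^e with a_x uniform on (-1, 1) is an assumed reading; the function `grow_pipe` and its defaults are illustrative.

```python
import random

def grow_pipe(base_xy, n_particles=10, lz=1.0, wx=0.5, wy=0.5, seed=0):
    """Stack n_particles prismatic pore particles along +z. At each step
    the whole cross-section is shifted by a uniform random offset scaled
    by the x/y design parameters (wx, wy) and the particle height."""
    rng = random.Random(seed)
    hz = lz / n_particles                  # particle height l_z^e
    layers = [[(x, y, 0.0) for x, y in base_xy]]   # layer 0: initial shape
    for l in range(1, n_particles + 1):
        dx = wx * rng.uniform(-1, 1) * hz  # a_x ~ U(-1, 1)
        dy = wy * rng.uniform(-1, 1) * hz
        prev = layers[-1]
        layers.append([(x + dx, y + dy, l * hz) for x, y, _ in prev])
    return layers

layers = grow_pipe([(0, 0), (0.1, 0), (0.05, 0.1)])
print(len(layers))  # 11 cross-sections: the base plus 10 particle tops
```

Connecting corresponding vertices of consecutive layers then yields the faceted pipe walls shown in Fig. 3.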
Design parameters in the z direction
Considering that each section of each pore in a random porous material has a different radius, the vertex function of each pore particle can be determined by taking the nucleation point of each pore particle as the pole, setting up a local polar coordinate system, selecting the polar radius r_ℓj0 corresponding to χ_ℓij as the initial pore radius, and varying this initial radius within a specific range in accordance with the z-direction design parameter.
where ϖ_r is the z-direction design parameter, with a larger ϖ_r indicating greater randomness in the pore size of the pore pipe, and a_r is a set of random numbers uniformly distributed on (-1, 1). As seen in Fig. 4, this reconstructs the pore wall to generate a pore pipe model that accounts for the z-direction design parameter.
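A plausible reading of the radius perturbation is multiplicative, r = r0·(1 + ϖ_r·a_r); the source only states that the radii vary within a range set by ϖ_r, so this form and the helper below are assumptions.

```python
import random

def randomize_radii(base_radii, wr=0.3, seed=0):
    """Perturb each polar radius about its initial value:
    r = r0 * (1 + wr * a_r), with a_r ~ U(-1, 1). The multiplicative
    form is an assumed reading of the z-direction design rule."""
    rng = random.Random(seed)
    return [r0 * (1.0 + wr * rng.uniform(-1, 1)) for r0 in base_radii]

base = [0.05, 0.06, 0.055]
r = randomize_radii(base, wr=0.3)
print(all(0.7 * r0 <= ri <= 1.3 * r0 for r0, ri in zip(base, r)))  # True
```

With this form, ϖ_r = 0 reproduces the unperturbed pipe, and every radius stays within (1 ± ϖ_r) of its initial value.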
Periodicity of RVE
As previously mentioned, the directional design of the pore can be accomplished and, using Boolean operations, the target RVE can then be obtained, given the RVE dimensions and the design parameters N, α, ρ, ϖ_x, ϖ_y, ϖ_r. However, because the porous material in this study is assumed to be periodic, the RVE boundary must adhere to the same standards for continuity and relative surface structure. The portion of a pore outside the RVE that passes through the RVE's boundary must be transferred to the opposite boundary of the RVE without changing orientation. As a result, the RVE's periodicity types are split into two categories, which are detailed below.
Periodicity in the x, y direction
The pore may pass through four faces as it crosses the RVE boundary (the positive x, negative x, positive y and negative y faces). As shown in Fig. 5a, the portion of the pore beyond the RVE enters the neighboring RVE through the negative x face as it passes through the positive x face of the RVE. As a result, the portion of the pore that is outside the original RVE is cut off and moved to the original RVE's negative x face (as illustrated in Fig. 5b).
The portion of the pores that extends past the RVE is transferred to the negative y face as they pass through the positive y face, as seen in Fig. 6. Similarly, as a pore crosses an edge of the RVE, it simultaneously enters three adjacent RVEs; the portion that extends past the RVE is trimmed and transferred to the corresponding opposite faces. For instance, when the pore traverses the positive x-y edge, the portions of the pore that cross the positive x face, the positive y face and the positive x-y edge are clipped and moved to the respective negative x face, negative y face and negative x-y edge, as shown in Fig. 7.
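At the level of vertex coordinates, transferring the protruding portion of a pore to the opposite face is equivalent to taking each x and y coordinate modulo the RVE size; the paper itself performs the transfer geometrically, with clipping and Boolean operations on the solid. A minimal sketch:

```python
def wrap_periodic(points, lx=1.0, ly=1.0):
    """Map any pore vertex that leaves the RVE through a lateral face
    back in through the opposite face (x/y periodicity). Python's %
    operator returns a non-negative remainder, so negative coordinates
    wrap to the far face automatically."""
    return [(x % lx, y % ly, z) for x, y, z in points]

pts = wrap_periodic([(1.2, -0.1, 0.5), (0.4, 0.7, 0.2)])
print(pts)  # first point re-enters near (0.2, 0.9); second is unchanged
```

This coordinate view is only a sketch: the actual implementation must also split the polyhedral pore solid at the boundary so that both fragments remain watertight before the Boolean union.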
Periodicity in the z direction
The positive z plane and the negative z plane of the RVE must have the same pore distribution in order for the RVE to achieve periodicity in the z direction. As a result, the following constraint is placed on a_x, a_y, and a_r in this study. Figure 8 illustrates the pores once the constraint is in place.
Corrected porosity
Although the single pore model is obtained by volume scaling based on a given porosity, the staggering and overlapping of pores, the randomness of pore size, and the realization of periodic structures all have an impact (11) on the volume of the overall pores generated. Therefore, after generating the overall pores, the pore model is reconstructed according to Eq. (13) to correct the porosity of the RVE, where r^ℓ_j represents the polar radius corresponding to the pore vertex after correction.
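One plausible reading of this correction (our assumption: prism-like pores whose volume scales with the square of the in-plane polar radius) is a uniform rescaling of every vertex's polar radius by the square root of the porosity ratio:

```python
import math

def corrected_radius(r, rho_target, rho_actual):
    """Rescale a pore vertex's polar radius so the realized RVE
    porosity matches the target. Assumes pore volume grows with the
    square of the in-plane polar radius, so cross-sectional area
    scales by rho_target/rho_actual and the radius by its sqrt."""
    return r * math.sqrt(rho_target / rho_actual)

# Generated pores came out at 27% porosity instead of the 25% target:
r_new = corrected_radius(0.08, 0.25, 0.27)
```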
Calculation implementation
This study uses a computational homogenization method to perform a finite element analysis of the mechanical properties of the porous material RVE.
Computational homogenization method
At sufficiently small scales any material is non-homogeneous, although at macroscopic dimensions one usually considers the material statistically homogeneous. The homogenization method uses the fine-scale strain field of a material to solve for its macroscopic effective properties. The homogenization calculations below treat the porous material as a biphasic composite of the base material and air.
According to homogenization theory 33, the average stress σ̄_ij and the average strain ε̄_ij are defined as

σ̄_ij = (1/V) ∫_V σ_ij(x) dV,  ε̄_ij = (1/V) ∫_V ε_ij(x) dV,

where V is the volume of the RVE, and σ_ij(x) and ε_ij(x) are the stress and strain states at any point, respectively. The effective stiffness C_ijkl and the effective compliance S_ijkl are defined by

σ̄_ij = C_ijkl ε̄_kl,  ε̄_ij = S_ijkl σ̄_kl.

In order to calculate the average stress and average strain in a multiphase material, Hill 34 introduced phase averages for the various phases in the material. For the two phases,

σ̄_ij = v_f σ̄_ij^f + v_m σ̄_ij^m,

where f, m denote air and the matrix material, respectively, and v_f, v_m denote the volume fractions of air and matrix material. Similarly, the average strain can be expressed as

ε̄_ij = v_f ε̄_ij^f + v_m ε̄_ij^m.

Based on the constitutive relationship of the composite phases, the average stress can be written as

σ̄_ij = v_f C_ijkl^f ε̄_kl^f + v_m C_ijkl^m ε̄_kl^m,

where each phase's stiffness C_ijkl is regarded as a constant term. The displacement boundary condition is applied in this investigation. Thus, using the divergence theorem, the average strain of the RVE can be expressed as a surface integral,

ε̄_ij = (1/2V) ∫_Γ (u_i η_j + u_j η_i) dΓ,

and the local strain at any point in the RVE is related to the average strain through

ε_ij(x) = A_ijkl(x) ε̄_kl,

where Γ represents the surface of the RVE, η is the outward normal of the RVE boundary, u is the displacement, and A_ijkl is the tensor of the strain concentration factor. Taking the volume average of this relation over the air phase gives

ε̄_ij^f = Ā_ijkl^f ε̄_kl,

where Ā^f is the phase's average strain concentration factor. Substituting these phase relations into the expression for the global average stress, and using v_f ε̄^f + v_m ε̄^m = ε̄, gives

σ̄_ij = [C_ijkl^m + v_f (C_ijpq^f − C_ijpq^m) Ā_pqkl^f] ε̄_kl.

Comparing this with the definition σ̄_ij = C_ijkl ε̄_kl yields the composite's effective stiffness matrix:

C_ijkl = C_ijkl^m + v_f (C_ijpq^f − C_ijpq^m) Ā_pqkl^f.
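As an illustration (not the paper's code), the standard mean-field result C_eff = C^m + v_f (C^f − C^m) Ā^f can be evaluated directly in 6×6 Voigt notation once an average strain concentration tensor is assumed:

```python
import numpy as np

def iso_voigt(E, nu):
    """6x6 Voigt stiffness matrix of an isotropic linear-elastic solid."""
    lam = E * nu / ((1 + nu) * (1 - 2 * nu))   # Lame's first parameter
    mu = E / (2 * (1 + nu))                    # shear modulus
    C = np.zeros((6, 6))
    C[:3, :3] = lam
    C[:3, :3] += 2 * mu * np.eye(3)
    C[3:, 3:] = mu * np.eye(3)
    return C

def effective_stiffness(C_m, C_f, v_f, A_f):
    """Mean-field estimate C_eff = C_m + v_f (C_f - C_m) @ A_f."""
    return C_m + v_f * (C_f - C_m) @ A_f

C_m = iso_voigt(1.0, 0.34)   # normalized matrix material, as in the study
C_f = np.zeros((6, 6))       # air carries no load
A_f = np.eye(6)              # uniform-strain (Voigt) assumption
C_eff = effective_stiffness(C_m, C_f, 0.25, A_f)
```

With Ā^f taken as the identity (the uniform-strain bound), this collapses to (1 − v_f) C^m, in line with the later observation that the columnar RVE's z-direction modulus stays near 0.75 E_s at 25% porosity.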
Computational implementation of RVE
The pores and a square of a specific size were generated in Abaqus using a Python script, in accordance with the method for generating pores described in sections "The initial pore distribution" to "Design parameters in the z direction"; the pores penetrate the square. Subsequently, the corresponding pores were treated periodically in accordance with the method in section "Periodicity of RVE", and the pore instances were then combined. Finally, a Boolean operation was used to obtain the target RVE, as shown in Fig. 9a.
Finite element analysis of the RVE
All of the finite element analyses in this study were carried out by combining Python and Abaqus. Assume that the base material of the solid is homogeneous, isotropic and linearly elastic. Using the normalization method, take the RVE size to be 1 × 1 × 1, Young's modulus E_s = 1 (the effective stiffness is linearly proportional to E_s) and Poisson's ratio ν_s = 0.34. The finite element analysis was performed using ten-node tetrahedral elements (C3D10), as shown in Fig. 9b. The superposition principle was used to impose displacement boundary conditions as a combination of six pure strain components, and finite element calculations were carried out for each boundary condition. Then, from the ODB file, the strain components {ε11, ε22, ε33, ε23, ε13, ε12}^T at each element's integration point, the element volume, and the local orientation of each element were extracted. Finally, the effective performance of the RVE was calculated from these data (as explained in "Computational homogenization method").
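The post-processing step can be sketched as follows: under the k-th unit average strain case, the volume-weighted average of the element stresses gives the k-th column of the effective stiffness. This is an illustrative stand-in for the ODB extraction, using fabricated per-element data:

```python
import numpy as np

def effective_stiffness_columns(case_stresses, elem_vol):
    """Assemble the 6x6 effective stiffness from six load cases.
    case_stresses[k] is an (n_elem, 6) array of Voigt stresses under a
    unit average strain in component k; elem_vol holds element volumes.
    Column k of C is the volume average of the stresses of case k."""
    V = elem_vol.sum()
    C = np.empty((6, 6))
    for k in range(6):
        C[:, k] = (case_stresses[k] * elem_vol[:, None]).sum(axis=0) / V
    return C

# Sanity check: a homogeneous RVE must return the material stiffness.
C_true = np.diag([3.0, 3.0, 3.0, 1.0, 1.0, 1.0])
vol = np.linspace(0.5, 1.5, 50)                       # element volumes
cases = [np.tile(C_true[:, k], (50, 1)) for k in range(6)]
C_eff = effective_stiffness_columns(cases, vol)
```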
Results and discussion
A large portion of previous research [4-6,35] concentrated on how porosity affects the mechanical properties of porous materials, and the findings almost universally indicated that these properties decrease as porosity ρ increases. This study therefore uses a controlled-variable approach to discuss the effects of the number of pores N, the initial pore distribution uniformity α, the pore growth randomness ̟x, ̟y, and the pore size randomness ̟r on the effective mechanical properties of porous materials.
Mesh convergence study
The mesh convergence study was carried out by examining the numerical results for the same RVE with various element counts. For the RVE shown in Fig. 10a, with design parameters ρ = 0.25, α = 0.25, ̟x = ̟y = ̟r = 0, N = 16, the number of elements was varied from 6 × 10^4 to 1 × 10^6. The results are displayed in Fig. 10b-d. The numerical results clearly converge once there are more than 5 × 10^5 elements. Accordingly, meshes with element counts of 5 × 10^5 and higher are used to discretize the remaining RVE models.
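The convergence criterion can be stated simply: refine until the relative change in each predicted constant between successive meshes falls below a tolerance (the 1% tolerance here is our illustrative choice, not a value from the study):

```python
def converged(coarse, fine, tol=0.01):
    """True when every effective constant changes by less than tol
    (relative) between two successive mesh refinements."""
    return all(abs(f - c) <= tol * abs(f) for c, f in zip(coarse, fine))

# E1, E3, G12 predicted on two successive meshes (illustrative values):
ok = converged([0.612, 0.751, 0.233], [0.609, 0.750, 0.232])
```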
Effects of the number of pores
The effect of the number of pores N on the mechanical properties of the RVE was analyzed for a columnar pore RVE (i.e., ̟ x = 0, ̟ y = 0, ̟ r = 0 ) with porosity design parameter ρ = 0.25 and Voronoi diagram irregularity α = 0.25 .Five independent sets of nucleation points p i0 ( i = x, y ) were used to generate five porous material models for each value of N, i.e., N = 4, 9, 16 and 25.In addition, the porosity of the RVE generated in the study is all within the range of (25 ± 0.5)%.
Figure 11 shows one of the RVEs generated in this section and the average value of the effective engineering constants at different N values. The corresponding relative standard deviation (RSD) is shown in Table 1. Figure 11 indicates that for a prismatic RVE, the Young's moduli in the x and y directions, E1 and E2, are close (maximum relative difference of 8%), as are the Poisson's ratios in the XZ and YZ planes, v13 and v23, and the shear moduli in the XZ and YZ planes, G13 and G23 (each with a maximum relative difference of 8%). The effect of the number of pores N on the Young's modulus E3 in the z direction of the RVE can be ignored.
From Table 1 it can be seen that for N = 9, N = 16 and N = 25 the relative standard deviation of the effective engineering constants of the RVE is less than 15%, indicating that the effective engineering constants are stable at these pore counts. However, when N = 4, the effective moduli of the RVE fluctuate greatly, with a maximum relative standard deviation of 29%. Investigating the effective mechanical properties of fiber composites, Babu found that the mechanical properties of a material are influenced by the projected area of the fibers in the corresponding direction 36. When there are few pores, the position of the pores strongly affects their projected area on the XZ and YZ planes; as the number of pores rises, the relative projected area of the pores on these planes approaches 100%. As a result of the randomness of the pore locations, when the number of pores is low the effective engineering constants E1, E2, G12, G13, G23, v12, v13, and v23 fluctuate greatly.
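The stability measure reported in Tables 1-4 and 6 is the relative standard deviation over the five independent realizations; for reference, with the sample standard deviation:

```python
import statistics

def rsd_percent(values):
    """Relative standard deviation (sample std / mean) in percent."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

# Five E1 predictions from five independent nucleation-point sets
# (illustrative numbers, not the paper's data):
e1_runs = [0.61, 0.63, 0.60, 0.62, 0.64]
rsd = rsd_percent(e1_runs)  # about 2.6 for these values
```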
Effect of inhomogeneity of initial pore distribution
Based on the research in section "Effects of the number of pores", in order to reduce the effect of the number of pores on the results, the effect of the initial pore distribution inhomogeneity α on the mechanical properties of the RVE was analyzed for a columnar pore RVE (i.e., ̟x = 0, ̟y = 0, ̟r = 0) with N = 16. For each value of α, five independent sets of nucleation points p_i0 (i = x, y) were used to generate five porous material models. Figure 12 shows one of the RVEs generated in this section and the average value of the effective engineering constants of the RVE at different α values. Table 2 shows the relative standard deviation of the corresponding effective properties.
Figure 12 shows that α has little effect on the effective mechanical properties of the RVE. At the same time, conclusions similar to those in section "Effects of the number of pores" can be drawn for the mechanical properties of a columnar RVE. Table 2 shows that the relative standard deviations of the effective properties predicted at different α values are all less than 15%, indicating that the scatter in the numerical results is small and the results are stable. In addition, it can be seen from Figs. 11 and 12 that the Young's modulus in the z direction of a columnar RVE is independent of the distribution of pores in the XY plane and remains around 0.75.
Li discovered that the inhomogeneity of the strut cross-sectional area decreased the mechanical properties of foams 24 .At the same time, changes in the relative projection area of the pores and the implementation of periodic structures will also have an impact on the mechanical properties of the RVE.These factors caused fluctuations in the mechanical properties of RVE, but Table 3 shows that this fluctuation is acceptable.
Effect of randomness in pore size
To reduce the effect of other factors on the mechanical properties of the RVE, the effect of pore size randomness (̟r) was studied for a porous material model with α = 0.25, N = 16, ρ = 0.25, ̟x = 0, ̟y = 0. For each ̟r value, i.e., ̟r = 0.2, 0.4, 0.6, 0.8, 1, five porous material models were generated using five independent sets of nucleation points p_i0 (i = x, y) and random values a^ℓk_r (ℓ = 0, 1, ..., 9; k = 1, 2, ..., N). Figure 13 shows one of the RVEs generated in this section and the average value of the effective engineering constants at different ̟r values. The corresponding relative standard deviation (RSD) is shown in Table 3.
Figure 13 shows that, on average, the effect of ̟r on the mechanical properties of the RVE is small. As ̟r increases, E3 and v12 decrease while E1, E2, G12, v13 and v23 increase; the effect of ̟r on G13 and G23 is negligible. This occurs because, as ̟r increases, the pore size inhomogeneity of the porous material increases and the projected area of the pores decreases on the XZ and YZ planes (as shown in Fig. 14) while increasing on the XY plane. Therefore, the randomness of the pore size has an enhancing effect on the mechanical properties of the RVE in the x and y directions and a weakening effect in the z direction, and this effect grows as the randomness of the pore size increases. Equation (9) shows that ̟r leads to proportional scaling of the pore cross-section with respect to the z direction, so the projected areas of the pores on the XZ and YZ planes change by nearly the same amount, and the effect of ̟r on the mechanical properties in the x and y directions is correspondingly close. In addition, the effect of ̟r on the effective engineering constants of the RVE is not large. When ̟r = 1, the average values of E1, E2, G12, v13 and v23 are 6%, 5%, 8%, 12%, and 13% larger than when ̟r = 0, respectively, while E3 and v12 are reduced by 12% and 6%, respectively. In terms of Young's modulus, as ̟r increases, the increase in E1 and E2 is smaller than the decrease in E3. From Table 3 it can be observed that the relative standard deviation of the effective stiffness of the RVEs generated at different ̟r values is less than 15%, indicating that the method of generating RVEs and predicting effective stiffness in this study is repeatable.
Effect of pore growth randomness
In order to reduce the influence of other factors on the mechanical properties of the RVE, the effect of pore growth randomness (̟x, ̟y) was analyzed for a porous material model with α = 0.25, N = 16, ρ = 0.25, ̟r = 0. For each pair of ̟x and ̟y values, i.e., ̟x = 0, 1, 2, 3 and ̟y = 0, 1, 2, 3, 4, five porous material models were generated using five sets of independent nucleation points p_i0 (i = x, y) and random numbers a^ℓk_i (ℓ = 0, 1, ..., 9; k = 1, 2, ..., N). The average values of the corresponding effective engineering constant predictions and their relative standard deviations are shown in Fig. 15 and Table 4, respectively (in Fig. 15, X-DP i denotes the x-directional design parameter ̟x = i, i = 0, ..., 3). From Fig. 15 it can be seen that, on average, as ̟y increases, E1, G12, v13 and v23 increase, while G13, G23 and E3 decrease, and v12 and E2 fluctuate slightly. The increase of ̟x leads to increases in E2, G12, v13 and v23 and decreases in G13, G23 and E3, but has little effect on E1. In addition, as ̟x first increases, v12 decreases; as ̟x increases further, v12 remains within a stable range. At the same time, for each ̟x value, as ̟y increases from 0 to 4, the change in the average value of E1 is similar; for each ̟y value, as ̟x increases from 0 to 3, the change in the average value of E2 is similar. This means that when ̟x and ̟y change, the interaction between them has little effect on the elastic moduli in the x and y directions. The effect of pore growth randomness on E3 decreases as the pore growth randomness increases. At the same time, from ̟x = ̟y = 0 to ̟x = 3, ̟y = 4, E1, E2, G12, v13 and v23 increased by 18%, 16%, 26%, 58% and 55%, respectively, while E3, G13, G23 and v12 decreased by 181%, 55%, 53% and 17%, respectively. This indicates that the sensitivity of the mechanical properties of the RVE in the z direction to pore growth randomness is higher than that in the x and y directions. This is because when ̟y (or ̟x) increases while ̟x (or ̟y) remains unchanged, the projected area of the pores on the XY plane increases significantly, the projected area on the YZ (or XZ) plane decreases, and the projected area on the XZ (or YZ) plane remains unchanged. Therefore, as ̟y (or ̟x) increases, the mechanical properties of the porous material in the x (or y) direction increase, while the mechanical properties in the z direction decrease. At the same time, an increase in pore growth randomness increases the irregularity of the cell wall cross-sectional area. However, Fig. 15 shows that for low-porosity RVEs with a given porosity, the relative projected area of the pores dominates the effect on the RVE's effective mechanical properties. In other words, as ̟x or ̟y increases, the randomness of the pore particle distribution on the XZ or YZ plane increases, and at the same time the randomness of pore growth in the x or y direction increases. As pore growth randomness increases, the relative projected area of the pores on the XY plane increases and decreases on the XZ and YZ planes, and the unevenness of the cell wall cross-sectional area increases. As a result, increasing randomness of pore particle growth makes the mechanical properties of porous materials weaker in the z direction and stronger in the x and y directions.
Table 4 shows that the relative standard deviations of the effective mechanical properties predicted by the five sets of RVEs generated by different ̟ y and ̟ x are all less than 15%.This once again proves the stability of the numerical results and the repeatability of the method used in this study.
The synergy of the three spatial design parameters
To reduce the impact of other factors on the mechanical properties of the RVE, the combined effects of pore growth randomness (̟x, ̟y) and pore size randomness (̟r) were analyzed for a porous material model with α = 0.25, N = 16, ρ = 0.25. For each combination of ̟x, ̟y, and ̟r values, i.e., ̟x = 2, 3, ̟y = 2, 4, and ̟r = 0, 0.4, 1, five porous material models were generated. The average values of the predicted effective engineering constants and their relative standard deviations are shown in Tables 5 and 6, respectively.
As shown in Table 5, there is a superposition effect when the three spatial design parameters act together. As ̟r increases from 0 to 1, the maximum relative change in the average effective mechanical properties of the RVE across the four (̟x, ̟y) groups is 10%. When the three spatial parameters act together, the impact of ̟r on the effective mechanical properties of the RVE is still not significant, and the trends of the individual design parameters remain unchanged. In addition, the interaction of the three spatial design parameters has little effect on the effective mechanical properties of the RVE in the x and y directions. The effective mechanical properties in the z direction decrease as the randomness of the pore space increases, but once this randomness reaches a certain level, the rate of decline slows as the spatial design parameters continue to increase. For example, when ̟x changes from 2 to 3 and ̟y changes from 2 to 4, the relative changes in E3 corresponding to the three ̟r values (0, 0.4, and 1) are 72%, 70%, and 51%, respectively. When ̟x = 2 and ̟y changes from 2 to 4, the relative changes in E3 for the three ̟r values are 31%, 52%, and 31%, respectively; when ̟x = 3 and ̟y changes from 2 to 4, they are 40%, 42%, and 31%, respectively. Table 6 shows that the relative standard deviations of the effective mechanical properties of the five groups of RVEs generated by different design parameters are all less than 15%, proving the reliability and repeatability of the results.
Experimental equipment and methods
To verify the reliability of the numerical method used in the previous section, the RVE with design parameters ρ = 0.242, α = 0.25, N = 16, ̟x = ̟y = ̟r = 0 (sample b) and the RVE with design parameters ρ = 0.24, α = 0.25, N = 16, ̟x = 0, ̟y = 1, ̟r = 0 (sample c) were prepared using SLA 3D printing technology. The 3D printed material was a photosensitive resin. Each specimen's E3 was then determined by uniaxial compression tests. The model was also solved numerically again using the Poisson's ratio of the material supplied by the manufacturer. Finally, the two sets of values were compared to verify the reliability of the numerical method.
According to ASTM D1621-16 (Standard Test Method for Compressive Properties of Rigid Cellular Plastics), each sample was designed as a cube with a side length of 6 cm. Three samples were made for each structure to ensure the objectivity and reliability of the experimental results (as shown in Fig. 16). Additionally, specimens of the same size with a porosity of 0% (labeled as specimen a) were produced in order to determine the matrix material's Young's modulus. The actual porosity of the specimens is shown in Table 7. The experimental procedure was displacement-controlled, and the specimens were quasi-statically loaded with the test machine's indenter moving uniformly at a rate of 1 mm/min.
Experimental results and validation
Stress-strain curves for each specimen under compression were generated by applying uniaxial compression in the z direction, as illustrated in Fig. 17. The three curves for specimens b and c, as well as the curves for specimens a2 and a3, are quite consistent, demonstrating the reliability of the experimental results. Table 7 shows the specimens' effective Young's modulus E3 from the experiments. According to Table 7, there are significant differences between the experimental results for a1 and those for a2 and a3. Therefore, the mean value E3 = 282.995 MPa corresponding to specimens a2 and a3 and the Poisson's ratio v = 0.39 provided by the manufacturer were taken as the material properties of the base material. Table 7 also shows the effective Young's modulus E3 determined by homogenizing the RVE for specimens b and c. Comparing the average of the three experimental results for each structure with the numerically computed results, the relative error between the numerical and experimental results was found to be less than 5% (as shown in Table 8).
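The comparison in Table 8 reduces to two small computations: a least-squares slope through the linear region of each stress-strain curve for the experimental E3, and a relative error against the homogenized prediction. A sketch with fabricated data (the numbers are illustrative, not the measured curves):

```python
import numpy as np

def youngs_modulus(strain, stress):
    """Slope of a least-squares line through the linear (small-strain)
    region of a compression stress-strain curve."""
    slope, _intercept = np.polyfit(strain, stress, 1)
    return slope

def relative_error(simulated, measured):
    """Relative deviation of the simulated modulus from experiment."""
    return abs(simulated - measured) / measured

# Illustrative linear-region samples for a specimen with E3 = 180 MPa:
eps = np.array([0.002, 0.004, 0.006, 0.008])
sig = 180.0 * eps
E3_exp = youngs_modulus(eps, sig)
err = relative_error(175.0, E3_exp)   # e.g. a homogenized prediction
```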
Conclusion
For the porous solid phase of oil bearings, a pore structure modeling approach is proposed to generate a porous material RVE with geometric periodicity. The computational homogenization method is used to predict the RVE's effective properties. In order to investigate the relationship between the microstructure and the macroscopic mechanical properties of porous materials, the effective properties of RVEs generated from various design parameters are calculated and the numerical results are analyzed. Finally, the validity of the numerical results was verified by performing compression tests on specimens prepared by 3D printing to measure their effective moduli of elasticity. The study's principal findings are as follows.
1. The predicted values of the effective mechanical properties of the RVE generated by given design parameters are stable, proving the repeatability of the method and the effectiveness of the RVE generation approach.
2. The modeling method proposed in this study can be produced with 3D printing technology. However, due to the manner in which the RVE is generated, there are localized stress concentrations in the structure that must be smoothed, and other treatments applied, before it can be used in industrial preparation. For RVEs with complex structures, the printing supports can also be difficult to remove.
3. At 25% porosity, the columnar pore RVE has an elastic modulus in the z direction of around 0.75 E_s. When the number of pores is high, the Young's moduli in the x and y directions are close, as are the shear moduli and Poisson's ratios in the XZ and YZ planes.
4. For the columnar pore RVE, when the number of pores is 4, the mechanical properties of the RVE are unstable. At the same time, the irregularity of the pore distribution has little effect on the effective properties of the RVE.
5. Increasing pore growth randomness strengthens the mechanical properties in the x and y directions and increasingly weakens those in the z direction. At the same time, the shear modulus in the XY plane increases while the out-of-plane shear moduli decrease. The Poisson's ratio in the XY plane first decreases and then stabilizes as the randomness of pore growth in the x direction increases.
6. The effect of pore size randomness on the mechanical properties of the RVE is similar to that of pore growth randomness, except that its effect on the out-of-plane shear moduli is negligible and its overall effect on the effective mechanical properties is smaller.
7. The three spatial design parameters have a superimposed effect on the RVE. Increasing the pore space randomness (randomness of pore growth and of pore size) reduces the mechanical properties of the RVE in the z direction and increases them in the x and y directions, with the former effect larger than the latter. In addition, as the pore space randomness increases further, the rate of decline of the z-direction mechanical properties slows.
Figure 2 .
Figure 2. The process of generating the initial pore coordinates ( ρ = 0.25, N = 25, α = 0.25 ).(a) Voronoi diagram generated by scaling the planes according to the randomness of the pore distribution and the number of pores (where the red square indicates the desired plane range); (b) Voronoi polygon with the nucleation points within the desired plane range; (c) initial pore distribution obtained by scaling the Voronoi polygon according to the target porosity.
Figure 4 .
Figure 4.(a-c) Examples of pipes that have been produced using various x, y design parameters ( ̟ x = ̟ y = 1 ).The dark purple line depicts the growth curve of the pore nucleation points when the pipe is generated using the specified x and y design parameters; the dark purple points are the nucleation points, the lavender points are the vertices of each pore particle determined by the aforementioned equation, the lavender surface is the pore cross-section enclosed by the resulting vertices, and the nearly transparent lavender surface is the pore wall.(a) ̟ r = 0 ; (b) ̟ r = 0.3 ; (c) ̟ r = 0.6.
Figure 5 .
Figure 5. Pores pass through the boundary surface in the x direction.(a) Two RVEs share a pore, with Cube1 denoting the initial RVE and Cube2 denoting the adjacent RVE; a is the portion of the pore that lies within the initial RVE; b denotes the portion of the pore that lies within the adjacent RVE; (b) the pore beyond the boundary is transferred to the corresponding relative surface of the initial RVE.
Figure 6.
Figure 6.Pores pass through the boundary surface in the y direction.(a) Two RVEs share a pore, with Cube1 denoting the initial RVE and Cube2 denoting the adjacent RVE; a is the portion of the pore that lies within the initial RVE; b denotes the portion of the pore that lies within the adjacent RVE.(b) The pore beyond the boundary is transferred to the corresponding relative surface of the initial RVE.
Figure 7 .
Figure 7.The pore passes over one edge and two faces.(a) Four RVEs share a pore.Cube1 denotes the initial RVE; Cube2, Cube3, and Cube4 denote adjacent RVEs; a is the portion of the pore located within the initial RVE; b, c, and d denote portions of the pore located within adjacent RVEs, respectively; (b) the pore beyond the boundary is transferred to the corresponding relative surface of the initial RVE.
Figure 17 .
Figure 17.Stress-strain curve of a porous material specimen under uniaxial compression.
Table 1 .
Relative standard deviation in effective properties for porous materials with different N (unit :%).
Table 2 .
Relative standard deviation in effective properties for porous materials with different α (unit: %).
Table 3 .
Relative standard deviation in effective properties for porous materials with different ̟ r (unit : %).
Table 4 .
Relative standard deviation in effective properties for porous materials with different ̟ y and ̟ x (unit: %).
Table 5 .
The average value of the effective engineering constants for different ̟ r ,̟ y and ̟ x (unit : %).
Table 6 .
Relative standard deviation in effective properties for porous materials with different ̟ r ,̟ y and ̟ x (unit: %).
Table 8 .
Experimental and simulation results for porous material specimens.
A practical method for preparation of pneumococcal and nontypeable Haemophilus influenzae inocula that preserves viability and immunostimulatory activity
Background: Convenience is a major reason for using killed preparations of bacteria to investigate host-pathogen interactions; however, host responses to such preparations can result in different outcomes when compared to live bacterial stimulation. We investigated whether cryopreservation of Streptococcus pneumoniae and nontypeable Haemophilus influenzae (NTHi) permitted investigation of host responses to infection without the complications of working with freshly prepared live bacteria on the day of experimental challenge.
Findings: S. pneumoniae and NTHi retained >90% viability following cryopreservation in fetal calf serum for at least 8 weeks. Host responses to live, cryopreserved (1 week and 4 weeks), heat-killed or ethanol-killed S. pneumoniae and NTHi were assessed by measuring cytokine release from stimulated peripheral blood mononuclear cells (PBMCs). We found that cryopreserved bacteria, in contrast to heat-killed and ethanol-killed preparations, resulted in comparable levels of inflammatory cytokine release from PBMCs when compared with fresh live bacterial cultures.
Conclusion: Cryopreservation of S. pneumoniae and NTHi does not alter the immunostimulatory properties of these species, thereby enabling reproducible and biologically relevant analysis of host responses to infection. This method also facilitates the analysis of multiple strains on the same day and allows predetermination of culture purity and challenge dose.
Background
Many research laboratories, by necessity, use heat-killed, ethanol-killed, UV-irradiated or paraformaldehyde-fixed bacterial preparations to investigate host-bacterial interactions. However, stimulation of host immune cells with such inactivated preparations can result in very different outcomes when compared with live bacterial stimulation [1][2][3]. We have shown that in comparison with live Staphylococcus epidermidis, heat-killed and ethanol-killed S. epidermidis preparations have a reduced capacity to activate key innate immune pathways, especially those associated with cytosolic/endosomal bacterial recognition [3]. In addition, Mogensen and colleagues have demonstrated that live but not heat-killed preparations of Streptococcus pneumoniae and Neisseria meningitidis stimulated the host inflammatory response through Toll-like receptor 9 [2].
Convenience is a major reason for using killed preparations of bacteria to investigate host-pathogen interactions. Working with live bacteria usually requires growth to mid-log phase on the day of the stimulation experiment to ensure consistent and reproducible host responses. The time required for mid-log growth on the day of experimentation varies for different bacteria and can take up to 8 hours, which restricts the number of strains that can be assessed on one day and the time available for experimental challenge. In addition, culture contamination is usually only apparent on the day after experimental challenge of the host cells/animal models by checking purity of the culture on an agar plate incubated overnight. An alternative to broth cultures is to harvest bacteria from an overnight agar plate and resuspend in media to the desired optical density, which roughly correlates with bacteria/ml [4]. However, this means that the majority of bacteria are either in stationary phase or indeed dead when used to assess the host response, and results can vary accordingly. We herein describe a simple cryopreservation method using fetal calf serum (FCS) to store mid-log phase S. pneumoniae and NTHi for at least 8 weeks without a significant reduction in viability. We have used a PBMC stimulation assay to demonstrate that preparations of S. pneumoniae and NTHi frozen for up to 4 weeks retain the immunostimulatory properties of freshly prepared live bacterial preparations, whereas heat-killed and ethanol-killed preparations do not.
Findings
Viability dipped when S. pneumoniae and NTHi were initially frozen; however, both species retained over 90% viability following 8 weeks of cryopreservation (92.6% and 97.0%, respectively, compared with 1 day of cryopreservation; Figure 1). Viability at 16 weeks of cryopreservation was also measured and remained >90% for both species (data not shown).
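The paper does not spell out how the retained-viability percentages were computed; as a minimal sketch, assuming a simple ratio of viable counts against the 1-day post-freezing reference (the example counts are hypothetical):

```python
def percent_viability(cfu_timepoint: float, cfu_reference: float) -> float:
    """Viable count at a later timepoint expressed as a percentage of a
    reference count (e.g. the 1-day post-freezing count).

    This ratio-of-counts formula is an assumption for illustration; the
    paper does not state the exact calculation behind its 92.6%/97.0%.
    """
    return 100.0 * cfu_timepoint / cfu_reference

# hypothetical counts: 9.26e7 cfu/mL at 8 weeks vs 1.0e8 cfu/mL at 1 day
# percent_viability(9.26e7, 1.0e8) -> ~92.6
```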
With assurance that S. pneumoniae and NTHi remained viable following cryopreservation, we then challenged PBMCs from 5 adult donors with preparations of bacteria that were either frozen for 1 or 4 weeks and compared this with PBMCs challenged with heat-killed, ethanol-killed or live preparations prepared on the day of challenge. PBMC release of 5 inflammatory cytokines was measured as an indication of the immunostimulatory properties of the bacterial preparations. We found that there was no difference in the immunostimulatory properties of frozen S. pneumoniae and NTHi compared with live bacteria, regardless of whether the bacteria were stored at −80°C for 1 or 4 weeks (Figure 2). In contrast, stimulation of PBMCs with ethanol-killed preparations resulted in production of significantly lower levels of IL-6, IL-10, TNFα and IL-1β for S. pneumoniae, and IFNγ and IL-1β for NTHi, when compared with live or frozen preparations (Figure 2, P < 0.05). Heat-killing retained slightly more immunostimulatory properties than ethanol-killing, but immunostimulation was still reduced in comparison with live or frozen bacteria. No IFNγ was released from PBMCs stimulated with either heat- or ethanol-killed S. pneumoniae preparations, whereas an average of 200 pg/mL IFNγ was released upon stimulation with live or frozen S. pneumoniae. For NTHi, the IL-6, IL-10 and TNFα responses were not dependent upon the bacterial treatment, with high levels of cytokine production from live, frozen and killed preparations.
Conclusions
We have described a simple and practical method that enables investigation of live host-pathogen interactions without the restrictions that are normally associated with working with live bacteria such as experimental time, contamination, intra-assay variation and scalability. Serum is a known microbial cryoprotectant [5] and although a similar storage method has been used with S. pneumoniae for challenge of mice [6] we have provided a detailed methodology and clearly demonstrated that cryopreservation of S. pneumoniae and NTHi with FCS preserves the immunostimulatory properties of these species. We have also confirmed that cryopreservation is superior to other methods for standardisation and storage of bacteria that involve inactivation.
Different methods of killing bacteria can alter the immunostimulatory profile of the pathogen either by exposing or destroying PAMPs [1][2][3]. This was evident in our study where heat and ethanol treatment of S. pneumoniae but not NTHi attenuated the IL-6 and IL-10 response from PBMCs. This is most likely due to the killing treatments destroying key pneumococcal virulence factors such as pneumolysin [1] but not lipooligosaccharide from NTHi, which is heat-stable. This highlights how using killed preparations of bacteria can result in an under- or over-stated host immune response to the remaining immunostimulatory components and may result in misleading conclusions about host-pathogen interactions. In summary, we have developed a straightforward and convenient storage method for bacteria and demonstrated that cryopreserved bacteria remain viable for at least 8 weeks and maintain their stimulatory capacities for at least 4 weeks (later time points were not tested). This technique facilitates the analysis of multiple bacterial species on the same day, allows predetermination of culture purity and viability, and most importantly enables accurate investigation of the host response to live bacterial infection.

Figure 1. Viability of S. pneumoniae and NTHi over 8 weeks of storage at −80°C. The count on day 0 is a total bacteria count using a chamber (bacteria/mL); thereafter viable counts were conducted with the frozen bacteria to give log cfu/mL at 1 day, 1 week, 4 weeks and 8 weeks post-freezing.
Bacterial strains, culture and cryopreservation
All reagents were from Sigma Aldrich, New South Wales, Australia unless otherwise stated. Glycerol stocks of S. pneumoniae D39 (ATCC#7466) and NTHi 289 [7] were streaked out onto blood agar or chocolate agar plates respectively for single colonies and incubated at 37°C. S. pneumoniae was incubated anaerobically using the BD GasPak™ EZ Anaerobic Pouch System [BD Diagnostics, Australia]. Following overnight incubation, three colonies were selected from each agar plate and used to inoculate culture media. S. pneumoniae was grown statically in brain heart infusion (BHI) broth at 37°C and NTHi was grown with shaking in BHI supplemented with 44 mL/L glycerol, 30 mg/L hemin and 10 mg/L nicotinamide adenine dinucleotide (NAD). Both strains were grown to mid-log phase (OD600nm 0.55–0.65), counted at 100× magnification using a Helber bacteria counting chamber [ProSciTech, Queensland, Australia] and then either subjected to the different treatments (heat, ethanol or freezing) or used fresh on the day of preparation. All cultures were streaked onto agar plates and incubated at 37°C overnight to check purity. For cultures that were to be frozen, heat-inactivated FCS [SAFC Biosciences, New South Wales, Australia] was added to the mid-log phase culture to give a final concentration of 20% FCS, and 1 mL aliquots were stored in cryovials at −80°C. FCS was the only cryoprotectant used for storage of these bacterial stocks.
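The 20% FCS target implies a small dilution calculation when freezing a culture; a minimal sketch (the 8 mL culture volume in the example is hypothetical, not from the protocol):

```python
def fcs_volume_to_add(culture_ml: float, final_fcs_fraction: float = 0.20) -> float:
    """Volume of FCS (mL) to add to a culture so that FCS makes up the
    given fraction of the final volume: solves
    v / (culture_ml + v) = final_fcs_fraction for v."""
    if not 0.0 < final_fcs_fraction < 1.0:
        raise ValueError("final_fcs_fraction must be between 0 and 1")
    return culture_ml * final_fcs_fraction / (1.0 - final_fcs_fraction)

# e.g. for a hypothetical 8 mL mid-log culture:
# fcs_volume_to_add(8.0) -> 2.0 mL FCS, i.e. 10 mL total at 20% FCS
```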
Viable counts of frozen bacteria
Vials of frozen bacterial stocks were thawed at 37°C for 2 min in a water bath, 900 μL was transferred to a fresh tube and centrifuged in a benchtop centrifuge at maximum speed for 3 min. The supernatant was discarded and the pellet resuspended in 900 μL of sterile phosphate buffered saline (PBS) pH 7.4 [Gibco, New South Wales, Australia]. Ten-fold dilutions of each bacteria ranging from 10⁻¹ to 10⁻⁶ were prepared with PBS in a 96-well polystyrene round-bottom plate. Agar plates were divided into 6 sectors, using 2 plates per strain, and three 20 μL aliquots of the dilutions were spotted onto each sector. Plates were allowed to dry then incubated overnight at 37°C. The following day, the number of colony forming units (cfu) per 20 μL spot was counted in the sector with approximately 20–80 cfu, averaged and multiplied by the dilution factor to give cfu/mL. Viable counts of cryopreserved bacteria were conducted at 1 day, 1 week, 4 weeks, 8 weeks and 16 weeks post-freezing.
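The spot-plate arithmetic in the last step can be sketched as follows (the colony counts in the example are hypothetical; the 20 μL spot volume and 10-fold dilutions follow the protocol above):

```python
def cfu_per_ml(spot_counts, spot_volume_ul: float = 20.0,
               dilution_factor: float = 1e4) -> float:
    """cfu/mL of the undiluted stock from replicate spot counts:
    average the spots, scale the spot volume up to 1 mL, then multiply
    by the dilution factor of the counted sector (e.g. 1e4 for 10^-4)."""
    mean_cfu = sum(spot_counts) / len(spot_counts)
    return mean_cfu * (1000.0 / spot_volume_ul) * dilution_factor

# three 20 uL spots of the 10^-4 dilution with 42, 38 and 40 colonies:
# cfu_per_ml([42, 38, 40], dilution_factor=1e4) -> 2.0e7 cfu/mL
```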
Heat-killing and ethanol-killing of bacteria
Mid-log phase cultures of S. pneumoniae and NTHi were centrifuged at 3200 g for 10 min, washed in PBS, and viable counts were conducted as described above. For heat-killing, the bacteria in PBS were incubated in a water bath at 60°C for 1 h. For ethanol-killing, the bacteria were resuspended in 70% ethanol and incubated at 4°C for 1 h with rotation. After killing, bacteria were washed in PBS, aliquoted and stored at 4°C. Chamber counts were conducted on the killed preparations to determine bacteria/mL and viable counts were conducted to confirm successful killing.
PBMC collection, processing and stimulation with live, frozen, heat-killed and ethanol-killed preparations of bacteria

Whole blood was collected from five healthy adult donors by peripheral venepuncture and mixed 1:1 with heparinised (2%) RPMI 1640 media supplemented with 1% sodium pyruvate, 1% glutamax and 10 mM HEPES buffer [all Gibco]. The PBMCs were processed and stored as previously described [3]. On the day of stimulation, PBMCs were thawed at 37°C for 2 min in a water bath, added to 2% FCS RPMI 1640 and centrifuged at 500 g for 10 min at room temperature. The supernatant was discarded and the cells were resuspended in media and counted. The PBMCs were seeded in triplicate for each stimulus at 2.5 × 10⁵ cells per well in 96-well polypropylene round-bottom plates and incubated for 24 h at 37°C in a humidified 5% CO₂ environment. PBMCs were stimulated with live, heat-killed, ethanol-killed, frozen-for-1-week or frozen-for-4-weeks preparations of S. pneumoniae D39 and NTHi 289 at a multiplicity of infection of 10:1 bacteria to cells. Viable counts were determined for live and frozen bacteria as described above, and total bacterial chamber counts were conducted for the heat- and ethanol-killed preparations. Cells in control wells were stimulated with either PBS (cells only), 1 ng/mL lipopolysaccharide (LPS, from E. coli R515) [Alexis Biochemicals, Sapphire Biosciences, NSW, Australia] or 1 μg/mL staphylococcal enterotoxin B (SEB, from S. aureus). At 24 h post-stimulation, plates were centrifuged for 5 min at 200 g and supernatants were harvested; triplicate wells were combined and then stored in aliquots at −80°C for subsequent measurement of inflammatory mediators.
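For the 10:1 MOI used above, the inoculum volume per well follows from the stock's viable count; a minimal sketch (the 1 × 10⁸ cfu/mL stock concentration in the example is hypothetical):

```python
def inoculum_volume_ul(cells_per_well: float, moi: float,
                       stock_cfu_per_ml: float) -> float:
    """Volume of bacterial stock (uL) per well needed to reach the target
    multiplicity of infection (bacteria per host cell)."""
    bacteria_needed = cells_per_well * moi
    return bacteria_needed / stock_cfu_per_ml * 1000.0

# 2.5e5 PBMCs/well at MOI 10 from a hypothetical 1e8 cfu/mL stock:
# inoculum_volume_ul(2.5e5, 10, 1e8) -> 25.0 uL per well
```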
Measurement of inflammatory mediators
IL-6, IL-10, IFNγ and TNFα levels (pg/mL) were measured in cell culture supernatants using a previously described multiplex cytokine bead assay [8]. IL-1β levels in cell supernatants were measured using an ELISA kit [Bender MedSystems, California, USA] following the manufacturer's instructions, except that wells were coated with an alkaline coating buffer (40 mM Na2CO3; 70 mM NaHCO3) instead of PBS.
Statistical analyses
PBMC cytokine release following different treatments was compared using a Kruskal-Wallis test with Dunn's post-test analysis using GraphPad Prism [GraphPad Software Inc, California, USA], where P < 0.05 was considered statistically significant. For IL-1β one outlier was excluded from analysis due to an out of range maximum value.
Potential Coagulation Factor-Driven Pro-Inflammatory Responses in Ovarian Cancer Tissues Associated with Insufficient O2 and Plasma Supply
Tissue factor (TF) is a cell surface receptor for coagulation factor VII (fVII). The TF-activated fVII (fVIIa) complex is an essential initiator of the extrinsic blood coagulation process. Interactions between cancer cells and immune cells via coagulation factors and adhesion molecules can promote progression of cancer, including epithelial ovarian cancer (EOC). This process is not necessarily advantageous, as tumor tissues generally undergo hypoxia due to aberrant vasculature, followed by reduced access to plasma components such as coagulation factors. However, hypoxia can activate TF expression. Expression of fVII, intercellular adhesion molecule-1 (ICAM-1), and multiple pro-inflammatory cytokines can be synergistically induced in EOC cells in response to hypoxia along with serum deprivation. Thus, pro-inflammatory responses associated with the TF-fVIIa–ICAM-1 interaction are expected within hypoxic tissues. Tumor tissue consists of multiple components such as stromal cells, interstitial fluid, albumin, and other micro-factors such as proton and metal ions. These factors, together with metabolism reprogramming in response to hypoxia and followed by functional modification of TF, may contribute to coagulation factor-driven inflammatory responses in EOC tissues. The aim of this review was to describe potential coagulation factor-driven inflammatory responses in hypoxic EOC tissues. Arguments were extended to clinical issues targeting this characteristic tumor environment.
Introduction
Epithelial ovarian cancer (EOC) is a general term representing neoplasms in the pelvic or peritoneal cavity, as the origin of most EOC cases is not the ovary [1]. EOC accounts for approximately 3% of cancers in women but represents the most lethal gynecologic malignancy worldwide [2]. EOC is associated with relatively poor prognosis because of the lack of sufficient diagnostic and therapeutic methods. The 5-year survival of this type of cancer is 46.2% [2]. Moreover, there are difficulties in treating this disease when patients have relapsed or are diagnosed at a late stage. EOC can be classified based on multiple histologic [1,3,4], genetic [3], and functional [5] subtypes, such as high- and low-grade serous and clear-cell carcinoma (CCC) [1,[3][4][5]. Low-grade serous and CCC subtypes are known to be relatively chemoresistant [3,4,6,7]. Accordingly, greater understanding of the relationship between the heterogeneity of EOC and the underlying biology should result in promising new treatment strategies.
Tissue factor (TF) is a transmembrane glycoprotein that is expressed in various normal and cancerous tissues (Figure 1). Blood coagulation factor VII (fVII) is a precursor serine protease produced in the liver and then secreted into the bloodstream. TF functions as a receptor for fVII, and binding initiates the extrinsic blood coagulation cascade, including generation of activated factor X (fXa) via production of the active form of fVII (fVIIa) (Figure 1). The serine protease activity of fVIIa within the TF-fVIIa complex triggers a series of enzymatic reactions, finally leading to clot formation composed of platelets and red blood cells covered with fibrin polymers (Figure 1) [8][9][10]. The fVII pro-coagulant protease, together with fXa, is also responsible for the activation of a cellular signaling cascade by cell surface cleavage of protease-activated receptors (PARs), in normal and cancer cells, including EOC cells [9,10]. In addition to coagulation activity, the cytoplasmic domain of TF also plays critical roles in cancer biology [9][10][11].

Figure 1. Extrinsic coagulation cascade initiated by tissue factor (TF)-activated coagulation factor VII (fVIIa) dimer formation on the surface of extravascular cells. The TF-fVIIa complex triggers calcium ion-dependent sequential enzymatic reactions in response to injury of blood vessels composed of endothelial cells and the vessel wall. Formation of fibrin polymers together with platelets and red blood cells leads to clot formation to halt bleeding. This fibrin deposition process is supported by von Willebrand factors (vWFs). Blood coagulation is completed by clot formation with other factors, such as platelets and red blood cells. Schematics of fibrin(ogen) with characteristic domains D and E are also shown. fXa: activated factor X; fIXa: activated fIX.

Pro-coagulant-driven inflammatory responses play key roles in the progression of various types of cancer [12][13][14][15] including EOC [16][17][18]. The TF-fVIIa complex can contribute to cellular inflammatory responses via direct activation of PAR2 and indirect activation of PAR1 and PAR2 via fXa [11,19] in a coagulation-independent manner, followed by activation of cellular signaling cascades [20]. In addition, fibrin(ogen), an end product of the coagulation cascade (Figure 1), is involved in the inflammatory response in various diseases, including cancer, as this coagulation factor mediates recruitment of immune cells via adhesion molecules [21].

In general, cancer tissues have poorly organized blood [22][23][24] and lymphatic [23][24][25] capillary networks, leading to reductive conditions called hypoxia. Cancer cells have various molecular mechanisms to overcome and adapt to such harsh conditions. The gene encoding TF can be transcriptionally upregulated in response to hypoxia, potentially via the early growth response-1 (EGR-1) transcription factor, in various cancer cells including EOC cells [26][27][28][29][30]. In addition, the gene expression of fVII is inducible in response to hypoxia in EOC cells. However, its regulatory transcriptional mechanisms are distinct from those controlling TF expression [28,29,31,32].
Given that tissue hypoxia is a result of aberrant tissue vascularization, cancer cells in tumors should also undergo starvation of plasma components in addition to O2 deficiency. Indeed, recent studies have revealed that the limited supply of plasma lipids, in addition to O2, is responsible for the induction of expression of key genes required for compensation of the cellular lipid deficiency in cancer cells [33]. A previous study showed that transcriptional induction of the FVII gene in CCC cells under hypoxia is synergistically enhanced when cells are cultured without serum [32]. Moreover, the gene encoding intercellular adhesion molecule-1 (ICAM-1), a cell surface mediator of the immune response, is robustly expressed in CCC cells in response to simultaneous exposure to hypoxia and serum deprivation conditions [34]. This experimental evidence implies that inflammatory responses mediated via the cancer cell-derived TF-fVIIa complex and ICAM-1 play vital roles in EOC progression. However, it is expected that the functions of these proteins are modulated under such harsh conditions, given that characteristics of tumor components, such as the extracellular matrix, stromal cells, and tissue interstitial fluid, may also be altered in response to hypoxia. The main aim of this review is to discuss potential coagulation factor-driven pro-inflammatory responses within EOC tissues that are insufficiently supplied with O2 and plasma components.
TF-fVIIa-Dependent Phenotypes of EOC Cells
Previous studies demonstrated that the extrinsic but not intrinsic blood coagulation mechanism initiated by the TF-fVII interaction ( Figure 1) [8] is closely involved in the biology of cancer cells [9][10][11]19,20], including EOC cells [10]. The extrinsic coagulation cascade consists of sequential enzymatic reactions, finally resulting in fibrin formation ( Figure 1). Fibrin monomers are tethered to each other to form homopolymers, leading to clot formation with the assistance of other blood components such as platelets, red blood cells, and von Willebrand factors (vWFs) [8,35] (Figure 1).
Enzymes responsible for this coagulation cascade are known to cause malignant cell phenotypes. TF is overexpressed in CCC cells [10]. Additionally, fVII can be ectopically induced in some EOC cells, including CCC cells, in response to hypoxia via the specificity protein 1 (Sp1)-hypoxia-inducible factor-2α (HIF-2α) interaction [28,31,32]. Unlike general transcription mechanisms, this hypoxia-driven transcriptional activation is associated with characteristic epigenetic changes, namely the deacetylation of histones within the promoter region of the FVII gene [32].
PARs (PAR1-PAR4) are major G protein-coupled receptors that are potentially responsible for transmitting TF-fVIIa-dependent cellular signals [19]. Whether all PARs are involved in the biology of EOC cells is not clear. However, several studies have already investigated the role of PAR1, PAR2, and PAR4 in EOC cell biology [10,29,31,36,37]. Both the motility and invasiveness of CCC cells are increased by ectopic expression of fVII, followed by cell surface TF-fVIIa formation [31]. These phenotypes were considered to be dependent on PAR1, presumably via ectopically synthesized fX, as a ternary TF-fVIIa-fXa complex but not binary TF-fVIIa complex can activate PAR1 [31] (Figure 2). Indeed, recent studies have shown that fX is expressed in CCC cells [29], providing support for this phenomenon. A recent report has also shown that PAR1 facilitates proliferation of non-CCC EOC cells while PAR2 enhances cell motility in an fVIIa-dependent manner [37]. A database search revealed that PAR1 transcript levels are significantly higher in EOC tissues compared with those in normal ovarian tissues [37]. In addition, immune cells treated with TF-fVIIa complex potentially included in ascites can augment secretion of cytokines such as interleukin (IL)-8 (CXCL8), thereby increasing the motility and invasiveness of EOC cells [38]. EOC cells can secrete extracellular vesicles (EVs) associated with high levels of TF or TF-fVIIa associated with procoagulant activity, potentially leading to venous thromboembolism (VTE) [28,29,39]. Shedding of EVs and incorporation of TF into EVs are regulated by the actin-binding protein filamin-A, while only shedding is regulated by PARs [29]. Cell-surface TF-fVIIa activity can be regulated by anti-or pro-coagulants such as anti-thrombin III, phospholipids, and tissue factor pathway inhibitor-1 (TFPI-1) [10]. Intriguingly, a recent report showed that TFPI-2 is highly expressed in CCC cells, suggesting a new biomarker for CCC cells [40,41]. 
Overall, these findings suggest that components of extrinsic coagulation pathway contribute to aggressive phenotypes of EOC under peritoneal environments.
TF-fVIIa Pathway and Inflammation in Cancer Tissue
The relationship between cancer and inflammation has been widely studied and extensively reviewed [12][13][14][15]. Indeed, several studies have demonstrated that the TF-fVIIa pathway closely correlates with the immune response in cancer tissues [19,20,[42][43][44][45][46], as expression of the TF (F3) gene can be augmented by pro-inflammatory transcription factors such as NFκB and AP-1 [10,26]. Briefly, TF-fVIIa signaling via PARs augments the production of pro-inflammatory proteins such as tumor necrosis factor-α (TNF-α), interleukins, and adhesion molecules in cancer cells [19]. In contrast, pro-inflammatory factors can enhance transcriptional upregulation of the F3 gene to increase cellular TF levels [19] via activation of NFκB [26]. The TF-fVIIa complex can directly cleave and activate PAR2 [10,19]. PAR1 is a receptor for thrombin, and PAR1 activation requires prothrombin cleavage (Figure 2). However, PAR1 can be cleaved by the TF-fVIIa-fXa complex, as described in the previous section (Figure 2), potentially followed by activation of various signaling cascades, including Ca2+ mobilization via association with multiple G-proteins (Figure 2) [47]. Also, immune cells are components of the tumor microenvironment (TME) and can cause tumor-promoting inflammation [13]. Previous studies showed that TF-fVIIa signaling in immune cells such as lymphocytes and tumor-associated macrophages (TAMs) contributes to tumor progression [42][43][44][45].

Figure 2. Activation pathways of PAR1 associated with the plasma membrane. In addition to thrombin, the TF-fVIIa-fXa complex can also cleave and activate the G protein-coupled receptor PAR1 to transmit cellular signals. PAR1 can couple multiple G-proteins to activate various signaling mechanisms. PAR: protease-activated receptor; ERK: extracellular signal-regulated kinase; PKC: protein kinase C.

Fibrinogen is composed of two outer D domains tethered with a coiled-coil structure, with an E domain in its center [48] (Figure 1). In addition to its critical role in clot formation (Figure 1), fibrin(ogen) can mediate inflammatory responses in cancer tissues [21]. For example, cancer cells can associate with platelets within the TME.
Cancer cells in blood vessels initially need to associate with the vessel wall when they metastasize [49]. In EOC models, studies showed that platelets can augment cell proliferation [50], pro-angiogenic and -survival [51] signaling, and survival under shear stress conditions [52]. Other studies further demonstrated that platelets correlate with EOC tumor growth [49], chemoresistance [51], and thromboembolism [16]. Cancer cells can associate with platelets to enhance their tissue infiltration [49,53]. These cell-cell interactions are known to protect cancer cells from immunosurveillance to promote metastasis [49]. Platelets and leukocytes such as polymorphonuclear leukocytes can associate with cancer cells by association to fibrin to promote metastasis [54,55] (Figure 3). Fibrin(ogen) can be a key regulator of these molecular processes, as cell surface glycoproteins, ICAM-1, and integrins on the cell surface function as receptors of this pro-coagulant [54,56]. These molecular events are expected to influence the pathophysiology of EOC. Indeed, the plasma levels of platelets, lymphocytes, and fibrinogen are significantly associated with the prognosis of EOC patients [57].
Relationship between ICAM-1 and EOC
ICAM-1 is a transmembrane glycoprotein classified in the immunoglobulin superfamily and is expressed in endothelial and immune cells [54,56,58,59]. ICAM-1 consists of five immunoglobulin-like domains (Figure 3A) [56]. ICAM-1 functions in cell-cell and cell-extracellular matrix (ECM) interactions and plays multiple roles in the tissue immune response [56]. Interactions between cell surface ICAM-1 and platelets and leukocytes are mediated by its direct binding to heterodimeric complexes of integrins (LFA-1 and Mac-1) [60][61][62] (Figure 3B). Many studies have shown that various cancer cells, including breast, bladder, pancreatic, and oral cancer cells and cells within the tumor environment, overexpress ICAM-1 [58,[63][64][65][66][67][68]. Aberrant expression of this cell surface protein results in elevated cell motility [58][59][60][61][62][63], invasiveness [64], angiogenesis [56], and leukocyte infiltration [69].
Roles of ICAM-1 in EOC cell biology have also been reported. Unlike the effects on cancer cells described above, ICAM-1 expression is inversely correlated with malignancy of EOC cells. ICAM-1 expression is reduced in various EOC cells and tissues compared to normal ovarian surface epithelial cells [70]. Furthermore, forced expression of ICAM-1 inhibited the proliferation of multiple EOC cells [71]. The promoter region of the ICAM1 gene in some EOC cells is highly methylated at CpG sites, and the expression of ICAM1 is recovered upon treatment of cells with a DNA demethylating agent [71] or methyltransferase inhibitor [72].
These experimental results suggest that ICAM-1 suppresses the malignancy of EOC cells.
In contrast, immunoassays revealed that ICAM-1 levels are elevated in ascites of EOC patients [73]. Single nucleotide polymorphisms within the ICAM1 gene are closely associated with risk and prognosis of EOC [74,75], suggesting that ICAM-1 correlates with EOC progression. Platelet adhesion to EOC cells could induce ICAM-1 expression in these cells to promote angiogenesis and cell survival, suggesting that ICAM-1 promotes malignant phenotypes [36]. Indeed, ICAM-1 is inducible in CCC cells under starvation of both O 2 and long chain fatty acids (LCFAs), thereby facilitating cell survival and tumor growth [34]. Furthermore, a very recent study showed that an analogue of curcumin, a component of the turmeric spice, can suppress EOC progression in association with reduced expression of ICAM-1, presumably by NFκB inhibition [76]. These experimental results suggest that ICAM-1 bidirectionally functions in the progression of EOC.
Relationship between Integrins and EOC
As described above, integrins function in the association of EOC cells to platelets and immune cells via fibrinogen to facilitate tumor progression. Differential dimeric complex formation between α- and β-subunits of integrins is responsible for the diverse cell-cell interactions [77]. The αIIb-β3 complex mediates the interaction between cancer cells and platelets [55], while the αL-β2 (LFA-1) and αM-β2 (Mac-1) complexes are responsible for the association of cancer cells to leukocytes [55], enabling EOC cells to exert immune responses. In addition, various studies have shown that interactions between EOC cells and ECM components initiate cellular signaling mechanisms to augment cell migration, invasion, and survival [78], particularly via the α5-β1 complex [79][80][81].
Cell surface integrins may also contribute to EOC biology by regulation of the TF-fVIIa pathway. Membrane-associated full-length TF and alternatively spliced membrane-free TF (asTF) can bind to multiple integrin dimers to activate different cellular signaling pathways, thereby facilitating breast cancer progression [9]. Furthermore, β1-integrin on the surface of endothelial cells can associate with asTF to augment the expression of adhesion molecules, enabling monocytes to undergo trans-endothelial migration into tumor tissue [82]. These events may be more important for CCC cells compared to other EOC cells, as this histological subtype highly expresses TF and fVII [10,28,29,32]. However, TF-integrin interactions in EOC cell models have not been reported so far.
A previous report showed that statin inhibits CCC cell growth by reduction of the ECM protein osteopontin, which can bind multiple integrin dimers to promote cell invasiveness [83]. Furthermore, integrin-linked kinase, which binds the cytoplasmic domain of integrins and modulates their function, is overexpressed in EOC cells, including CCC cells [84]. These results indicate that integrins can greatly affect the biology of EOC cells.
Potential TF-fVIIa-Driven Inflammatory Responses within Hypoxic EOC Tissue
The TME consists of a solid phase composed of many stromal cells, such as immune cells and fibroblasts [85], as well as proteins and carbohydrates, such as collagen, glycosaminoglycan, and hyaluronan [25]. The TME also includes a fluid phase called the tissue interstitial fluid (TIF), which can be a vehicle for many plasma-derived substances. As described above, the majority of solid tumor tissues are characterized by hypoxic environments. Measurement of hypoxia status by various monitoring and imaging techniques indicated that this is true for EOC tissues [10,[86][87][88]. Indeed, investigations of hypoxia in EOC have been increasing [89][90][91].
Unlike normal cells, metabolism in cancer cells is mainly regulated by aerobic glycolysis, which is known as the Warburg effect [92] (Figure 4, number 1). This metabolic change accentuates lactate production, especially under hypoxia, leading to an acidic TME. In addition, cellular lipid metabolism can be reprogrammed and lipid anabolism rather than catabolism is predominant, with synthesis of lipid droplets (LDs) [33] (Figure 4, number 2). This altered lipogenesis under hypoxia contributes to progression of cancers including EOC [33]. Given the poor supply of plasma lipids, reprogrammed lipid metabolism may affect TF-fVIIa-mediated inflammatory responses in hypoxic EOC tissues. Thus, in this section, potential alterations in TF-fVIIa functions in hypoxic EOC tissues will be presented.

Protein disulfide isomerase (PDI), with 491 amino acids, contains the endoplasmic reticulum (ER)-retention signal peptide sequence, KDEL [93], and is thus a resident protein of the ER (Figure 4, number 3). PDI is composed of four distinct domains and catalyzes disulfide bond formation between inter- or intra-molecular cysteine residues in an O2-dependent manner to regulate physiological protein functions [93]. PDI can also be secreted and associate with the cell surface (Figure 4, number 4). Thus, PDI is also expected to play multiple roles in normal cell function by regulation of the folding process of cell surface proteins. However, the mechanism of this cell surface process is unclear.
Figure 4. Potential contribution of protein disulfide isomerase (PDI) and various environmental factors in regulation of the TF-fVIIa complex on the surface of epithelial ovarian cancer (EOC) cells. Due to hypoxia, followed by acidification of the tumor microenvironment (TME) by lactate production, the cell surface PDI is expected to be inhibited. In this case, TF shifts to its reduced inactive form (designated with the bold black arrow), although encrypted TF may still transmit signals [94]. Additionally, TF-fVIIa activity can be affected by various hypoxia-related cellular and environmental factors such as regulation of the PDIA2 gene, lipid metabolism, endoplasmic reticulum (ER) stress, proton exchanger molecules, phosphatidylserine, and metal ions. Dashed arrows and T-bars indicate the activation process and suppression process, respectively. LD: lipid droplet; TFPI: tissue factor pathway inhibitor-1; LCFAs: long chain fatty acids; HIF: hypoxia inducible factor. See text for numbers (1)–(17).
The function of TF can be post-translationally regulated by a conformational change, called "encryption-decryption". This process is regulated by a plasma membrane component, an anionic phospholipid such as phosphatidylserine (PS) [95,96]. Unlike phosphatidylcholine and sphingomyelin, PS mainly resides in the inner leaflet of the plasma membrane and under normal conditions cannot be responsible for regulation of TF function (Figure 4, number 5). However, in response to various cellular stimuli associated with Ca2+ influx, the asymmetrically distributed PS can be exposed to the extracellular space [97,98], thereby modulating TF function [95,96] (Figure 4, number 6). Much experimental evidence has shown that PDI-driven thiol-disulfide exchange contributes to thrombosis in vivo [97,98] (Figure 4, number 7). Disulfide bond formation between Cys186 and Cys209 residues is critical for the expression of TF function [93] (Figure 4). However, whether cell surface PDI can regulate TF function has remained a matter of debate in recent years. A recent review noted that extracellular PS alone is not able to fully activate cell surface TF, but PS and PDI can cooperatively activate cell surface TF [96].
Regarding cancer, many studies have shown that PDI is overexpressed in cancer cells, including EOC cells [99]. In addition, proteome analysis of plasma membrane proteins revealed the presence of PDI on the surface of multiple cancer cells including EOC cells [100]. Thus, PDI on the cell surface may function in some cancer types, potentially via TF-fVIIa complex formation, although the cell surface expression of PDI may be cell type-dependent.
Cancer cells undergo aerobic glycolysis to obtain ATP (Figure 4, number 1) [92]. This metabolism can be enhanced when cells are exposed to hypoxia, as the transcription factor HIF-1α accumulates in cells due to inhibition of proteasome function (Figure 4, number 8) [33]. The TME tends to become acidic because of the production of lactate during this metabolic process [33]. Cancer cells can regulate intra- or extracellular pH levels by carbonic anhydrase 9 (CA9) and sodium/proton exchanger 1 (NHE1) on the cell surface (Figure 4, number 9). Indeed, CA9 is overexpressed in EOC tissues and its aberrant expression correlates with significant clinical parameters, such as disease prognosis [101,102]. The predominance of NHE1 over CA9 in controlling pH levels within EOC cells is unclear [103]. The pH levels within the extracellular and intracellular space are maintained within 6.2–6.9 and 7.1–7.7, respectively, for proper cell functioning [103]. Measurement of pH within the tumor TIF revealed that acidity in the TIF (pH ≈ 6.9) is indeed higher than that in subcutaneous interstitial fluid (SIF) (pH ≈ 7.3) [104]. These data are consistent with higher concentrations of lactic acid in the TIF (~12 or 20 mg/L) compared with those in the SIF (~5 mg/L) [103]. These differences should be even more distinct if these parameters, derived from local hypoxic areas, could be compared to those within normal tissues.
Given the O2-dependence and optimum pH for PDI functioning, PDI is expected to efficiently function within relatively well-perfused tumor tissues associated with medium pH conditions [105]. Full functioning of PDI is unlikely, given poor O2 concentrations and an acidic TME. However, PDI within EOC tumor tissues may still function under low O2 concentration, as it can still function at pH values lower than 5 [106]. Additionally, PDI can be upregulated in cancer cells in response to hypoxia [107,108] (Figure 4, number 10). TF function under severe hypoxia may also be impaired by a PS-mediated mechanism. Negative charges on the PS molecule are essential in the TF decryption process, followed by formation of an active TF-fVIIa complex [95,96]. It is expected that the higher concentrations of protons (H+) (Figure 4, number 11) under hypoxia neutralize negative charges and then inhibit cell surface PS activity. Collectively, PDI- and PS-driven regulation of cell surface TF under hypoxia may be possible; however, their relative contributions would be context-dependent. Additionally, pharmacological inhibition of PDI suppresses malignant phenotypes, indicating that an anti-PDI strategy is promising for some cancer types [99]. However, whether the inhibition of ER- and/or cell surface-associated PDI is responsible for this anti-cancer effect is unclear.
Lipid-Mediated Regulation
The activity of TF is closely associated with cellular lipids other than PS. Lipid rafts (Figure 4, number 12) are a microdomain of the plasma membrane containing high amounts of cholesterol [95,96]. Cryptic and inactivated TF associates with lipid rafts. Indeed, disruption of lipid rafts by removing cholesterol from the plasma membrane promotes decryption, resulting in active TF. LCFAs (palmitic and stearic acids) are likely to modulate cell surface TF activity (Figure 4, number 13) [95]. In particular, palmitoylation of TF at Cys 245 directs TF to the membrane lipid rafts. This mechanism may involve phosphorylation of TF, because phosphorylation at the Cys 258 site reciprocally correlates with the palmitoylation of TF [95,109].
Generally, de novo lipogenesis followed by LD generation (Figure 4, number 2) is accelerated in cancer cells, particularly under hypoxia [33]. This is true for EOC cells [33]. Some reports have shown altered lipid metabolism in EOC cells. Thus, it is likely that cancer-specific lipid metabolism regulates cell surface TF activity. Stored lipids (LCFAs and cholesterols) in LDs can be utilized as needed, via lipolysis, as an energy source and/or as materials for membrane synthesis (Figure 4) [33]. Also, expression of genes responsible for LCFA biosynthesis in EOC cells is enhanced, whereas that of genes associated with de novo cholesterol synthesis is suppressed [110]. A very recent report demonstrated that the ARID1A gene mutation, which is closely associated with the malignancy of CCC cells, results in downregulation of the mevalonate pathway responsible for cholesterol biosynthesis [111]. This experimental evidence raises the possibility that TF on the surface of EOC cells may be activated by disruption of lipid rafts resulting from insufficient de novo synthesis of cholesterol. In this case, increased LCFAs may negatively regulate cell surface TF, as they can direct TF to lipid rafts.
Potential Regulation by MicroRNAs
MicroRNAs (miRs) are regulatory RNAs that are frequently dysregulated in cancer cells, including EOC cells, and can contribute to cancer progression [112]. Previous reports showed that miR-126 and -223 can post-transcriptionally downregulate TF expression in monocytes [113] and endothelial cells [114], respectively. These miRs can be upregulated in EOC cells [112]. Moreover, other studies demonstrated that hypoxia can decrease expression of these miRs in some non-cancer cells [115,116]. Thus, it is likely that miRs influence the expression level of TF in EOC cells in response to altered supply of O 2 and plasma components.
Potential Involvement of TFPIs
Tissue factor pathway inhibitor-1 (TFPI-1), also known as TFPI, is a serine protease inhibitor with two isoforms and a candidate regulator of cell surface TF-fVIIa [117]. A recent study reported hypoxia-driven suppression of TFPI-1 expression in breast cancer cells [118]. This transcriptional repression can be mediated via an authentic HIF-1α-dependent mechanism [118]. In addition, TFPI-1 downregulation can occur in endothelial cells via HIF-2α [119]. Conversely, TFPI-1 expression was found to induce HIF-1α expression in neuroblastoma cells even under normoxic conditions, thereby enabling cells to become drug-resistant [120]. It is intriguing that this serine protease inhibitor is also expressed in EOC cells and plays multiple roles in the regulation of TF-fVIIa activity. A previous study showed that EOC cells express TFPI-1 and that the expression levels did not differ between histological subtypes [121].
TFPI-2, a protease inhibitor with weaker activity against the TF-fVIIa complex than TFPI-1, is more constitutively and highly expressed in CCC cells compared to other EOC cells [40,41]. Whether TFPI-2 can be upregulated in response to hypoxia in EOC cells and contribute to regulation of cell surface TF function remains unclear. However, one study showed that the transcript level of TFPI-2 is increased in a von Hippel-Lindau (VHL) gene disruption-dependent manner in renal cancer cells [122], suggesting that HIFs may contribute to this transcriptional activation. Indeed, the VHL gene is frequently lost in EOC, especially in CCC cells, supporting a potential HIF-driven TFPI-2 expression [122][123][124]. Overall, a hypoxia-driven induction of TFPIs in EOC cells has not been reported yet. However, it is plausible, given characteristics of EOC tissues such as hypoxia and VHL dysfunction in EOC cells (Figure 4, number 14).
TFPI-1 activity seems to be influenced by lipid rafts in the plasma membrane. As described above, TF-fVIIa activity can be repressed by binding to lipid rafts. TF-fVIIa may be activated once lipid rafts are disrupted by altered lipid composition, such as decreased cholesterol level. One study showed that TFPI-1 activity on the surface of Chinese hamster ovary cells could be enhanced by its binding to caveolae, a kind of lipid raft [125] (Figure 4, number 15). This suggests that TFPI-1 can indirectly suppress TF-fVIIa activity by lipid rafts. In contrast, another report showed that the ability of TFPI-1 to suppress the TF-fVIIa complex is independent of lipid rafts [126]. Together, this suggests the possibility that TF-fVIIa activity on the EOC cell surface is affected by multiple TFPI-driven mechanisms.
Possible Involvement of Integrins
As described above, integrin α5 complexed with the β1 subunit plays key roles in the biology of EOC cells. To date, various studies have shown that this integrin subunit is inducible in various non-EOC cancer cells in response to hypoxia, non-hypoxic HIF expression conditions, and Sp1-dependent mechanisms [127][128][129]. In addition, coagulation factors can function as ligands for integrins [130]. The α5-β1 complex can interact with multiple coagulation factors such as fX, fibrinogen, and vWF [130]. Thus, the α5 subunit is expected to largely influence TF-fVIIa-dependent inflammatory responses if hypoxia-driven expression of the α5 subunit is possible in EOC cells. One study in an endothelial cell line showed that the cell surface full-length form of TF can bind integrin dimers, especially those composed of α3 and α6 subunits, but could not bind the α5 subunit [131]. The precise nature of the physical interaction between TF and the α5 subunit in EOC cells is currently unclear. Thus, the relationship between hypoxia-driven expression of integrins and TF-fVIIa function in EOC cells remains obscure. However, EOC cells could aberrantly produce trypsin [132], thereby potentially resulting in PAR2-dependent activation of the cell surface α5-β1 complex [133]. Thus, TF-fVIIa possibly augments activity of this integrin complex via PAR2 on the surface of EOC cells.
Potential Involvement of Matriptase and Metalloproteinases
Matriptase (MTP) is a member of the type II transmembrane serine protease family. Recently, this protease, in addition to PAR2, was found to be cleaved and activated by the TF-fVIIa(-fXa) complex to transmit cellular signals [134]. TF on the cancer cell surface within tumors can contact fVII via the TIF and/or its ectopic synthesis in cancer cells. This raises the possibility that MTP-mediated cellular signaling reactions can cause malignant phenotypes of EOC cells if cancer cells produce sufficient levels of MTP. Indeed, MTP is highly expressed in EOC cells and tissues, and this aberrant expression correlates with disease prognosis [135][136][137][138]. In vitro experiments showed that MTP contributes to the facilitation of motility and invasiveness of EOC cells [138]. The further significance of MTP in EOC biology and its potential as a therapeutic target await future investigation.
Metalloproteinases, such as matrix metalloproteinases (MMPs) and a disintegrin and metalloproteinase with thrombospondin motifs (ADAMTSs), are other candidate mediators of the TF-fVIIa pathway. MMP-2 and MMP-9 can be upregulated in small cell lung cancer cells in a TF-dependent manner [139]. Thus, the same transcriptional activation pathways may be involved in the biology of EOC cells exposed to hypoxia. Indeed, previous studies reported that MMP-2 and MMP-9 are highly expressed in ovarian tumor tissues [140]. These MMPs potentially augment immune responses via activation of PAR1 [141,142] through enhanced secretion by thrombin [143]. Moreover, expression of these MMPs may be augmented in EOC cells in response to hypoxia as increased HIF-1α [144] and Sp1 [145] can target MMP2 and MMP9 genes.
ADAMTSs are extracellular proteases responsible for diverse physiological functions such as inflammation and vascular biology [146]. ADAMTS13 cleaves the multimeric vWF precursor to ensure a proper blood coagulation process [146]. In addition, TFPI-2 is a binding partner and substrate of ADAMTS1 [147]. ADAMTSs were found to be upregulated in EOC tissues [148]. Thus, it is feasible that ADAMTSs as well as MMPs influence coagulation factor-driven inflammatory responses in EOC tissue.
Potential Inflammatory Responses under Deprivation of Both O 2 and Serum
As described above, we showed that expression of ICAM-1 is synergistically and strongly enhanced in CCC cells when cells are exposed to hypoxia along with LCFA starvation, via interplay among the transcription factors Sp1, HIFs, and NFκB. Thus, ICAM-1-mediated inflammatory responses are expected under this severe TME condition. Additionally, our complementary DNA (cDNA) microarray analysis using OVSAYO cells revealed that many of the genes synergistically activated in response to simultaneous deprivation of O2 and serum encode pro-inflammatory factors such as interleukins (ILs, also called CXCLs) [34,149] (Figure 5) and TNF-α [34,149]. These factors, together with vascular endothelial growth factor (VEGF) secreted from CCC cells exposed to hypoxia, likely enhance the permeability of blood vessels, thereby promoting extravasation of coagulation factors and pro-inflammatory cells to augment ICAM-1-mediated inflammatory responses. However, a hypoxic environment is not necessarily favorable for the ICAM-1-mediated response, as vessel permeabilization factors released from cancer cells may also lead to generation of aberrant vasculature. This vascular abnormality may result in inefficient perfusion, followed by inhibition of proper TIF flow.
Figure 5. cDNA microarray analysis [34] revealed that interleukins (CXCLs) are upregulated at the highest levels in OVSAYO cells when cells are simultaneously exposed to hypoxia and serum starvation conditions. Cells were cultured under normoxia (N) and hypoxia (H) for 16 h in the presence (+) or absence (−) of fetal calf serum (FCS), and then total RNA was isolated for transcriptome analysis. (A) Line graph representation of gene expression levels under four different culture conditions. Genes synergistically and most activated under the "H/FCS−" condition (688 total measurements) are shown as red lines. Among them, nine genes correspond to CXCLs, as highlighted in black. (B) Scatter plot representation of genes shown in (A).
Expression levels (processed signal) under "H/FCS-" conditions were plotted as a function of raw signal according to the heat map on the right. Highly expressed CXCLs were assigned to each dot.
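The selection logic behind Figure 5A (flagging genes whose signal under the "H/FCS-" condition exceeds every other condition) can be sketched as follows. This is an illustrative filter rather than the authors' actual microarray pipeline: the `synergistic_genes` helper, the twofold synergy threshold, and all gene names and signal values are hypothetical.

```python
# Illustrative filter mimicking the gene selection in Figure 5A.
# All gene names and signal values below are hypothetical.

def synergistic_genes(expr, target="H/FCS-", factor=2.0):
    """Return genes whose signal under `target` is at least `factor` times
    the highest signal seen under every other condition."""
    hits = []
    for gene, levels in expr.items():
        others = [v for cond, v in levels.items() if cond != target]
        if levels[target] >= factor * max(others):
            hits.append(gene)
    return sorted(hits)

# gene -> {condition: processed signal (arbitrary units)}
expr = {
    "CXCL1": {"N/FCS+": 1.0, "N/FCS-": 1.5, "H/FCS+": 1.2, "H/FCS-": 9.0},
    "CXCL8": {"N/FCS+": 0.8, "N/FCS-": 1.1, "H/FCS+": 1.0, "H/FCS-": 6.5},
    "ACTB":  {"N/FCS+": 5.0, "N/FCS-": 4.8, "H/FCS+": 5.1, "H/FCS-": 5.2},
}

print(synergistic_genes(expr))  # -> ['CXCL1', 'CXCL8']
```

A housekeeping-like gene (ACTB above) is excluded because its "H/FCS-" signal does not stand out from the other three conditions, mirroring how only the red lines in Figure 5A qualify as synergistically activated.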
In addition, lymph angiogenesis and/or lymphatic vessel dysfunction are likely to affect plasma-derived protein levels within the tumor TIF. Indeed, a previous study demonstrated that protein concentrations in the tumor TIF are higher than those in normal tissues [104]. Furthermore, due to the vascular dysfunction described above and increased solid stress associated with the high growth rate of tumors [150], the TIF pressure is higher in cancer tissues than in normal tissues [104,150]. This suggests that dissemination of plasma- and lymph-derived proteins and pro-inflammatory cells within cancer tissues is restricted. In addition, tissue dissemination of plasma-derived molecules and cells is likely to be an important determinant for successful ICAM-1-driven inflammatory responses [104].
Molecular weight, lipophilicity, hydration, and charge of components of ECM and TIF [104,151] are likely critical factors for the dissemination of plasma-derived factors.
Overall, hypoxia along with LCFA starvation can dramatically induce ICAM-1 expression, potentially driving CCC cells to exert inflammatory responses via coagulation factors such as fibrin(ogen). However, it is currently difficult to identify which tumor area is exposed to both hypoxia and a limited supply of plasma lipids, as this would depend on a number of factors.
Relationship to Intra-Tumoral Albumin Level
In the advanced stage of EOC, the disease tends to be associated with cachexia, leading to poor nutritional status of patients. Malnutrition of cancer patients, including EOC patients, is common and is related to poor survival. Thus, cachexia is regarded as an important prognostic factor for EOC [152,153]. Cachexia can be estimated by measuring plasma albumin levels [152,153], indicating that plasma albumin concentration can vary significantly depending on the progression of EOC. Hypoxia associated with an insufficient supply of serum lipid (LCFA-albumin complex) causes synergistic activation of the ICAM1 gene to promote CCC cell survival activity [34]. Thus, we surmised that there must be hypoxic tumor areas particularly associated with low albumin levels. However, how and where such characteristic tumor regions can be generated is unclear. It is therefore worth discussing here how albumin functions within ovarian tumor environments.
Albumin is present at considerable levels in human peripheral afferent lymph, and its concentration in lymph is approximately 40% of that in plasma [154]. Component analysis of TIF isolated from EOC tissue and normal ovarian tissue revealed that albumin concentrations are higher in EOC TIF than in that of healthy ovarian tissue, in association with reduced collagen levels [155]. This result was unexpected because investigators initially anticipated that steric exclusion and the negative charge effect due to increased collagen levels in the ECM would restrict the available distribution volume, leading to decreased albumin levels within EOC tumor tissues [155]. This phenomenon may be largely due to increased hydration, measured on the basis of Na + concentration. Enhanced hydration within EOC tumors compared to healthy ovarian tissues increases the available distribution volume, allowing efficient dissemination of albumin [155]. In conclusion, the absolute distribution volume of albumin within EOC tissues can vary depending on the density of the ECM, the degree of hydration, and the ionization of ECM components. Thus, the ECM structure may be a crucial determinant of tissue albumin levels. ECM components can vary depending on the histological subtype of EOC [155]. For example, a previous study reported that stromal tissues of CCC patients are highly hyalinized, with increased basement membrane materials such as laminin and type IV collagen [156].
Molecular size-dependent exclusion is likely a major determinant of TIF components. Several studies reported equal diffusion of relatively low molecular weight dextrans (10~40 kDa) within rat fibrosarcoma, whereas diffusion clearly decreases as molecular weight increases in the range of 40~70 kDa [155,157]. Diffusion of albumin within EOC tissues may follow this principle, as the molecular size of albumin (66 kDa) is within the higher range. In addition, albumin molecules should become neutralized or positively charged due to tissue acidification as they become distant from the capillary. Collectively, these arguments imply that tissue dissemination of TIF components is not equivalent between low molecular weight compounds, such as glucose, and higher molecular weight compounds, such as albumin.
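As a rough quantitative illustration of this size dependence, the Stokes-Einstein relation predicts that free-solution diffusivity falls with the cube root of molecular mass for compact globular solutes. The sketch below is a back-of-the-envelope estimate only: it assumes spherical solutes of density 1.35 g/cm3 in water at 37°C and ignores the ECM hindrance, charge, and hydration effects discussed above, so the absolute numbers should not be read as tissue values.

```python
import math

# Back-of-the-envelope Stokes-Einstein estimate of free-solution diffusion
# coefficients, illustrating the molecular-weight dependence discussed above.
# Assumes compact spheres of density 1.35 g/cm^3 in water at ~37 C; real
# hydrodynamic radii and ECM hindrance make tumor-tissue values smaller.

KB = 1.380649e-23    # Boltzmann constant, J/K
T = 310.0            # ~body temperature, K
ETA = 7e-4           # viscosity of water at ~37 C, Pa*s
NA = 6.02214076e23   # Avogadro constant, 1/mol
RHO = 1350.0         # assumed solute density, kg/m^3

def stokes_einstein_D(mw_kda):
    """Free-solution diffusion coefficient (m^2/s) of a compact sphere."""
    mass_kg = mw_kda * 1e3 / NA * 1e-3   # kDa -> g/mol -> kg per molecule
    radius = (3.0 * mass_kg / (4.0 * math.pi * RHO)) ** (1.0 / 3.0)
    return KB * T / (6.0 * math.pi * ETA * radius)

for label, mw in [("10 kDa dextran", 10.0), ("40 kDa dextran", 40.0),
                  ("albumin, 66 kDa", 66.0)]:
    print(f"{label}: D ~ {stokes_einstein_D(mw) * 1e12:.0f} um^2/s")
```

Under this idealized scaling, the 66 kDa sphere diffuses only about (66/10)^(1/3) ≈ 1.9-fold slower than a 10 kDa solute, so the much steeper drop reported for the 40~70 kDa range within tumors implies additional sieving by the ECM beyond pure size scaling.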
Overall, tissue albumin levels vary depending not only on the liquid phase of the TME, such as the hydration and acidity of TIF, but also on characteristics of TIF components, such as molecular size, steric structure, charge, and lipophilicity. Diverse interactions among these factors likely reveal a complex picture regarding distribution of TIF components within aberrantly vascularized EOC tissues. Finally, albumin availability for cancer tissues can be altered depending on individual EOC patients, as it could be associated with different plasma albumin levels.
Potential Involvement of Zinc Ion
Zinc is an abundant metal in the human body and is essential for physiological functions [158]. A number of proteins require association with the zinc ion (Zn 2+ ) to properly regulate critical functions, such as the DNA damage response and erasure of harmful reactive oxygen species [158]. Zn 2+ is vital for innate and adaptive immunity [158][159][160][161]. Zn 2+ homeostasis is often dysregulated in cancer patients [160,161], and cancer patients with diminished Zn 2+ levels suffer from impaired immune function [158]. Indeed, some evidence suggests that Zn 2+ metabolism within tumor tissues is dysregulated, and Zn 2+ levels are often lower in the plasma and scalp hair of cancer patients compared to normal healthy individuals [159]. Zn 2+ availability in cancer patients could be reduced because albumin, a major vehicle of plasma Zn 2+ , is decreased in association with cachexia [160,161]. Additionally, Zn 2+ levels are higher in healthy human serum than in that of lung cancer patients [161]. However, Zn 2+ levels become higher in breast and lung cancer tissues even though plasma Zn 2+ levels are reduced [158]. Thus, Zn 2+ homeostasis can vary depending on cancer type, and altered tumor Zn 2+ levels likely influence coagulation factor-dependent immune responses in EOC tissues.
The ubiquitously expressed transcription factor Sp1 contains a three finger-like structure complexed with Zn 2+ , called a zinc-finger motif (ZFM). The ZFMs at the C-terminal of Sp1 protein facilitate its binding to consensus DNA binding sites [162]. Zn 2+ is thus essential for Sp1-driven regulation of many genes including most housekeeping genes. Sp1 is also responsible for basal and/or hypoxia-driven expression of TF, fVII, and ICAM-1 genes [32,34]. Moreover, Zn 2+ is also essential for proper functioning of MMPs [163]. Thus, TF-fVIIa-dependent inflammatory responses may be enhanced if Zn 2+ levels within EOC tissues are increased. EOC cells can express zinc transporters such as LIV1 [164,165] and hZIP1 [166] to uptake extracellular Zn 2+ , potentially leading to accumulation of Zn 2+ within cells. However, one study reported that Zn 2+ levels within EOC tissues and plasma of EOC patients were lower than those in benign tissues and normal controls, respectively [167]. Furthermore, zinc treatment of cancer cells can activate the cell death process, resulting in apoptosis or necrosis [167]. Thus, lower Zn 2+ levels within EOC tissues are not necessarily unfavorable for the proper function of Sp1.
Taken together, dysregulation of zinc homeostasis within EOC cells is likely to affect coagulation factor-dependent inflammatory responses. The Sp1 transcription factor is a major candidate regulator of this process through its ZFMs. However, knowledge regarding zinc metabolism in EOC cells is currently limited and awaits future exploration.
Potential Involvement of Calcium Ion
In addition to Zn 2+ , H + and the calcium ion (Ca 2+ ) are key components of the TIF [168]. These factors may be more influential within hypoxic tumor tissues, as cancer cells tend to produce large amounts of H + along with lactate [33]. This is a vital issue for TF-fVIIa-dependent inflammatory responses, given that Ca 2+ is essential for successful progression of the coagulation cascade and activation of cell surface receptors (Figures 1 and 2). The calcium ion is also essential for the interaction between integrins and coagulation factors [130] and for the function of MMPs [163]. Furthermore, as already discussed in this review, Ca 2+ influx contributes to the activation of cell surface TF. PAR1 activation can contribute to cellular Ca 2+ mobilization (Figure 2). Also, PAR2 activation can increase intracellular Ca 2+ levels to transmit downstream signaling [19]. Several studies demonstrated that the tissue availability of Ca 2+ is closely and inversely associated with extracellular H + concentrations [168][169][170]. This reciprocal regulation of cellular H + and Ca 2+ levels can be mediated by the G protein-coupled receptors ovarian cancer gene receptor 1 (OGR1) (Figure 4) and calcium-sensing receptor (CaSR), respectively [168]. In addition, extracellular H + can activate Ca 2+ channels [169,170]. OGR1, a cell surface H + receptor ubiquitously expressed in human tissues [170], was initially identified from an EOC cell line (Hey) [171]. OGR1 acts as a sensor of extracellular pH (Figure 4, number 16) [168,169]. OGR1 functions in a pH-sensitive manner and is active around pH 6, a commonly observed extracellular acidity within tumor tissues [170]. Thus, it is expected that OGR1 contributes to acidification of the TME of EOC tissues. Activation of OGR1 in cancer cells by H + exposure can augment NHE1 activity [169] (Figure 4), with increased intracellular Ca 2+ and inositol triphosphate levels, resulting in facilitation of proton extrusion.
Importantly, Zn 2+ is a known inhibitor of OGR1 (Figure 4) [169].
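To put the pH values above in concrete terms, a shift from a physiological extracellular pH of about 7.4 to the tumor-typical pH of about 6, where OGR1 is active, corresponds to roughly a 25-fold increase in free H + concentration. A minimal sketch of the arithmetic:

```python
# The pH values above expressed as free H+ concentrations: moving from a
# physiological extracellular pH of ~7.4 to the tumor-typical pH of ~6
# (where OGR1 is active) raises [H+] by about 25-fold.

def h_conc(pH):
    """Free H+ concentration (mol/L) at a given pH."""
    return 10.0 ** (-pH)

fold = h_conc(6.0) / h_conc(7.4)
print(f"pH 7.4 -> pH 6.0: [H+] rises {fold:.0f}-fold")  # prints 25-fold
```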
The ER is an important Ca 2+ storage organelle [172,173]. ER stress, followed by the unfolded protein response, is a cellular response mechanism associated with incorrect folding of newly synthesized proteins, leading to survival under severe conditions. Hypoxia causes ER stress, thereby releasing Ca 2+ from the ER (Figure 4, number 17) [172,173]. Ca 2+ elicits various signaling mechanisms to exert cellular stress responses [168][169][170][171][172][173]. Thus, this ER-mediated regulation of intracellular Ca 2+ levels likely contributes to extracellular acidification ( Figure 4). Hypoxia-driven ER stress can invoke redox imbalance, as O 2 is required for the catalytic reaction by PDI [33] (Figure 4). In addition, a recent study demonstrated that Ca 2+ depletion from the ER affects the mobility of ER luminal PDI, also resulting in redox imbalance (reductive shift) of disulfide isomerization in the ER (Figure 4) [174]. This is possibly applicable to cell surface PDI. Collectively, extracellular Ca 2+ homeostasis within hypoxic EOC tissues could be regulated in association with intra-and extracellular acidity because Ca 2+ receptors and channels can be intimately linked to H + sensors on the surface of cancer cells. This process is likely influenced by cachexia, as a considerable amount of Ca 2+ in the blood associates with albumin [175].
Potential Coagulation Factor-Driven Immune Responses within EOC Tissues Insufficiently Supplied with O 2 and Plasma Components
Solid tumor tissues, including EOC tissues, are composed of many components other than cancer cells, such as various immune cells, fibroblasts, cancer-associated fibroblasts (Figure 6), and tumor-associated macrophages (TAMs) (Figure 6), which generate the characteristic TME structure beneficial for cancer progression [85,156,176,177,178]. ICAM-1 may play primary roles in the immune response in CCC tissue exposed to severe hypoxia (Figure 6, number 1). Furthermore, hypoxia can induce expression of TF and VEGF at the transcriptional level in EOC cells (Figure 6, numbers 2 and 3). TF induction under hypoxia can be dependent on VEGF signaling [179] but is not a direct target of HIF [10]. VEGF induction can be dependent on both HIF-1α and TF signaling [180]. Furthermore, fVII, CXCLs (Figures 5 and 6, numbers 2 and 3), and TNFα (Figure 6, number 4) are also induced in response to hypoxia, particularly when EOC cells are simultaneously exposed to serum starvation, as in the case of the ICAM1 gene [34]. Increased secretion of VEGF, cytokines, and chemokines by cancer cells is indicative of enhanced permeability of capillaries and prompt penetration of plasma coagulation factors into tumor tissues (Figure 6). Furthermore, given the possible dysfunction of the lymph system, followed by failed migration of stromal factors into lymph capillaries (Figure 6, number 5) [104], tumor tissues potentially represent pro-coagulant-rich conditions compared to normal tissues. It is further expected that the TF-fVIIa complex is increased on the surface of EOC cells (Figure 6, number 2), resulting in increased fibrin levels.
Previous immunohistochemical experiments have implicated hypercoagulability within EOC tissues, as evidenced by the presence of TF, fV, fVII, and fibrin deposition [181]. Later studies with a specific antistasin probe revealed that fX is expressed in various cancer tissues [182]. Indeed, we recently found that fX is present in CCC cells [29]. Furthermore, this ectopic expression is increased in response to hypoxia [29]. Activated platelets can secrete prothrombin, fibrinogen, and vWF [183] (Figure 6, number 6). Also, macrophages can synthesize TF [184], fVII [45], and fX [182] (Figure 6, number 7). TF synthesis in macrophages can be facilitated in response to ATP exposure via P2X7 receptor signaling in mice [185]. This inflammatory process can be regulated by PDI to generate TF-positive EVs, thereby leading to thrombosis [185]. These plasma-independent pro-coagulants potentially enhance TF-fVIIa-driven inflammatory responses via PARs (Figure 6, number 11). A recent study demonstrated that tumor-derived lactate can augment the expression of pro-angiogenic genes in macrophages [186], suggesting that macrophage-derived pro-coagulants may also be influenced by EOC cell-derived lactate (Figure 6, number 12). Some studies have shown that non-hepatocytic cancer cells can autonomously synthesize fibrinogen [187,188], although ectopic synthesis of prothrombin in cancer cells has not been demonstrated. These results suggest that multiple coagulation factors can be ectopically synthesized within tumor tissue. Currently, there are no reports on hypoxia-driven expression of PAR1 and MTP, although PAR2 was shown to be increased in response to hypoxia in glioma cells [189].
In addition, no reports have examined hypoxia-driven expression of integrins in EOC cells. The TF-fVIIa complex can be secreted into the TIF as a component of EVs ( Figure 6, number 13) [28,29]. It is possible that this extracellular TF-fVIIa may interact with stromal cells and transmit intracellular signals ( Figure 6, number 14), as demonstrated in a glioma model [189], to contribute to the aggressiveness of EOC.
Figure 6. Potential coagulation factor-driven inflammatory responses within the TME of EOC exposed to hypoxia. The coagulation factors derived from circulation, stromal cells, and cancer cells can connect EOC cells to platelets and leukocytes. Cell surface receptors and proteases can also be activated under hypoxia, potentially transmitting cellular signaling. These molecular events are expected to facilitate survival and metastatic potential of EOC cells.
Pro-inflammatory proteins within the TME are unlikely to fully function given the high concentrations of H + . In addition, higher TIF pressure within tumor tissues may block efficient penetration of plasma components into tumor tissues (Figure 6, number 15). Prothrombin, fibrinogen, vWF, and the albumin-LCFA complex are expected to extravasate into tumor tissues (Figure 6, number 16). These factors are expected to differentially disseminate within tissue, as they vary in molecular size [8]. Also, their dissemination can be modulated in relation to the histological diversity, tissue components, and physicochemical characteristics of EOC tumors.
In addition, intra-tumoral concentrations of H + , Zn 2+ , and Ca 2+ should be determined depending on the balance of their influx and efflux via cell surface transporter molecules (Figures 4 and 6, numbers 16 and 17), reprogrammed metabolism, and release from cellular ion stores. Also, Zn 2+ and Ca 2+ homeostasis can be affected by reduced plasma albumin levels due to cachexia ( Figure 6, number 16). A previous study also revealed that protein levels of CA-125, VEGF, and osteopontin in the TIF vary depending on histological subtypes and disease stages of EOC [190]. Therefore, successful recruitment of platelets and leukocytes to EOC cells ( Figure 6) can depend on the diverse pattern of interaction between tumor constituents. Finally, in addition to this cell-to-cell contact, survival [10] and metastatic potential [10] of EOC cells may be enhanced due to intracellular signaling via the activation of proteases [10], receptors [34], and adhesion molecules [191] on the cell surface ( Figure 6, number 18).
Clinical Implications
Accumulating experimental evidence suggests that an anti-coagulant strategy is promising for ovarian cancer patients in addition to generally applied therapeutics. Indeed, use of the recently developed anticoagulation agents, generally called direct oral anticoagulants (DOACs), may be promising, as these drugs, which target fXa and thrombin, can suppress inflammatory responses [192,193]. However, this strategy is currently applied only to cancer patients who develop VTE [194]. The present review suggests that EOC tumor regions associated with an insufficient supply of O 2 and the albumin-LCFA complex potentially tend to undergo coagulation factor-dependent inflammatory reactions. Identification of such tissue areas is expected to be beneficial, as it could yield promising prognostic markers and therapeutic targets. Recent advances in spectroscopic methodology have enabled real-time monitoring of hypoxic regions within tumors [195]. For example, nitroimidazole compound treatment of tumor-bearing mice followed by positron emission tomography (PET) revealed severely hypoxic tumor regions [196]. Specific molecules, including proteins, lipids, and lactate, within EOC tumors can be directly detected using magnetic resonance spectroscopy (MRS) [197,198]. Furthermore, extracellular acidity can be measured using similar spectroscopy techniques [198]. Therefore, such less invasive detection by PET and MRS may enable identification of TME regions associated with both hypoxia and insufficient lipid supply; however, additional methodological improvements are likely required in future studies to detect such small tissue areas. Targeting such specific tumor regions is likely to improve EOC therapy by reducing toxic side effects. However, this strategy may also be disadvantageous for drug dissemination due to the dysregulated vasculature and increased TIF pressure [151].
Future development of highly permeable hypoxia-activated prodrugs such as nitroimidazole derivatives [199] may overcome this problem. In addition, hypoxic tumors are known to be resistant to radiation therapy [200]. Currently, heavy particle beam therapy is considered beneficial to circumvent the radioresistance of hypoxic tumors [201].
A low level of plasma albumin is associated with cachexia, a state of malnutrition, and cancer patients with cachexia are known to show poor prognosis. The data described in this review suggest that poor plasma albumin levels may facilitate synergistic induction of multiple genes in hypoxic EOC cells, leading to disease progression. In addition, the albumin-bound paclitaxel preparation (nab-paclitaxel, Abraxane ® ), at 130 nm in size, is promising for EOC therapy [202]. Thus, the efficacy of nab-paclitaxel in EOC could be impaired if the tissue availability of albumin is restricted due to poor vascularization. On the other hand, another study reported that nab-paclitaxel treatment combined with anti-angiogenesis therapy improved tumor blood perfusion in non-small cell lung cancer [203], suggesting that nab-paclitaxel may contribute to tumor vascularization, followed by an improvement in the oxygenation status of EOC tissue. Together, these observations suggest that low albumin levels in cancer patients may cause a high mortality rate not only through malnutrition but also via augmented malignant tumor phenotypes and inefficient drug delivery.
In addition to the anti-coagulation strategy, a potential therapeutic strategy targeting Sp1 may be possible, as this transcription factor plays key roles in the induction of pro-coagulants in response to hypoxia. To date, multiple chemical inhibitors of Sp1 function are available, such as mithramycins, curcumin, and doxorubicin [204]. However, care should be taken regarding unwanted side effects, as Sp1 is critical for many normal cell functions and the known Sp1 inhibitors lack target specificity.
Summary and Perspectives
This review compiled potential coagulation factor-dependent pro-inflammatory reactions within EOC tissues exposed to deficiency of O 2 and plasma components. Cellular expression levels of the proteins discussed in this review, such as coagulation factors and ICAM-1, would differ depending on cancer cell types and tumor components. Thus, it is unlikely that this pro-inflammatory response occurs within the TME of all cancer types. Overall, it is possible that the TF-fVIIa complex and ICAM-1 are significantly involved in this mechanism under these harsh conditions. To date, in vitro experiments have shown that the expression of fVII and ICAM-1 is synergistically increased when CCC cells are simultaneously exposed to hypoxia and serum starvation conditions. Specifically, the albumin-LCFA complex was identified as the serum component responsible for ICAM1 induction. Serum factors involved in fVII expression have not been reported. Thus, tumor tissue associated with deficiency of both O 2 and LCFAs may frequently undergo an inflammatory response associated with coagulation factors and ICAM-1. However, this process is not necessarily advantageous for EOC progression, as the supply of plasma factors should be limited. The diffusion of plasma coagulation factors within the TIF of EOC is likely diverse due to the differential composition of ECM components, stromal cells, tissue hydration, and TIF pressure. Physical characteristics of plasma factors, such as charge, steric effects, and molecular mass, are also critical for their distribution within tumors. Furthermore, it has been reported that, in addition to cancer cells, non-cancerous tumor constituent cells can also secrete coagulation factors. Thus, these ectopically synthesized pro-coagulants may compensate for insufficient plasma-derived coagulation factors to exert TF-fVIIa-driven inflammatory responses.
Tumor tissues exposed to hypoxia are generally accompanied by extracellular acidification and reprogramming of cell metabolism, thereby modulating TF-fVIIa function via PDI and cellular lipids. These mechanisms may be influenced by the influx and efflux of metal ions. Furthermore, albumin levels within EOC tissues can be altered in association with cachexia, potentially affecting the hypoxia-driven gene expression required for coagulation factor-driven inflammatory reactions. In sum, this review illustrates that TF-fVIIa-driven pro-inflammatory responses within EOC tissues can be affected by multiple cellular and environmental factors in association with the characteristic vasculature, thereby augmenting malignancy. Future studies will lead to greater understanding of the aggressiveness of EOC, followed by the generation of promising clinical applications.
|
v3-fos-license
|
2023-10-05T13:11:55.036Z
|
2023-09-29T00:00:00.000
|
263623173
|
{
"extfieldsofstudy": [
"Biology"
],
"oa_license": "CCBY",
"oa_status": "GREEN",
"oa_url": "https://www.biorxiv.org/content/biorxiv/early/2023/09/29/2023.09.28.559945.full.pdf",
"pdf_hash": "5ffc3db3c81f45edd9976799c34ab47f2f1a041b",
"pdf_src": "BioRxiv",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2353",
"s2fieldsofstudy": [
"Psychology"
],
"sha1": "5ffc3db3c81f45edd9976799c34ab47f2f1a041b",
"year": 2023
}
|
pes2o/s2orc
|
Attention and microsaccades: do attention shifts trigger new microsaccades or only bias ongoing microsaccades?
Brain circuitry that controls where we look also contributes to attentional focusing of visual contents outside of current fixation or contents held within the spatial layout of working memory. A behavioural manifestation of the contribution of oculomotor brain circuitry to selective attention comes from modulations in microsaccade direction that accompany attention shifts. Here, we address whether such modulations come about because attention itself triggers new microsaccades or whether, instead, shifts in attention only bias the direction of ongoing microsaccades – i.e., naturally occurring microsaccades that would have been made whether or not attention was also shifted. We utilised an internal-selective-attention task that has recently been shown to yield clear spatial microsaccade modulations and compared microsaccade rates following colour retrocues that were matched for sensory input, but differed in whether they invited an attention shift or not. If shifts in attention trigger new microsaccades then we would expect more microsaccades following attention-directing cues than following neutral cues. In contrast, we found no evidence for an increase in overall microsaccade rate following attention-directing cues, despite observing robust modulations in microsaccade direction. This implies that shifting attention biases the direction of ongoing microsaccades without changing the probability that a microsaccade will occur. These findings provide relevant context for complementary and future work delineating the links between attention, microsaccades, and upstream oculomotor brain circuitry, such as by helping to explain why microsaccades and attention shifts are often correlated but not obligatorily linked.
When considering the link between microsaccades and attention, a relevant question is whether shifting attention will itself trigger new microsaccades or whether, instead, shifting attention merely biases the direction of ongoing microsaccades – i.e., naturally occurring microsaccades that would have been made whether or not attention was also shifted.
These alternative scenarios have often remained tacit in the literature. Yet, disambiguating these scenarios is likely to be informative for our understanding of the links between attention, microsaccades, and upstream oculomotor brain circuitry. Foremost, the answer to our question may help delineate the probabilistic (e.g., (Horowitz et al., 2007; Liu et al., 2022; Yu et al., 2022)) versus deterministic (e.g., (Lowet et al., 2018)) nature of the link between microsaccades and covert attention shifts. If attention only biases the direction of ongoing microsaccades, without triggering new microsaccades, then this would explain a probabilistic link whereby attention can also be shifted without a concomitant microsaccade (corroborating the findings reported in (Liu et al., 2022; Yu et al., 2022)). In addition, our findings may guide future neurophysiological studies targeting the role of upstream oculomotor brain circuitry, such as the superior colliculus – a brain structure implicated in both selective covert attention (Krauzlis et al., 2013; Lovejoy & Krauzlis, 2010; Muller et al., 2005) and microsaccade generation (Hafed et al., 2009; Hafed & Krauzlis, 2012). For example, based on our results, we may derive distinct hypotheses that attention adds new activity to the pool of superior-colliculus neurons (triggering new microsaccades) or instead mainly acts by biasing the balance of neuronal activity (yielding only a bias in the direction but not the rate of microsaccades).
To address whether attention shifts add new microsaccades or bias ongoing ones, we compared microsaccade rates following attentional cues that were carefully matched for sensory input, but that differed in whether they invited an attention shift or not. We note how the comparison between informative (attention-directing) and neutral cues was also available in (Engbert & Kliegl, 2003; Laubrock et al., 2005), though in these studies the spatial biasing by voluntary attention itself was weak and the cues were not perfectly matched, making it hard to address the question that we put central here. Our logic was straightforward: if voluntary attention shifts trigger new microsaccades – that account for the observed spatial modulation in microsaccade direction – then overall microsaccade rates should be higher following attention-directing than following neutral cues. In contrast, if attention merely biases ongoing microsaccades, then we should see a biasing effect on microsaccade direction without a concomitant increase in overall microsaccade rate.

METHODS

Ethics. Experimental procedures were reviewed and approved by the local Ethics Committee at the Vrije Universiteit Amsterdam. Each participant provided written informed consent before participation and was reimbursed 10 euros/hour.
Participants. Twenty-five healthy human volunteers participated in the study (age range: 18-44; 5 male and 20 female; 25 right-handed; 5 corrected-to-normal vision: 1 glasses and 4 lenses). A sample size of 25 was determined a-priori based on previous publications from the lab with similar experimental designs that relied on the same outcome measure (e.g., (van Ede et al., 2019, 2020, 2021)). One participant was excluded from all analyses due to chance-level performance.
Stimuli and procedure. To investigate microsaccade modulations by voluntary shifts of attention, we employed an internal selective-attention task (Fig. 1) for which we previously established robust spatial modulations in microsaccades (see e.g., (Liu et al., 2022; van Ede et al., 2019)). In short, participants encoded two visual items into working memory in order to compare the orientation of either memory item to an upcoming test stimulus. In a random half of the trials, a retrocue presented during the retention interval informed which memory item would become tested by briefly changing the colour of the central fixation marker to match the colour of the target memory item (attention-directing cue). In the other half of the trials, we also presented a colour cue, but this time cue colour did not match either item in memory, and hence did not invite a shift of attention to either memory content (neutral cue).
Each trial began with a brief (250 ms) encoding display in which two bars (size: 2° × 0.4° visual angle) appeared at 5° to the left and right of fixation. After an initial retention delay of 1250 ms, the fixation dot (0.07° radius) changed colour for 1000 ms, serving as a retrocue that prompted participants to select the colour-matching target item in memory. After another retention delay of 500 ms, the test display appeared in which a target bar appeared at the center of the screen. The target bar matched the colour of the target memory item but was rotated between 10 to 20 degrees clockwise or counter-clockwise from its original orientation. Participants were required to report the tilt offset of the test stimulus using the keyboard ('j' for clockwise, 'f' for counter-clockwise). Participants received feedback immediately after the response by a number ("0" for wrong, or "1" for correct) appearing for 250 ms slightly above the fixation dot. After the feedback, inter-trial intervals were randomly drawn between 500 and 1000 ms.
In the experiment, bars could be four potential colours: green (RGB: 133, 194, 18), purple (RGB: 197, 21, 234), orange (RGB: 234, 74, 21), and blue (RGB: 21, 165, 234). For each participant, bars were always chosen from a random subset of three of these colours. In each encoding display, bars were randomly assigned two distinct colours from the available colour pool and two distinct orientations ranging from 0° to 180° with a minimum difference of 20° between each other.
To address our central question whether attention triggers additional microsaccades, we included both attention-directing and neutral retrocues. In half the trials, the retrocue colour matched either memory item, inviting a shift of attention to the to-be-tested memory content. Participants were encouraged to use these informative, attention-directing retrocues that were 100% valid. In the other half of the trials, the retrocue was drawn from a colour that was not in the encoding display, thus not inviting a shift of attention among the contents of working memory. In these trials, participants would know which memory item was the target only upon the presentation of the coloured test stimulus.
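The cueing scheme above can be made concrete with a small sketch. This is an illustrative Python analogue (the experiment was not run in Python), with hypothetical names; it assumes the neutral cue is drawn from the colours absent from the encoding display:

```python
import random

COLOURS = ["green", "purple", "orange", "blue"]  # the four potential colours

def make_trial(palette, rng=random):
    """Sample one trial: two bar colours/orientations plus a retrocue."""
    left, right = rng.sample(palette, 2)          # two distinct bar colours
    while True:                                   # orientations >= 20 deg apart
        o1, o2 = rng.uniform(0, 180), rng.uniform(0, 180)
        if abs(o1 - o2) >= 20:
            break
    if rng.random() < 0.5:                        # attention-directing cue
        cue, kind = rng.choice([left, right]), "informative"
    else:                                         # neutral cue: absent colour
        cue = rng.choice([c for c in COLOURS if c not in (left, right)])
        kind = "neutral"
    return {"left": (left, o1), "right": (right, o2), "cue": cue, "kind": kind}
```

In the actual design the 50/50 split and left/right cueing were counterbalanced over the 480-trial session rather than drawn trial-wise as in this sketch.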
In total, the study consisted of 2 sessions, each containing 5 blocks of 48 trials, resulting in a total of 480 trials. Both conditions (attention-directing cues and neutral cues) were randomly intermixed within each block, as were attention-directing cues to left and right memory items, resulting in 240 attention-directing trials (120 directing attention to the left memory item, 120 directing attention to the right memory item) and 240 neutral trials. At the start of the experiment, participants practiced the task for 48 trials. We did not include practice trials in our analyses.
Eye-tracking acquisition and pre-processing. Using an EyeLink 1000 with a sampling rate of 1000 Hz, we continuously tracked gaze along the horizontal and vertical axes from the right eye. The eye tracker was placed ∼5 cm in front of the monitor and ∼65 cm away from the eyes. Before recording, we calibrated the eye tracker through the built-in calibration and validation protocols from the EyeLink software. Gaze data were originally recorded in .edf format and converted to .asc format for further analysis after recording.
We analysed the data in Matlab with help of the Fieldtrip analysis toolbox (Oostenveld et al., 2011) and custom code. To clean the data from blinks, we marked blinks by detecting clusters of zeros in the time-series eye data. To eliminate residual blink artifacts, all data from 100 ms before to 100 ms after the detected blink clusters were set to Not-a-Number (NaN) and thereby ignored in further analysis. After blink removal, data were epoched relative to retrocue onset.
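As a sketch of this pre-processing step: the original pipeline was in MATLAB/Fieldtrip, so the following is an assumed NumPy analogue in which blinks appear as runs of zeros in the trace and the mask is padded by 100 ms on each side:

```python
import numpy as np

def remove_blinks(gaze, fs=1000, pad_ms=100):
    """NaN-out blink clusters (runs of zeros in the gaze trace)
    plus a +/- pad_ms margin, mirroring the pre-processing above."""
    gaze = np.asarray(gaze, dtype=float).copy()
    blink = (gaze == 0).astype(int)           # blink samples are recorded as 0
    pad = int(round(pad_ms * fs / 1000))
    # dilate the blink mask by `pad` samples on each side
    dilated = np.convolve(blink, np.ones(2 * pad + 1), mode="same") > 0
    gaze[dilated] = np.nan
    return gaze
```

NaN-ed samples then simply drop out of downstream averages (e.g., via `np.nanmean`).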
Saccade detection. To detect saccades, we employed a velocity-based method that we established previously (Liu et al., 2022), and that builds on other established velocity-based methods for microsaccade detection (e.g., (Engbert & Kliegl, 2003)). Since the items in the current experiment were always horizontally arranged (i.e., left and right), our current analyses focused exclusively on the horizontal channel of the eye data. Note that although we only use horizontal data to detect saccades, we previously confirmed the validity and sensitivity of this approach for our task set-up by comparing this method to a well-established method (as described in (Engbert & Kliegl, 2003)) that considered both horizontal and vertical gaze (see (Liu et al., 2022) for the relevant comparison).
We first calculated the gaze velocity by taking the distance between temporally successive gaze positions. Then, to reduce noise, we smoothed velocity in the temporal dimension with a Gaussian-weighted moving average filter with a 7-ms sliding window (using the built-in function "smoothdata" in MATLAB). We then identified the first sample when the velocity exceeded a trial-based threshold of 5 times the median velocity as the onset of a saccade. To avoid counting the same saccade multiple times, we imposed a minimum delay of 100 ms between successive saccades. Saccade magnitude and direction were calculated by estimating the difference between the pre-saccade gaze position (-50 to 0 ms before threshold crossing) and the post-saccade gaze position (50 to 100 ms after threshold crossing). Finally, depending on saccade direction (left/right) and the side of the cued memory item (left/right), we labeled each detected saccade as "toward" or "away".
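The detection steps just described can be sketched as follows. This is an illustrative NumPy version, not the authors' MATLAB code; a plain 7-sample moving average stands in for the Gaussian-weighted "smoothdata" filter, and the window sizes assume a 1000 Hz trace:

```python
import numpy as np

def detect_saccades(x, fs=1000, thresh_mult=5, min_gap_ms=100):
    """Velocity-threshold saccade detection on a horizontal gaze trace.
    Returns onset samples and signed saccade sizes (post minus pre)."""
    vel = np.abs(np.diff(x))                       # sample-to-sample speed
    # smooth with a 7-sample moving average (stand-in for 'smoothdata')
    vel = np.convolve(vel, np.ones(7) / 7, mode="same")
    thresh = thresh_mult * np.nanmedian(vel)       # trial-based threshold
    onsets, last = [], -np.inf
    for t in np.flatnonzero(vel > thresh):         # first supra-threshold sample
        if t - last >= min_gap_ms * fs / 1000:     # enforce 100 ms between saccades
            onsets.append(t)
            last = t
    # signed size: post-saccade (50-100 ms) minus pre-saccade (-50-0 ms) position;
    # assumes saccades occur more than 50 ms after trace start
    sizes = [np.nanmean(x[t + 50:t + 100]) - np.nanmean(x[t - 50:t]) for t in onsets]
    return np.array(onsets), np.array(sizes)
```

The sign of each size, combined with the cued side, would then yield the "toward"/"away" labels.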
After identifying and labelling the saccades, we quantified the time courses of saccade rates (in Hz) using a sliding time window of 50 ms, advanced in steps of 1 ms. To map the size of the modulated saccades (without setting an arbitrary saccade-size threshold), we additionally decomposed saccade rates into a time-size representation (as in (Liu et al., 2022)), showing the time courses of saccade rates as a function of saccade size. For saccade-size sorting, we used successive magnitude bins of 0.5 visual degrees in steps of 0.05 visual degrees.
To directly quantify the number of saccades during the attentional window of interest, we averaged saccade rates in the 200-600 ms window after cue onset. This window was set a-priori based on our prior study that revealed this to be the critical window after cue onset in which we found more microsaccades toward vs. away from the memorized location of the cued memory target (see (Liu et al., 2022)).
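A minimal sketch of the rate computation (again an assumed Python analogue rather than the original MATLAB code): saccade onsets are counted in a sliding 50 ms window, converted to Hz, and averaged across trials, after which the pre-defined window measure is a simple mean:

```python
import numpy as np

def saccade_rate(onsets_per_trial, n_samples, fs=1000, win_ms=50):
    """Time-resolved saccade rate in Hz: onsets counted in a sliding
    win_ms window (1-sample steps), averaged across trials."""
    counts = np.zeros(n_samples)
    for onsets in onsets_per_trial:          # one list of onset samples per trial
        for t in onsets:
            counts[t] += 1
    win = int(win_ms * fs / 1000)
    summed = np.convolve(counts, np.ones(win), mode="same")
    return summed / (len(onsets_per_trial) * win / fs)

def window_mean(rate, fs=1000, t0_ms=200, t1_ms=600):
    """Mean rate in the a-priori window (cue onset at sample 0)."""
    return rate[int(t0_ms * fs / 1000):int(t1_ms * fs / 1000)].mean()
```

Computing this separately for "toward" and "away" onsets, or for all onsets regardless of direction, gives the directional and overall rate time courses shown in Figure 3.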
Statistical analysis. To evaluate the reliability of statistical patterns we observed in the time-series data, we employed a cluster-based permutation approach (Maris & Oostenveld, 2007). This method is ideal for evaluating significance while circumventing the problem of multiple comparisons.
We first acquired a permutation distribution of the largest cluster size by randomly permuting the trial-average data at the participant level 10,000 times and identifying the size of the largest cluster after each permutation. To obtain the probability (P value) of the clusters observed in the original data, we calculated the proportion of permutations for which the size of the largest cluster after permutation was larger than the size of the observed cluster in the original, non-permuted data. The permutation analysis was conducted using Fieldtrip with default clustering settings. That is, after a mass univariate t-test at a two-sided alpha level of 0.05, we identified and grouped the adjacent same-signed data points that were significant and then defined cluster size as the sum of all t-values in the cluster.
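The cluster logic can be illustrated in a few lines. This sketch is an assumed Python stand-in for the Fieldtrip routine: it sign-flips the participant-level condition-difference time courses and uses a fixed cluster-forming t threshold (t_crit ≈ 2.07 for two-sided alpha = 0.05 with n = 24 is an assumption here, not a value from the paper):

```python
import numpy as np

def tvals(diff):
    """Mass-univariate paired t-values for participants x time data."""
    n = diff.shape[0]
    return diff.mean(axis=0) / (diff.std(axis=0, ddof=1) / np.sqrt(n))

def cluster_masses(t, t_crit):
    """Masses (summed t) of contiguous same-signed supra-threshold runs."""
    masses, cur, sign = [], 0.0, 0
    for v in t:
        s = 1 if v > t_crit else (-1 if v < -t_crit else 0)
        if s and s == sign:
            cur += v
        else:
            if cur:
                masses.append(cur)
            cur, sign = (v, s) if s else (0.0, 0)
    if cur:
        masses.append(cur)
    return masses

def cluster_perm_p(diff, t_crit=2.07, n_perm=1000, seed=0):
    """P-value of the largest observed cluster vs. a sign-flip null."""
    rng = np.random.default_rng(seed)
    obs = cluster_masses(tvals(diff), t_crit)
    if not obs:
        return 1.0
    obs_max = max(abs(m) for m in obs)
    null = np.empty(n_perm)
    for i in range(n_perm):
        flips = rng.choice([-1.0, 1.0], size=(diff.shape[0], 1))
        perm = cluster_masses(tvals(diff * flips), t_crit)
        null[i] = max((abs(m) for m in perm), default=0.0)
    return float((null >= obs_max).mean())
```

Because the null is built from the maximum cluster mass per permutation, the resulting p-value is corrected for the multiple comparisons across time points.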
In addition to the cluster-based permutation approach that considered the full time range, we also extracted the data over the pre-defined 200-600 ms window after cue onset and compared the relevant conditions using paired-sample t-tests.
Data and code availability. Data and analysis code will be made publicly available before publication.
RESULTS
Human volunteers performed a selective-attention task in which attention was directed to one of two visual representations in working memory (Fig. 1). In half of the trials, a central colour cue directed attention to the colour-matching visual item in working memory (attention-directing cues). In the other half of the trials, we also presented a colour cue, but this time the cue did not match either memory item and therefore did not invite a shift of attention (neutral cues). This served as the critical control condition to assess whether shifting attention triggered new microsaccades: in both cases a central colour cue appeared, but only in the former condition could a goal-directed shift of attention be made.
As a roadmap to our results, we first report behavioural performance to confirm that participants used the cue when it invited a shift of attention to the target memory item. We then outline the spatial modulation of microsaccade direction when cues directed attention to either the left or right memory item. Having established the above, we finally turn to our key question whether this spatial microsaccade modulation is driven by the addition of new, attention-driven microsaccades or, instead, by a biasing of ongoing microsaccades that would have been made anyway. For this, we compared overall microsaccade rate between trials with attention-directing cues versus neutral cues. The logic is straightforward: if attention adds new microsaccades, then overall rate should increase following attention-directing compared to neutral cues. In contrast, if attention merely biases the direction of ongoing microsaccades, then we should observe similar rates following attention-directing and neutral cues.
Attentional cues during working memory lead to a robust spatial modulation in microsaccades
Having confirmed that participants used the informative (attention-directing) cues to improve performance, we next assessed how informative cues – that directed attention to memory items that had been presented to the left or right at encoding – modulated the direction of microsaccades. Building on our prior studies (de Vries et al., 2023; Liu et al., 2022; van Ede et al., 2019, 2020) as well as related studies deploying external covert-attention tasks (Corneil & Munoz, 2014; Engbert & Kliegl, 2003; Fernández et al., 2023; Hafed et al., 2011; Hafed & Clark, 2002; Lowet et al., 2018), we observed robust biasing of saccade directions as a function of whether cues directed attention to memory items that were presented to the left or to the right of fixation at encoding (Fig. 3a). When statistically evaluating the full time course, we observed two consecutive significant clusters (horizontal lines in Fig. 3a; cluster P values: 0.01 and 3.9996e-04). Likewise, when zooming in on the a-priori defined time window from 200-600 ms after the cue (based on (Liu et al., 2022)), we observed a highly robust modulation (Fig. 3b; t(23) = 4.171, p = 0.0004, d = 0.851), with more saccades toward than away from the memorised location of the cued item.
To establish the nature of this spatial saccade modulation, we repeated the above analysis as a function of saccade size (Fig. 4a). As can be seen, the vast majority of saccades occurred in the microsaccade range, below 1 degree visual angle. This is perhaps not surprising given that in this time period of interest, there was nothing on the screen apart from the fixation dot. Critically, when directly comparing toward and away saccades (Fig. 4a, bottom panel), we also found that the spatial modulation was confined to the microsaccade range, replicating our previous findings (de Vries et al., 2023; Liu et al., 2022; van Ede et al., 2019).
In our task, the spatial biasing of microsaccades must reflect a shift of attention to the colour-matching memory item that was held within the spatial lay-out of working memory.It cannot reflect sensory processing of the cue or anticipation of the probe, as both cue and probe were always presented centrally (i.e., left and right were exclusively defined in the memorised visual space).
The attentional modulation of microsaccades is not driven by the addition of new microsaccades
We finally turn to the central question of the current study: whether the above-described modulation of microsaccades is driven by the addition of new, attention-driven microsaccades or whether this spatial modulation is driven by a directional biasing of ongoing microsaccades that would have been made anyway. For this, we compared overall microsaccade rates following informative, attention-directing cues versus following neutral cues. Our logic was straightforward: if attention introduces new microsaccades, then the overall rate should increase following attention-directing compared to neutral cues.
Overall saccade rates following attention-directing and neutral cues are shown in Figure 3c-d. In contrast to the above prediction, we found no evidence for an increase in overall microsaccade rate following attention-directing cues compared to following neutral cues (that were matched in terms of bottom-up sensory stimulation). In fact, if anything, we found a slight decrease in overall microsaccade rate following attention-directing cues, though this did not reach significance – neither when considering the full time axis (no significant clusters; Fig. 3c), nor when zooming in on the a-priori defined window of interest (Fig. 3d; t(23) = -1.994, p = 0.058, d = -0.407). This implies that attention shifts do not generate new microsaccades.
To complement this main result, Figure 4b shows the relevant data as a function of saccade size. This confirmed a similar prevalence of saccades below 1 degree following both attention-directing and neutral cues, with no evidence for more saccades in this microsaccade range following attention-directing cues (Fig. 4b, lower panel), despite the clear spatial modulation that we observed following these cues.
DISCUSSION
We observed robust modulations in microsaccade direction, with more microsaccades toward versus away from the memorised location of a cued visual memorandum (replicating our previous work, e.g., (Liu et al., 2022; van Ede et al., 2019)). Our aim was to assess whether this spatial modulation is driven by the addition of new microsaccades that are triggered directly by a spatial shift in attention. To this end, we compared overall microsaccade rates following attention-directing cues to a control condition with neutral cues that did not invite any shift of attention, but that were otherwise matched in sensory properties. Our data showed no evidence for an increase in overall microsaccade rate following attention-directing cues (if anything, we observed a slight, albeit non-significant decrease). This lack of a rate increase in the face of a clear directional modulation implies that the attentional modulation of microsaccades is not driven by the injection of "new" microsaccades. Instead, these data suggest that attention merely biases the direction of ongoing microsaccades that would have been made whether or not attention was also shifted. In other words, shifting attention does not change the probability that a microsaccade will occur, but it does change the probability of where a microsaccade will go – if one is made.
By studying microsaccades as an accessible peripheral signature of upstream oculomotor brain circuitry, our findings have implications for our understanding of the links between attention and the oculomotor system. For example, it has previously been established that the superior colliculus, besides regulating saccades and microsaccades, may also play a key role in shifting covert attention (e.g., (Krauzlis et al., 2013; Lovejoy & Krauzlis, 2010; Muller et al., 2005)). Our data are consistent with this, and tentatively suggest that attention shifts may not necessarily add to ongoing activity within the superior colliculus, as evidenced by the absence of an increase in overall microsaccade rate. Instead, we speculate based on our data that attention may re-balance activity of the pool of superior-colliculus neurons (whose overall activity levels and excitatory/inhibitory balance may be normalised, for example, via a divisive normalisation mechanism; (Carandini & Heeger, 2012; Reynolds & Heeger, 2009)). Such re-balancing would predict that the distribution of activity may vary depending on attention, but not the total amount of activity – consistent with our finding of not more microsaccades, but instead the same number of microsaccades that go more in the attended direction.
Previous work has suggested that microsaccades are correlated with, but not necessary for, attentional shifts (e.g., (Horowitz et al., 2007; Liu et al., 2022; Yu et al., 2022)). Our findings are consistent with, and help to appreciate, the probabilistic nature of this link between microsaccades and selective attention. Because attention shifts themselves do not trigger microsaccades, it is possible to have attention shifts without a peripheral trace in the form of a microsaccade. Instead, only when attention shifts are made in the presence of an (already planned) microsaccade will we observe a correlation between microsaccade direction and the direction of the covert or internal shift of attention. It is noteworthy, however, that we here studied microsaccades when participants were explicitly cued to voluntarily shift attention. Complementary work has shown how spontaneous microsaccades – made in the absence of volitional shifts of attention – may themselves trigger performance and neural modulations that are typically associated with attention shifts (Hafed, 2013; Shelchkova & Poletti, 2020; Yuval-Greenberg et al., 2014). Whether and how attention can become decoupled from such spontaneous microsaccades, or whether attention may inevitably follow in the case of spontaneous microsaccades, remains an interesting question not addressed by the current study.
A recent study (Willett & Mayo, 2023) found little to no evidence for a directional biasing of microsaccades to an attended visual stimulus, despite clear behavioural and neural benefits of attention. A critical difference with our study is that the authors did not consider shifts of attention following a cue, but rather sustained attention to either of two targets that remained fixed throughout a block of trials. It is conceivable that microsaccade biases may be particularly sensitive to shifts of covert selective attention, without necessarily also tracking the process of sustaining covert visual attention after this initial shift (van Ede, 2023). Another set of complementary studies focused on microsaccades following exogenous capture of attention to a peripheral cue. These studies have typically reported microsaccade biases away from the cued location (e.g., (Engbert, 2012; Laubrock et al., 2005)), rather than the observation of more toward microsaccades that we reported here, and that previous studies also reported following voluntary attention cues. Whether such microsaccade biases that occur in the opposite direction also reflect biasing of ongoing microsaccades or rather the addition of new microsaccades remains an interesting question for future work.
At least two early studies on microsaccade biases by attention also included neutral cues (Engbert & Kliegl, 2003; Laubrock et al., 2005). While their data thus allowed the same key comparison as we targeted here, there are several relevant differences. The most critical difference is that in these studies the employed endogenous cues showed only weak directional biasing of microsaccades (Laubrock et al., 2005). In comparison, here, we observed a clear bias, building on our prior work using the same overall task set-up (e.g., (Liu et al., 2022; van Ede et al., 2019, 2020, 2021)). It was only in the presence of this clear spatial modulation that our question was of interest: whether we could "explain" this spatial modulation by the addition of new microsaccades. In addition, in the aforementioned work, the authors did not fully match informative (attention-directing) and neutral cues. In contrast, we always used the same colour cues, and counterbalanced whether specific colours were attention-directing or neutral. Finally, we here uniquely compared microsaccades following attention-directing versus neutral cues in the context of an internal selective attention task in which attention was directed internally to the contents of working memory – a task that yielded robust spatial modulations that were a prerequisite for addressing our central question.
In conclusion, we report a clear biasing of microsaccades by the direction of internal attention shifts and reveal how this microsaccade modulation must be attributed to a spatial biasing of ongoing microsaccades rather than to the addition of new microsaccades triggered by the attention shift itself. This helps to explain the probabilistic nature of the link between microsaccades and attention and provides relevant context for future work delineating the links between attention, microsaccades, and upstream oculomotor brain circuitry.
Figure 1 .
Figure 1. Internal selective attention task with attention-directing and neutral colour retrocues. Participants encoded two visual items into working memory in order to later compare the orientation of one of the items to a test stimulus that was tilted 10 or 20 degrees clockwise or counter-clockwise relative to the colour-matching memory item. During the delay, the central fixation dot changed colour, serving as a cue. In a random half of the trials, the retrocue matched the colour of either item in working memory, informing with 100% reliability that this item would become tested. In the other trials, the cue also involved a colour change, but this time the colour did not match either item in working memory. Colours of memory items and cues were counterbalanced, such that a physically identical colour cue would be attention-directing in some trials while neutral in other trials.
Figure 2 .
Figure 2. Behavioural performance confirms participants used the cue when possible. a) Task accuracy in trials with attention-directing and neutral cues. b) Reaction times in trials with attention-directing and neutral cues. Error bars indicate ± 1 SEM calculated across participants (n=24). Grey lines denote individual participants.
Figure 3 .
Figure 3. Internal selective attention modulates the direction of microsaccades without changing overall microsaccade rate. a) Saccade rates in trials with attention-directing cues as a function of time after cue onset for saccades in the direction of the memorised location of the cued memory item (toward) and in the opposite direction (away). Shadings indicate ± 1 SEM calculated across participants (n=24). The black horizontal lines denote significant clusters following cluster-based permutation analysis (Maris & Oostenveld, 2007). Dashed vertical lines indicate the a-priori defined time window of interest, from 200 to 600 ms after cue onset. b) Bar graph of toward and away saccade rates from panel a, averaged over the a-priori defined window from 200-600 ms after cue onset (based on (Liu et al., 2022)). Error bars indicate ± 1 SEM calculated across participants (n=24). Grey lines denote individual participants. c) Overall saccade rates as a function of time after cue onset following attention-directing and neutral cues. Note how saccade rates are higher than in panel a, given that overall saccade rates include both toward and away saccades. d) Bar graph of overall saccade rates in panel c, averaged over the a-priori defined window from 200-600 ms post cue onset.
Figure 4. Attentional biasing of saccades is driven by saccades in the microsaccade range. a) Saccade rates as a function of saccade size (y axes) and time after attention-directing cues (x axes) for toward saccades (top), away saccades (middle), and their difference (toward minus away; bottom). This shows how the vast majority of saccades occur below 1 degree visual angle (i.e. in the microsaccade range) and how the directional bias (bottom) is also largely confined to the microsaccade range (as also in Liu et al., 2022). During encoding, items were centred at ± 5 degrees to the left and right of fixation. b) Overall saccade rates as a function of saccade size and time after cue onset for trials with attention-directing cues (top), neutral cues (middle), and their difference (attention-directing minus neutral; bottom).
|
v3-fos-license
|
2018-10-17T23:45:24.666Z
|
2015-06-02T00:00:00.000
|
58082011
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://storage.googleapis.com/jnl-sljo-j-sljs-files/journals/1/articles/8128/submission/proof/8128-1-28652-1-10-20150601.pdf",
"pdf_hash": "ad04f60319c57aa05c1a164449dd5c988ff5290f",
"pdf_src": "ScienceParseMerged",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2354",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"sha1": "ad04f60319c57aa05c1a164449dd5c988ff5290f",
"year": 2015
}
|
pes2o/s2orc
|
A SRI LANKAN FAMILY WITH CEREBELLAR HEMANGIOBLASTOMA DUE TO A HETEROZYGOUS NONSENSE MUTATION IN THE VON HIPPEL-LINDAU TUMOR SUPPRESSOR, E3 UBIQUITIN PROTEIN LIGASE (VHL) GENE
Mutations in the von Hippel-Lindau tumor suppressor, E3 ubiquitin protein ligase (VHL) gene cause a variety of phenotypes including von Hippel-Lindau (VHL) disease. This report describes a Sri Lankan family with three siblings with cerebellar haemangioblastoma due to a nonsense mutation in the VHL gene. A heterozygous nucleotide substitution in exon 3 was identified in all three siblings, resulting in a stop codon at amino acid position 175 and leading to a truncated, non-functional VHL protein [NM_000551.3(VHL):c.525C>G; p.Tyr175Ter; rs5030835C>G]. Patients with rare tumours characteristic of VHL should undergo clinical and genetic evaluation for VHL. The Sri Lanka Journal of Surgery 2015; 33(1): 30-32
ACKNOWLEDGEMENT
I would like to express my utmost gratitude towards Prof. Vajira Dissanayake and Prof.
Rohan Jayasekare for their invaluable guidance throughout the M.Sc. programme. I would also like to thank the Norwegian Center for International Cooperation in Education for funding the M.Sc. programme with a NOMA grant and the Department of Medical Genetics at Oslo University Hospital for support in conducting the M.Sc. programme in collaboration with the Human Genetics Unit at the University of Colombo. My special thanks go to Dr. Kalum Wettasinghe, Ms. Vindya Udalamatta, Ms. Imalki Kariyawasam and Dr. Nirmala Sirisena for their supervision, guidance and help during the research work and manuscript preparation. I would like to thank Dr. Asantha Jayawardane for his support during the preparation of the molecular cytogenetic report and my friend Mr. Navoda Palpola for his continuous support during my research. I thank all my colleagues reading for masters in Genetic Diagnostics and Clinical Genetics, all the staff members at the Human Genetics Unit, the scientists at the Asiri Center for Genomics and Regenerative Medicine, and my family for their invaluable support during my studies and research.
Mutations in the VHL tumor suppressor gene cause a variety of phenotypes including von Hippel-Lindau disease (VHL), familial phaeochromocytoma and inherited polycythaemia [2]. VHL is an autosomal dominantly inherited familial cancer syndrome predisposing to a variety of malignant and benign tumors [3], such as haemangioblastomas of the cerebellum, spinal cord, brainstem and retina, clear cell renal carcinomas, pheochromocytomas, endolymphatic sac tumours, pancreatic islet cell tumours, haemangiomas of the adrenals, liver and lungs, epididymal and broad ligament papillary cystadenomas, as well as visceral cysts in the kidneys and pancreas [4]. A germline mutation of the VHL gene is the basis of familial inheritance of VHL syndrome. According to Knudson's "two-hit" hypothesis, both alleles of a tumor suppressor gene need to be mutated for a tumour to develop; therefore a patient who manifests a tumour inherits one mutation from a parent and acquires the second mutation in the same gene in the affected organ as a somatic mutation, at which point the tumour begins to manifest [5].
To date, more than 300 mutations have been identified in families with VHL disease, consisting of partial and whole gene deletions, frameshift, nonsense, missense, and splice site mutations [6]. About 20% of cases are due to de novo mutations. This report describes a Sri Lankan family with three siblings with cerebellar haemangioblastoma due to a heterozygous nonsense mutation in the VHL gene.
The Family
A 28-year-old female who was clinically diagnosed with cerebellar hemangioblastoma was referred to the Human Genetics Unit for genetic evaluation. The patient had been clinically diagnosed with cerebellar haemangioblastoma at the age of 13 years; since then, she had undergone four surgeries for removal of the recurrent tumour in the posterior cranial fossa. In addition, a tumor arising from the fourth ventricle of the brain was also surgically removed.
Two of her male siblings were also diagnosed with cerebellar hemangioblastoma. The CT scan of one of the brothers showed dilatation and a cystic mass in the lateral third ventricle, as well as a renal cyst. Figure 1 shows the pedigree of the family with VHL disease.
Materials and Methods
The VHL gene was sequenced in the patient and her two siblings after obtaining their written informed consent. DNA was extracted from peripheral blood using the QIAamp blood DNA midi kit from Qiagen. All three exons and flanking intronic regions of the VHL gene were sequenced using an ABI PRISM 3130 Genetic Analyzer. The published human VHL gene reference sequence obtained from GenBank (http://www.ncbi.nlm.nih.gov) was used for comparison with the nucleotide sequences generated from the patients and to confirm the presence of any mutations.
Results
A heterozygous nonsense mutation was identified in all three individuals in exon 3 of the VHL gene. A single nucleotide substitution at position 13214 (NG_008212.3:g.13214C>G) changed the codon for the amino acid tyrosine (UAC) in transcript variant 1 (NM_000551.3:c.525C>G) to a stop codon (UAG), resulting in premature termination of the VHL protein at amino acid position 175 (NP_000542.1:p.Tyr175Ter). This mutation has previously been reported in other families and documented in the dbSNP database under the SNP ID rs5030835 (http://www.ncbi.nlm.nih.gov/projects/SNP/rs=5030835).
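As a quick cross-check of the reported nomenclature, the codon arithmetic can be sketched in a few lines of Python. This is a hypothetical helper for illustration only, not part of the study's analysis: c.525 falls on the third base of codon 175, and the C>G change turns the tyrosine codon TAC into the stop codon TAG.

```python
# Illustrative sketch (not from the paper): check that c.525C>G converts a
# tyrosine codon into a stop codon. Only the two relevant codons are listed.
CODON_TABLE = {"TAC": "Tyr", "TAG": "Stop"}

def apply_substitution(codon: str, pos_in_codon: int, new_base: str) -> str:
    """Return the codon with one base replaced (0-indexed position)."""
    bases = list(codon)
    bases[pos_in_codon] = new_base
    return "".join(bases)

# Coding position 525 maps to codon (525 - 1) // 3 + 1 = 175, third base.
assert (525 - 1) // 3 + 1 == 175

wild_type = "TAC"                                   # tyrosine (UAC in mRNA)
mutant = apply_substitution(wild_type, 2, "G")      # C>G at the third base

print(CODON_TABLE[wild_type], "->", CODON_TABLE[mutant])  # Tyr -> Stop
```

The same arithmetic explains why the protein-level description is p.Tyr175Ter: the premature stop truncates translation at residue 175.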
Discussion
This report describes a Sri Lankan family with three siblings with cerebellar haemangioblastoma due to a heterozygous nonsense mutation in the VHL gene. VHL mutations are associated with various benign and malignant tumours resulting in high morbidity and mortality rates.
Mutations in the VHL gene are known to cause haemangioblastomas of the central nervous system (CNS) in 60-80% of VHL patients [6,7].
A study conducted by van der Harst et al. in 1998 reported that 8 out of 68 patients with pheochromocytoma had mutations in the VHL gene. Among these patients, two were relatives and had a familial mutation [8]. Familial mutations in the VHL gene have also been reported in VHL families presenting with clear cell renal cell carcinoma. Recent advances in understanding the genetic basis of VHL disease have resulted in improved diagnosis of VHL disease and provided greater insights into the molecular pathogenesis of the disease [1]. The prognosis can be improved through early screening, diagnosis and surveillance [9]. Molecular genetic testing coupled with genetic counseling is now considered standard for the evaluation of patients and families with suspected VHL [10].
Introduction
In the human population there is vast variability in drug response among different individuals, with direct implications for drug safety and efficacy. Various factors, such as environmental factors, physiologic factors, drug-drug interactions and genetic factors, play an important role in this phenomenon. The genetic factors involve polymorphisms in genes encoding drug-metabolizing enzymes, drug transporters and drug receptors [1].
"Pharmacogenetics" is the study of how different genetic variants, such as single nucleotide polymorphisms and copy number variants (SNPs and CNVs) in a particular gene, affect the drug response [2].
The CYP2D6 gene is located on chromosome 22q13.1 and spans a 4.2 kb region [1].
Cytochrome P450 2D6 (CYP2D6) is a highly polymorphic gene responsible for the metabolism of several important endogenous substrates and other xenobiotics [3]. The human cytochrome P450 gene superfamily contains 57 genes and 58 pseudogenes. Of these, CYP2D6, CYP2C19, and CYP2C9 play a crucial role in pharmacogenomics since, at present, 80% of prescribed drugs are metabolized by these enzymes [2].
In this study we selected the CYP2D6 gene since it metabolizes around 25% of the currently used drugs worldwide [4]. This gene contains around 82 allelic variants important in the context of pharmacogenomics [3], with a wide difference in their distribution among different ethnic groups. CYP2D6 is responsible for metabolizing antidepressants, antipsychotics, antiarrhythmics, antiemetics, beta-adrenoceptor antagonists (beta-blockers) and opioids. The presence of SNPs alters CYP2D6 enzymatic activity, with effects ranging significantly within a population and including individuals with ultrarapid (UM), extensive (EM), intermediate (IM), and poor (PM) metabolizer status [3]. The response to a drug may vary depending on whether the enzyme converts the drug into its active form or converts the active form of the drug into an inactive form [4].
Most of the time, CYP2D6 genotypes are depicted using a star (*), e.g. CYP2D6*1. Each allele/haplotype can be identified by a specific combination of SNPs and/or other sequence variants (deletions, duplications) within the CYP2D6 gene [5]. More than 130 SNPs have been identified within the CYP2D6 gene [3]. In this study we used 9 SNPs (Table 1), which allow us to identify CYP2D6 alleles in our population [1].
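The star-allele idea, a haplotype named by its defining combination of variants, can be illustrated with a small Python sketch. The allele definitions below are deliberately simplified placeholders (real CYP2D6 definitions, maintained by PharmVar, combine many more variants), so treat the mapping as illustrative only:

```python
# Illustrative sketch only: simplified star-allele definitions as SNP sets.
# These entries are simplifications for this example; they are NOT complete
# PharmVar allele definitions.
ALLELE_DEFINITIONS = {
    "CYP2D6*1": frozenset(),                 # reference allele: no defining SNPs
    "CYP2D6*4": frozenset({"1846G>A"}),
    "CYP2D6*10": frozenset({"100C>T"}),
}

def call_allele(observed_snps: set) -> str:
    """Return the first star allele whose defining SNPs are all observed;
    fall back to the reference allele when no defined variant is present."""
    for name, defining in ALLELE_DEFINITIONS.items():
        if defining and defining <= observed_snps:
            return name
    return "CYP2D6*1"

print(call_allele({"1846G>A"}))   # CYP2D6*4
print(call_allele(set()))         # CYP2D6*1
```

A real caller would additionally have to resolve overlapping definitions (alleles sharing SNPs) and phase the two haplotypes of a diploid genotype, which is why dedicated tools are normally used for this step.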
Currently, different molecular genetic techniques are employed to genotype CYP2D6 gene variants, including polymerase chain reaction-restriction fragment length polymorphism (PCR-RFLP) and allele-specific PCR methods. The multiplex single-base primer extension technique is a well-documented method developed for genotyping the CYP2D6 gene [1]. In this study we attempted to develop a comparatively less expensive new multiplex allele-specific PCR method to genotype the variants given above.
Materials and methods
Genomic DNA was obtained from an already existing population-based DNA collection maintained in our unit for studies of this nature, with the approval of the Ethics Review Committee of the Faculty of Medicine, University of Colombo. Eighteen allele-specific PCR primers for normal alleles (CYP2D6NPs) and mutated alleles (CYP2D6MPs) of all nine SNPs, together with nine common reverse primers (CYP2D6RPs), were designed. In the majority of the allele-specific primers, additional mismatches were introduced at the 3rd base from the 3' end of the primer to increase their efficacy.
Initially, the allele-specific primers were separated into two master mixtures (MN and MM). The MN mixture contained all the normal allele-specific primers while the MM mixture contained all the mutated allele-specific primers. In the final 25 µL reaction, the concentration of each primer was 0.2 µM, with 1X PCR buffer, 3 mM MgCl2, 50 µM of each dNTP and 0.04 U/µL Taq DNA polymerase.
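The per-component pipetting volumes behind a mix like this follow from the standard dilution relation C1·V1 = C2·V2. The stock concentrations below are assumed for illustration; only the final concentrations come from the text:

```python
# Sketch of the C1*V1 = C2*V2 arithmetic behind the 25 uL reaction mix.
# Stock concentrations are hypothetical; final concentrations are from the text.
FINAL_VOLUME_UL = 25.0

# component: (final concentration, assumed stock concentration), matching units
components = {
    "each primer (uM)": (0.2, 10.0),      # stock assumed 10 uM
    "MgCl2 (mM)":       (3.0, 25.0),      # stock assumed 25 mM
    "each dNTP (uM)":   (50.0, 10000.0),  # stock assumed 10 mM = 10000 uM
    "Taq (U/uL)":       (0.04, 5.0),      # stock assumed 5 U/uL
}

for name, (final_conc, stock_conc) in components.items():
    # V1 = C2 * V2 / C1: volume of stock needed to reach the final concentration
    volume = final_conc * FINAL_VOLUME_UL / stock_conc
    print(f"{name}: add {volume:.3f} uL of stock")
```

With these assumed stocks, each primer contributes 0.5 µL and the Taq 0.2 µL; the remainder of the 25 µL is made up with water and template.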
Final extension: 72°C for 7 min.
The PCR products were visualized on a 3% agarose gel. However, it was hard to distinguish between 1846G>A (203 bp) and 1661G>C (193 bp) due to poor band separation.
Therefore, the primers were separated into four master mixtures (MN1, MM1, MN2, MM2). The relevant reverse primers were also added to the respective master mixtures (MN1- The final 25 µL reaction mixture consisted of 0.5 µM of each primer, 1X PCR buffer, 3 mM MgCl2, 50 µM of each dNTP and 0.04 U/µL Taq DNA polymerase. 2 µL of genomic DNA was added to each reaction mixture separately. PCR tubes were labeled N1, M1, N2, and M2 accordingly. Amplification of the desired products was carried out in an Applied Biosystems 2720 thermal cycler under the following conditions: initial denaturation at 94°C for 5 min.
The PCR products were separated by electrophoresis on a 3% agarose gel impregnated with ethidium bromide for 1 hour at 65 V and visualized under UV light.
Interpretations
CYP2D6 plays a crucial role in metabolizing antidepressant drugs. Therefore, pharmacogenomic testing should be appropriately integrated into clinical practice in order to obtain better results. Furthermore, pharmacogenomic data should be utilized at the correct moment to improve patient outcomes, and the dosage of the drug should be determined accordingly [7].
There are several very efficient high-throughput SNP genotyping methods currently in use; nevertheless, these methods require specialized probes, chemicals and instruments which are highly expensive and hence unaffordable to the public. These constraints limit the use of such methods in clinical diagnosis. At the same time, traditional PCR-RFLP methods are laborious and time consuming [8]. Therefore, a solution to the above predicament was to design a multiplex allele-specific PCR assay.
The multiplex allele-specific PCR technique provides an opportunity to amplify many targeted products simultaneously, albeit with several drawbacks. Thus, a complex optimization process is required to refine the assay. Additionally, an increased number of primer sets reduces the efficiency and flexibility of amplification of the desired products [9].
Therefore, the current assay was optimized through a very complex and tedious optimization process. The only limitation of this method is its low consistency.
Conclusion
The multiplex allele-specific PCR method is a comparatively cost-effective method for genotyping CYP2D6 variants; nevertheless, this method requires further optimization to increase its consistency and reproducibility.
Reasons for Referral:
Evaluating genetic factors affecting drug metabolism for patients taking drugs metabolized by CYP2D6 (e.g. antidepressant drugs, tamoxifen).
Limitations:
Not all variants with known impact on enzyme expression and activity are tested in this assay.
Testing methodology: DNA amplification is performed by allele-specific multiplex PCR, and the PCR products are analyzed by agarose gel electrophoresis.
MOLECULAR-CYTOGENETIC ANALYSIS OF A RING CHROMOSOME 18 IN A SRI LANKAN FEMALE CHILD WITH CONGENITAL MALFORMATION, HEART DEFECTS AND GLOBAL DEVELOPMENT DELAY

Introduction
Ring chromosome 18 (r(18)) is a rare chromosomal disorder [1]. It can be formed when a part of one or both ends of chromosome 18 has been deleted and the ends joined together [2]. As a result, the clinical features of patients with ring chromosome 18 depend on the extent of the deleted regions at the chromosomal ends [3]. Clinical features of ring chromosome 18 may include facial dysmorphism, blepharoptosis, hypotonia, developmental delay, short stature, microcephaly, mental retardation, heart defects and IgA deficiency [1,3,4]. The association of global developmental delay, congenital malformation and heart defects with ring chromosome 18 has been reported in previous studies [4]. Ring chromosome 18 can also arise without any deletions due to dysfunctional telomeres [5], which may lead to asymptomatic carriers of ring chromosome 18.
We report the cytogenetic and molecular-cytogenetic findings of a Sri Lankan female child with congenital malformation, heart defects and global developmental delay.
Case presentation
A two-year-old girl with congenital malformation, heart defects and global developmental delay was referred to the Human Genetics Unit for karyotyping. She is the second child born to a non-consanguineous healthy couple at term by a normal delivery following an uneventful pregnancy. The first child of this family is a healthy boy, and the second pregnancy ended in a spontaneous miscarriage at 10 weeks of gestation.
She had a birth weight of 2.2 kg, a head circumference of 32 cm and a length of 50 cm. All her birth parameters were below the third centile. She had a complete left-sided cleft lip and cleft palate, severe enough to make breast feeding impossible. She developed a cyanotic episode in the early neonatal period and was referred to a paediatric cardiology unit. An initial 2D echocardiogram revealed a double outlet right ventricle, a large perimembranous ventricular septal defect and moderate subpulmonic pulmonary stenosis. Further echocardiograms and cardiac catheterization identified a single opening in the mitral valve with mitral stenosis. Surgical correction of the cardiac defect had been ruled out due to high risk and she was managed conservatively.
She had severe developmental delay and hypotonia. Only partial head control had been achieved at the time of presentation. A social smile was present. She had severe growth failure.
At the age of 2 years her weight was 4.5 kg, her head circumference was 39.5 cm and her length was 59 cm. All parameters were well below the 3rd centile. Dysmorphic features such as microcephaly, flat occiput, midface hypoplasia and micrognathia were also present.
She had telecanthus, epicanthic folds and slightly up-slanted palpebral fissures. Her nasal bridge was flat and her ears were marginally low set. Except for long tapering fingers there was no other deformity in the upper limbs, but in the lower limbs bilateral talipes equinovarus deformity was noted. The appearance of her external genitalia was normal and she did not have any thoracic or spinal abnormalities. Furthermore, an ultrasound scan of the abdomen was found to be normal and a CT scan of the brain did not identify any structural malformation of the brain.
Materials and Methods
Ethical clearance for the study was obtained from the Ethics Review Committee of the Faculty of Medicine, University of Colombo. The patient was recruited for this study after obtaining written informed consent from the parents.
Cytogenetic analysis
Peripheral blood from the patient and her parents was cultured in PB-MAX karyotyping media separately for 72 hours. Peripheral blood lymphocytes were harvested and chromosomal staining was carried out with the GTL banding technique according to standard protocols.
Metaphase chromosome spreads were examined under an Olympus BX61 microscope and analyzed using CytoVision 3.1.
FISH analysis
FISH probes RP11-317F19 (specific for the GLAR1 gene at 18q23) and RP11-1082M21 (specific for the TGIF1 gene at 18p11.3) were ordered from Empire Genomics, USA, after reviewing the probable genes causing the phenotype. The slide was cleaned with 70% ethanol. Slide preparation and hybridization of the probes were carried out according to the standard FISH protocol of Empire Genomics (http://www.empiregenomics.com/resources/genomic_procedures).
Slide preparation from the harvested cell suspension was carried out at 50% relative humidity. After air drying, the slide was kept at 45°C in the incubator and then 10 µL of probe mixture (2 µL of the probe and 8 µL of hybridization buffer) was added to the slide. A cover slip was applied to the slide and sealed with rubber cement. The slide was placed in the ThermoBrite, denatured at 73°C for 2 minutes and hybridized for 16 hours. After hybridization the cover slip was removed and the slide was washed by agitating it in prewarmed WS1 (wash solution 1) at 73°C for 10 seconds, then incubated for 2 minutes in the solution. The slide was transferred to WS2 at room temperature for 1 minute, then dried in the dark and 10 µL of DAPI was added. Another cover slip was then applied to the slide. After 30 minutes fluorescent signals were observed under the Olympus BX61 and analysis was done using Applied Spectral Imaging system software. The above procedures were carried out separately for both probes.
Results
Twenty-six metaphase chromosomal spreads of the patient were analyzed and six of these spreads were karyotyped. Ring chromosome 18 was observed in all the spreads analyzed.
Karyotypes of the parents were found to be normal.
FISH analysis showed that the TGIF1 gene had not been disrupted. Figure 3 shows FISH images of metaphase spreads.
Discussion
Patients with ring chromosome 18 show a wide range of phenotypic features [1]. The majority of patients reported with ring chromosome 18 show symptoms of 18q- syndrome, while the others show symptoms of 18p- syndrome or of both 18q- and 18p- syndromes [4]. Mental retardation, developmental delay, heart defects and facial dysmorphism have been reported in many cases of ring chromosome 18 [4,6].
Stankiewicz et al. carried out a clinical and molecular cytogenetic study of 7 patients with ring chromosome 18. In that study they even found a patient mosaic for ring chromosome 18 and another patient with a duplication in the 18p arm with mild phenotypes [4]. Table 1 compares the prominent phenotypes of 5 patients of the above study with our patient, excluding the patient with mosaic ring chromosome 18. Previous studies had suggested that growth hormone insufficiency is associated with haploinsufficiency at the 18q23 position, and the GLAR1 gene (also known as GALNR1) is known to play a role in this phenomenon [7]. This is consistent with the deletion of the GLAR1 gene from the ring chromosome 18 of our patient. FISH probe RP11-1082M21, which is specific for the TGIF1 gene at 18p11.3, had signals on both the normal and the ring chromosome 18. This result shows the presence of the TGIF1 gene on both the ring and the normal chromosome, but it does not exclude the possibility of a deletion distal to the location of the TGIF1 gene.
The TGIF1 gene (also known as HPE4) at 18p11.3 is found to be associated with holoprosencephaly, which involves abnormal development of the forebrain and midface [8]. Holoprosencephaly is also characterized by hypotelorism (http://www.ncbi.nlm.nih.gov/books/NBK1530/), but our patient showed telecanthus. The CT scan report of the patient did not identify any structural malformation of the brain. From the FISH results it was found that the TGIF1 gene was not deleted in the ring chromosome 18. The presence of the TGIF1 gene on both the ring and the normal chromosome may explain the above phenomenon.
Further FISH analysis with more probes and molecular studies can be employed to determine the exact break points of ring chromosome 18 of this patient.
Figure 2 shows the partial electropherogram with the point mutation at position 13214 of the VHL gene.
Figure 1: Pedigree of the family with VHL disease showing the familial mutation in the three siblings.
Figure 2: Partial electropherogram of the patient showing the heterozygous nonsense mutation in the VHL gene.

Interpretation of results was based on the presence or absence of bands on the gel. Bands representing the PCR products of the N1 and N2 mixtures (mixtures with normal allele-specific primers) indicate the presence of normal alleles, and bands representing the M1 and M2 PCR products indicate the presence of mutated alleles. If both bands (mutated and normal) were present, the sample was considered heterozygous for the respective SNP.
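The band-reading rule described above reduces to a simple decision table. A hypothetical Python helper (not from the study) might look like:

```python
# Illustrative sketch: call a genotype from the presence/absence of bands in
# the normal-allele (N) and mutant-allele (M) lanes of the gel.
def call_genotype(normal_band: bool, mutant_band: bool) -> str:
    if normal_band and mutant_band:
        return "heterozygous"
    if normal_band:
        return "homozygous normal"
    if mutant_band:
        return "homozygous mutant"
    return "no call"  # neither band: likely a failed reaction, repeat the PCR

print(call_genotype(True, True))    # heterozygous
print(call_genotype(True, False))   # homozygous normal
```

In practice a positive control band would also be checked per lane, so that an absent allele band can be distinguished from a failed amplification.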
Figure 1. Facial dysmorphic features of the patient. This view shows the complete left-sided cleft lip and palate, telecanthus, epicanthic folds, slightly up-slanted palpebral fissures, flat nasal bridge and upsweep of the frontal hairline.
When FISH probe RP11-317F19 (specific for the GLAR1 gene at 18q23) was used, green fluorescent signals were seen only on the normal chromosome 18 and no signal was observed on the ring chromosome in any of the analyzed spreads. The absence of fluorescent signals on ring chromosome 18 indicates that the GLAR1 gene is deleted from the 18q23 region. When FISH probe RP11-1082M21 (specific for the TGIF1 gene at 18p11.3) was used, red fluorescent signals were observed on both the normal chromosome 18 and the ring chromosome 18 in all the analyzed spreads.
Figure 3: FISH image of probe RP11-317F19 showing green fluorescent signals only on normal chromosome 18. Ring chromosome 18 is indicated by an arrow.
Figure 4:
Table 1: List of primers and their sequences used in the study.
The cytochrome P450 (CYP) 2D6 gene metabolizes around 25% of current drugs used worldwide. CYP2D6 is responsible for metabolizing drugs such as tricyclic antidepressants and antipsychotics, as well as antiarrhythmics, antiemetics, beta-adrenoceptor antagonists (beta-blockers) and opioids. The presence of SNPs alters CYP2D6 enzymatic activity, with effects ranging significantly within a population and including individuals with ultrarapid (UM), extensive (EM), intermediate (IM), and poor (PM) metabolizer status. The response to a drug may vary depending on whether the enzyme converts the drug into its active form or into an inactive form.
Run 10 µL of PCR product at 65 V in a 2% gel for 30 min. Date: ………………
Confidential Molecular Genetic Laboratory Test Report Date: DD/MM/YY Patient Identification:
These 9 variants do not cover all the pharmacogenomically important variants in the CYP2D6 gene.
|
v3-fos-license
|
2018-04-03T03:44:35.349Z
|
2016-09-17T00:00:00.000
|
7382518
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://bmcmusculoskeletdisord.biomedcentral.com/track/pdf/10.1186/s12891-016-1251-0",
"pdf_hash": "2dcc537dc86bbb78e016940c9ec7c48beb78965c",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2358",
"s2fieldsofstudy": [
"Medicine",
"Psychology",
"Political Science"
],
"sha1": "2dcc537dc86bbb78e016940c9ec7c48beb78965c",
"year": 2016
}
|
pes2o/s2orc
|
Severity of back pain may influence choice and order of practitioner consultations across conventional, allied and complementary health care: a cross-sectional study of 1851 mid-age Australian women
Background Back pain is a common, disabling and costly disorder for which patients often consult with a wide range of health practitioners. Unfortunately, no research to date has directly examined the association between the severity of back pain and back pain sufferers' choice of whom, and in what order, to consult different health practitioners. Methods This is a sub-study of the large nationally representative Australian Longitudinal Study on Women's Health (ALSWH). The mid-age cohort women (born 1946-51, n = 13,715) of the ALSWH were recruited from the Australian national Medicare database in 1996. These women have been surveyed six times, with survey 6 being conducted in 2010 (n = 10,011). Mid-age women (n = 1851) who in 2010 had sought help from a health care practitioner for their back pain were mailed a self-report questionnaire targeting their previous 12 months of health services utilisation, health status and their levels of back pain intensity. Results A total of 1620 women were deemed eligible and 1310 (80.9 %) returned completed questionnaires. Mid-age women with back pain visited various conventional, allied health and CAM practitioners for care: 75.6 % consulted a CAM practitioner; 58.4 % consulted a medical doctor; and 54.2 % consulted an allied health practitioner. Women with the most severe back pain sought conventional care from a general practitioner, and those who consulted a general practitioner first had more severe back pain than those who consulted another practitioner first. Following the general practitioner visit, the women with more severe back pain were more likely to be referred to a conventional specialist, and those with less severe back pain were more likely to be referred to a physiotherapist.
Conclusions Our findings suggest that women with more severe back pain are likely to visit a conventional practitioner first, whereas women with less severe back pain are likely to explore a range of treatment options including CAM practitioners. The improvement of back pain over time following the various possible sequencing of consultations with different types of health practitioners is a topic with implications for ensuring safe and effective back pain care and worthy of further detailed investigation.
Background
Back pain (BP) is pain that targets the posterior aspect of the body in the area of the lumbar spine and pelvis [1]. The global age-standardised point prevalence of BP has been estimated at 9 % and appears to have remained stable over recent decades [2]. BP is a burden not only for the individual sufferer [3] but also for their families; it has major cost impacts for health care systems [4] and leads to an even greater indirect cost burden through loss of job productivity and inefficiencies [5,6]. Back pain severity has a substantial influence on individuals' choice of health care consultations and treatments. A previous randomized controlled trial found that only in patients with moderate or severe baseline pain intensity was tapentadol associated with reductions in mean back pain intensity [7]. A longitudinal analysis reported that changes in pain severity predicted subsequent depression severity in primary care [8]. Further, a clinical practice guideline strongly recommended that all health care practitioners assess the severity of baseline pain when diagnosing and treating back pain, as the diverse medications used have different benefits for patients with varying severity of back pain [9].
BP has long been reported as a highly prevalent complaint in Australian primary care [10] and a multitude of health care provider options exist for back pain -the use of which in many cases attract out-of-pocket spending by consumers in Australia [5]. BP care options include a range of conventional medicine providers (e.g. specialists), allied health providers (e.g. physiotherapists, pharmacists) and complementary and alternative medicine (CAM) providers (e.g. massage therapists, chiropractors, osteopaths) [11]. In Australia, primary health care is typically a person's first point of contact with the health system and is most often provided outside the hospital system. A person does not need a referral for this level of care, which includes services provided by general practitioners, allied health professionals and CAM practitioners. Primary health care is part of a larger system involving other services and sectors, and so can be considered as the gateway to the wider health system. Through assessment and referral, individuals are directed from one primary care service to another, and from primary services onto secondary specialist health care (which is facilitated by the GP) or onto other health services, and back again. Secondary health care is medical care provided by a specialist or facility upon referral by a primary care physician. It includes services provided by hospitals and specialist medical practices [12].
While population-based research in Australia has reported that significant numbers of back pain sufferers choose not to seek care [11,13,14], those that do seek care are more likely to be females with more severe pain and disability [14], visiting both conventional and CAM practitioners [15], in particular general practitioners and chiropractors [14] as well as massage therapists [13]. It is common for people in mid-age to experience back pain [16]. In addition, mid-age women are more likely to utilise a range of health care practitioners for the treatment of chronic illness, including back pain [15]. Furthermore, recent research also shows a very small percentage of BP sufferers seek exclusive help from CAM and over 50 % seek help from only conventional providers [13]. It is interesting to note the varied characteristics of patients with back pain who sought help in general practice and chiropractic practice. Patients with back pain treated by general practitioners were younger, more likely to be smokers, and experienced greater pain than those treated by chiropractors [17][18][19], and back pain sufferers' self-referral to chiropractors was likely associated with acute back pain, while self-referral to general practitioners was likely associated with chronic back pain [18].
To date, there are variable levels of evidence showing improved patient outcomes for low back pain across a range of different treatment approaches and providers [20][21][22]. This wide range of clinical approaches reflects the challenge of providing care consistent with best clinical practice across diverse individual circumstances [23,24]. Recent work examining women's practitioner choices for BP care suggests that choice of treatment is largely unrelated to the relative evidence base of those treatments, being instead influenced by the patient's experience of and familiarity with care, recommendations from their wider social network, and the geographical proximity of seemingly qualified practitioners [25]. Indeed, it has also been suggested that the range of BP management options available to sufferers may lead to increased but not necessarily more effective BP health care use, constituting a challenge to providing cost-efficient care and health services for BP [26].
Despite these complexities around the wide range of BP care options available, no research to date has examined the association between the severity of BP or general health and BP sufferers' choice of whom and in what order to consult different health practitioners. In response, this paper reports the first such examination amongst a nationally representative sample of mid-age women with BP.
Sample
This research was conducted as part of the Australian Longitudinal Study on Women's Health (ALSWH), which was designed to investigate multiple factors affecting the health and well-being of women in Australia over a 20-year period. Women in three age groups ("young" 18-23, "mid age" 45-50 and "older" 70-75 years) were randomly selected from the national Medicare database, which is maintained by the Australian Government and holds the name and address details of all Australian citizens and permanent residents [27]. The age groups for the main cohorts were selected so that participation would commence, at least for most women in the cohorts, before the occurrence of major life events, such as first pregnancy, establishment of a long-term relationship, menopause, retirement, or death of a partner [27]. The focus of this study is women from the "mid-age" cohort. The baseline survey (n = 14,779) was conducted in 1996 and the respondents have been shown to be broadly representative of the national population of women in the target age group [28]. Data on socio-demographic characteristics, health service use, health behaviours and risk factors, physical health, emotional health, and time use were collected via the survey questionnaires. For this cross-sectional sub-study, 1,851 women who had indicated in survey 6 (conducted in 2010) that they had sought help from a health care practitioner for their back pain were mailed an invitation to participate and a questionnaire. Of these women, 1,620 were deemed eligible, and 1,310 (80.9 %) returned completed sub-study questionnaires. At the time of this survey, the women were aged 59-64 years.
Health service utilisation -outcome measures
The women were asked if they had consulted with a range of commonly consulted medical doctors, a range of allied health practitioners and a range of CAM practitioners for their back pain in the previous 12 months. The list of medical doctors included: general practitioner (GP), orthopaedic specialist, neurologist, rheumatologist, or other medical practitioner. The list of allied health practitioners included: physiotherapist, occupational therapist, nurse, pharmacist, or other allied health practitioner. The list of CAM practitioners included: massage therapist, chiropractor, herbalist/naturopath, meditation/yoga therapist, acupuncturist, osteopath, reflexologist, traditional Chinese medicine therapist, aromatherapist, craniosacral therapist, reiki therapist, or other CAM practitioner. In addition to asking which health care practitioners the women consulted for their back pain, the women were also asked in which order they consulted each health care practitioner for their back pain.
Health status
The women were asked to indicate how frequently ('never', 'rarely', 'intermittently', 'regularly', or 'continuously') they experienced back pain in the previous 12 months; responses were categorised as continuous or not continuous back pain. The women were also asked about their typical level of back pain intensity and their worst level of back pain intensity over the previous 12 months on a scale from 0 to 10, with 0 representing 'no pain' and 10 representing 'worst possible pain'. It has previously been shown that a 12-month recall period is valid with such a scale [29,30]. The Short-Form 36 (SF-36) quality of life questionnaire was used to produce a measure of health status and quality of life, with higher scores representing better health. Results of the SF-36 were transformed into eight mental and physical health subscales [31]; all subscales were considered for analyses.
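The standard SF-36 scoring procedure rescales each subscale's raw item sum onto a 0-100 range, with higher values representing better health. A minimal sketch of that linear transformation is below; the item counts and raw ranges in the example are illustrative assumptions (the actual per-subscale ranges are defined in the SF-36 scoring manual):

```python
def sf36_transform(raw_score, min_raw, max_raw):
    """Rescale an SF-36 subscale raw score onto 0-100 (higher = better health)."""
    if not (min_raw <= raw_score <= max_raw):
        raise ValueError("raw score outside the subscale's possible range")
    return (raw_score - min_raw) / (max_raw - min_raw) * 100.0

# Illustrative example (an assumption, not the real subscale definition):
# a 10-item subscale with items scored 1-3, so raw scores run from 10 to 30.
midpoint = sf36_transform(20, 10, 30)  # halfway through the range -> 50.0
```

The same function applies to all eight subscales once each subscale's true minimum and range are supplied.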
Statistical analyses
Descriptive statistics were used to assess the frequencies and percentages of consultations with all the included medical doctors, allied health practitioners, and CAM practitioners. Comparisons were made between women consulting a GP first, women consulting a chiropractor first, women consulting a physiotherapist first, women consulting a massage therapist first, and women consulting another therapist first regarding the characteristics of their back pain (back pain frequency, typical back pain intensity, worst back pain intensity) and their quality of life (all eight domains of the SF-36). Comparisons between two categorical variables were made using the chi-square test. Comparisons between continuous and categorical variables were made using analysis of variance (ANOVA). To correct for multiple comparisons, a modified Bonferroni test was used [32]. Missing data were minimal for all variables (at most 2 %, for SF-36 General Health), so data were analysed without imputation. All analyses were conducted using the statistical software Stata, version 11. P-values less than 0.05 were considered statistically significant.
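The paper cites a "modified Bonferroni test" [32] without naming the exact variant. Holm's step-down procedure is one widely used modification; the sketch below is purely illustrative of how such an adjustment works and is not necessarily the correction applied in the Stata analyses:

```python
def holm_adjust(pvals):
    """Holm step-down adjustment of a list of p-values.

    Sorts p-values ascending, multiplies the i-th smallest by (m - i),
    enforces monotonicity with a running maximum, and returns the
    adjusted p-values in the original order.
    """
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted = [0.0] * m
    running_max = 0.0
    for rank, idx in enumerate(order):
        adj = min(1.0, (m - rank) * pvals[idx])
        running_max = max(running_max, adj)
        adjusted[idx] = running_max
    return adjusted

# Four hypothetical comparison p-values, adjusted for multiplicity.
adjusted = holm_adjust([0.01, 0.04, 0.03, 0.005])
```

Unlike the classical Bonferroni correction (which multiplies every p-value by m), the step-down variant is uniformly more powerful while still controlling the family-wise error rate.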
Results
The main questionnaire item used in the analyses was the order in which the women consulted each health care practitioner for their back pain. There were 116 women who were excluded from the analyses due to inconsistencies in their response to this question (e.g. they ticked more than one health care practitioner as being the first practitioner they consulted), leaving a total sample size of 1194.
The frequency of consultations with a medical doctor, allied health practitioner and/or CAM practitioner in the previous 12 months for back pain is presented in Table 1. On average, the women consulted 3.0 (95 % CI: 2.8, 3.1) health care practitioners over a 12 month period, for their back pain. In total, 697 (58.4 %) women consulted a medical doctor for their back pain. General practitioners (n = 664, 55.6 %) were the most commonly consulted medical doctors, followed by orthopaedic specialists (n = 94, 7.9 %) and rheumatologists (n = 75, 6.3 %). In total, 647 (54.2 %) women consulted an allied health practitioner for their back pain. Physiotherapists (n = 430, 36.0 %) were the most commonly consulted allied health practitioner, followed by pharmacists (n = 243, 20.4 %) and nurses (n = 40, 3.4 %). In total, 903 (75.6 %) women consulted a CAM practitioner for their back pain. Massage therapists (n = 515, 43.1 %) were the most commonly consulted CAM practitioner, followed by chiropractors (n = 441, 36.9 %) and acupuncturists (n = 154, 12.0 %).
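The mean number of practitioners consulted is reported with a 95 % confidence interval (3.0; 95 % CI 2.8, 3.1). A minimal sketch of how such an interval can be computed, using the normal-approximation formula mean ± 1.96 × SE (the study's software may have used a t-based interval instead, which differs slightly for small samples):

```python
import math

def mean_ci95(xs):
    """Sample mean with a normal-approximation 95% confidence interval."""
    n = len(xs)
    mean = sum(xs) / n
    # Sample variance with the n - 1 (Bessel) correction.
    var = sum((x - mean) ** 2 for x in xs) / (n - 1)
    se = math.sqrt(var / n)  # standard error of the mean
    return mean, (mean - 1.96 * se, mean + 1.96 * se)

# Toy data for demonstration (not the study's data).
mean, (lo, hi) = mean_ci95([1, 2, 3, 4, 5])
```

With a sample of 1,194 women, the interval is narrow precisely because the standard error shrinks with the square root of n.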
For approximately half of the women (n = 594; 49.7 %) a GP was the first practitioner consulted for their back pain. A chiropractor was the first practitioner consulted by 20.1 % (n = 240) of the women, while a physiotherapist and a massage therapist were the first practitioners consulted by 7.2 % (n = 86) and 4.6 % (n = 55) of the women respectively; these proportions contrast with the higher percentages of women consulting an allied health practitioner (54.2 %) or a massage therapist (43.1 %) at any point for back pain in the previous 12 months. Of the remaining women, 7.9 % (n = 119) consulted an 'other' practitioner first for their back pain and 10.5 % did not consult a practitioner for their back pain (Table 2). Table 2 shows comparisons made between the women based on the health care practitioner consulted first (i.e. GP consulted first, chiropractor consulted first, physiotherapist consulted first, massage therapist consulted first, and 'other' health care practitioner consulted first) across a number of measures including frequency of back pain, intensity of back pain, and SF-36 quality of life dimensions. In terms of back pain frequency, 22 % of the women who consulted a GP first experienced back pain continuously, compared to only 7-11 % of women who consulted any of the other practitioners first (p < 0.001). Similarly, women who consulted a GP first experienced more intense typical and worst back pain than women who consulted any of the other practitioners first (both p < 0.001). Women who consulted a GP first also had significantly lower levels of general health, bodily pain, physical functioning, role physical, role emotional, vitality, and social functioning (SF-36) than women who consulted any of the other practitioners first (all p < 0.001), and lower levels of mental health (SF-36) than those women who consulted any other practitioner first (p = 0.002).
Figure 1 presents a partitioning of the sample based on the order of consultation with any health care practitioner by women for their BP, with associated mean BP intensity scores. For the first practitioner consulted, the sample was partitioned into the four most commonly consulted practitioner groups as well as an 'other' category (referring to the wide range of health care practitioners excluding GPs, chiropractors, physiotherapists and massage therapists). For the second practitioner consulted, the commonly consulted practitioner groups included GPs, specialists (e.g. orthopaedic specialists, neurologists, and rheumatologists), physiotherapists, pharmacists, CAM practitioners, 'other' (referring to any other health care practitioners), and 'no other' (referring to solo consultation with the first practitioner group). In the partitioning of the GP group, it can be seen that for those women who next consulted a specialist (after consulting a GP first), their 'typical' and 'worst' BP intensity levels were higher than those of women who next consulted a physiotherapist (p < 0.05), pharmacist, CAM practitioner (p < 0.05) or 'no other' practitioner. Conversely, for those women who next consulted a physiotherapist (after consulting a GP first), their 'typical' and 'worst' BP intensity levels were lower than those of women who next consulted a specialist (p < 0.05), pharmacist, CAM practitioner, or 'no other' practitioner (p < 0.05). In the partitioning of the chiropractic group, it can be seen that for those women who next consulted a GP (after consulting a chiropractor first), their 'typical' and 'worst' BP intensity levels were higher than those of women who next consulted an 'other' practitioner or 'no other' practitioner; the difference was statistically significant for the 'worst' BP intensity levels (p < 0.05).
In the partitioning of the physiotherapist, massage therapist and 'other' groups, there was a common pattern: for those women who next consulted a GP (after consulting either a physiotherapist, massage therapist or 'other' practitioner first), their 'typical' and 'worst' BP intensity levels were higher than those of women who next consulted an 'other' practitioner, although none of these differences were statistically significant. It is interesting to note that all participants who first consulted a physiotherapist, massage therapist or 'other' practitioner saw an additional provider afterwards.
Discussion
This study of a large, nationally representative sample of Australian women aged 59-64 years with BP provides the first examination of the severity of BP and its association with BP sufferers' choices of whom, and in which order, to consult from a range of conventional, allied health and CAM providers. The study presents three key findings related to: whom BP sufferers seek help from; the sequencing of the practitioners from whom they seek help; and the association between the severity of BP and sufferers' choice of practitioner.
First, the study shows that mid-age female BP sufferers consult with a wide range of conventional, allied health and CAM practitioners in response to their BP. In line with previous research examining health care utilisation for BP [11,13,15,33], our analyses illustrate that a substantial percentage of women with BP do not exclusively consult a conventional, allied or CAM provider for their BP but instead utilise the services of different practitioner types sequentially over their patient journey. This finding suggests these women may be adopting a pragmatic approach to pain management free from an allegiance or preconception related to any specific provider or approach to treatment [25].
Our analyses also provide the first focused examination of the sequencing of practitioner use amongst BP sufferers as associated with BP severity. Interestingly, the study shows women with the most severe BP (either initially or ultimately) are significantly more likely to seek care from a GP than from any other practitioner group. While our analyses do not accommodate the opportunity to investigate and explain this finding, there are at least three prominent possibilities. This finding may be due to women having a pre-established long-term relationship with their GP, such that they seek consultation with their GP as a first port-of-call and gatekeeper/advisor to other services (for BP as for other conditions) [34].
On the other hand, this finding may be due to women's own perceptions of the severity of their BP. It may be that these women feel more comfortable consulting a GP when they perceive their BP to be (unusually) severe and where they perceive the need for a more trusted authority and/or a provider who has greater access to more advanced diagnostic investigations (e.g. MRI, CT, blood tests) [35]. A third possibility is that some women in more severe pain may be aware of the 'faster relief' that is more likely available through prescriptive medicine [36], or that the conventional medical approach may require less personal time and effort compared to the more active-care model typically provided through physiotherapy or chiropractic care. Another possible explanation for this finding may be related to back pain sufferers' self-payment: amongst women with back pain, the out-of-pocket expense of GP consultations is lower than that of CAM practitioner consultations and allied health practitioner consultations [3]. Ultimately, further research is needed to fully examine and understand the detailed decision-making of women with BP regarding the association between the severity of their BP and their choice of whom to initially consult. Focusing upon just those women who consulted a GP first for their BP, our study also identified those women with more severe BP as more likely to subsequently consult a specialist, while, in contrast, those women who consulted a GP first but reported less severe BP were more likely to next consult a physiotherapist or CAM practitioner. The choice of whom to consult subsequent to GP consultation may be influenced by several factors.
On the one hand, GPs might be more likely to refer more severely affected patients to a specialist, as severe pain might indicate a more serious cause of BP, especially when the patient also presents with concurrent red-flag findings that may require more advanced clinical investigation [37]. GPs may also believe physical therapy to be less effective for some BP categories, such as acute and subacute low back pain, where clinical evidence is still poorly validated compared to chronic low back pain [38,39]. On the other hand, patients who consult their GP may also have a strong preference towards subsequent therapies, and those more severely affected might demand to see a specialist rather than any other health care provider. It is also unclear how much influence the lower levels of physical function and mental health found more often in women who first present to GPs may have on the selection of follow-up care. That women may be presenting to GPs with these additional health factors may further influence routine GP clinical decision-making and referral patterns, especially for those who do not present with red-flag findings that would otherwise lead to a specialist referral [40]. Since our analyses cannot provide information about whether the second practitioner was consulted due to referral or choice, these assumptions remain speculative and require further detailed empirical investigation.
Regarding those women with BP who consulted a practitioner other than a GP first, our results indicate that approximately 1 in 5 later consult a GP, and that these particular women report more severe BP than those not consulting their GP as either their first or second choice of provider. It would appear that while women with BP may initially consult a wide range of provider types, those women with more severe BP will eventually return to or initiate GP care. Our study does not accommodate direct examination of the reasons for this specific consultation pattern, but it appears that in these cases the consultation with an allied health or CAM practitioner may have failed to deliver satisfactory pain relief. For example, this outcome may be more likely for those types of low back pain that are currently less proven to be responsive to physical therapies (acute and sub-acute versus chronic low back pain) or where low back pain is further complicated by potentially more serious medical red-flag clinical findings (spinal stenosis, disc prolapse, spinal instability) [41,42]. What does require further investigation is whether the return to or initiation of GP care at this later stage in the patient journey is primarily led by the patient (with or without the knowledge and/or support of the practitioner currently being consulted) or by the current health care practitioner, and to what extent others (family/friends and informal carers/support people) may influence this aspect of the women's decision-making.
One limitation of our study is that BP and health care utilisation are self-reported, potentially introducing recall bias. Additionally, BP status was defined in our study by self-report on a single question; this lack of confirmatory diagnosis could potentially bias the findings. However, existing research has evidenced the validity, and comparability to medical record assessments, of questionnaire-based measures of comorbidity [43], and these limitations are offset by the opportunity to analyse data from a large, nationally representative sample of mid-age women with BP. Another limitation of this study is that pain level is only one variable that may influence patient and provider decision-making. Other factors may also influence decision-making about care, either in association with pain levels or separately. For example, patient decision-making may be further influenced by the increasing comparative costs associated with some care options, especially treatment available outside of government-funded medical care. Provider decision-making may be further influenced by patterns of referral associated with their knowledge or awareness of the evidence supporting some treatment options, or by concerns about adverse treatment reactions or patient findings outside of pain levels that may require further investigation.
Conclusions
Our findings suggest that women with more severe back pain are likely to visit a conventional medical practitioner first, whereas women with less severe back pain are more likely to explore a range of provider options including CAM practitioners. Both the detailed reasons for such provider use and the improvement of back pain over time following the various possible sequences of consultations with different types of health practitioners are topics with implications for ensuring safe and effective back pain care, and are worthy of further detailed investigation.
Deficiency in ZMPSTE24 and resulting farnesyl–prelamin A accumulation only modestly affect mouse adipose tissue stores
Running title: ZMPSTE24 deficiency has only modest effects on mouse adipose tissue

Abstract

Zinc metallopeptidase STE24 (ZMPSTE24) is essential for the conversion of farnesyl–prelamin A to mature lamin A, a key component of the nuclear lamina. In the absence of ZMPSTE24, farnesyl–prelamin A accumulates in the nucleus and exerts toxicity, causing a variety of disease phenotypes. By ~4 months of age, both male and female Zmpste24–/– mice manifest a near-complete loss of adipose tissue, but it has never been clear whether this phenotype is a direct consequence of farnesyl–prelamin A toxicity in adipocytes. To address this question, we generated a conditional knockout Zmpste24 allele and used it to create adipocyte-specific Zmpste24-knockout mice. To boost farnesyl–prelamin A levels, we bred in the "prelamin A–only" Lmna allele. Gene expression, immunoblotting, and immunohistochemistry experiments revealed that adipose tissue in these mice had decreased Zmpste24 expression along with strikingly increased accumulation of prelamin A. In male mice, Zmpste24 deficiency in adipocytes was accompanied by modest changes in adipose stores (an 11% decrease in body weight, a 23% decrease in body fat mass, and significantly smaller gonadal and inguinal white adipose depots). No changes in adipose stores were detected in female mice, likely because prelamin A expression in adipose tissue is lower in female mice. Zmpste24 deficiency in adipocytes did not alter the number of macrophages in adipose tissue, nor did it alter plasma levels of glucose, triglycerides, or fatty acids. We conclude that ZMPSTE24 deficiency in adipocytes, and the accompanying accumulation of farnesyl–prelamin A, reduces adipose tissue stores, but only modestly and only in male mice.
Introduction
ZMPSTE24, an integral membrane zinc metalloprotease (1), is required for the biogenesis of mature lamin A, a key component of the nuclear lamina (2,3). Lamin A is produced from a precursor protein, prelamin A, by four enzymatic processing steps (4). The cysteine in prelamin A's carboxyl-terminal CaaX motif (-CSIM) is farnesylated by protein farnesyltransferase. Next, the last three amino acids of the protein (-SIM) are clipped off by RCE1 (Ras-converting enzyme 1) or ZMPSTE24. The newly exposed farnesylcysteine is then methylated by ICMT (Isoprenylcysteine methyltransferase). Finally, the last 15 amino acids of prelamin A (including the farnesylcysteine methyl ester) are clipped off by ZMPSTE24, releasing mature lamin A.
Prelamin A-to-mature lamin A processing is normally very efficient, such that prelamin A is virtually undetectable in cells and tissues. However, prelamin A-to-lamin A processing is blocked by ZMPSTE24 deficiency (2,3). In the absence of ZMPSTE24, farnesyl-prelamin A accumulates in the cell nucleus, and the biogenesis of mature lamin A is completely abolished.
The accumulation of farnesyl-prelamin A in Zmpste24–/– mice is toxic, resulting in a variety of disease phenotypes (e.g., reduced growth, nonhealing bone fractures, sclerodermatous changes in the skin, and loss of adipose tissue) (2,5). The extent of disease depends on the level of prelamin A expression. When farnesyl-prelamin A production in Zmpste24–/– mice is reduced by 50% (by introducing a single knockout allele for Lmna), disease phenotypes are completely abolished (5).
The loss of adipose tissue in Zmpste24–/– mice is profound, such that white adipose tissue (WAT) is nearly undetectable in both male and female Zmpste24–/– mice by ~5 months of age (2,5). However, the mechanism for the loss of adipose tissue has been unclear. One possibility is that the loss of adipose tissue is a direct consequence of farnesyl-prelamin A toxicity in adipocytes.
Such a mechanism is plausible for several reasons. Missense mutations in LMNA cause partial lipodystrophy in humans (6)(7)(8). Also, patients with mandibuloacral dysplasia type B, a disease resulting from loss-of-function mutations in ZMPSTE24, have reduced adipose tissue stores (9,10). Finally, HIV protease inhibitors (HIV-PIs) that have been linked to the side effect of partial lipodystrophy (e.g., lopinavir) inhibit ZMPSTE24 in cultured fibroblasts, resulting in an accumulation of farnesyl-prelamin A (11,12). Darunavir, an HIV-PI that is largely free of the lipodystrophy side effect, does not inhibit ZMPSTE24 or lead to an accumulation of farnesyl-prelamin A in fibroblasts (12). Despite these observations, there are ample reasons to be cautious about ascribing the loss of adipose tissue to the toxic effects of farnesyl-prelamin A. First, no one has actually tested the impact of farnesyl-prelamin A accumulation in adipocytes, and it is entirely conceivable that adipose tissue is resistant to the toxicity of farnesyl-prelamin A. For example, Zmpste24-deficient mice are free of liver disease despite a substantial expression of prelamin A in hepatocytes (2,5). Also, Zmpste24–/– mice have nonhealing bone fractures, most prominently in the ribs and the zygomatic arch (2,5), and it is conceivable that the loss of adipose tissue is secondary to these bone fractures (and reduced food intake) rather than being a direct result of farnesyl-prelamin A accumulation in adipose tissue.
In the current study, our goal was to determine if the loss of adipose tissue is a direct consequence of ZMPSTE24 inactivation in adipocytes (and the resulting accumulation of farnesyl-prelamin A). To pursue this goal, we created a conditional knockout allele for Zmpste24 (Zmpste24 fl) and used it to create mice lacking ZMPSTE24 specifically in adipocytes. To minimize the possibility of overlooking a small effect of farnesyl-prelamin A on adipocyte biology, we generated adipocyte-specific Zmpste24 knockout mice that were homozygous for the "prelamin A-only" Lmna allele (Lmna PLAO) (13,14). Prelamin A production from the Lmna PLAO allele is approximately twice-normal; thus, we were able to examine whether an exaggerated accumulation of farnesyl-prelamin A in adipocytes alters adipose tissue stores in mice.

Materials and methods

Echo-MRI. Body composition in live mice was measured using an EchoMRI 3-in-1 analyzer (EchoMRI Corp., Houston, TX), which assesses lean mass, fat mass, free water (mostly urine), and total water.
Plasma glucose, triglyceride, and free fatty acid levels. A blood sample (100 µl) was collected from anesthetized mice by retro-orbital puncture with a heparinized capillary tube (Kimble Chase).
Plasma was separated from red blood cells by centrifugation (13,000 × g for 30 sec) and stored at −80°C until analysis. Plasma triglycerides (Sigma, TR0100), free fatty acids (Abcam, ab65341), and glucose (Cayman Chemical, 10009582) were measured according to kit instructions.
Measurement of macrophage content in WAT by fluorescence activated cell sorting (FACS).
The stromal vascular fraction from gonadal WAT was prepared as described (16). Briefly, gonadal WAT was minced on ice, digested with collagenase type II (Worthington, LS004176) in PBS containing 0.5% BSA at 37°C, and filtered through a 100-µm filter. After centrifugation at 300 × g for 10 min at 4°C, the cell pellet was incubated with RBC lysis solution (Caprico Biotech), centrifuged, and the resuspended cell pellet filtered a second time through a 100-µm filter.

Western blotting. Urea-soluble protein extracts from tissues were prepared as described (5,17).
Targeted mouse embryonic stem (ES) cells were identified by long-range PCR and used to generate chimeric mice, which were bred with C57BL/6 females to create Zmpste24 fl/+ mice.
Adipocyte-specific Zmpste24 knockout mice. To inactivate Zmpste24 in adipocytes, we bred Zmpste24 fl/fl mice harboring a Cre transgene driven by the adiponectin promoter (Adipoq-Cre) (24,25). Quantitative (q)RT-PCR studies revealed that the Adipoq-Cre transgene was expressed in adipose tissue but not in liver. Also, fluorescence microscopy studies on tissues of Rosa mT/mG transgenic mice (26) carrying the Adipoq-Cre revealed recombination in adipose tissue but not in kidney or peritoneal macrophages (supplemental Figure S1A-C). In Zmpste24 fl/fl Adipoq-Cre + mice, Zmpste24 transcript levels were reduced by ~50% in WAT ( Figure 1B) and ~70% in brown adipose tissue (BAT) (Figure 1C), whereas transcript levels were not altered in liver and kidney and reduced by only 9% in peritoneal macrophages (supplemental Figure S1D-F).
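Transcript reductions such as the ~50% decrease in WAT are typically derived from qRT-PCR Ct values with the comparative 2^−ΔΔCt method; the paper does not show this calculation, so the sketch below is illustrative, and the Ct values in the example are made up (chosen so the fold change comes out at 0.5, matching the reported ~50% reduction):

```python
def fold_change(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Relative expression by the comparative 2^-ddCt method.

    ct_target / ct_ref: Ct values for the gene of interest and a reference
    gene in the experimental sample; *_ctrl: the same pair in the control
    sample. Higher Ct means less starting template.
    """
    ddct = (ct_target - ct_ref) - (ct_target_ctrl - ct_ref_ctrl)
    return 2.0 ** (-ddct)

# Hypothetical Ct values: Zmpste24 in knockout WAT vs. control WAT,
# normalized to an assumed reference gene with Ct = 20 in both samples.
relative_expression = fold_change(26.0, 20.0, 25.0, 20.0)  # -> 0.5
```

The method assumes near-100% amplification efficiency for both the target and the reference gene; efficiency-corrected variants exist where that assumption does not hold.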
We were uncertain whether the levels of farnesyl-prelamin A accumulation in adipocytes of Zmpste24 fl/fl Adipoq-Cre + mice would be sufficient to elicit disease phenotypes. For that reason, we bred Zmpste24 fl/fl Adipoq-Cre + mice homozygous for the prelamin A-only allele (Lmna PLAO ) (14,15). All of the output from the Lmna PLAO allele is channeled into prelamin A (rather than into both lamin C and prelamin A) (14,15), resulting in an ~twofold increase in prelamin A expression.
We showed previously that prelamin A in Lmna PLAO/PLAO mice is fully processed to mature lamin A and that twice-normal amounts of lamin A have no effect on the vitality of mice or body weight (14,15). Also, the levels of farnesyl-prelamin A in Zmpste24–/– Lmna PLAO/PLAO mice are double those in Zmpste24–/– Lmna +/+ mice (15). Not surprisingly, the disease phenotypes in Zmpste24–/– Lmna PLAO/PLAO mice are more severe than those in Zmpste24–/– mice (15).
Western blots of BAT and WAT extracts from Lmna PLAO/PLAO Zmpste24 fl/fl Adipoq-Cre + mice revealed an accumulation of farnesyl-prelamin A (Figure 1D). There was no prelamin A accumulation in the liver of these mice. The accumulation of prelamin A in the WAT and BAT of Lmna PLAO/PLAO Zmpste24 fl/fl Adipoq-Cre + mice was located in adipocytes, as judged by immunohistochemistry (Figure 2). As expected, the prelamin A was located at the nuclear rim (Figure 2). No prelamin A accumulation was observed in littermate mice lacking the Adipoq-Cre transgene (Figure 2). Also, no prelamin A was detected in the endothelial cells of adipose tissue (supplemental Figure S2). Gonadal and inguinal WAT depot weights were significantly lower in male Lmna PLAO/PLAO Zmpste24 fl/fl Adipoq-Cre + mice than in littermate controls (Figure 4A-B). There was a trend for lower BAT weights in male mice, but the difference did not achieve statistical significance (Figure 4C) (P = 0.24). There were no differences in kidney weights (Figure 4D).
Fat pad weights in female Lmna PLAO/PLAO Zmpste24 fl/fl Adipoq-Cre + mice were no different from those in littermate controls (Figure 4A-C). The adipose tissue phenotypes in male Lmna PLAO/PLAO Zmpste24 fl/fl Adipoq-Cre + mice did not appear to be explained by changes in food consumption (supplemental Figure S3).
Because the severity of disease phenotypes in conventional Zmpste24-deficient mice depends on the level of prelamin A expression (5, 15), we hypothesized that the more prominent adipose tissue findings in male Lmna PLAO/PLAO Zmpste24 fl/fl Adipoq-Cre + mice might relate to greater amounts of farnesyl-prelamin A accumulation in adipose tissue. Indeed, as judged by western blotting, the prelamin A levels in BAT and WAT extracts were ~60% greater in male mice than in female mice (P < 0.02) (Figure 5A-B). The higher prelamin A protein levels in the male mice are likely due to increased expression of the Lmna gene. Prelamin A transcripts in adipose tissue were 46% higher in male mice than in female mice (P < 0.05) (Figure 5C). Prelamin A transcripts in the liver were similar in male and female mice ( Figure 5D). The modest decrease in adiposity in male Lmna PLAO/PLAO Zmpste24 fl/fl Adipoq-Cre + mice was not accompanied by perturbations in free fatty acid, triglyceride, or glucose levels (supplemental Figure S4).
We examined gene expression related to adipocyte differentiation, triglyceride metabolism, extracellular matrix synthesis, and the p53 pathway in the adipose tissue of Figure 6C); however, we found no differences in the macrophage content of adipose tissue in the two groups of mice ( Figure 6D). Consistent with these findings, we did not observe changes in the expression of macrophage-
Discussion
We used a newly developed Zmpste24 conditional knockout allele and the Adipoq-Cre transgene to create adipocyte-specific Zmpste24 knockout mice. Our goal was to examine the impact of farnesyl-prelamin A accumulation in adipocytes. We created adipocyte-specific Zmpste24 knockout mice that were homozygous for the Lmna PLAO allele, reasoning that twice-normal amounts of prelamin A expression would make farnesyl-prelamin A toxicity more pronounced and easier to detect.
Our a priori expectation was that we would encounter substantial loss of adipose tissue in Lmna PLAO/PLAO Zmpste24 fl/fl Adipoq-Cre + mice. We were initially puzzled by the fact that changes in adipose tissue mass were evident only in male Lmna PLAO/PLAO Zmpste24 fl/fl Adipoq-Cre + mice, but we uncovered a likely explanation. The expression of prelamin A transcripts was ~40% higher in male mice than in female mice, and by western blotting the level of prelamin A accumulation in adipose tissue was ~60% higher in male mice. These 40-60% differences are obviously not enormous, but it is important to note that modest differences in farnesyl-prelamin A accumulation can have a huge effect on disease phenotypes.
Reducing prelamin A expression levels by 50% in Zmpste24 -/- mice completely eliminates disease phenotypes (5), whereas doubling prelamin A expression levels with the Lmna PLAO allele markedly increases the severity of disease (15). However, we cannot exclude the contribution of other mechanisms. For example, differences in genetic background have been suggested to explain the more severe lipodystrophy in male R482Q-lamin A transgenic mice (28), whereas androgen synthesis has been suggested to account for the earlier onset of cardiomyopathy in male Lmna H222P/H222P mice (29).
The fact that a deficiency of ZMPSTE24 in adipocytes and the accompanying accumulation of farnesyl-prelamin A did not induce lipodystrophy and lipodystrophy-related metabolic abnormalities will likely raise doubts about the relevance of ZMPSTE24 inhibition to the lipodystrophy observed in patients treated with HIV-PIs (e.g., lopinavir). The fact that therapeutic concentrations of lopinavir bind to ZMPSTE24 (30) and inhibit ZMPSTE24 activity in cultured fibroblasts is well documented (11,12,31), but it is important to note that the level of inhibition is far from complete. Even in the presence of high levels of lopinavir, more than half of the prelamin A in fibroblasts is cleaved by ZMPSTE24 and processed to mature lamin A (11,12).
Also, it is not clear that the lopinavir-induced accumulation of farnesyl-prelamin A observed in cultured fibroblasts occurs in the tissues of patients. In one study (32), prelamin A was detected by western blot in the adipose tissue of patients undergoing treatment for HIV, but the amount of prelamin A, relative to mature lamin A, was extremely low. Another study failed to detect any prelamin A in leukocytes from HIV-PI-treated patients (33). Those observations, combined with
|
v3-fos-license
|
2021-08-17T13:18:44.828Z
|
2021-06-01T00:00:00.000
|
237099768
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1002/ccr3.4293",
"pdf_hash": "45b963680106fd493e35e683f84a6015b3657f41",
"pdf_src": "Wiley",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2360",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "45b963680106fd493e35e683f84a6015b3657f41",
"year": 2021
}
|
pes2o/s2orc
|
Two novel biallelic variants in TECPR2 and FA2H genes causing complicated hereditary spastic paraplegia in Iranian families from Lur ethnicity: Case series
We herein report the first Iranian families with spastic paraplegia 35 and 49 and claim that the TECPR2 gene causes complicated spastic paraplegia 49 with or without sensory autonomic neuropathy. In addition, we show how the coexistence of SPG49 and Griscelli syndrome can lead to misdiagnosis.
| INTRODUCTION
We herein report two Iranian Lur families with typical features of hereditary spastic paraplegia. Subsequent genomic and clinical analyses suggested novel pathogenic variants in the TECPR2 and FA2H genes, as well as additional clinical features.
Hereditary spastic paraplegias (HSPs) refer to a heterogeneous group of rare inherited neurodegenerative disorders. They are generally characterized by progressive and length-dependent degeneration of the distal retrograde axons of the corticospinal tracts (CST) and the posterior columns of the spinal cord. [1][2][3] Clinically, these conditions share the primary symptoms of progressive spasticity, hyperreflexia, and mild weakness of the lower limbs in the "pure" form. In the "complicated/complex" form, additional symptoms such as peripheral nerve involvement, extrapyramidal disturbances, cerebellar ataxia, polyneuropathy, cognitive impairment, optic atrophy, and seizures may be present. 2,4,5 So far, at least 76 clinical types of HSPs and around 80 corresponding genes with different patterns of inheritance have been reported. 1,2 Around 21 HSP-associated genes are involved in the autosomal recessive (AR) form of these disorders. One of the complicated HSPs with an AR inheritance pattern is spastic paraplegia 35 (SPG35, MIM 612 319), also known as fatty acid hydroxylase-associated neurodegeneration (FAHN). The condition is caused by pathogenic alterations in FA2H, located on chromosome 16q23.1. This gene encodes the endoplasmic reticulum (ER) enzyme fatty acid 2-hydroxylase. The enzyme is a membrane-bound protein with NADPH-dependent monooxygenase activity, converting free fatty acids to 2-hydroxy fatty acids (hFAs), which are subsequently incorporated into membrane sphingolipids as essential components of myelin. These compounds show a particular temporal expression pattern and are nonessential in early development, but are required as the individual matures.
[6][7][8][9] Considering the substantial role of the enzyme in the maintenance of the myelin sheath around neuronal axons, deficiency of its coding gene can manifest as diverse demyelinating phenotypes such as dysmyelination, leukodystrophy-associated cognitive decline, dysarthria, spastic paraparesis with or without dystonia, and neurodegeneration with brain iron accumulation (NBIA). [10][11][12][13][14] Although the frequency of FA2H mutations in patients with HSPs remains obscure, at least in Asia it is considered the second most common subtype of AR-HSP. 5 SPG35 has an early onset and shows heterogeneous neuroimaging patterns such as a variable degree of white matter lesions (WMLs), thin corpus callosum, cortical and cerebellar atrophy, and iron accumulation in the globus pallidus. 4,7,10,15 In addition, genetic alterations in TECPR2, tectonin beta-propeller repeat containing 2 (TECPR), have been reported in another complicated form of autosomal recessive HSP, called spastic paraplegia 49 (SPG49). 16 This gene encodes a protein that contains two main domain types, the tryptophan-aspartic acid repeat (WD repeat) and the TECPR domain, and plays a significant role in the autophagy process. [17][18][19] The disorder is characterized by developmental delay, generalized hypotonia, microcephaly, short stature, and dysmorphic facies. Affected individuals develop progressive spasticity in the lower body muscles. However, in 2016, three more affected individuals with additional autonomic-sensory neuropathy features were described. 20 Therefore, Heimer et al reclassified the disorder as a new subtype of hereditary sensory-autonomic neuropathy, adding controversy about the exact classification of this disorder. So far, only three pathogenic variants have been reported in TECPR2, implying that SPG49 is a rare genetic disorder with unknown frequency and a heterogeneous phenotype. 16,20
2 | CASE PRESENTATION
| Patient A
Two affected siblings (shown in Figure 1) from first-cousin Lur parents were referred to Madar Medical Genetics Center. The proband was a 3.7-year-old girl who was hospitalized immediately after delivery due to respiratory distress. Until age 1.5, she was repeatedly hospitalized because of recurrent respiratory infections and decreased consciousness, diagnosed as pneumonia. She had generalized hypotonia and moderate developmental delay, since she had not acquired ambulation by the time of the study; however, she was able to creep and crawl, and she could stand up only with physical aid. The proband achieved a limited level of communication, using a few words to call her parents. Physical examination revealed short stature, microcephaly, brachycephaly, and synostotic trigonocephaly. Other clinical features included failure to thrive, short neck, a dysmorphic, triangular face, severe strabismus, myopia, and amblyopia. The proband experienced three episodes of seizures following fevers of up to 40°C; phenobarbital was administered to control the seizures. She showed aggressive behavior and easy mood changes, injuring other infants by attacking them physically. Detailed clinical examination revealed specific facial dysmorphic features such as protruding ears, absent antihelical fold in the left ear, sparse eyebrows, bulbous nose, wide nasal bridge, flat nasal tip, and chubby cheeks. The auditory brainstem response (ABR) examination was normal at age 2. Furthermore, her parents complained about her increased appetite in addition to gastroesophageal reflux. Neurological examination showed extension of the hallux during the Babinski test, indicating upper motor neuropathy. However, unlike previously reported patients, no sign of sensory-autonomic neuropathy, including sleep disturbances, apnea, decreased pain sensitivity, or blood pressure abnormalities and arrhythmia, was identified at the time of the study.
Laboratory findings indicated elevated levels of platelets and serum glutamic oxaloacetic transaminase (SGOT). The proband's 2-year-old sister presented with clinical features similar to the proband's (Figure 1). Furthermore, the affected sisters had silver-gray hair and partial white patches on their skin. The younger sister's hair was completely silvery or gray in most parts, while the proband's hair was considerably darker. Similarly, white eyelashes and light irises were observed in both sisters. The proband had a family history of Griscelli appearance, since her father and two of her aunts and uncles presented with a similar appearance. Since both siblings had a Griscelli phenotype in addition to developmental delay and neurological regression, the disorder was mistakenly diagnosed as phenylketonuria (PKU).
| Patient B
An 11-year-old male with a history of gait difficulties, frequent falls, and clumsiness was recruited to this study. The proband (shown in Figure 2) was the first child born to healthy consanguineous parents from Lorestan province, Iran. The proband was born naturally, and the delivery was uneventful. However, the mother had a history of miscarriage, and she was pregnant with a male fetus at the time of the study. Childhood development was unremarkable, since the proband was able to hold his neck, sit, communicate, and walk at the expected times. By the age of 4 years, the first signs of the disorder appeared as lower limb spasticity and gait difficulties. The proband's toes became spastic, which led to fixed plantar flexion of the foot, indicating pes cavus. Subsequently, he developed motor difficulties including hyperreflexia, tremor, and ataxia. Strabismus and poor vision were also observed on eye examination. The rapid progression of the disorder resulted in loss of previously acquired developmental milestones, leading to mild-to-severe cognitive decline, intellectual disability, and progressive loss of ambulation. The family also complained about the proband's urinary urgency. Electromyography (EMG) and nerve conduction velocity (NCV) evaluations of skeletal muscles showed no evidence of myopathy or peripheral neuropathy at the age of five. Magnetic resonance imaging (MRI) of the brain and spinal cord disclosed mild-to-moderate abnormal signal intensity in the centrum semiovale, suggesting leukoencephalopathy or periventricular leukomalacia (PVL; shown in Figure 2). In this regard, the clinical findings suggested a spastic paraplegia disorder.
| Molecular analysis
Whole-exome sequencing (WES) was performed at Madar Medical Genetics Center, Khorramabad, Lorestan, Iran, to identify the HSP subtype and the corresponding genes, given the heterogeneous nature of the disorder. To this end, genomic DNA of the probands and their relatives was extracted from peripheral blood based on an established salting-out protocol. 21 Only the probands' samples were subjected to WES. For patient B, the SureSelect Human All Exon V6 Kit (Agilent Technologies Inc) was used to capture the exonic regions, and paired-end sequencing was carried out on an Illumina NextSeq (Illumina Inc). For patient A, the Human Core Exome Kit (Twist Bioscience) was used for this purpose. The Burrows-Wheeler Aligner 22 was used to align the sequence reads to the GRCh38 human reference genome. The GATK HaplotypeCaller 23,24 tool was used to call all variants within the target region, and annotation was performed using ANNOVAR. Next, all variants with an allele frequency above 0.01 in 1000 Genomes, gnomAD exome, or gnomAD genome were removed, and the remaining variants were prioritized according to bioinformatic predictions, inheritance pattern, and clinical information. Once the WES data were analyzed, cosegregation analysis of the disease-associated variant was performed to establish the inheritance pattern of the disorder. For this purpose, specific primers (Table 1) were designed using online tools such as Primer3 (https://primer3.ut.ee), OligoAnalyzer (https://eu.idtdna.com/calc/analyzer), and the Ensembl database (https://ensembl.org). After PCR amplification, Sanger sequencing was performed on an Applied Biosystems 3500 Genetic Analyzer, and the sequences were aligned to the reference genome with CodonCode Aligner (https://www.codoncode.com/aligner).
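The rare-variant filtering step described above (discard any variant with a population allele frequency above 0.01, then prioritize the remainder) can be sketched as follows. This is a minimal illustration; the variant records and field names are hypothetical stand-ins, not the actual ANNOVAR output schema:

```python
# Minimal sketch of the allele-frequency filter described in the text.
# Each record mimics an annotated variant; field names are illustrative.
variants = [
    {"gene": "TECPR2", "hgvs": "c.1568delC",
     "af_1000g": 0.0, "af_gnomad_exome": 0.0, "af_gnomad_genome": 0.0},
    {"gene": "COMMON1", "hgvs": "c.100A>G",
     "af_1000g": 0.12, "af_gnomad_exome": 0.10, "af_gnomad_genome": 0.11},
]

AF_CUTOFF = 0.01
AF_FIELDS = ("af_1000g", "af_gnomad_exome", "af_gnomad_genome")

def is_rare(variant):
    # Keep only variants rare in every population database queried.
    return all(variant[field] <= AF_CUTOFF for field in AF_FIELDS)

rare = [v for v in variants if is_rare(v)]
print([v["gene"] for v in rare])  # ['TECPR2']
```

The surviving variants would then be ranked manually by predicted effect, inheritance pattern, and clinical fit, as the text describes.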
| Patient A
In patient A, following WES, a frameshift variant defined as c.1568delC (NM_014844, 14:102434385 GRCh38, p.S523Ffs*12) in exon 9 of TECPR2 was identified. This variant was found neither in the gnomAD, 1000 Genomes, or Iranome databases nor in our local database of 300 Lur individuals, making it extremely rare. The c.1568delC variant is a single-nucleotide deletion of cytosine, leading to a premature stop codon only 11 amino acids after the deletion site (shown in Figure 1). TECPR2 encodes a 1411-amino-acid protein in its canonical isoform (NM_014844), which contains three WD and ten TECPR domains. As a consequence of this frameshift mutation, approximately 63% of the critical region of the original protein is eliminated after translation, leaving only 37%. In addition, another homozygous deletion, defined as c.1135_1136del (NM_024101, 2:237540378 GRCh38, p.D379Cfs*19), in exon 10 of MLPH was detected in the proband. This mutation explains the Griscelli appearance of the affected siblings, since mutations in this gene have been reported in Griscelli syndrome, type 3. Genotyping family members, including the mother and siblings, revealed that the affected sister harbored the c.1568delC variant in a homozygous state, while the mother and the healthy elder daughter of the family were carriers (shown in Figure 1). However, the father of the family was unavailable to participate in the study. As expected, the two affected siblings were homozygous for the c.1135_1136 deletion in MLPH, confirming Griscelli syndrome, type 3 in the family.

F I G U R E 2 Pedigree and molecular findings of family B. A, MRI imaging at age 5, showing very mild abnormal deep periventricular white matter (yellow arrows). B, Pedigree of the family; the proband is indicated by the arrow. C, Segregation of the mutation by Sanger sequencing in the proband's relatives. As seen in the sequence chromatograms, his parents (III-3 and III-4) are heterozygous for c.685_687del and the proband (IV-1, colored in red) is homozygous. D, The deletion of three nucleotides leads to a nonframeshift deletion of the highly conserved amino acid isoleucine at residue 229, as shown by the orthologous sequences (https://blast.ncbi.nlm.nih.gov/Blast.cgi). E, Schematic representation of the 7 exons of FA2H and its two highly conserved domains: the cytochrome b5-like heme-binding domain (residues 15-85) and the sterol desaturase domain (residues 124-366). The variant identified in this study is shown.

T A B L E 1 Primers used for cosegregation analysis (columns: Gene, Forward primer, Reverse primer).
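The ~37%/63% figures quoted for the c.1568delC (p.S523Ffs*12) variant follow directly from the protein length: residues 1-522 remain wild-type, serine 523 is the first altered residue, and translation terminates 12 codons into the shifted frame. A quick arithmetic check:

```python
PROTEIN_LENGTH = 1411    # canonical TECPR2 isoform (NM_014844)
FRAMESHIFT_START = 523   # p.S523Ffs*12: serine 523 is the first altered residue

intact = FRAMESHIFT_START - 1            # wild-type residues retained: 522
retained_frac = intact / PROTEIN_LENGTH  # fraction of the protein preserved

print(f"{retained_frac:.0%} retained, {1 - retained_frac:.0%} lost")
# 37% retained, 63% lost
```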
| Patient B
As a consequence of WES, a homozygous three-nucleotide deletion in exon 5 of FA2H was identified. The c.685_687delATC mutation is a nonframeshift deletion of nucleotides 685 to 687 (NM_024306), which leads to deletion of the nonpolar, uncharged amino acid isoleucine at residue 229. To determine the pattern of inheritance, the proband's relatives underwent cosegregation analysis. The results revealed the heterozygous status of the parents and the male fetus, suggesting an AR pattern of inheritance (shown in Figure 2). Since the pathogenicity of the mutation has not been reported in public gene variant databases, we applied several variant effect prediction tools to assess its potential pathogenicity. The predictors classified the mutation, according to ACMG criteria, as a likely pathogenic variant (http://www.varsome.com/). Our analysis also showed that the nucleotides and their corresponding amino acid are highly conserved among different species (up to Chrysochloris asiatica, considering 4 species; shown in Figure 2).
| DISCUSSION
In the current study, we identified two novel pathogenic homozygous deletions in TECPR2 and FA2H in two Iranian families diagnosed with complicated spastic paraplegia. Patient A and her sibling were homozygous for two frameshift deletions in the TECPR2 and MLPH genes, while patient B was homozygous for a nonframeshift mutation in FA2H. Regarding TECPR2, in 2012 a frameshift single-nucleotide deletion (c.3416delT) in exon 16 of TECPR2 was linked for the first time to a complicated form of HSP, named SPG49. 16 All five affected individuals in that study were from the same ethnic group and shared similar clinical features of generalized hypotonia, developmental delay, progressive spasticity of the lower body, dysmorphic features, and recurrent respiratory infections.
Because all individuals were from unrelated Bukharian Jewish families and harbored the same genetic variation, the c.3416delT mutation was considered a founder mutation in that specific ethnic group. No other TECPR2-related phenotype had been reported until 2016, when three affected individuals with c.C566T (p.Thr189Ile) and c.1319delT (p.Leu440Argfs*19) mutations in TECPR2 were described. 20 Despite having clinical features similar to those of previously reported individuals, all these patients showed additional clinical manifestations of autonomic neuropathy. The authors therefore concluded that this disorder should be classified as a form of HSAN rather than HSP, owing to the fact that none of the complicated forms of HSP shows autonomic neuropathy. Interestingly, the two affected siblings in the current study presented with clinical manifestations identical to those previously reported in the 2012 study, showing no sign of autonomic neuropathy, except for having a Griscelli appearance due to the c.1135_1136del alteration in MLPH. Moreover, detailed clinical and paraclinical examinations revealed additional symptoms, including elevated levels of platelets and SGOT in blood and a triangular face. All in all, because of the diverse multisystem signs and symptoms of this disorder, which affects autonomic neurons only in a limited number of individuals, we believe that it should still be classified as complicated SPG49 with or without sensory autonomic neuropathy. Another interesting point is that the coexistence of two monogenic disorders in an individual can sometimes mislead clinicians in diagnosis; in our study, we initially misdiagnosed the disorder as PKU in combination with immunodeficiency. TECPR2 plays a critical role in autophagy by associating with distinct cellular components such as COPII, SEC24D, HOPS, and BLOC-1. 19 The most likely role of TECPR2 is that it acts as an anchor point for multiple cellular components.
In one example, it stabilizes SEC24D to ensure the efficient function of the endoplasmic reticulum in exporting several secretory elements. In general, more functional studies are required to shed light on the exact mechanism of TECPR2 in neuronal cell development and maintenance. Turning to FA2H, the detected variant was predicted to be likely pathogenic, owing to the protein-changing effect of the deletion of the conserved amino acid isoleucine (p.I229del). FA2H contains seven exons and encodes an integral membrane enzyme of the smooth endoplasmic reticulum (ER), which catalyzes galactosylceramide and sulfatide hydroxylation in the myelin sheath. The enzyme contains two conserved domains. The first is the N-terminal cytochrome b5-like heme-binding domain (residues 15-85), responsible for the redox activity of the enzyme. The second is the sphingolipid fatty acid hydroxylase domain, spanning residues 124-366 in the C-terminus. The identified deletion is located in the latter domain. This domain, known as the sterol desaturase domain, is composed of a catalytic di-iron cluster and four transmembrane segments that anchor the enzyme to the ER membrane. 1,4,25 In this regard, the variant may interfere with the catalytic activity of the protein. In previous reports, several pathogenic alterations scattered along the sterol desaturase domain, including p.V149L, p.L130F, p.R235C, and p.H260Q, have been examined. The results indicated their pathogenic effects on the hydroxylase activity of FA2H: the enzyme activity of the p.V149L and p.R235C variants was 60%-80% each, that of p.L130F approximately 52%, and that of p.H260Q nearly 0%. 5,26,27 However, additional consequences of FA2H variants at the post-transcriptional level, such as the p.R154C variant, which reduces mRNA or protein stability, should not be excluded.
In such cases, western blotting for measuring protein abundance and enzyme-linked immunosorbent assay (ELISA) or gas chromatography-mass spectrometry for quantifying enzyme activity are applied. 10 Since we were unable to determine the enzyme activity and protein abundance in our patient, we predicted the variant's pathogenicity and its effect on protein structure in silico. Our findings suggested that the mutation is pathogenic and changes the helical structure of the enzyme to a moderate extent, proposing a possible role of the conserved amino acid isoleucine in catalytic activity. Further functional studies, in terms of patient cell culture or animal models, are necessary to confirm our findings.
Patients with SPG35 show a complicated genotype-phenotype correlation. Similar to the majority of cases with early onset spastic paraplegia (mean age of onset 5.76 ± 3.20 years; 36 of 38 patients 28 ), our proband showed early onset spastic paraplegia at age 2. Dystonia, with a high prevalence among SPG35 patients (89.7%; 26 of 29 patients), was also observed in the proband. SPG35 is considered a subtype of neurodegeneration with brain iron accumulation (NBIA). 10,29 According to some research, iron accumulation is not always present in all patients, even when the affected individuals share the same mutation. 10,11 In our proband's MRI, no sign of brain iron deposition was found. This suggests either that the iron-accumulation phenotype resulting from FA2H variation is variable among patients, or that T2 MRI is not a fully conclusive technique for detecting iron deposition. By contrast, progressive cognitive decline and epileptic seizures, typical symptoms of SPG35, are seen in almost 90% and 30% of patients, respectively. 1,30 In summary, WES analysis allowed us to identify two homozygous deletions, p.I229del and c.1568delC in FA2H and TECPR2, respectively, in two patients with symptoms of complicated spastic paraplegia. Nonetheless, functional studies, in terms of enzyme testing to assess enzyme activity in patients, as well as animal models, are required to validate this deduction. Further investigations are needed to precisely describe the range of phenotypes in SPG35 and SPG49 cases in Iran.
ACKNOWLEDGMENTS
The authors are grateful to the participants and their families for their participation and cooperation in this study.
|
v3-fos-license
|
2018-04-03T04:26:07.460Z
|
2013-08-15T00:00:00.000
|
54627283
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GREEN",
"oa_url": "https://europepmc.org/articles/pmc3734002?pdf=render",
"pdf_hash": "55589a0bd73abaaf557ef07986251c04ed812c7f",
"pdf_src": "ScienceParsePlus",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2361",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"sha1": "4aa227ff7469f011d59a4e62a7ac645b4d04fa4d",
"year": 2013
}
|
pes2o/s2orc
|
Developing a Reporting Guideline for Social and Psychological Intervention Trials
Social and psychological interventions are often complex. Understanding randomized controlled trials (RCTs) of these complex interventions requires a detailed description of the interventions tested and the methods used to evaluate them; however, RCT reports often omit, or inadequately report, this information. Incomplete and inaccurate reporting hinders the optimal use of research, wastes resources, and fails to meet ethical obligations to research participants and consumers. In this article, we explain how reporting guidelines have improved the quality of reports in medicine and describe the ongoing development of a new reporting guideline for RCTs: Consolidated Standards of Reporting Trials-SPI (an extension for social and psychological interventions). We invite readers to participate in the project by visiting our website, in order to help us reach the best-informed consensus on these guidelines (http://tinyurl.com/CONSORT-study).
Introduction
Social and psychological interventions aim to improve physical health, mental health, and associated social outcomes. They are often complex and typically involve multiple, interacting intervention components (e.g., several behavior change techniques) that may act and target outcomes on several levels (e.g., individual, family, and community; Medical Research Council [MRC], 2008). Moreover, these interventions may be contextually dependent upon the hard-to-control environments in which they are delivered (e.g., health care settings and correctional facilities; Bonell, 2002; Pawson, Greenhalgh, Harvey, & Walshe, 2004). The functions and processes of these interventions may be designed to accommodate particular individuals or contexts, taking on different forms while still aiming to achieve the same objective (Bonell, Fletcher, Morton, Lorenc, & Moore, 2012; Hawe, Shiell, & Riley, 2004).
Complex interventions are common in public health, psychology, education, social work, criminology, and related disciplines. For example, multisystemic therapy (MST) is an intensive intervention for juvenile offenders. Based on social ecological and family system theories, MST providers target a variety of individual, family, school, peer, neighborhood, and community influences on psychosocial and behavioral problems (Henggeler, Schoenwald, Rowland, & Cunningham, 2002). Treatment teams of professional therapists and caseworkers work with individuals, their families, and their peer groups to provide tailored services (Littell, Campbell, Green, & Toews, 2009). These services may be delivered in homes, social care, and community settings. Other examples of social and psychological interventions may be found in reviews by the Cochrane Collaboration (2013; e.g., the Developmental, Psychosocial, and Learning Problems Group; the Cochrane Public Health Group) and the Campbell Collaboration (2013).
To understand their effects and to keep services up to date, academics, policy makers, journalists, clinicians, and consumers rely on research reports of intervention studies in scientific journals. Such reports should explain the methods, including the design, delivery, uptake, and context of interventions, as well as subsequent results. Accurate, complete, and transparent reporting is essential for readers to make best use of new evidence, to achieve returns on research investment, to meet ethical obligations to research participants and consumers of interventions, and to minimize waste in research.
This article describes the development of a reporting guideline that aims to improve the quality of reports of RCTs of social and psychological interventions. We explain how reporting guidelines have improved the quality of reports in medicine, and why guidelines have not yet improved the quality of reports in other disciplines. We then introduce a plan to develop a new reporting guideline for RCTs-Consolidated Standards of Reporting Trials (CONSORT)-SPI (an extension for social and psychological interventions)-which will be written using recommended techniques for guideline development and dissemination (Moher, Schulz, Simera, & Altman, 2010). Wide stakeholder involvement and consensus are needed to create a useful, acceptable, and evidence-based guideline, so we hope to recruit stakeholders from multiple disciplines and professions.
Randomized trials are not the only rigorous method for evaluating interventions; many alternatives exist when RCTs are not possible or appropriate due to scientific, practical, and ethical concerns (Bonell et al., 2011). Nonetheless, RCTs are important to policy makers, practitioners, scientists, and service users, as they are generally considered the most valid and reliable research method for estimating the effectiveness of interventions (Chalmers, 2003). Moreover, many of the issues faced in reporting RCTs also relate to other evaluation designs. As a result, this project will focus on standards for RCTs, which could then also inform the development of future guidelines for other evaluation designs.
Impact of CONSORT Guidelines
Reporting guidelines list (in the form of a checklist) the minimum information required to understand the methods and results of studies. They do not prescribe research conduct, but facilitate the writing of transparent reports by authors and the appraisal of reports by research consumers. For example, the CONSORT Statement 2010 is an evidence-based guideline; to identify items, the developers reviewed evidence of trial design and conduct that could contribute to bias. Using consensus methods, they developed a checklist of 25 items and a flow diagram. CONSORT has improved the reporting of thousands of medical experiments (Turner et al., 2012). It has been endorsed by over 600 journals (Moher, Altman, Schulz, & Elbourne, 2004), and it is supported by the Institute of Education Sciences (Torgerson et al., 2005). CONSORT is the only guideline for reporting RCTs that has been developed with such rigor, and it has remained more prominent than any other guideline for over 15 years; for greatest impact, any further reporting guidelines related to RCTs should be developed in collaboration with the CONSORT Group.
Limitations of Previous Reporting Guidelines for Social and Psychological Interventions
Researchers and journal editors in the social and behavioral sciences are generally aware of CONSORT but often object that it is not fully appropriate for social and psychological interventions (Bonell et al., 2006; Davidson et al., 2003; Perry et al., 2010; Stinson et al., 2003). As a result, uptake of CONSORT guidelines in these disciplines is low. While some criticisms are due to inaccurate perceptions about common features of RCTs across disciplines, many relate to real limitations for social and psychological interventions (Mayo-Wilson, 2007). For example, CONSORT is most relevant to RCTs in medical disciplines; it was developed by biostatisticians and medical researchers with minimal input from experts in other disciplines. Journal editors, as well as social and behavioral science researchers, believe there is a need to include appropriate stakeholders in developing a new, targeted guideline to improve uptake in their disciplines (Gill, 2011; Torgerson et al., 2005). The CONSORT Group has produced extensions of the original CONSORT Statement relevant to social and psychological interventions, such as additional checklists for cluster (Campbell, Elbourne, & Altman, 2004), nonpharmacological (Boutron et al., 2008a), pragmatic (Zwarenstein et al., 2008), and quality of life RCTs (Calvert, Blazeby, Revicki, Moher, & Brundage, 2011). These extensions provide important insights, but complex social and psychological interventions, for example, include multiple, interacting components at several levels, with various outcomes. These RCTs require use of several extensions at once, creating a barrier to guideline uptake; increasing intervention complexity also gives rise to new issues that are not included in existing guidelines. Therefore, simply disseminating CONSORT guidelines as they stand is insufficient, as this would not address the need for editors and authors to "buy-in" to this process.
To improve uptake in these disciplines, CONSORT guidelines need to be extended to specifically address the important features of social and psychological interventions. Several reporting guidelines have been developed in the social and behavioral sciences; while they address issues not covered by the CONSORT Statement and its extensions, these guidelines (except for JARS; APA Publications and Communications Board Working Group on JARS, 2008) do not provide specific guidance for RCTs. Moreover, compared with the CONSORT Statement and its official extensions, guidelines in the social and behavioral sciences have not consistently followed the optimal techniques for guideline development and dissemination recommended by international leaders in the advancement of reporting guidelines, such as the use of systematic literature reviews and formal consensus methods to select reporting standards (Grant, Montgomery, & Mayo-Wilson, 2012). Researchers in public health, psychology, education, social work, and criminology have noted that these guidelines could be more "user-friendly," and dissemination could benefit from up-to-date knowledge transfer techniques (Abraham, 2009; Armstrong et al., 2008; Davidson et al., 2003; Naleppa & Cagle, 2010; Perry & Johnson, 2008; Stinson et al., 2003; Torgerson et al., 2005).
For example, JARS, a notable and valuable guideline for empirical psychological research, is endorsed by few journals outside of the APA, whereas CONSORT is endorsed by hundreds of journals internationally. According to ISI Web of Knowledge and Google Scholar citations, JARS is cited approximately a dozen times annually, while CONSORT guidelines are cited hundreds of times per year. Moreover, the APA commissioned a select group of APA journal editors and reviewers to develop JARS, and the group based most of their work on existing CONSORT guidelines; by comparison, official CONSORT extensions have been developed using rigorous consensus methods, have involved various international stakeholders in guideline development and dissemination, and update their content based on the most recent scientific literature. Nonetheless, no current CONSORT guideline adequately addresses the unique features of social and psychological interventions. This new CONSORT extension will incorporate lessons from previous extensions, reporting guidelines, and the research literature to aid the critical appraisal, replication, and uptake of this research.
Aspects of Internal Validity
Internal validity is the extent to which the results of a study may be influenced by bias. Like other study designs, the validity of RCTs depends on high-quality execution. Poorly conducted RCTs can produce more biased results than well-conducted RCTs and well-conducted nonrandomized studies (Pildal et al., 2007; Prescott et al., 1999). For example, evidence indicates that RCTs that do not adequately conceal the randomization sequence can exaggerate effect estimates by up to 30% (Schulz, Chalmers, Hayes, & Altman, 1995), while low-quality reports of these RCTs are associated with effect estimates exaggerated by up to 35% (Moher et al., 1999). Social and psychological intervention RCTs are susceptible to these risks of bias as well.
Some aspects of internal validity, although included in CONSORT, remain poorly reported, even in the least complex social and psychological intervention studies. Reports of RCTs should describe procedures for minimizing selection bias, but reports often omit information about random sequence generation and allocation concealment (Ladd, McCrady, Manuel, & Campbell, 2010; Perry & Johnson, 2008), and psychological journals report methods of sequence generation less frequently than medical journals. A review of educational reports found no studies that adequately reported allocation concealment (Torgerson et al., 2005), and reports in criminology often lack information about randomization procedures (Gill, 2011; Perry et al., 2010). RCTs of social and psychological interventions may also use nontraditional randomization techniques, such as stepped wedge or natural allocation (MRC, 2011), which need to be thoroughly described. In addition, reports of social and psychological intervention trials often fail to include details about trial registration, protocols, and adverse events (Ladd et al., 2010; Perry & Johnson, 2008), which may include important negative consequences at individual, familial, and community levels.
Other aspects of CONSORT may require greater emphasis or modification for RCTs of social and psychological interventions. In developing this CONSORT extension, we expect to identify new items and to adapt existing items that relate to internal validity. These may include items discussed during the development of previous CONSORT extensions or other guidelines, as well as items suggested by participants in this project. For example, it may not be possible to blind participants and providers of interventions, but blinding of outcome assessors is often possible yet rarely reported, and few studies explain if blinding was maintained or how lack of blinding was handled (Ladd et al., 2010; Perry & Johnson, 2008). In social and psychological intervention studies, outcome measures are often subjective, variables may relate to latent constructs, and information may come from multiple sources (e.g., participants and providers). While this is an issue in other areas of research as well, the influence of the quality of subjective outcome measures on RCT results has long been highlighted in social and psychological intervention research, given their prevalence in that field (Marshall et al., 2000). Descriptions of the validity, reliability, and psychometric properties of such measures are therefore particularly useful for social and psychological intervention trials, especially when they are not widely available or discussed in the research literature (Campbell et al., 2004; Fraser, Galinsky, Richman, & Day, 2009). Moreover, multiple measures may be analyzed in several ways, so authors need to transparently report which procedures were performed and to explain their rationale.
Aspects of External Validity
External validity is the extent to which a study's results are applicable in other settings or populations. Currently, given that RCTs are primarily designed to increase the internal validity of study findings, the CONSORT Statement gives relatively little attention to external validity. While high internal validity is an important precondition for any discussion of an RCT's external validity, updating the CONSORT Statement to include more information about external validity is critical for the relevance and uptake of a CONSORT extension for social and psychological interventions. These interventions may be influenced by context, as different underlying social, institutional, psychological, and physical structures may yield different causal and probabilistic relations between interventions and observed outcomes. Contextual information is necessary to compare the effectiveness of an intervention across time and place (Cartwright & Munro, 2010). Lack of information relevant to external validity may prevent practitioners or policy makers from using evidence appropriately to inform decision making; yet, existing guidelines do not adequately explain how authors should describe (a) how interventions work, (b) for whom, and (c) under what conditions (Moore & Moore, 2011).
First, it is useful for authors to explain the key components of interventions, how those components could be delivered, and how they relate to the outcomes selected. At present, authors can follow current standards for reporting interventions without providing adequate details about complex interventions (Shepperd et al., 2009). Many reports neither contain sufficient information about the interventions tested nor reference treatment manuals (Glasziou, Meats, Heneghan, & Shepperd, 2008). Providing logic models, as described in the MRC Framework for Complex Interventions (Craig et al., 2008), or presenting theories of change can help elucidate links in causal chains that can be tested, identify important mediators and moderators, and facilitate syntheses in reviews (Ivers et al., 2012). Moreover, interventions are rarely implemented exactly as designed, and complex interventions may be designed to be implemented with some flexibility, in order to accommodate differences across participants (Hawe et al., 2004), so it is important to report how interventions were actually delivered by providers and actually received by participants (Hardeman et al., 2008). Particularly for social and psychological interventions, the integrity of implementing the intended functions and processes of the intervention is essential to understand (Hawe et al., 2004). As RCTs of a particular intervention can yield different relative effects depending on the nature of the control groups, information about delivery and uptake should be provided for all trial arms (McGrath, Stinson, & Davidson, 2003).
Second, reports should describe recruitment processes and representativeness of samples. Participants in RCTs of social and psychological intervention are often recruited outside of routine practice settings via processes that differ from routine services (AERA, 2006). An intervention that works for one group of people may not work for people living in different cultures or physical spaces, or it may not work for people with slightly different problems and comorbidities. Enrolling in an RCT can be a complex process that affects the measured and unmeasured characteristics of participants, and recruitment may differ from how users normally access interventions.
Well-described RCT reports will include the characteristics of all participants (volunteers, those who enrolled, and those who completed) in sufficient detail for readers to assess the comparability of the study sample to populations in everyday services (AERA, 2006; APA Publications and Communications Board Working Group on JARS, 2008; Evans & Brown, 2003).

Finally, given that these interventions often occur in social environments, reports should describe factors of the RCT context that are believed to support, attenuate, or frustrate observed effects (Moore, 2002). Interventions may differ across groups of different social or socioeconomic positions, and equity considerations should be addressed explicitly (Tugwell et al., 2010; Welch et al., 2012). Several aspects of setting and implementation may be important to consider, such as administrative support, staff training and supervision, organizational resources, the wider service system, and concurrent political or social events (Bonell et al., 2012; Fixsen, Naoom, Blase, Friedman, & Wallace, 2005; Shepperd et al., 2009; Wang, Moss, & Hiller, 2006). Reporting process evaluations may help readers understand mechanisms and outcomes.
Developing a New CONSORT Extension
This new reporting guideline for RCTs of social and psychological interventions will be an official extension of the CONSORT Statement. Optimally, it will help improve the reporting of these studies. Like other official CONSORT extensions (Boutron et al., 2008a;Campbell et al., 2004;Hopewell et al., 2008;Zwarenstein et al., 2008), this guideline will be integrated with the CONSORT Statement and previous extensions, and updates of the CONSORT Statement may incorporate references to this extension.
The project is being led by an international collaboration of researchers, methodologists, guideline developers, funders, service providers, journal editors, and consumer advocacy groups. We will be recruiting participants in a manner similar to other reporting guideline initiatives-identifying stakeholders through literature reviews, the project's International Advisory Group, and stakeholder-initiated interest in the project (Michie et al., 2011;Schulz et al., 2010). We hope to recruit stakeholders with expertise from all related disciplines and regions of the world, including low-and middle-income countries. Methodologists will identify items that relate to known sources of bias, and they will identify items that facilitate systematic reviews and research synthesis. Funders will consider how the guideline can aid the assessment of grant applications for RCTs and methodological innovations in intervention evaluation. Practitioners will identify information that can aid decision making. Journal editors will identify practical steps to implement the guideline and to ensure uptake.
We will use consensus techniques to reduce bias in group decision making and to promote widespread guideline uptake and knowledge translation activities upon project completion (Murphy et al., 1998). Following rigorous reviews of existing guidelines and current reporting quality, we will conduct an online Delphi process to identify a prioritized list of reporting items to consider for the extension. That is, we will invite a group of experts to electronically answer questions about reporting items and to suggest further questions. We will circulate their feedback to the group and ask a second round of questions. The Delphi process will capture a variety of international perspectives and allow participants to share their views anonymously. Following the Delphi process, we will host a consensus meeting to review the findings and to generate a list of minimal reporting standards, mirroring the development of previous CONSORT guidelines (Boutron et al., 2008b;Schulz et al., 2010;Zwarenstein et al., 2008).
Together, participants in this process will create a checklist of reporting items and a flowchart for reporting social and psychological intervention RCTs. In addition, we will develop an Explanation and Elaboration (E&E) document to explain the scientific rationale for each recommendation and to provide examples of clear reporting; a similar document was developed by the CONSORT group to help disseminate a better understanding for each included checklist item (Moher, Hopewell, et al., 2010). This document will help persuade editors, authors, and funders of the importance of the guideline. It will be a useful pedagogical tool, helping students and researchers understand the methods for conducting RCTs of social and psychological interventions, and it will help authors meet the guideline requirements .
The success of this project depends on widespread involvement and agreement among key international stakeholders in research, policy, and practice. For example, previous developers have obtained guideline endorsement by journal editors who require authors and peer reviewers to use the guideline during article submission and who must enforce journal article word limits (Michie, Fixsen, Grimshaw, & Eccles, 2009). Many journal editors have already agreed to participate, and we hope other researchers and stakeholders will volunteer their time and expertise.
Conclusion
Reporting guidelines help us use scarce resources efficiently and ethically. RCTs are expensive, and the public have a right to expect returns on their investments through transparent, usable reports. When RCT reports cannot be used (for whatever reason), resources are wasted. Participants contribute their time and put themselves at risk of harm to generate evidence that will help others, and researchers should disseminate that information effectively. Policy makers benefit from research when developing effective, affordable standards of practice and choosing which programs and services to fund. Administrators and managers are required to make contextually appropriate decisions. Transparent reporting of primary studies is essential for their inclusion in systematic reviews that inform these activities. For example, reviewers need to determine whether primary studies are comparable, examine biases within included studies, assess the generalizability of results, and implement effective interventions. Finally, we hope this guideline will reduce the effort and time required for authors to write reports of RCTs.
RCTs are not the only valid method for evaluating interventions (Bonell et al., 2011) nor are they the only type of research that would benefit from better reporting (Goldbeck & Vitiello, 2011). Colleagues have identified the importance of reporting standards for other types of research, including observational (von Elm et al., 2007), quasi-experimental (Des Jarlais, Lyles, Crepaz, & the TREND Group, 2004), and qualitative studies (Tong, Sainsbury, & Craig, 2007). This guideline is the first step toward improving reports of many designs for evaluating social and psychological interventions, which we hope will be addressed by this and future projects. We invite stakeholders from disciplines that frequently research these interventions to join this important effort and participate in guideline development by visiting our website, where they can find more information about the project, updates on its progress, and sign up to be involved (http://tinyurl.com/CONSORT-study).
conceived of the idea for the project. All authors helped to draft the article, and all have read and approved the final article.
On Farm Demonstration and Evaluation of Synthetic Insecticides for the Control of Pod Borer (Helicoverpa armigera Hubner) on Chickpea in Bale Zone
Abstract: Chickpea (Cicer arietinum L.) is the world's second most important grain legume after common bean (Phaseolus vulgaris L.) among food legumes grown worldwide. Ethiopia is considered a secondary center of genetic diversity for chickpea. A field experiment was conducted in two districts with the objectives of demonstrating different insecticides for the control of pod borer on chickpea and raising awareness of the use and effectiveness of insecticides against pod borer on chickpea. The experiment used one chickpea variety, Habru (the most preferred), and two insecticides, Diazinon (1.2 L/ha) and Karate (400 mL/ha). The results revealed that both insecticides were effective against pod borer, with only slight differences in percent larval reduction between the two districts. Pod borer damage reduction by the treatments ranged from 71.87% to 90.63% and from 58.33% to 66.66% relative to the control at Ginir and Goro, respectively. Diazinon gave the maximum seed yields of 2,610 kg/ha and 2,200 kg/ha at Ginir and Goro, respectively. The plot sprayed with Diazinon gave the maximum net returns of ETB 75,348/ha and ETB 61,120/ha at Ginir and Goro, respectively. These insecticides are therefore recommended to growers for keeping pod borer populations below the economic threshold level under field conditions.
Background and Justifications
Chickpea (Cicer arietinum L.) is the world's second most important grain legume after common bean (Phaseolus vulgaris L.) among food legumes grown worldwide [2]. Ethiopia is considered a secondary center of genetic diversity for chickpea, and the wild relative of cultivated chickpea (Cicer arietinum L.) is found in the Tigray region of Ethiopia [13, 6]. The average chickpea yield on farmers' fields in Ethiopia is usually below 1 t/ha, although its potential is more than 5 t/ha [4, 8]. This results from the susceptibility of landraces to frost, drought, and waterlogging; poor cultural practices; and low protection against weeds, diseases, and insect pests [12, 3]. Chickpea is susceptible to a number of insect pests, which attack the roots, foliage, and pods. Gram pod borer (Helicoverpa armigera H.) is one of the major insect pests of chickpea and has great economic importance [1]. It is a highly polyphagous insect feeding on many other crops such as cotton, tobacco, safflower, tomato, maize, cabbage, peanuts, and pulses [9, 5]. Chickpea pod borer (Helicoverpa armigera Hubner) (Lepidoptera: Noctuidae) is a major field insect pest affecting pulses in several agro-ecological zones. A single larva can damage 40 pods and selectively feeds on the growing points and reproductive parts of the host plant. It feeds on floral buds, flowers, and young pods of the growing crop [7]. There is a high infestation of pod borer on chickpea, field pea, and lentil in three woredas of Bale Zone, namely Goro, Ginnir, and Golelcha. Farmers try to protect their crops from these pests by spraying different insecticides purchased from local pesticide dealers and farmers' unions. Chemical control is still considered the last resort for its management owing to its quick, well-known effect [10]. However, wise use of insecticides is the need of the time to avoid their drastic side effects on the environment and on natural biocontrol agents [11].
Farmers are therefore asking for effective insecticides for the management of pod borer, as well as guidance on application frequency. Most farmers have limited information on the use of insecticides on pulse crops. To alleviate this limitation, the activity was initiated with the following objectives:

1. To demonstrate different insecticides for the control of pod borer on chickpea.
2. To raise awareness of the use and effectiveness of insecticides against pod borer on chickpea.
Description of Study Area
The experiment was carried out on farmers' fields at two locations, Ginnir and Goro districts, during the 2017-2018 crop season. Both locations are favorable for the natural occurrence of pod borer every year. Ginir is located at 907-2,524 meters above sea level, receiving a mean annual rainfall of 612-1,214 mm with a mean annual temperature of 11.31-24.72°C. Goro is located at 1,272-3,275 meters above sea level, receiving a mean annual rainfall of 796-1,138 mm with a mean annual temperature of 12.93-22.59°C (Adamu Zeleke, unpublished survey). Goro is characterized by a Chromic Cambisols soil type and Ginir by a Pellic Vertisols soil type.
Treatments and Experimental Design
The experiment was conducted using one chickpea variety, Habru (the most preferred). Two insecticides, Diazinon (1.2 L/ha) and Karate (400 mL/ha), were used in the experiment. The experiment was laid out unreplicated with three plots.
1. Plot one: Diazinon-sprayed plot
2. Plot two: Karate-sprayed plot
3. Plot three: Control (unsprayed plot)

Each plot had a size of 100 m² (10 m × 10 m). Normal agronomic practices were followed for raising the crop. The insecticides were applied starting from the appearance of the insects.
Data on pod borer population before and after insecticide application were recorded from 5 randomly selected plants in each treatment after the emergence of the pod borer. The number of larvae per plant on the 5 randomly selected plants in each plot was recorded before and after the first spray of insecticides. The percent reduction in larval population was computed relative to the untreated check.
Farmers Selection and Evaluations
Farmers participated in the evaluation of the insecticides against chickpea pod borer. Selection and evaluation were based on the farmers' interests in and motivation toward the technology.
Results and Discussions
Data collected on the comparative efficacy of the two insecticides tested for the management of pod borer on chickpea are presented in the tables.
Larval Population
Five plants were randomly selected from each plot and observations were recorded at 7-day intervals. The results revealed that both insecticides were effective against pod borer, although their percent larval reductions differed at the two locations. At Ginir, the data summarized in Table 1 reveal that the pest population of Helicoverpa armigera ranged from 1.6 to 3.4 larvae per plant before spray and from 0.3 to 3.2 after spray during the season, indicating that the pest was active during December. This period coincided with the flowering and pod formation stage of the crop. Pod borer damage reduction by the treatments ranged from 71.87% to 90.63% relative to the control at Ginir. The highest larval reduction (90.63%) was found in the Diazinon-sprayed plot, followed by the Karate 5% EC plot (71.87%). At Goro, the results likewise showed that both insecticides were effective against pod borer, with differing percent larval reductions. The data summarized in Table 1 reveal that the pest population of Helicoverpa armigera ranged from 1.3 to 3.6 larvae per plant before spray and from 0.8 to 2.4 after spray during the season. Pod borer damage reduction by the treatments ranged from 58.33% to 66.66% relative to the control at Goro. The highest larval reduction (66.66%) was found in the Diazinon-sprayed plot, followed by the Karate 5% EC plot (58.33%).
Grain Yields of Chickpea
The seed yields (kg/ha) and percent increase over the check are presented in Table 2. At Ginir, Diazinon gave the maximum seed yield of 2,610 kg/ha, followed by Karate 5% EC with 1,800 kg/ha, whereas the minimum seed yield of 820 kg/ha was recorded on the unsprayed plot. The maximum percent yield increase over the check (68.58%) was obtained with Diazinon, and the second highest (54.44%) with Karate 5% EC. At Goro, Diazinon again gave the maximum seed yield of 2,200 kg/ha, followed by Karate 5% EC with 1,600 kg/ha, whereas the minimum seed yield of 600 kg/ha was recorded on the unsprayed plot. The maximum percent yield increase over the check (72.73%) was obtained with Diazinon, and the second highest (62.5%) with Karate 5% EC.
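The percent figures above all follow the same over-check arithmetic. A minimal sketch reproducing them (the assignment of the post-spray larval range endpoints to the control and Diazinon plots is an assumption inferred from the text, not stated explicitly):

```python
def reduction_over_check(check, treated):
    # percent reduction in larvae per plant relative to the untreated check
    return (check - treated) / check * 100

def yield_increase_over_check(treated, check):
    # percent of the treated yield gained over the untreated check
    return (treated - check) / treated * 100

# Larvae per plant after spray (assumed: range endpoints = control vs Diazinon)
print(f"Ginir reduction: {reduction_over_check(3.2, 0.3):.2f}%")  # reported 90.63%
print(f"Goro reduction:  {reduction_over_check(2.4, 0.8):.2f}%")  # reported 66.66%

# Seed yields in kg/ha, Diazinon-sprayed vs unsprayed check
print(f"Ginir yield gain: {yield_increase_over_check(2610, 820):.2f}%")  # reported 68.58%
print(f"Goro yield gain:  {yield_increase_over_check(2200, 600):.2f}%")  # reported 72.73%
```

Note that the yield increase is expressed as a share of the treated yield, not of the check, which is why 2,610 vs 820 kg/ha reads as a 68.58% increase rather than more than 200%.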
Return and Benefit Cost Ratio
At Ginir, the Diazinon-sprayed plot provided the highest gross return (ETB 91,350/ha), while the lowest gross return (ETB 28,700/ha) was computed for the untreated check. The plot sprayed with Diazinon gave the maximum net return of ETB 75,348/ha and the highest benefit-cost ratio (4.7).
The unsprayed plot gave the minimum net return of ETB 15,054/ha and the lowest benefit-cost ratio (1.10). At Goro district, the Diazinon-sprayed plot likewise provided the highest gross return (ETB 77,000/ha), while the lowest gross return (ETB 21,000/ha) was computed for the untreated check.
The plot sprayed with Diazinon gave the maximum net return of ETB 61,120/ha and the highest benefit-cost ratio (3.85). The unsprayed plot gave the minimum net return of ETB 7,420/ha and the lowest benefit-cost ratio (0.55).
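The economic figures are internally consistent under the standard definitions: the implied production cost is the gap between gross and net return, and the benefit-cost ratio is net return divided by that cost. A quick sketch (the per-hectare costs are back-computed from the reported returns, not stated in the text):

```python
def benefit_cost_ratio(gross, net):
    # implied production cost is the gap between gross and net return (ETB/ha)
    cost = gross - net
    return net / cost

print(round(benefit_cost_ratio(91350, 75348), 2))  # Ginir, Diazinon: reported 4.7
print(round(benefit_cost_ratio(28700, 15054), 2))  # Ginir, unsprayed: reported 1.10
print(round(benefit_cost_ratio(77000, 61120), 2))  # Goro, Diazinon: reported 3.85
print(round(benefit_cost_ratio(21000, 7420), 2))   # Goro, unsprayed: reported 0.55
```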
Farmers' Perceptions
About 54 farmers participated in the evaluation and selection of the insecticides at Goro, and 56 farmers participated at Ginir. At both locations, the farmers selected the Diazinon-sprayed plot as their first choice and the Karate-sprayed plot as their second choice. During evaluation and selection, farmers mostly considered the number of damaged pods per plot. Accordingly, they noted that the plot with no insecticide application was more damaged by the larvae than the treated plots. To avoid bias during evaluation and selection, the farmers were given no indication of which plots were sprayed or unsprayed; they simply observed the condition of the plots.
Conclusion and Recommendations
The results revealed that Diazinon and Karate 5% EC were highly effective insecticides, producing high mortality of pod borer on chickpea under field conditions. The greatest economic benefit from pod borer management was obtained from the Diazinon-sprayed plot, followed by the Karate-sprayed plot. The present studies indicate that the insecticides Diazinon and Karate remained the most effective against pod borer on chickpea and resulted in the maximum percent reduction of the larval population, with only a slight difference in efficacy between the two locations. Farmers can use both insecticides for the management of pod borer on chickpea, and either one can be used in the absence of the other as an alternative to increase productivity and quality.
Therefore, these effective insecticides are recommended to growers and other stakeholders for managing pod borer populations below the economic threshold level under field conditions.
|
Legal Translaboration for Effective Intercultural Communication
The European-inspired bilingualism and bi-legal system in Cameroon form an unusual profile that may be of interest to the European Union (EU) in its quest to preserve intercultural processes through translation. The Organization for the Harmonization of Business Law in Africa (OHADA), of which Cameroon is a member, is affiliated only with Civil Law. However, the two legal systems employed in Cameroon (where both Civil Law and Common Law are used) rest on a balance of conceptual, epistemic, and stylistic representation. Intercultural dysfunction is the consequence of a lack of methodology in legal translation. Collaboration between legal translators and practitioners is key to adopting an agreed-upon model in a multilingual setting.
realization of collective identity avowed by a legal community in the course of its sociohistorical journey. In this regard, literal translation and the systematic search for terminological/conceptual equivalence are detrimental to the role of legal translators as cultural actors and intercultural negotiators, because these strategies superimpose an epistemic standpoint on a minority culture. Collaboration among legal translators appears essential to secure effective intercultural communication.
Von Neumann's game theory
Legal translation is of major importance as it encompasses a cross-cultural and territorial range of interests (historical, sociocultural, normative, commercial) while being, nevertheless, institutionalized. 11 This painstaking task carried out by legal translators rests on a set of methodological techniques tailored to the complexity of multi-legal, multilingual and multicultural areas like the European Union, or of countries faced with high-stakes identity issues like Cameroon and South Africa. 12 In this regard, the minimax strategy, the lodestar of the game theory proposed by von Neumann, can be of relevance in translation. 13 It can be implemented through collaboration between legal stakeholders in order to reach a consensus on the intercultural diversity to be reflected in legal translation. The legal system and the linguistic diversity of Cameroon are both a legacy and an epitome of European countries.
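Since the argument leans on this notion, it may help to recall the standard statement of the minimax theorem (a textbook formulation, not taken from the article itself): in a finite two-player zero-sum game with payoff matrix A, when both sides may randomize over their strategies, the guaranteed outcome of the cautious maximizer coincides with that of the cautious minimizer.

```latex
% Von Neumann's minimax theorem for a finite zero-sum game.
% \Delta_m and \Delta_n denote the sets of mixed (probability)
% strategies of the two players; A is the m x n payoff matrix.
\[
  \max_{x \in \Delta_m} \; \min_{y \in \Delta_n} \; x^{\top} A\, y
  \;=\;
  \min_{y \in \Delta_n} \; \max_{x \in \Delta_m} \; x^{\top} A\, y
\]
```

Read as a metaphor for translation, each party limits its worst-case loss, and the common value is the negotiated consensus the author asks legal translators and practitioners to seek.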
In fact, the link between language and culture is well-known in the fields of cultural studies, linguistics and intercultural communication. 14 In line with Durkheim's views on the coercive aspects of culture, Sinha 15 , Moro and N. Muller support the idea that culture is endowed with a deontic and restrictive epistemic truth framing the mind and engaging the community to perceive the world through filtered lenses. Law is indeed the immaterial substance connecting members of a community and regulating their action-perception model in the course of their daily experience. Therefore, law is the cornerstone undergirding the sociocultural model implemented in a community at a specific time. Language is an important lever for intercultural communication, as immersion in a material world calls for material references expressing the cognitive truth implicitly shared by cultural insiders. In this regard, language is an identity displayer or a sociocultural manifesto expressing specific truths. Translation, in particular legal translation, requires an immersion into the Other's framework of references and ethnographic standards of representation and communication. Gémar suggests that one should resort to a specific field commonly referred to as jurilinguistics to secure sociocultural symmetry and identity convergence through an apt management of semantic tools (terms, notions, concepts) in the target text. 17 In this regard, Garzone states that: "[…] the distinctive quality of the language of law, which marks it off from ordinary language and makes it a case apart in the field of special languages, has been recognized and legal translation is no longer regarded simply as a particular case within the general framework of LSP [Language for Specialized Purpose] texts. A certain reluctance has emerged to accept the application of a general translation theory to include the translation of legal texts." 18
Actors molded within the framework of specific communities anchored on geographical territories are world citizens taking part in the circulation and mobility of ideas and paradigms undergirding the mental infrastructure which is home to the global mind. However, different geographical settings and historical routes direct communities to represent the shared experiential truth using ever-specific artefactual resources and ethnographic conventions. The representation of sociocultural models is evidenced in the process of (legal) translation. Legal translation, especially in intercultural and multilingual spheres, is sometimes a process whereby the majority holding power imposes its aspectual truth (textual, semantic representation) over the Other in order to drive him out from the conscience of commonness/sameness, pursuing a sociocultural and economic agenda through cultural alienation. The legal translation of the OHADA uniform acts demonstrates the attempt of civil law actors (legal translators), who hold the majority position in Cameroon, to impose a representational model. Martin & Nakayama support the view that "translation is more than merely switching […]". Methodology is the result of a specific negotiation aimed at finding a consensus. Without ignoring the cultural aspect, people can negotiate in order to attain a commonsensical view of reality. Linguistic aspects are the surface structure representing communities in what makes their specificity and identity. In the domain of legal translation, textual elements (terms, notions, concepts and syntax) are the elements pertaining to the community's surface structure. These elements are the result of the cultural appropriation of the shared conceptual substance at the very basis of universal culture. Being acquainted with the Other's culture is therefore of major importance for identity convergence.
Collaboration among legal translators emerging from distinctive cultural settings is crucial in order to capture different (legal) identities.
Critical approach to intercultural communication
Being aware of the multiple implications of the ethnographic imprint of languages, especially in multicultural and multilingual areas, has extended the epistemic scope of translation beyond the borders of linguistic territorialities, reaching the sphere of the sociopolitical arena. 20 The institutionalization of multilingualism and multiculturalism is most of the time the result of campaigns led by minority groups for identity recognition, be it at the local, national or transnational level. The clash of cultural perspectives seems unavoidable between majority group members, who release a set of references extracted from the shrine of history and diffused through a wide array of identity channels such as language and text, and minority groups endowed with an ecocultural truth running counter to the prevailing literature spread by institutions. As the receptacle of experience and of ecocultural and historical truth, language is indeed an ontological and identity displayer in the face of alterity. It is also a facsimile of the sociocultural and political model in force in specific/local territories. The contact between the languages of various communities results in the triumph of one over the Other, the latter tending to be disregarded by institutions.
The experience of African countries, especially of countries where a colonial master overpowered the Other, is emblematic of attempts at homogenizing the cognitive truth nationwide through linguistic and translation policies adopted by institutions. Krog et al. tell the story of a tug-of-war between languages in South Africa resulting in the victory of English. 21 Social resistance against the homogenization of experience and epistemic truths calls for the intervention of specific actors apt to achieve betweenness. In this regard, Krog et al. support the view that "translation might mediate […] as a form of reconciliation in which the periphery talks to the center as well as the center to the periphery and through which all languages are enriched as a result". 22 Legal language/text is an offshoot of a specific culture.
The legal translation of the OHADA uniform acts is an epitome of ideological annexation and sociocultural resistance. As the verbalization of experience and of historically constructed anthropological truth, the legal text and the semantic tools (terms and concepts) contained therein are the avatar of identity. In this regard, Krog supports the idea that "every term in translation has an ideological freight" 23 . Ideologies are nurtured on a political ground over time. The historical facsimile reveals the central position occupied by civil law with regard to the conceptualization of law. French is considered the language of law in Cameroon. The translation workload in the administration is carried out most of the time from French into English, usually for informative purposes. This tendency is observed in legal translation strategies that merely draw correspondences between the two epistemic standpoints. Schweda Nicholson suggests that "Common Law and civil law are quite different in their approaches." 24 In this regard, legal translators should engage in an overarching reconciliation process ranging from the epistemic to the sociocultural and historical level.
Recognition of historically constructed identity representation, including language, is an underlying element of the concept of power. Terms, notions, concepts and the language structure are items endowed with both prescriptive and anthropologizing properties, unveiling sociocultural norms and directing individuals towards an epistemic source of action and perception. 25 The representation of the legal system is a case in point. The country is home to competing sociocultural and normative models enshrined in legal systems. In this regard, Pelage's viewpoint is the following: "Ne correspondant à aucune notion connue de nous, les termes du droit anglais sont intraduisibles dans nos langues comme sont les termes de la faune et de la flore d'un autre climat. On en dénature le sens le plus souvent quand on veut, coûte que coûte, le traduire." 26 [Corresponding to no notion known to us, the terms of English law are as untranslatable into our languages as the terms of the fauna and flora of another climate. Their meaning is most often distorted when one insists on translating them at all costs.] Indeed, civil law and Common Law are legal systems expressing antinomic social norms. The methodology used in legal translation is therefore of major importance to bridge the legal divide. The legal text is an anthropological field where experiences are verbalized through specific symbols canonized by the community. Terms, notions, concepts and syntactical conventions are the sociocultural avatar of reality, taking part in an institutional game where no cultural actor is supposed to take the lead over the Other. Textual correspondence in legal translation is a tacit institutionalization/standardization of the sociocultural hegemony of the source language over the target culture. Each semantic item (term, notion, concept) translated following a nominalist approach is tantamount to advancing a pawn on a chessboard without allowing the Other's camp to play, i.e., to communicate its ontological specificity. The legal translation of OHADA is an emblematic example of the irrelevance of what may commonly be referred to as a (bi)lingual approach to legal translation.
Indeed, languages are the representation of distinctive ontologies/identities molded within the framework of cultural references. Translation plays the role of a watershed in the cultural divide, each culture using (legal) language as a megaphone for its specificities. Edwards talks about the lack of betweenness in bilingualism. 27 Legal language in OHADA represents the anthropological model nurtured by communities in a specific geographical area in the course of specific trade activities. In the case of Cameroon, Common Law and civil law are the legal systems covering, respectively, the anglophone regions, which form a minority, and the francophone regions, which represent the majority. A merely intellectual approach to translation is unrealistic when it comes to representing the sociocultural model corresponding to local identities.
III. Legal texts and intercultural communication
Diversity is an emblematic feature of the world. In this regard, the necessity to communicate, i.e., to find the common substrate irrespective of rhetorical/ethnographic specificities, is a vital one. 28 The legal text is both the cultural and the institutional verbalization of the anthropological dimension characterizing specific communities engaged in communication and negotiation in view of meeting economic, political and sociocultural agendas. Undoubtedly, languages are the most important channels of the identity of a community. Organizations are most of the time multilingual by nature. They are therefore the scene hosting multicultural systems of representation. Translation issues are of paramount importance for identity convergence and intercultural communication, and the strategies used therein are vital to secure these imperatives. In line with the ideas supported by Pelage 29 , an adaptation effort and groundbreaking methodologies are essential for the gap-bridging process.
Legal translation and ideology
Culture, language, ideology and law are pervasive elements framing identity, perception and communicative patterns used during the intercultural contact with the alterity. 30 The Whorfian tradition supports the view that language is the receptacle where the abovementioned patterns of identity are showcased ahead of the intercultural encounter with the Other. It is a record of the sociocultural and normative models adopted by a community in the course of its historical experience (time) in a specific ecology (space). Issues of time and space take center stage in the investigation of meaning construction and representation. 31 Indeed, meaning and sociocultural models conveyed in legal language are embedded in an ecocultural network of reference. 32 Concepts are the mental representation of ever-specific sociocultural models anchored on a physical ground. They gain a territorial seat and an institutional status through their enshrinement in legal systems. Europe offers a fair view of the territorialization of meaning and concepts. Through its historical and philosophical tradition and its power of diffusion, Europe has been the leading light spreading the diversity of its epistemic vision across the world. Colonization has been the event through which local ontologies were reframed. In this regard, the existing divide between legal systems is a consequence of this event. The ideological bias is a common and, at times, inevitable flaw in the practice of (legal) translation. 33 In fact, as Aristotle stated, every human being or citizen in a society is "a political animal" i.e., s/he partakes in the sustainability of a cultural approach through politics fueled by ideologies. Prominent among stakeholders in the sociopolitical game is the legal translator. He/she has a specific set of references received from his/her experience accumulated across history and within a particular ecology. 
Schäffner supports the view that "decisions at the linguistic micro-level have had effects for […] society debating its identity due to the textual treatment of ideological keywords" 34 . Legal translation, especially in bilingual spaces, is a process through which identities are negotiated on the balance of translation subjectivity. A translator's abidance by a single normative model leads him/her to misplace legal terms, thereby perpetrating cultural/ideological annexation. Elements of micro-linguistics (terms and concepts) bear the sociohistorical and ideological load of a specific community. Relying on morphological resemblances in order to draw equivalence in the translation process is a misleading method igniting social resistance.
The epistemic place of the legal translator
Each physical territoriality is stamped with an epistemic seal standardizing the cognitive substance adopted by the community at a specific period of its historical experience. The neural apparatus of community members receives the set of cognitive references released in this specific setting. These references pattern the mind and frame perception. Thus, the discourse on norms in translation is closely tied to issues of geography and environment. 35 Since its colonization, Cameroon may be seen as a battleground between citizens located in distinctive epistemic and geographical areas. From a historical and legal viewpoint, one can say it is the scene of an institutional contest between followers of the French ideology enshrined in civil law and followers of the English philosophy materialized in Common Law. Cameroon has 10 regions. The North-West and South-West regions are English-speaking areas. Both regions were historically under English rule during the colonization process and therefore form the minority group. The other part of the country, which forms the majority, was ideologically framed by the French mainstream culture. Civil law thus became the legal system of French-speaking Cameroon and the dominant system of law in its institutions.
IV. Methodological considerations and comparative analysis
The relevance of the methodology used in legal translation is measured against the yardstick of epistemology. 36 In fact, epistemology provides handy hints on the gap-bridging strategies to be used for smooth communication in legal translation. The ontology used by cultural stakeholders in multicultural organizations is also relevant, as it provides the guidelines for the optimization of receptivity. 34 In order to establish a legal translation methodology in bilingual areas, one has to analyze the specificities of the legal cultures under study. In this respect, we present below the profile of OHADA members:
[Table: profile of OHADA member states — columns: N°, Member State, Official language(s), Legal system]

OHADA is predominantly composed of civil law-affiliated countries owing to colonization and ontological superimposition. Lambert supports the view that French ideology is characterized by a universalist approach to communication. He suggests that "the Napoleonian Code […] is heavily inspired by the idea of standardization and homogenization: all in one movement the dispersed legal traditions were meant to be unified in one single formulation that became a model of the French community." 37 This ideological trait, which marks legal translation, is materialized in conceptual cleansing and nominalist approaches. In Cameroon, we find both European-inspired bilingualism (through the use of English and French) and legal biculturalism (as both civil law and Common Law are employed). The superimposition of a legal structure through the footbridge of language is therefore at the basis of (social) resistance.
In what follows, we will carry out a comparative analysis in order to underline several translation issues in OHADA texts. Terms, concepts and notions arranged in a conventional syntactic structure in the legal text are the avatar of the sociocultural truth in the game of institutional representation.
a) Terms

Original version: Partie i - dispositions générales de la société commerciale - livre 2 - fonctionnement de la société commerciale - titre 4 - procédure d'alerte - chapitre 1 - alerte par le commissaire aux comptes - section 1 - sociétés autres que les sociétés par actions/article 150 (p. 38)

English version (first translation): Part 1 - general provisions governing commercial companies - book 2 - functioning of a commercial company - title 4 - alarm procedure - chapter 1 - alarm by the auditor - section 1 - companies other than public limited companies/article 150 (p. 143)

In the text above, the phrase "procédure d'alerte" is translated as "alarm procedure". The translation of the term is carried out following the onomasiological approach in terminology, which goes against the universal truth. Each legal community avows a specific conceptualization of reality. 38 This translation follows a nominalist approach to language, i.e., without any consideration of the Other's epistemic truth as to how the process in question is termed within the network of representation of the target community. According to the Companies and Allied Matters Act (C.A.M.A 39 ), the cultural equivalent of the phrase "procédure d'alerte" in the trade field is "early warning procedure". The failure to keep the balance in the representation of both legal cultures can be perceived as a cultural annexation. The second version of the translation of the OHADA uniform acts was published in 2019. A formal change may be observed, but this version still does not include the functional equivalent of the French phrase: the term "procédure d'alerte", wrongfully translated as "alarm procedure", was changed to "alert procedure". The cosmetic change applied in this retranslation does not achieve terminological symmetry, as the items used refer to a different meaning, outside the scope of specialized language.
Original version: Partie i - dispositions générales de la société commerciale - livre 7 - dissolution-liquidation de la société commerciale - titre 1 - dissolution de la société - chapitre 1 - causes de la dissolution/article 200 (p. 51) La société prend fin : 1°) par l'expiration du temps pour lequel elle a été constituée ;

English version (first translation): Part 1 - general provisions governing commercial companies - book 7 - dissolution - liquidation of a commercial company - title 1 - dissolution of the company - chapter 1 - causes of dissolution/article 200 (p. 153) A company shall come to an end: 1°) on the expiry of the period for which it was formed;

The phrase "dissolution of the company" fails to achieve both conceptual and semantic equivalence. The French term refers to the process whereby a company goes through liquidation as a result of economic failure, bankruptcy or the end of its objects. The English "equivalent" in this translation does not pertain to specialized language but to general language; the translation is therefore a mix of specialized and general language. According to the C.A.M.A, the terminological item likely to realize cultural equivalence is "winding up of the company".

English version (second translation): Part 1 - general provisions governing commercial companies - book 7 - dissolution - liquidation of commercial company - title 1 - company dissolution - chapter 1 - causes of dissolution/article 200 (p. 237-238) The company shall cease to exist: 1°) by expiration of the period for which it was formed;

The retranslation of this article does not meet the semantic and conceptual representation of the target community either. The issue is, indeed, epistemic rather than stylistic. The proposed equivalent, namely "company dissolution", is nothing but a syntactic inversion of the initial proposal, "dissolution of the company". The semantic gap remains as wide as it previously was. The search for formal equivalence therefore cannot create legal symmetry. "Winding up" or "liquidation of the company" is the terminological equivalent that can create convergence with the right modulation of concepts.
b) Concepts
Concepts are core elements of identity. They refer to the specific understanding one community has of cross-cultural and transnational elements. The literal translation of concepts from one language to the language of the Other can be considered as an identity superimposition, as the specificity of the target culture is not taken into account. Relevant methodological translation strategies should therefore be found for an efficient gap-bridging process between stakeholders in a multicultural network of representation.
Original version: Partie i - dispositions générales de la société commerciale - livre 1 - constitution de la société commerciale - titre 3 - statuts - chapitre 6 - durée-prorogation/article 28 (p. 13) Toute société a une durée qui doit être mentionnée dans ses statuts. La durée de la société ne peut excéder quatre-vingt-dix-neuf (99) ans.

English version (first translation): Part 1 - general provisions governing commercial companies - book 1 - formation of a commercial company - title 3 - articles of association - chapter 6 - duration-extension/article 28 (p. 121) Every company shall be set up for a duration which shall be indicated in the articles of association. The duration of the company may not exceed ninety-nine (99) years.

The conceptual projections both legal communities have on this provision are divergent. The civil law community supports the view that a company's timespan should not go beyond the critical threshold of 99 years. As opposed to that epistemic stance, Common Law does not set any limitation on a company properly incorporated. Owing to its historical and colonial past, Cameroon is home to two legal systems with a range of divergences. The text is the verbalization of the cultural truth based on the sociocultural and normative patterns of identity. This state of affairs must be taken into consideration in the translation process. Literal translation done for informative purposes imposes cultural untruths on the network of representation of the target culture. In this regard, the stakes of legal translation go beyond legal and textual frontiers. Negotiation on the epistemic content to be proposed for identity convergence is to be carried out by translators coming from different cultural networks of representation.

English version (second translation): Part 1 - general provisions governing commercial companies - book 1 - formation of a commercial company - title 3 - articles of association - chapter 6 - duration-extension/article 28 (p. 196) Every company has a duration which must be stated in its articles of association. The company's existence shall not exceed ninety-nine (99) years.
The retranslation of this article does not solve the epistemic issues. The use of the phrase "company's existence" and of the modal verb "shall" does not give a fair account of the meaning of the concept in the Common Law representation.
c) Syntactic Structure
Syntax plays a key role in the verbalization of the sociocultural paradigms chosen by communities in the course of their separate historical experience. Issues related to syntax are inextricably entangled with patterns of culture, thought and identity emerging in a local network. There is a sameness of structure between language and experience. Communication with alterity bears the seal of agreed-upon conventions adopted by individuals. The position of the subject in the sentence mirrors the philosophical representation a community has of the role played by individuals. Vinay & Darbelnet suggest that the French and English social positions are different. 40 The rationalist position of French ideology engages linguistic communities to give center stage to individuals (subjectivism) in the representation of experiential truth. Conversely, English gives precedence to sociohistorical determiners (objectivism) influencing individuals immersed in an empirical setting. Within the framework of communication and translation, the syntactic structure is the visible tip of the iceberg beneath which lie the cultural, ideological and philosophical underpinnings of the communities involved in intercultural relations. 41 Therefore, it is relevant to make adaptations in the process of translation to secure intercultural communication. 42 The legal translation of the OHADA uniform acts offers an illustration of what could be considered cultural annexation.

Original version: Partie iii - dispositions pénales - titre 2 - infractions relatives à la gérance, à l'administration et à la direction des sociétés/article 890 (p. 218)

English version (first translation): Any company executives who, knowingly, even without any sharing of dividends […]

English version (second translation): Part 3 - penal provisions - title 2 - offences relating to the management and administration of the company/article 890 (p. 419) Shall face a criminal charge company management who has knowingly […]
The sociocultural substance is ethnographically converted in the syntactical structure of the legal text. The French sociocultural tradition places the subject at the helm of the historical process through which evolution is experienced. This state of affairs directs the linguistic community to distance itself from the convention (Subject + Verb + Object). The syntactic structure "encourent une sanction pénale les dirigeants sociaux qui ont sciemment […]" is a demonstration of the hegemonic position of the subject who (re)shapes the normative convention of the life of a community in a specific institutional network. The precedence of the verb in the sentence is the expression of the activity (as opposed to passivity) and the preeminence of individual rationality in the shaping of a new social contract.
The English mainstream culture envisions legal norms as an assemblage in motion. 43 As the language of institutions, legal language is the verbalization of the evolving states of the normative model. Jurisprudence is a conceptual reform experienced by a community in the course of its historical evolution. It is the verbalization of the superorganic paradigm engaging community members to shift from one position to the Other. In this regard, the individual's willingness is not of paramount value in the emergence of the new social convention. At the level of syntax, the subject keeps a passive position in the syntactic structure. The structure "Shall face a criminal charge company management who have knowingly […]" gives center stage to the verb, representing the instrument shaping the willingness of the community. This syntactic convention runs counter to local representation.

42 K. Sturge, Cultural Translation, in Routledge Encyclopedia of Translation Studies, Routledge Taylor & Francis Group, 2011, pp. 67-70; G. Bastin, Adaptation, in Routledge Encyclopedia of Translation Studies, Routledge Taylor & Francis Group, pp. 3-6.
43 See S. Glanert, Law-in-translation: an Assemblage in motion, in The Translator, volume 20, no. 3/2014, pp. 255-272.
d) Style
The ethnographic process whereby specific communities represent their ontology is subjective. Indeed, communities have a local and subjective view of reality. Terms, concepts and notions in the text are artefactual representations of the local and experiential truth of communities. These semantic items appear following a specific design uncovering the traditional representation of the perceptual stance advocated by each community. The elected design of representation of identity and perception is referred to as style. The respect of style is of paramount importance for the optimization of receptivity in the target community. Legal communities (civil law and Common Law) have their respective traditions in the representation of truth. Vinay and Darbelnet support the following view: "le français préfère le présent au futur dans les avis où interviennent des considérations juridiques […]. Mais l'anglais, plus empirique, met le verbe au futur." 44 [French prefers the present tense to the future in notices involving legal considerations […]. But English, more empirical, puts the verb in the future.]

Original version: Partie i - dispositions générales de la société commerciale - livre 1 - constitution de la société commerciale - titre 3 - statuts - chapitre 6 - durée-prorogation/article 29 (p. 13) Le point de départ de la durée de la société est la date de son immatriculation au registre […]

English version (first translation): Part 1 - general provisions governing commercial companies - book 1 - formation of a commercial company - title 3 - articles of association - chapter 6 - duration-extension/article 29 (p. 121) Except otherwise provided for by this uniform act, the existence of a company shall commence […]

English version (second translation): Part 1 - general provisions governing commercial companies - book 1 - formation of a commercial company - title 3 - articles of association - chapter 6 - duration-extension/article 29 (p. 196) The starting date of company's existence is the date of its registration with the registry of commerce and securities, unless […]

Original version: Partie i - dispositions générales de la société commerciale - livre 1 - constitution de la société commerciale - titre 3 - statuts - chapitre 7 - apports - section 3 - réalisation des apports en numéraire/article 42 (p. 15) Ne sont considérés comme libérés que les apports en numéraire correspondant à des sommes dont […]

English version (first translation): Part 1 - general provisions governing commercial companies - book 1 - formation of a commercial company - title 3 - articles of association - chapter 7 - contributions - section 3 - realization of cash contributions/article 42 (p. 123) The only cash contributions that shall be considered as fully paid up are those over […]

English version (second translation): Part 1 - general provisions governing commercial companies - book 1 - formation of a commercial company - title 3 - articles of association - chapter 7 - contributions - section 3 - payment of cash contributions/article 42 (p. 198) The only cash contributions considered as fully paid up are sums over […]

44 J.-P. Vinay, J. Darbelnet, Stylistique comparée du français et de l'anglais, op. cit., p. 131.

The literal translation is par excellence the demonstration of a universalist approach in identity negotiation through text. The correspondence between the French present tense ("Ne sont considérés comme libérés que les apports en numéraire correspondant à des sommes […]") and the simple present tense in English ("The starting date of company's existence is the date of its registration with the registry of commerce and securities […]") can be regarded as a stylistic annexation. It sustains a feeling of oddness. Modulation is therefore of major importance for identity convergence.
Legal translation and bilingualism
Language is the expression of culture and local experience in specific settings, energized by distinctive sociohistorical headlines and artefactual processes used to represent collective identities. Bilingual areas, especially those bringing antinomies together, are intercultural hotspots encompassing a great number of stakes ranging from textual to identity and social issues. 45 With regard to social uproars triggered by translation-related issues in bilingual areas, it becomes imperative to negotiate (legal) identities in the textual game.
The traditional methodological options (strategies and techniques) used to secure receptivity, inter-comprehension and, most importantly, intercultural communication seem to be outdated within the framework of an ever-changing world. Indeed, the immersion of translators in a local network of representation subsuming dynamic patterns of history, ideology, sociocultural and normative models constructed over time, along with artefactual resources (ethnographic conventions in language), leads them to bias the intercultural negotiation in translation. Multilingual countries and organizations are intercultural hotspots requiring efficient methods to secure intercultural communication and taper the social resistance which might derive from identity ascription. In this regard, Cameroon is an emblematic case of the necessity to bridge the gap between contrasting traditions of law, language and ethnographic conventions. This special status is due to the tug-of-war inside and outside Cameroon at a specific point in its history.
The colonial experience in Europe resulted in the defeat of Germany and the disruption of ideological and cultural patterns related to the existence of sociocultural models and the legal system. France and Great Britain took the lead and reframed the legal systems. Therefore, Cameroon, just like other multilingual countries and organizations, became the scene of an epistemic showdown. Polezzi suggests that: "The Latin word 'translatio' indicates the movement or transfer of objects and people across space […]. Travel and its textual accounts are associated with a form of translation of the Other and the new in terms familiar to a home audience. Translation, in turn, is configured as a form of transportation or appropriation of the foreign within the language and culture of the nation. The coupling between the figures of the traveler and the translator (or interpreter) is also well established and encompasses historical as well as phenomenological parallels." 46 The complex historical background ensuing the travel of epistemic substance calls for a set of innovative methodologies to secure a fair negotiation and intercultural communication. There are historical and phenomenological parallels between Cameroon and the European countries pertaining to the European Union (EU). The negotiation of (trans)national unity and identity calls for a set of methodological strategies likely to achieve epistemic conversion. In this regard, collaboration with actors emerging from different local networks is an imperative for intercultural communication in translation.
Legal translaboration
In a globalized world, minority groups try to resist, expressing their identities through dedicated social and anthropological channels like text. Thus, the reach of a sociocultural consensus in a multicultural network of representation can be achieved only through collaboration between actors pertaining to distinctive cultural and sociohistorical micro-worlds constantly involved in interaction in institutional settings and organizations. Languages are tools expressing the identity, anthropological, sociohistorical, normative and esthetic model advocated by communities and their local setting. In this regard, translation is undoubtedly the in-between allowing intercultural communication to take place.
However, the case study of OHADA texts reveals that effective intercultural communication in translation, in particular in legal translation, cannot take place unless the content (epistemic truth conveyed through terms and concepts) and the cover (style and ethnographic convention) match the expectations of target communities. The results observed in the retranslation of legal texts reveal the dysfunction of the processual strategies implemented in order to achieve intercultural communication. It is of relevance to mention that ideologies determine translators to maintain an imbalance in what concerns terminological and conceptual representations. A groundbreaking methodology likely to secure sociocultural and institutional consensus by mobilizing actors pertaining to cultural micro-worlds is therefore required.
Translaboration, i.e. the collaboration between actors belonging to distinct micro-worlds, is crucial in translation in order to reach a sociocultural consensus and an identity convergence. Alfer suggests that collaboration in translation is necessarily conducive to epistemic and stylistic decentering and conversion. 47 Experience-sharing in intercultural milieus allows the construction of an agreed-upon model, in particular through techniques like the corpora-based approach. 48 The collaboration of stakeholders in multicultural networks of representation is a guarantee of sociocultural convergence, especially if the architecture of the participation is balanced.
VI. Recommendations
The receptivity of the legal text in translation is conditioned by the accuracy of the epistemic offer and the appropriateness of the stylistic approach, especially in a high-stake domain like commercial translation. 49 In this regard, four stakeholders should be mobilized:
1. Legal practitioner(s) of civil law. They should provide the right interpretation of the law ahead of re-verbalization by the Other.
2. Legal translator(s) having French as their first language and a great deal of experience in legal translation. They should come up with the ethnographic specificities of French legal language.
3. Legal practitioner(s) of Common Law. They should play a key role by providing solutions to find a middle ground between static civil law and dynamic Common Law, giving an indication of the necessary adaptations to be carried out.
4. Legal translator(s) having English as their first language and a great deal of experience in legal translation. They should propose a stylistic offer likely to meet the expectations of the target culture.
VII. Conclusion
The current status of multilingualism and the plurality of legal cultures in African organizations like OHADA is the result of conceptual and epistemic travel across history, specifically during the colonization by European powers. Therefore, the identity and legal translation challenges encountered in European-designed organizations, as well as in bilingual and bi-legal countries like Cameroon, are of great interest, including for the European countries grouped in the European Union (EU). The results of our study indicate that intercultural communication in legal translation rests on the sociohistorical and cultural approach of the text. The efficiency of the traditional methodology in legal translation, in particular of literal translation, is called into question, as it maintains the power gap between legal and cultural stakeholders. The ideological pattern resorted to by legal translators leads to cultural bias. Sociocultural consensus can therefore be found through experience-sharing in collaboration. Legal translaboration, i.e. collaboration among legal translators and legal practitioners coming from different communities, is the solution to secure the receptivity of translation and identity convergence.
|
v3-fos-license
|
2022-09-30T15:26:12.398Z
|
2022-09-28T00:00:00.000
|
252619082
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://bdj.pensoft.net/article/94358/download/pdf/",
"pdf_hash": "ef144fad1572d8faf56b1b71a6af4dcd080c8f17",
"pdf_src": "ScienceParsePlus",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2365",
"s2fieldsofstudy": [
"Biology"
],
"sha1": "99d3d0a1aa4375cd7f3955dc3f22f504678ff329",
"year": 2022
}
|
pes2o/s2orc
|
Plume moths (Lepidoptera, Pterophoridae) reared from the Chilean endemic Stevia philippiana (Asteraceae)
Abstract Background The micromoth fauna of the arid environments of the western slopes of the central Andes remains poorly explored. Plants native to this area host overlooked species. A survey for micromoth larvae on the Chilean endemic Stevia philippiana Hieron. (Asteraceae) was performed. New information The first records of plume moths (Lepidoptera, Pterophoridae) associated with S. philippiana are provided. Adults of Adaina jobimi Vargas, 2020 and a new species of Oidaematophorus Wallengren, 1862 were reared from larvae collected on inflorescences and leaves, respectively. Oidaematophorus andresi sp. n. is described and illustrated. A phylogenetic analysis of mitochondrial DNA sequences clustered each of the two plume moths with the type species of its respective genus. These records expand the host plant range of A. jobimi and add a second species of Oidaematophorus to the Chilean fauna of plume moths.
Introduction
Along the altitudinal gradient of the northernmost part of Chile, extending from sea level to the highlands of the Andes, the highest plant diversity occurs in a narrow altitudinal belt around 3500 m elevation (Arroyo et al. 1988, Rundel et al. 2003). The micromoth fauna of this altitudinal belt remains poorly explored. However, recent studies revealed that native plants host previously overlooked species, including representatives of the family Pterophoridae (Vargas et al. 2020), suggesting that surveys for larvae on these plants could help to improve the understanding of the micromoth diversity of this area.
The Chilean endemic Stevia philippiana Hieron. (Asteraceae) is a morphologically variable shrub or subshrub whose geographic distribution is restricted to two disjunct areas in the north of the country, one at high elevations on the western slopes of the Andes between 18 and 19°S, the other near sea level on the coast of the Atacama Desert between 22 and 26°S (Gutiérrez et al. 2016). Surveys for micromoth larvae in the Andes revealed that two species of plume moths belonging to two genera of the tribe Oidaematophorini (Pterophorinae) use S. philippiana as a host. The aim of this study is to provide these records, including the description of a new species of Oidaematophorus Wallengren, 1862. Furthermore, as some genera of Oidaematophorini have remarkably similar morphology (Gielis 2011), the generic assignment of the two species reared from S. philippiana was assessed using phylogenetic analysis of mitochondrial DNA sequences.
Materials and methods
The study site is about 2 km south of Socoroma Village (18°16'42''S, 69°34'15''W) in the Parinacota Province of northern Chile, at about 3400 m elevation on the western slopes of the Andes. It has a tropical xeric climate with seasonal rains concentrated mainly in summer (Luebert and Pliscoff 2006). Mature plume moth larvae were collected on S. philippiana in March 2021 and April 2022. The collected larvae were placed in plastic vials, with a paper towel at the bottom, together with inflorescences or leaves, depending upon which plant organ they were feeding on in the field. The emerged adults were mounted following standard procedures. For genitalia dissection, the abdomen was removed and placed in hot 10% potassium hydroxide (KOH) for a few minutes. The genitalia were stained with Eosin Y and Chlorazol Black and mounted on slides with Euparal. Photos of the adults were taken with a Sony CyberShot DSC-HX200V digital camera. Photos of the genitalia were taken with a Leica MC170 HD digital camera attached to a Leica DM1000 LED light microscope. Each image of the genitalia was constructed with 3-10 photos assembled with the software Helicon Focus 8. The specimens studied are deposited in the "Colección Entomológica de la Universidad de Tarapacá" (IDEA), Arica, Chile.
Two pupae reared from larvae collected on inflorescences and two legs from a female and a male adult reared from larvae collected on leaves were used for DNA extraction with the QIAamp Fast DNA Tissue Kit, following the manufacturer's instructions. As genitalia morphology suggested that the adults reared from inflorescences belong to Adaina jobimi Vargas, 2020, whose original description was based on specimens reared from inflorescences of Baccharis alnifolia Meyen & Walp. (Asteraceae) (Vargas 2020), DNA was also extracted from two pupae of A. jobimi reared from larvae collected on this plant in the Copaquilla ravine (18°23'55''S, 69°37'49''W) at about 2800 m elevation, 12 kilometres south of the study site. Genomic DNA was sent to Macrogen Inc. (Seoul, South Korea) for purification, PCR amplification and sequencing of the barcode region using the primers LCO1490 and HCO2198 (Folmer et al. 1994). The PCR programme was 5 min at 94°C, 35 cycles of 30 s at 94°C, 30 s at 47°C and 1 min at 72°C, and a final elongation step of 10 min at 72°C. In order to assess the generic assignment of the plume moths reared from S. philippiana, the sequences obtained were submitted to a Maximum Likelihood (ML) phylogenetic analysis. As shown in Table 1, the alignment included sequences of the type species of the genera of Oidaematophorini represented in the Neotropical Region (Adaina Tutt, 1905, Emmelina Tutt, 1905 and Hellinsia Tutt, 1905) and three outgroup genera of Platyptiliini (Lioptilodes Zimmerman, 1958, Platyptilia Hübner, [1825] and Stenoptilia Hübner, [1825]) downloaded from BOLD (Ratnasingham and Hebert 2007). The restriction of the taxon sampling of Oidaematophorini to the type species of each genus was due to generic definitions being unstable, as evidenced by frequent changes of some species between genera (Gielis 1991, Gielis 2011, Gielis 2014). The software MEGA11 (Tamura et al. 2021) was used to perform sequence alignment with the ClustalW method and to determine genetic distance using the Kimura 2-Parameter (K2P) method. Before the ML analysis, the substitution saturation of the alignment was assessed with the Xia test, using the software DAMBE7 (Xia 2018). The ML analysis was performed with the software IQ-TREE 1.6.12 (Nguyen et al. 2015) in the web interface W-IQ-TREE (Trifinopoulos et al. 2016). Data were partitioned by codon position. ModelFinder (Kalyaanamoorthy et al. 2017) selected TN+F+I, F81+F+I and HKY+F+R2 as the best-fit models for the 1st, 2nd and 3rd partitions, respectively. Branch support was assessed with 1,000 replications of the Shimodaira-Hasegawa-like approximate likelihood ratio test (SH-aLRT, Guindon et al. 2010) and ultrafast bootstrap (UFBoot, Hoang et al. 2017).
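The K2P distance used above is a standard closed-form correction that treats transitions and transversions separately. The authors computed it with MEGA11; the sketch below is an independent, minimal Python implementation of the same formula for two aligned sequences (the function name and the gap handling are illustrative, not taken from the paper).

```python
import math

def k2p_distance(seq1: str, seq2: str) -> float:
    """Kimura 2-parameter distance between two aligned DNA sequences.

    P = proportion of transitions (A<->G, C<->T),
    Q = proportion of transversions (all other mismatches);
    d = -1/2 * ln(1 - 2P - Q) - 1/4 * ln(1 - 2Q).
    """
    purines, pyrimidines = {"A", "G"}, {"C", "T"}
    # Compare only positions where both sequences have an unambiguous base.
    pairs = [(a, b) for a, b in zip(seq1.upper(), seq2.upper())
             if a in "ACGT" and b in "ACGT"]
    n = len(pairs)
    transitions = sum(1 for a, b in pairs
                      if a != b and ({a, b} <= purines or {a, b} <= pyrimidines))
    transversions = sum(1 for a, b in pairs if a != b) - transitions
    p, q = transitions / n, transversions / n
    return -0.5 * math.log(1 - 2 * p - q) - 0.25 * math.log(1 - 2 * q)
```

For example, one transition in ten aligned sites gives d = -0.5·ln(0.8) ≈ 0.112, slightly above the raw 10% difference, reflecting the correction for unobserved multiple substitutions.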
Head. Vertex and frons mostly grey with scattered white scales. Occiput with erect, narrow, dark grey scales. Labial palpus with first and second segments white, third segment grey. Antenna filiform, about half the costa length, with grey and white scales.
Thorax. Mostly grey with scattered white, brown and black scales. Fore-leg coxa mostly grey with longitudinal row of black-tipped scales anteriorly; femur, tibia and tarsus grey. Mid- and hind-leg grey. Fore-wing cleft origin at about 2/3 from wing base. Dorsal surface mostly grey, with a longitudinal yellowish-brown stripe along the anal margin from near the wing base to the complete second lobe; discal spot black; a black spot before cleft base; two black spots on costa near the middle of first lobe; scattered black scales near anal margin; fringe grey; ventral surface grey. Hind-wing dorsal and ventral surfaces and fringe grey.
Abdomen. Mostly grey with scattered white and brown scales.
Male genitalia (Fig. 2). Tegumen bilobed; anterior margin with triangular projection medially. Uncus narrow, slender, curved, apex pointed. Vinculum narrow. Saccus slightly curved in the middle. Juxta asymmetrical, strongly curved to right, left margin more strongly sinuous than right margin, a narrow longitudinal membranous stripe along the middle almost reaching the base of anellus arms. Anellus arms asymmetrical; left arm narrow, slightly curved, with an apical row of small setae; right arm wider than left arm, strongly curved in the middle, with a small subapical projection, a few small setae near it and a row of small setae on the opposite side. Valvae asymmetrical, each with a longitudinal fold in the middle and a group of hair-like scales arising basally on external side. Left valva slightly wider than right one; apex rounded; saccular process with a somewhat conical basal section and a slender saccular spine; saccular spine slightly longer than three fourths the costal margin length, basal fourth of saccular spine rounded towards ventral margin of valva, distal three-fourths straight, apex with hooked tip. Right valva with a single dentate process on the sacculus. Phallus cylindrical, curved, apex acute, vesica without cornuti. Female genitalia (Fig. 2). Papilla analis short, posteriorly rounded, mostly slightly sclerotised, with a well-sclerotised band along anterior margin. Posterior apophysis (apex of the left posterior apophysis broken during mounting) narrow, rod-shaped, about four times the length of papilla analis, apex almost reaching the anterior margin of tergum VIII. Anterior apophysis from anterior vertex of tergum VIII, narrow, rod-shaped, about half the length of papilla analis. Ostium bursae displaced to left. Antrum cup-shaped, wider posteriorly, mostly slightly sclerotised, with an oval-shaped sclerite near the junction with ductus bursae. Ductus bursae membranous, narrow, diameter about half of the widest part of antrum.
Corpus bursae membranous, elongated, about three times the length of ductus bursae. Ductus seminalis from near the junction of ductus bursae with corpus bursae, about six times as long as corpus bursae, anterior part coiled. is mostly grey at base and mostly cream apically. Furthermore, the saccular process of the left valva is straight along a great part of its length, with a short curved portion near the base, and the right valva has a single dentate process on the sacculus in the male genitalia of O. andresi sp. n. In contrast, the saccular process of the left valva has a great curved portion and the right valva has two dentate processes on the sacculus in the male genitalia of O. espeletiae. In the female genitalia, the posteriorly wider cup-shaped antrum of O. andresi sp. n. contrasts with the anteriorly wider antrum of O. espeletiae. Furthermore, the antrum of O. andresi sp. n. has an oval-shaped sclerite near the junction with ductus bursae, which is absent in O. espeletiae.
Etymology
The name of the species is dedicated to Dr. Andrés Moreira-Muñoz, for his remarkable contributions to the biogeography and systematics of the Chilean flora.
Distribution
Oidaematophorus andresi sp. n. is known only from the type locality, about 2 km south of Socoroma Village, at about 3400 m elevation on the western slopes of the Andes of northern Chile (Fig. 3).
Biology
The only host plant currently recorded for O. andresi sp. n. is S. philippiana (Fig. 3).
Taxon discussion
Species of Oidaematophorus are recognised by fore-wing venation with R1 absent, R2, R3, R4 and R5 separate, Cu1 from the posterior angle of the discal cell and Cu2 from the discal cell, mid-leg with scale bristles at base of spur pairs and female genitalia with bell- or widened funnel-shaped antrum (Gielis 2011). Ten described species of Oidaematophorus occur in the Neotropical Region (Gielis 2011, Gielis 2014, Hernández et al. 2014, Matthews et al. 2019, Ustjuzhanin et al. 2021b), only one of which, O. pseudotrachyphloeus Gielis, 2011, is known from Chile (Vargas 2021). Although eight species of the genus were recorded from this country earlier (Gielis 1991), these are currently included in Hellinsia (Gielis 2011). Accordingly, O. andresi sp. n. is the second representative of the genus confirmed from Chile. The two species from this country are easily recognised based on wing pattern, as the fore-wing of O. pseudotrachyphloeus lacks the longitudinal yellowish-brown stripe along the anal margin typical of O. andresi sp. n. The genitalia also provide useful morphological characters in this case, as in O. pseudotrachyphloeus the male has the spine of the saccular process of the left valva strongly curved throughout its length and the female has asymmetrical anterior apophyses and a ductus seminalis only slightly longer than the corpus bursae, in clear contrast to O. andresi sp. n. Although the host plant ranges of these two species must be explored further, the currently available records suggest that they use different host plants, because O. pseudotrachyphloeus has been reared only from Ambrosia cumanensis Kunth (Asteraceae) (Vargas 2021).
Taxon discussion
Host plant records available for Adaina indicate that a single species may be able to feed on several Asteraceae belonging to one or more genera (Gielis 1992, Matthews and Lott 2005). Baccharis alnifolia Meyen & Walp. (Asteraceae) was the only host plant previously known for A. jobimi (Vargas 2020). Accordingly, rearing from S. philippiana adds a new host plant record and suggests that this plume moth is able to use distantly-related members of Asteraceae. As this plant family is well represented in the study area, further surveys would be needed to know the complete host plant range of A. jobimi.
Analysis
Four identical DNA barcode sequences were obtained from the pupae of A. jobimi reared from larvae collected on S. philippiana (GenBank accessions OP281683, OP281684) and B. alnifolia (OP281685, OP281686), confirming the morphological identification. Two DNA barcode sequences (OP281687, OP281688) with 0.3% (K2P) distance between them were obtained from the adults of O. andresi sp. n. The alignment of ten sequences of 657 bp length was suitable for phylogenetic analysis, as no evidence of stop codons or substitution saturation (ISS < ISS.C; p < 0.001) was detected. The sequences of the two species were clustered with the type species of their respective genus, Adaina microdactyla (Hübner, [1813]) and Oidaematophorus lithodactyla (Treitschke, 1833), in the ML tree (Fig. 4). Genetic distance was 9.6% between A. jobimi and A. microdactyla and 10.9-11.0% between O. andresi sp. n. and O. lithodactyla.
Discussion
Asteraceae is one of the main host families of Pterophoridae and even a single species of this plant family can support multiple lineages of plume moths (Matthews and Lott 2005). In the present study, surveys for lepidopteran larvae on the endemic S. philippiana in the Andes of northern Chile enabled the rearing of two species of the tribe Oidaematophorini, A. jobimi and O. andresi sp. n. This discovery highlights the importance of surveys on native plants to improve the knowledge of the micromoth diversity of the arid environments of the central Andes. As this study was restricted to the northern of the two disjunct areas inhabited by S. philippiana, further surveys in the southern part of its range would be helpful to assess if the two species collected in the highlands of the Andes are also found in the lowlands of the Atacama Desert.
Generic assignment for a given plume moth species can be a difficult task when it involves some morphologically similar genera of Oidaematophorini, as shown by several species that have moved amongst Adaina, Hellinsia and Oidaematophorus (Gielis 1991, Gielis 2003, Gielis 2011). Phylogenetic analysis of mitochondrial DNA sequences provides a valuable tool in cases like these, as shown in several families of Lepidoptera (e.g. Moreira et al. 2012, Corley et al. 2020, San Blas et al. 2021). In the present study, in agreement with morphology, the result of the phylogenetic analysis provides support for the generic assignment of the two species of Oidaematophorini reared from S. philippiana, because each grouped with the type species of its respective genus. However, a clade must have at least 80% SH-aLRT and 95% UFBoot support to be reliable (Minh et al. 2022). Although the SH-aLRT support values for Adaina and Oidaematophorus are higher than 80%, those of UFBoot are lower than 95%. Accordingly, further phylogenetic analysis, based on wider taxon sampling and additional molecular markers, would be useful to understand better the evolutionary relationships of Neotropical Oidaematophorini and to provide support for delimitation of its genera.
The knowledge of the Neotropical fauna of plume moths has significantly improved in the last thirty years (Gielis 1991, Gielis 2006, Gielis 2011) and recent contributions suggest that many environments of this region harbour additional undiscovered species (Ustjuzhanin et al. 2021a, Ustjuzhanin et al. 2021b, Ustjuzhanin et al. 2021c). As shown in several studies, surveys for adults and immature stages are fundamental to continue the improvement of the understanding of systematics, geographic ranges and host plant use of the plume moths of a given geographic area (Landry and Gielis 1992, Landry 1993, Landry et al. 2004, Matthews et al. 2012, Matthews et al. 2019). Accordingly, field work in underexplored environments should be encouraged to understand better the highly diverse Neotropical fauna of plume moths.
|
v3-fos-license
|
2021-07-17T15:18:11.533Z
|
2021-07-01T00:00:00.000
|
236006143
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/2072-6643/13/7/2432/pdf",
"pdf_hash": "a5341b5b73d4e3d6e537d33680106c65d2c904e5",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2366",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "a5341b5b73d4e3d6e537d33680106c65d2c904e5",
"year": 2021
}
|
pes2o/s2orc
|
The Insight into Insulin-Like Growth Factors and Insulin-Like Growth-Factor-Binding Proteins and Metabolic Profile in Pediatric Obesity
Insulin-like growth factors (IGFs) and insulin-like growth-factor-binding proteins (IGFBPs) regulate cell proliferation and differentiation and may be of importance in obesity development. The aim of the study was to analyze the expression of chosen IGF-axis genes and the concentration of their protein products in 28 obese children (OB) and 34 healthy controls (HC), and their correlation with essential parameters associated with childhood obesity. The gene expression of IGFBP7 was higher, and the expression of IGF2 and IGFBP1 genes was lower, in the OB. The expression of IGFBP6 tended to be lower in OB. IGFBP4 concentration was significantly higher, and IGFBP3 tended to be higher, in the OB compared to the HC, while IGFBP1, IGFBP2, and IGFBP6 were significantly lower, and IGFBP7 tended to be lower, in OB. We found numerous correlations between IGF and IGFBP concentrations and obesity metabolic parameters. IGFBP6 correlated positively with apelin, cholecystokinin, glucagon-like peptide-1, and leptin receptor. These peptides were also significantly lower in obese children in our study. The biological role of decreased levels of IGFBP6 in obese children needs further investigation.
Introduction
Obesity is a complex condition with a serious impact on overall health, both physical and psychological. It is defined by the World Health Organization (WHO) as "abnormal or excessive fat accumulation that may impair health" [1]. It is commonly known that obesity leads to many diseases, including cancer [2][3][4], and has a prominent role in the pathogenesis of type 2 diabetes in adolescents and adults [5]. Obesity remains a constant threat to overall health by also causing several other medical conditions. At any age, obesity can affect the cardiovascular, respiratory, skeletal, and endocrine systems [6,7]. Importantly, it not only affects the physical sphere, but also negatively affects the psyche and self-esteem [8]. This common condition is now regarded as a very widespread and growing problem in pediatrics and has been referred to as "an epidemic" [9,10].
Until now, the role of the insulin-like growth factors' axis and of the individual markers and hormones associated with obesity in children has been poorly studied [4,11,12]. Insulin-like growth factor (IGF) provides cells with information on the well-being of the body and regulates proliferation, differentiation, and synthesis. This kind of signaling is of high importance in the growth of the organism, but also in neoplastic processes as well as in the development of obesity. Particularly in neoplasms with very poor prognosis, the IGF-axis involvement should be explored [13]. The IGF system consists of the modulatory proteins IGF1 and IGF2, which interact at the cellular level with the insulin-like growth factor receptor (IGFR). This pathway also includes regulatory proteins, known as insulin-like growth-factor-binding proteins (IGFBPs), that regulate IGF signaling. Extensive experiments in animal models demonstrate that adipose tissue expansion induces a complex and broad immune response, involving both the innate and adaptive arms of the immune system, playing critical roles in the regulation of glucose metabolism and inflammation [14]. The important role of proinflammatory cytokine secretion from adipose tissue has been consistently associated with the risk of adverse outcomes in obesity-linked complications, promoting a persistent, low-grade, inflammatory response. Accordingly, adipokines are considered as regulators of whole-body homeostasis [15]. However, very little is known about the above-mentioned mechanisms in children in the context of obesity.
Because the IGF protein family plays a key role in metabolic processes, we decided to examine the differences in the concentration of numerous components of the IGF axis (IGF1, 2, IGFBP 1, 2, 3, 4, 6, 7) in obese children in comparison to healthy controls and to analyze the correlations of these components with parameters such as blood pressure, concentration of insulin, adipokines (adiponectin, apelin, resistin, visfatin, leptin, leptin receptor), and peptides regulating the gastrointestinal tract (cholecystokinin, ghrelin, GLP-1, FGF21). Peptide concentrations were assessed both fasting and after an oral glucose tolerance test. Additionally, taking into account the importance of epigenetic factors, the expression of the genes encoding the studied IGF-axis peptides was analyzed.
Study Group
Two study groups were recruited. The obesity group (OB) included 28 children, 12 boys and 16 girls, aged 4-17.8 (average 13.7) years. All were patients of the Pediatric and Adolescent Endocrinology Department, Jagiellonian University Medical College in Krakow. Obesity was defined as a BMI at or above the 95th percentile for children of the same age and sex. BMI was calculated by dividing the child's weight in kilograms by the square of height in meters. The inclusion criteria were obesity (BMI Z-score > 2.0) developed before the age of puberty, a negative medical history without any signs or symptoms of acute or chronic diseases, no drugs or dietary supplements, and a normal diet. The exclusion criteria were obesity secondary to medical conditions (single-gene mutations, endocrinopathies), chronic systemic diseases, or drugs (e.g., glucocorticosteroids).
The control group consisted of 34 healthy peers: 13 boys and 21 girls, aged 4.3-16.9 (average 11.8) years. The children were recruited from the families of the patients and children of medical staff, analogous to the study group in terms of age and sex, all having a negative medical history and without any signs or symptoms of acute or chronic diseases, including obesity.
Anthropometric Evaluation
Height and body weight measurements were performed by an anthropometrist. Weight was measured to the nearest 0.1 kg, and height was measured to the nearest 0.1 cm using a stadiometer and a balanced scale. The body mass index (BMI) and BMI percentile/SDS were calculated using online WHO BMI calculators based on CDC growth charts for children and teens ages 2 through 19 years. The results were compared to regional reference values (WC) and values published by WHO (BMI percentile/SDS).
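As a simple illustration of the anthropometric calculation above, BMI is weight in kilograms divided by the square of height in meters; the percentile/SDS lookup against growth-chart reference data is not reproduced here, and the function name is illustrative only:

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight (kg) divided by the square of height (m)."""
    return weight_kg / height_m ** 2

# e.g., a 60 kg adolescent measuring 1.50 m
print(round(bmi(60.0, 1.50), 2))  # 26.67
```

The resulting value is then compared against age- and sex-specific reference charts (CDC/WHO) to classify obesity (at or above the 95th percentile in this study).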
Protocol of the Study
Blood was taken in the morning in the fasting state. Blood concentrations of glucose, insulin, adiponectin, apelin, cholecystokinin, fibroblast growth factor 21, glucagon-like peptide-1, leptin, leptin receptor, resistin, and visfatin were measured at fasting as well as at 60 and 120 min of the standard oral glucose tolerance test (OGTT), performed using 1.75 g of anhydrous glucose per kilogram of body weight (maximum of 75 g). Collection was performed once, after selection into the study. The samples were collected in tubes containing aprotinin. The material was immediately delivered to the laboratory at +4 °C and centrifuged for 15 min at a relative centrifugal force of 1590× g. Plasma samples for insulin, total IGF1 and IGF2, and IGFBP1, -2, -3, -4, -6, and -7 analyses were stored at −80 °C until the time of the assay.
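The OGTT dosing rule above (1.75 g of anhydrous glucose per kilogram of body weight, capped at 75 g) can be sketched in one line; the function name is illustrative and not part of the study protocol:

```python
def ogtt_glucose_dose(weight_kg: float) -> float:
    """Standard pediatric OGTT dose: 1.75 g/kg body weight, maximum 75 g."""
    return min(1.75 * weight_kg, 75.0)

# A 30 kg child receives 52.5 g; children above 75/1.75 ≈ 42.9 kg receive the 75 g cap.
print(ogtt_glucose_dose(30.0))  # 52.5
print(ogtt_glucose_dose(60.0))  # 75.0
```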
Biochemical Tests
Fasting insulin concentrations were measured with immunoradiometric kits (BioSource Company Europe S.A.). The concentrations of total IGFs and IGFBPs were measured using ELISA kits as follows:
Microarray Analysis
We assessed the whole genome expression in peripheral blood leukocytes using GeneChip Human Gene 1.0 ST Array (Affymetrix, Santa Clara, CA, USA). Total RNA extraction was performed using RiboPure Blood Kit (Ambion, Life Technologies, Carlsbad, CA, USA). The whole transcript microarray experiment was performed according to the manufacturer's protocol (GeneChip Whole Transcript sense Target Labeling Assay Manual, Version 4).
Statistical Analysis
Continuous clinical and biochemical variables were presented as mean or median, as appropriate. The Shapiro-Wilk test was used to assess the normality of continuous variables. To examine the differences between two independent groups, Student's t-test (for normally distributed variables) or the Mann-Whitney test (for non-normally distributed variables) was used. To assess the correlation between two continuous variables, Spearman's rank correlation coefficient was calculated. Two-sided p values < 0.05 were considered statistically significant. Gene expression data were robust multi-array average (RMA)-normalized and presented as mean and standard deviation. ANOVA was used to examine the differences in gene expression between the two independent groups. Benjamini-Hochberg (B-H)-corrected p values < 0.05 were considered statistically significant. Statistical analysis was performed using
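The normality-based choice of test and the Benjamini-Hochberg correction described above can be sketched as follows (an illustrative sketch only; the paper's actual analysis software is not named in this extract, and scipy/numpy are assumed here):

```python
import numpy as np
from scipy import stats

def compare_groups(a, b, alpha=0.05):
    """Pick the two-sample test by Shapiro-Wilk normality, as described:
    Student's t-test if both samples look normal, Mann-Whitney otherwise.
    Returns (test_name, two-sided p value)."""
    normal = (stats.shapiro(a).pvalue > alpha) and (stats.shapiro(b).pvalue > alpha)
    if normal:
        return "t-test", stats.ttest_ind(a, b).pvalue
    return "mann-whitney", stats.mannwhitneyu(a, b, alternative="two-sided").pvalue

def benjamini_hochberg(pvals):
    """Benjamini-Hochberg step-up adjusted p values."""
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    order = np.argsort(p)
    adj = np.empty(m)
    running_min = 1.0
    for k, idx in enumerate(order[::-1]):  # walk from largest p to smallest
        rank = m - k
        running_min = min(running_min, p[idx] * m / rank)
        adj[idx] = running_min
    return adj
```

Comparisons with B-H-adjusted p values below 0.05 would then be reported as significant, matching the thresholds stated above.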
Results
The characteristics of the study group are presented in Table 1, and the values of the metabolic parameters are shown in Table 2. T0: measured at fasting; T60: measured at 60 min; T120: measured at 120 min of the standard oral glucose tolerance test (OGTT).
Concentration of IGF Proteins
The differences in the mean concentrations of the IGF-axis proteins are presented in Table 3. Mean concentrations of IGFBP3 and IGFBP4 in the obesity group (OB) were higher than in the control group (HC). The differences were significant (p < 0.05) for IGFBP4; there was a trend for IGFBP3. In contrast, the median values of IGF2, IGFBP1, IGFBP2, IGFBP6, and IGFBP7 were lower in the OB group than in the HC group, and the differences were significant for all parameters (p < 0.001) except IGF2 and IGFBP7, where only a trend was observed.

Table 3. IGF1, IGF2 and IGFBP1, -2, -3, -4, -6, and -7 mean concentrations: comparison of the obesity group (OB) and healthy controls (HC). Values are presented as mean ± standard deviation. [Table data not reproduced in this extract.]
IGF Proteins Concentration and Other Metabolic Parameters
The correlations of IGF1 and IGF2 concentrations with selected metabolic parameters are presented in Table 4, while the correlations of IGFBP concentrations are shown in Table 5. Plots presenting the distribution of selected parameters depending on the levels of the different IGF-family proteins are shown in Figure 1. The study revealed a statistically significant positive correlation of BMI with IGFBP3 and negative correlations with IGFBP6 and IGFBP1. Blood pressure was positively correlated with IGFBP3 and negatively with IGFBP1 and IGFBP2. Fasting and OGTT insulin levels were positively correlated with IGFBP4 and negatively with IGFBP1, IGFBP2, and IGFBP3. There was also a negative correlation between the fasting insulin level and IGFBP6. There were no significant correlations between adiponectin and the proteins of the IGF axis. Apelin was positively correlated with IGF2, IGFBP6, and IGFBP3 (for the last protein, the result was borderline). Cholecystokinin was positively correlated with IGF2, IGFBP6, and IGFBP7 (for the last protein, the result was borderline) and negatively with IGFBP3.
Fibroblast growth factor 21 was positively correlated with IGFBP6. Ghrelin was positively correlated with IGF2, IGFBP1, and IGFBP2 and negatively with IGFBP4. Leptin was negatively correlated with IGF2, IGFBP1, and IGFBP2. The leptin receptor was positively correlated with IGFBP1, IGFBP2, and IGFBP6 and negatively with IGFBP3; for IGF2, the correlation was positive but not significant. Resistin levels at 60 and 120 min correlated positively with IGFBP2 and negatively with IGFBP3. Visfatin showed a positive correlation with IGFBP4, IGFBP6, and IGFBP7.
Expression of IGF Proteins' Genes
The expression of the IGF proteins' genes is presented in Figure 2, while the expression values are presented in Table 6. The hierarchical clustering showing differences in the expression patterns of IGF-axis genes between the healthy control and obese groups is presented in Figure 3. A comparison of the obesity and control groups revealed differences in the expression of the IGF2, IGFBP1, and IGFBP7 genes. The expression of IGFBP7 was higher (p = 0.023), and the expression of the IGF2 (p = 0.037) and IGFBP1 (p = 0.046) genes was lower in the OB group. IGFBP6 gene expression tended to be lower (p = 0.059) in the OB group.
Discussion
In our study, we showed that the concentration of IGF-axis proteins differs in the healthy and obese pediatric population, as well as numerous, statistically significant correlations between the concentration of the studied proteins and the concentration of adipokines, gastrointestinal tract hormones, insulin or blood pressure. Additionally, we showed statistically significant differences in gene expression of IGF proteins' family between obese and healthy children.
Concentration of IGF-Axis Proteins
In the study, we noted significant differences in IGF-axis protein concentrations between the obese and healthy children, which may play a role in the pathogenesis of obesity. IGFBP4 showed significantly higher values, and there was a trend toward a higher concentration of IGFBP3, in OB compared to HC, while IGFBP1, IGFBP2, and IGFBP6 showed significantly lower values and IGFBP7 tended to be lower in OB. No significant difference was found in the median serum concentration of IGF1 between obese and healthy children. The currently published results are inconsistent, showing higher [4,16], lower [12,17-19], and comparable [20-22] values of IGF1 in obese individuals in comparison to normal-weight controls, in both the pediatric and adult populations. The main, but not the only, factor influencing the production of IGF1 is growth hormone (GH). In obese people, its reduced level and a lower response to factors increasing its secretion (e.g., physical activity) have been repeatedly demonstrated [23-25]. However, no sufficient explanation of the changes in total serum IGF1 concentration of obese patients in response to decreased GH levels can be drawn, especially since obese patients grow properly despite the decreased GH levels [26].
In the group of obese children, the concentration of IGF2 was almost significantly lower (p = 0.06) than in the control group. This protein plays an important role in the regulation of fetal growth. Its increased secretion is believed to be responsible for overgrowth of the fetus and an increased amount of adipose tissue [27]. Alfares et al. suggested that IGF2 may stimulate subcutaneous preadipocyte differentiation and inhibit visceral preadipocyte differentiation [28]. However, its impact on obesity in older children is not sufficiently understood, and there is a need for further research in this area.
IGF binding proteins (IGFBPs) are a family of structurally similar proteins that are responsible for transport, extension of half-life, regulation of clearance, and direct modulation of IGF activity [29]. The IGFBP1 concentration in our group of obese children was significantly lower than in the control group. Similar results can be found in other publications [30-33]. It was initially suggested that a reduced concentration of IGFBP1 might increase the level of free IGF1 to compensate for the decreased concentration of GH in obese subjects; however, the results of research in this area are ambiguous [24]. A similar result was obtained for IGFBP2, whose concentration in our study was significantly lower among obese patients. Ko et al. also found significantly reduced levels of this protein in obese children [34].
IGFBP3 is the most abundant of these proteins in serum and is responsible for the transport of 90-95% of IGF1 and IGF2 [29]. Apart from its transport functions, this protein regulates the amount of IGF available to receptors; structural changes in IGFBP3 affect the amount of free IGF. IGFBP3 concentration, especially in children, is related to GH concentration [35]. Our results showed a trend toward a higher concentration of IGFBP3 in obese children compared to the control group, which is consistent with data from the literature [22,36], although non-significant differences have also been described [4]. However, Ounis et al. showed that IGFBP3 concentrations were significantly reduced following a diet or exercise associated with weight loss [37], and Juul et al. found that IGFBP3 levels increase with age and peak at puberty [38].
In our study, IGFBP4 had a significantly higher concentration in obese patients than in non-obese children. The liver is the main site of production of this protein, but its presence has also been demonstrated in other tissues [39,40]. IGFBP4 expression appears to play, similarly to IGF2, an important role in the early growth period. A study in mice showed that in the absence of IGFBP4 production, the animals were born smaller than controls, and this difference was maintained later in life [41]. It has also been shown that a local excess of IGFBP4 has a negative effect on the growth of smooth muscle [42], whereas systemic administration of IGFBP4 had a stimulating effect on the process of bone formation [43]. It is presumed that in the absence of IGFBP4, IGF factors are more likely to be degraded; on the other hand, a significantly increased concentration of IGFBP4 exceeds the capacity of the proteolytic proteins that regulate the amount of active IGF [41].
IGFBP6 has a much greater affinity for IGF2 than for IGF1 [44]. Therefore, it mainly inhibits IGF2 activity, but it may also have IGF-independent actions [45]. IGFBP7 is believed to influence cell growth processes in the body; however, its affinity for IGF is significantly lower compared to IGFBP1, -2, -3, -4, -5, and -6 [46]. In our study, the concentration of IGFBP6 turned out to be significantly lower in obese children compared to healthy ones, and there was a trend toward a lower IGFBP7 level. However, the concentrations of these proteins have not yet been described in the context of childhood obesity.
Additionally, the IGFBPs undergo numerous post-translational modifications that may change their properties, and they are sensitive to the action of various types of proteases [47]. It has also been proven that they show numerous activities independent of IGF [48]. Differences in IGF receptor expression may also be important in assessing the influence of the IGF axis on growth processes: Ricco et al. showed higher IGF-1R mRNA expression among obese children [49]. The issue of the mutual actions of GH, IGF, and the various IGFBPs on each other and on the body's cells seems to be quite complex and is not fully understood at the moment. Many publications are inconsistent, and some elements of the GH-IGF axis have not been sufficiently studied. The existing differences may result from the measurement methods and from the influence of other factors involved in the regulation of metabolism. Therefore, there is a need for further studies focusing not only on the poorly understood GH-IGF-axis proteins, but also on a more thorough assessment of possible dependencies and the causality of the observed trends.
IGF Proteins Concentration and Other Metabolic Parameters
In our study, the correlations of selected obesity-related parameters with the IGF-axis proteins were also assessed. All the correlations we describe are statistically significant; however, they differ in strength (as presented in Tables 4 and 5). Hence, further research is needed to better understand and define the presented dependences, as well as to determine the relevance of our findings. BMI correlated negatively with IGFBP6. Research shows that this protein is a comparatively specific inhibitor of IGF2 actions [50]. We speculate that the low level of IGFBP6 in obese patients cannot decrease the level of IGF2, which may be a factor contributing to obesity. Moreover, IGFBP6 correlated positively with apelin, cholecystokinin, glucagon-like peptide-1, and the leptin receptor.
These peptides were also significantly lower in obese children in our study (Table 2). Data in the literature on the role of IGFBP6 in childhood obesity are scarce; therefore, the biological role of a decreased level of IGFBP6 in obese children needs further investigation.
Systolic blood pressure correlated negatively with the concentrations of IGFBP1 and IGFBP2; however, the values of Spearman's correlation coefficient were moderate. Children with obesity have a several-fold higher risk of hypertension in comparison to children with normal weight, and the risk increases with BMI value [51]. It is possible that different IGF2 gene variants affect blood pressure regulation in obese children [52]. Studies in the adult population showed that different IGFBP1 gene variants may have an effect on blood pressure, and that the concentration of IGFBP1 in the serum of people with hypertension was lower compared to healthy individuals [53]. This suggests that these proteins may be relevant as blood pressure-related biomarkers, also in children.
Insulin correlated positively with IGFBP3 in OGTT and negatively with IGFBP1 and IGFBP2. Other studies also indicate a positive correlation between IGFBP3 and insulin levels [54]. It seems likely that IGFBP1 and IGFBP2 have a beneficial effect on the risk of developing diabetes [55,56]. In other studies, negative correlations between insulin and IGFBP1, as well as insulin and IGFBP2, were also shown [22,33,57]. Therefore, appropriate concentrations of the above IGF-axis proteins may play an important role in maintaining a normal glucose level in the body. On the other hand, insulin can affect the amounts of individual components of the IGF axis.
IGFBP1 and IGFBP2, as well as IGF2 during OGTT, correlated negatively with leptin, whereas the leptin receptor correlated negatively with IGFBP3 and positively with IGFBP1, IGFBP2, and IGFBP6. Higher levels of leptin are characteristic of obese children. In addition, it is possible that leptin contributes to the achievement of normal growth in obese children, despite the lowered levels of growth hormone, by acting directly on growth cartilage cells and indirectly through components of the IGF axis [58]. Ibarra-Reynoso et al. also noted a negative correlation between leptin and IGFBP1 [59]. A study in the obese pediatric population suggests that IGFBP2 may be a local preventive factor in adipose tissue accumulation: in the adipose tissue of obese people, compared to healthy ones, the concentration of IGFBP2 mRNA is lower [60]. Perhaps the documented effect of leptin inducing IGFBP2 mRNA expression and IGFBP2 production is not fully effective due to the leptin resistance found in obese subjects [61]. The concentration of soluble leptin receptor also has a significant effect: its increased level has been shown to bind leptin and inhibit its action, whereas a reduced amount may reflect a low level of membrane leptin receptor expression, which may contribute to the state of leptin resistance [62]. Cinaz et al. showed a significantly lower concentration of soluble leptin receptor in obese children compared to healthy children [63]. In our study, the correlations of IGFBP1 and IGFBP2 with the leptin receptor were opposite to the correlations of these proteins with leptin concentration; hence, leptin appears to play an important role in IGF-axis regulation.
In the case of ghrelin, we showed a positive correlation with IGF2 and IGFBP2 both fasting and after glucose administration, with IGFBP1 only after OGTT, and a negative correlation with IGFBP4, also after OGTT. It is worth mentioning that despite their statistical significance, these correlations were moderate rather than strong. Önnerfält et al. showed lower levels of ghrelin among obese children and noticed a negative correlation between the fasting ghrelin level and the body mass index, which is consistent with our results [64]. Ghrelin is a circulating orexigenic factor whose level is reduced in obese humans [65]. It is suggested that the changes in the concentration of this peptide may be an adaptive response to the increase in body weight and the amount of adipose tissue [66]. Additionally, we noted that cholecystokinin and glucagon-like peptide-1 correlated positively with IGFBP6.
Expression of IGF Proteins' Genes
Comparing the mRNA expression of the IGF-axis proteins, we obtained significantly lower values of IGF2 and IGFBP1 among obese children; IGFBP6 expression tended to be lower (p = 0.059) in the OB group. On the other hand, higher expression was noted for IGFBP7. At present, there are no comprehensive data in the literature on the expression of the IGF-axis genes in humans. The standard source of diagnostic material is peripheral blood, which is readily available. Mononuclear cells show high expression of genes involved in lipid homeostasis and rapidly detect signals of its disturbance [67]. The expression of these genes might serve as potential biomarkers of lipid metabolism abnormalities.
The study carried out in rats regarding IGF1 expression showed different results of the intensity of expression in obese individuals depending on the type of tissue [68]. Another study in obese adults showed a significantly lower value of IGF-1Eb mRNA isoform in muscle cells compared to the control group [69]. In mice, an association has also been demonstrated between decreased IGFBP1 mRNA expression and obesity [70].
A study in the adult population showed higher levels of IGFBP2 and IGFBP7 mRNA in obese subjects, and lower IGFBP4 mRNA expression. In our study, despite lower IGF1 mRNA expression in obese children, we showed no significant difference in IGF1 protein concentration. The decreased expression of IGF2, IGFBP1, and IGFBP6 corresponds to the decreased concentrations of these proteins in the group of obese children. Despite the decreased IGFBP7 concentration, we showed higher IGFBP7 mRNA expression. These results indicate that the regulation of transcription may be of greater importance for the levels of IGF2, IGFBP1, and IGFBP6 than for other components of the IGF axis. However, the mechanism regulating the concentrations of the IGF-family proteins seems to be complex and influenced by many additional factors besides epigenetic ones.
There are few data in the literature attempting to describe the relationship between obesity and IGF-axis proteins, e.g., the papers by Saitoh et al. and Ballerini et al. [71,72]. However, these works are based only on the concentrations of these proteins and not on their expression. Based on our preliminary observations, the role of the IGF axis in childhood obesity needs further investigation. It seems that a decreased level of IGFBP6 might play some role in obesity in the pediatric population.
Limitations
The main limitation of the study was the small sample size, which could affect the validity of the results. In the future, it would be advisable to carry out similar experiments on larger cohorts to further confirm the obtained results. Certain outcomes, which did not reach the statistical significance cutoff value, are likely to be found significant when investigated with a population of greater size.
Conclusions
The study revealed relationships between IGF-axis protein concentrations and gene expression and childhood obesity with its metabolic parameters. We suggest that the IGF axis may be involved in obesity development, but the exact mechanism cannot be distinctly defined based on this study. The relationship between GH, IGF, and IGFBPs, as well as their interactions with the body's cells, is complex and remains not fully understood. We found numerous correlations between IGFBP6 concentration and metabolic parameters of obesity. As the available data on the expression and concentrations of IGF-family proteins are inconsistent, further research on pediatric obesity, including larger populations, is necessary.

Informed Consent Statement: All parents, adolescent patients, and adult patients signed a written informed consent before blood sample collection.
Data Availability Statement:
The datasets generated for this study are available on request to the corresponding author.
Physical Multimorbidity and Social Participation in Adults Aged 65 Years and Older From Six Low- and Middle-Income Countries
Abstract Objectives Multimorbidity is common among older adults from low- and middle-income countries (LMICs). Social participation has a role in protecting against negative health consequences, yet its association with multimorbidity is unclear, particularly in LMICs. Thus, this study investigated the relationship between physical multimorbidity and social participation among older adults across 6 LMICs. Method Cross-sectional, community-based data including adults aged 65 years and older from 6 LMICs were analyzed from the WHO Study on Global AGEing and adult health survey. The association between 11 individual chronic conditions or the number of chronic conditions (independent variable) and social participation (range 0–10 with higher scores indicating greater social participation; dependent variable) was assessed by multivariable linear regression analysis. Results 14,585 individuals (mean age 72.6 [SD 11.5] years; 54.9% females) were included. Among individual conditions, hearing problems, visual impairment, and stroke were significantly associated with lower levels of social participation. Overall, an increasing number of chronic conditions was dose-dependently associated with lower levels of social participation (e.g., ≥4 vs 0 conditions: β = −0.26 [95% CI = −0.39, −0.13]). The association was more pronounced among males than females. Discussion Older people with multimorbidity had lower levels of social participation in LMICs. Future longitudinal studies are warranted to further investigate temporal associations, and whether addressing social participation can lead to better health outcomes among older people with multimorbidity in LMICs.
Rapid population aging is occurring in low- and middle-income countries (LMICs), and approximately 80% of the older population will be living in LMICs by 2050 (World Health Organization [WHO], 2013). This will inevitably be accompanied by an increase in noncommunicable diseases and multimorbidity in this setting (Afshar et al., 2015). Multimorbidity is defined as the presence of two or more chronic conditions and is an important risk concept due to its association with functional decline (Jindai et al., 2016), poorer quality of life (Peters et al., 2018), increased risk of premature mortality, and health care costs (Kingston et al., 2018). Studies from LMICs have reported a high prevalence of multimorbidity (e.g., 53.8%; Khanam et al., 2011). This is a concern in terms of health care costs, with evidence suggesting that achieving global chronic disease prevention would present an important benefit for the economy of LMICs (Abegunde et al., 2007). Indeed, the prevention of multimorbidity in the older population is becoming a key priority in these regions to avoid further burdening of the economy of LMICs (WHO, 2005).
A potential risk factor for multimorbidity, as well as an exacerbating factor of multimorbidity, is social participation. According to a recent content analysis, social participation is mostly defined as a person's involvement in activities which provide interactions with others in society or the community, and these involvements can happen when taking part in an activity to connect with others or contribute to society, as well as when interacting with others without doing a specific activity with them (Levasseur et al., 2010). Encouraging social participation in the aging population has been highly recommended by the WHO (2002) due to its protective role against chronic conditions (Holmes & Joseph, 2011) such as coronary heart disease (Sundquist et al., 2006) and hypertension (Tu et al., 2018). Previous literature has suggested that the social influence on health could happen through the shaping of social norms, such as encouraging healthier behaviors, as well as through provision of education and information on health (Pellmar et al., 2002), while one review suggested that the beneficial effect of social participation on self-reported health in older adults may be explained by social support and social cohesion within the wider community (Douglas et al., 2017). On the other hand, it is also possible for chronic conditions or multimorbidity to impede social participation, via factors such as limitations in physical function, pain, and discomfort (Bowling, 1995; Zimmer et al., 1997). Thus, it is possible that chronic diseases may lead to lower levels of social participation, and this in turn can lead to further worsening of chronic conditions by depriving patients of information related to health or the social support that they need to treat the chronic conditions. To date, the few studies on social participation and multimorbidity have yielded mixed results.
Some cross-sectional research has acknowledged multimorbidity as a risk factor for lower social participation in older European adults (Galenkamp et al., 2016), and some evidence suggests that symptoms play a key role in predicting social participation restrictions (Griffith et al., 2019). A longitudinal study found a negative association between social participation at baseline and number of chronic conditions developed 4 years later, a relationship mediated by quality of life and depressive symptoms (Santini et al., 2020). However, other studies reported no significant associations on the matter (Alaba & Chola, 2013;Chen et al., 2018;Singer et al., 2019). Given the conflicting results of previous studies and the fact that the majority of these studies have been conducted in high-income countries, clearly more research on this matter is necessary from diverse settings including LMICs.
Studies examining social participation and multimorbidity among older adults are important, as they are one of the most vulnerable populations in terms of access to health information and health care services (Makara, 2013). Furthermore, older individuals are more likely to live in more impoverished areas, lack access to nutritious food, be socially excluded, and experience more daily stress (Makara, 2013), while they require more support for their daily activities as they age (Avlund et al., 2004). These income and social inequalities experienced by older populations may be even more salient in LMICs, where there is presumably limited availability of public infrastructures (e.g., education, social welfare), financial constraints, high unemployment rates, and limited diagnosis and treatment services (Li et al., 2020). Thus, the aim of this study was to examine this association in adults aged 65 years and older from six LMICs (China, Ghana, India, Mexico, Russia, and South Africa), which broadly represent different geographical locations and levels of socioeconomic and demographic transition.
Method
Data from the Study on Global AGEing and adult health (SAGE) were analyzed. These data are publicly available through http://www.who.int/healthinfo/sage/en/. This survey was undertaken in China, Ghana, India, Mexico, Russia, and South Africa between 2007 and 2010. All countries were LMICs based on the World Bank classification at the time of the survey.
Details of the survey methodology have been published elsewhere (Kowal et al., 2012). In brief, in order to obtain nationally representative samples, a multistage clustered sampling design method was used. The sample consisted of adults aged 18 years and older with oversampling of those aged 50 years and older. Trained interviewers conducted face-to-face interviews using a standard questionnaire. Standard translation procedures were undertaken to ensure comparability between countries. The survey response rates were China: 93%, Ghana: 81%, India: 68%, Mexico: 53%, Russia: 83%, and South Africa: 75%. Sampling weights were constructed to adjust for the population structure as reported by the United Nations Statistical Division. Ethical approval was obtained from the WHO Ethical Review Committee and local ethics research review boards. Written informed consent was obtained from all participants.
Social Participation Index
As in a previous SAGE publication (Zamora-Macorra et al., 2017), a social participation index was created based on nine questions on the participant's involvement in community activities in the past 12 months (e.g., attended religious services, club, society, union, etc.) with answer options "never (coded = 1)," "once or twice per year (coded = 2)," "once or twice per month (coded = 3)," "once or twice per week (coded = 4)," and "daily (coded = 5)." The answers to these questions were summed and later converted to a scale ranging from 0 to 10 with higher scores corresponding to higher levels of social participation (Cronbach's α = 0.79).
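As a rough illustration only, the index construction described above can be sketched in code: nine frequency items coded 1 ("never") to 5 ("daily") are summed and rescaled to 0-10. The exact rescaling formula is not given in the text, so linear min-max rescaling is an assumption here, and the function name is hypothetical rather than taken from the SAGE codebook.

```python
def social_participation_index(item_scores):
    """Convert nine 1-5 frequency ratings into a 0-10 index.

    Each item is coded 1 ("never") to 5 ("daily"), so the raw sum
    ranges from 9 to 45; it is linearly rescaled to 0-10 (assumed
    min-max rescaling; the paper does not state the formula).
    """
    if len(item_scores) != 9 or any(s not in range(1, 6) for s in item_scores):
        raise ValueError("expected nine items coded 1-5")
    raw = sum(item_scores)  # 9 (all "never") .. 45 (all "daily")
    return 10 * (raw - 9) / (45 - 9)

# A respondent who never participates scores 0; daily in everything scores 10.
print(social_participation_index([1] * 9))  # → 0.0
print(social_participation_index([5] * 9))  # → 10.0
```

Higher scores then correspond to higher levels of social participation, matching the description above.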
Chronic Conditions and Multimorbidity
We included all 11 chronic physical conditions (angina, arthritis, asthma, chronic back pain, chronic lung disease, diabetes, edentulism, hearing problems, hypertension, stroke, and visual impairment) for which data were available in the SAGE. Chronic back pain was defined as having had back pain every day during the last 30 days. Respondents who answered affirmatively to the question "Have you lost all of your natural teeth?" were considered to have edentulism. The participant was considered to have hearing problems if the interviewer observed this condition during the survey. Hypertension was defined as having at least one of the following: systolic blood pressure ≥140 mmHg, diastolic blood pressure ≥90 mmHg, or self-reported diagnosis. Visual impairment was defined as having severe/extreme difficulty in seeing and recognizing a person that the participant knows across the road (Freeman et al., 2013). Diabetes and stroke were solely based on lifetime self-reported diagnosis. For other conditions, the participant was considered to have the condition in the presence of either one of the following: self-reported diagnosis or symptom-based diagnosis based on algorithms. We used these algorithms, which have been used in previous studies using the same data set, to detect undiagnosed cases (Arokiasamy et al., 2017;Garin et al., 2016). Specifically, the validated Rose questionnaire was used for angina (Rose, 1962), and other previously validated symptom-based algorithms were used for arthritis, asthma, and chronic lung disease (Arokiasamy et al., 2017). Further details on the definition of chronic physical conditions can be found in Supplementary Table S1. The total number of chronic physical conditions was calculated and categorized as no chronic conditions or one, two, three, and four or more chronic conditions. Multimorbidity was defined as having two or more chronic physical conditions, in line with previously used definitions (Garin et al., 2016).
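A minimal sketch of the counting and categorization step described above. The condition keys are illustrative labels (not SAGE variable names), and the diagnosis/symptom-based algorithms themselves are not reproduced; the sketch only shows how a per-respondent set of condition flags becomes a count, a category, and a multimorbidity indicator.

```python
# The 11 chronic physical conditions listed in the text (labels are my own).
CONDITIONS = [
    "angina", "arthritis", "asthma", "chronic_back_pain",
    "chronic_lung_disease", "diabetes", "edentulism",
    "hearing_problems", "hypertension", "stroke", "visual_impairment",
]

def count_conditions(respondent):
    """Count how many of the 11 conditions are flagged present.

    `respondent` maps condition name -> bool (self-reported or
    symptom-based diagnosis, per the algorithms in the text).
    """
    return sum(bool(respondent.get(c, False)) for c in CONDITIONS)

def categorize(n):
    """Collapse the count into the categories used in the analysis: 0-3, 4+."""
    return "4+" if n >= 4 else str(n)

def has_multimorbidity(respondent):
    """Multimorbidity: two or more chronic physical conditions."""
    return count_conditions(respondent) >= 2

r = {"hypertension": True, "arthritis": True, "stroke": False}
print(count_conditions(r), categorize(count_conditions(r)), has_multimorbidity(r))
# → 2 2 True
```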
Control Variables
The control variables were selected based on past literature (Kristensen et al., 2019), and included age, sex, wealth quintiles based on income, level of highest education achieved, marital status (married/cohabiting, never married, separated/divorced/widowed), living arrangement (alone or not), body mass index (BMI), physical activity, smoking (never, current, former), alcohol consumption (never, nonheavy, heavy), loneliness, and depression. BMI (kg/m²) was based on measured weight and height and was categorized as: <18.5 (underweight), 18.5-24.9 (normal weight), 25.0-29.9 (overweight), and ≥30.0 (obese). Physical activity levels were assessed with the Global Physical Activity Questionnaire (Bull et al., 2009). The total amount of moderate-to-vigorous physical activity in a typical week was calculated based on self-report. Those scoring ≥150 min of moderate-to-vigorous intensity physical activity were classified as meeting the recommended guidelines (coded = 0), and those scoring <150 min (low physical activity) were classified as not meeting the recommended guidelines (coded = 1) (WHO, 2010). Consumers of at least four (females) or five drinks (males) of any alcoholic beverage per day on at least 1 day in the past week were considered to be "heavy" drinkers. Those who had ever consumed alcohol but were not heavy drinkers were categorized as "nonheavy" drinkers (Koyanagi et al., 2015). Loneliness was assessed with the question "Did you feel lonely for much of the day yesterday?" with answer options "yes" or "no." Questions based on the World Mental Health Survey version of the Composite International Diagnostic Interview (Kessler & Üstün, 2004) were used for the endorsement of DSM-IV depression (American Psychiatric Association, 2000).
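The cut-points above translate directly into code. This is an illustrative sketch of the covariate coding (function names are my own, not SAGE variables); it covers only the BMI bands, the 150 min/week physical activity threshold, and the alcohol categories stated in the text.

```python
def bmi_category(bmi):
    """BMI bands used in the text (BMI in kg/m^2)."""
    if bmi < 18.5:
        return "underweight"
    if bmi < 25.0:
        return "normal"
    if bmi < 30.0:
        return "overweight"
    return "obese"

def low_physical_activity(mvpa_minutes_per_week):
    """1 = below the 150 min/week moderate-to-vigorous guideline, 0 = meets it."""
    return 1 if mvpa_minutes_per_week < 150 else 0

def alcohol_category(ever_drank, heavy):
    """Never / nonheavy / heavy drinker, following the definitions above."""
    if not ever_drank:
        return "never"
    return "heavy" if heavy else "nonheavy"

print(bmi_category(24.9), low_physical_activity(150), alcohol_category(True, False))
# → normal 0 nonheavy
```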
Statistical Analysis
The statistical analysis was performed with Stata 14.1 (Stata Corp LP, College station, TX). The analysis was restricted to those aged 65 years and older. The difference in sample characteristics between those with and without multimorbidity (i.e., two or more chronic physical conditions) was tested by chi-squared tests and Student's t tests for categorical and continuous variables, respectively. Multivariable linear regression analysis was conducted to assess the association between the individual 11 chronic physical conditions or number of chronic physical conditions (independent variable) and the social participation index score (dependent variable). In order to assess whether the association between the number of chronic physical conditions and social participation differs by sex, we tested for interaction by sex by including an interaction term (Number of chronic physical conditions × Sex) in the model. Because preliminary analysis showed that there is a significant interaction by sex, we stratified the analysis by sex for this analysis.
All regression analyses were adjusted for age, sex, wealth, education, marital status, living arrangement, BMI, physical activity, smoking, alcohol consumption, loneliness, depression, and country, except for the sex-stratified analysis which was not adjusted for sex. For the analysis on individual chronic conditions, all conditions were included simultaneously in the model. Adjustment for country was done by including dummy variables for each country in the model as in previous SAGE publications (Koyanagi et al., 2019). All variables were included in the models as categorical variables with the exception of age and the social participation index score (continuous variables). The sample weighting and the complex study design were taken into account in all analyses. Results from the regression analyses are presented as b coefficients with 95% confidence intervals. The level of statistical significance was set at p < 0.05.
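The analysis itself was run in Stata with survey (svy) estimation. As a hedged sketch of only the point-estimate part, survey-weighted least squares solves the normal equations (X' W X) b = X' W y, where W holds the sampling weights. The data below are simulated purely for illustration; design-based (Taylor-linearized) standard errors, as Stata's svy machinery would produce, are not reproduced.

```python
import numpy as np

def weighted_ols(X, y, w):
    """Survey-weighted least squares point estimates.

    Solves (X' W X) b = X' W y with W = diag(w). Only the point
    estimates are computed; design-based variance is not.
    """
    W = np.diag(w)
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

rng = np.random.default_rng(0)
n = 500
conditions = rng.integers(0, 5, n)      # simulated number of chronic conditions
age = rng.uniform(65, 90, n)            # simulated ages, 65+
w = rng.uniform(0.5, 2.0, n)            # simulated sampling weights
# Simulated outcome: participation falls by 0.4 points per condition.
y = 6.0 - 0.4 * conditions - 0.02 * age + rng.normal(0, 0.5, n)

X = np.column_stack([np.ones(n), conditions, age])  # intercept, predictors
b = weighted_ols(X, y, w)
print(b[1])  # recovers a slope close to the simulated -0.4
```

An interaction by sex, as tested in the paper, would simply add a (conditions × sex) column to X.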
Results
The final sample included 14,585 individuals aged 65 years and older (5,360: China; 1,975: Ghana; 2,441: India; 1,375: Mexico; 1,950: Russia; 1,484: South Africa). The sample characteristics are provided in Table 1. The mean (SD) age was 72.6 (11.5) years, while 54.9% were females. The frequency of each social activity included in the social participation index by each country is reported in Supplementary Table S2. The prevalence of different types of chronic conditions by sex is shown in Table 2. The level of social participation, as expressed in terms of the mean social participation index score, was lower among those with a greater number of chronic physical conditions ( Figure 1). The association between the individual chronic physical conditions and the social participation index score estimated by multivariable linear regression is shown in Figure 2. Significantly lower levels of social participation were observed for those with hearing problems, visual impairment, and stroke. In terms of the number of chronic physical conditions, overall, levels of social participation decreased with increasing number of chronic physical conditions (Table 3). However, when the analyses were stratified by sex, only four or more (vs no chronic conditions) chronic physical conditions were significantly associated with lower levels of social participation among women, while for men, one to four chronic conditions were all significantly associated with lower levels of social participation.
Main Findings
To the best of our knowledge, this is the first study to investigate the association between social participation and multimorbidity among older adults from LMICs. Our results demonstrate a concerning linear decrease in the level of social participation with increasing number of chronic conditions. Significant sex differences were also observed, with having any number of chronic conditions being significantly associated with lower levels of social participation among men, whereas among women, only four or more chronic conditions were significantly associated with lower levels of social participation. These study results highlight the importance of taking social participation into consideration in public health interventions to tackle multimorbidity, and its health and social consequences in LMICs.
Interpretation of Findings
In terms of individual chronic conditions, hearing problems, visual impairment, and stroke were significantly associated with lower levels of social participation in our study. The lower level of social participation found in people with hearing problems and visual impairment may be explained by the limited ability of people with these conditions to establish social ties, as vision and hearing abilities are essential for communication. For example, visual impairment in later life has been associated with increased social isolation and diminished social skills (Thurston et al., 2010), while hearing loss or impairment negatively impacts the quality of one's social life, especially for older adults, due to the associated psychosocial consequences of hearing loss and diminished ability in understanding speech (Picinini et al., 2017). One survey of seniors aged 65 and over reported a significantly lower level of social participation in seniors with visual impairment, compared to those without, even after adjusting for a range of covariates (Jin et al., 2019). In terms of stroke, the associated physical disabilities and psychosocial problems (e.g., fatigue, depression, and anxiety) may hinder social interaction poststroke (Bergersen et al., 2010; Hackett et al., 2014; Pan et al., 2011), as the limited mobility caused by this condition may restrict social participation, especially outside the home.
In the current study, social participation was associated with several individual chronic conditions and multimorbidity. This is in line with previous research, which has found that older adults with health difficulties tend to report poor engagement in social activities (Strain et al., 2002). In particular, diminished functional ability (Bowling, 1995), severity of physical conditions, and associated pain and discomfort could contribute to poor social participation (Zimmer et al., 1997). However, we may speculate that the association between social participation and multimorbidity is bidirectional. In a meta-analysis determining the relationship between social relationships and the risk for mortality, the authors found a 50% increased likelihood of survival for those with stronger social relationships even after adjustment for a variety of confounders, and the impact of social relationships on mortality was found to be comparable with a number of lifestyle risk factors, including smoking, alcohol consumption, and BMI (Holt-Lunstad et al., 2010). Based on the buffer effect model (Cohen, Gottlieb, and Underwood, 2000), social participation could potentially buffer against the negative impact from life stressors, and thus impede the
detrimental effect of social stressors on physical health. It is also possible that regular and active social participation could motivate a healthy lifestyle, including more physical activities, maintaining a healthy weight, and seeking health care when in need. Given this, regardless of temporal associations, the mere fact that individuals with multimorbidity have lower levels of social participation may be an issue since they may be exposed to more stressors, lack information on how to maintain a healthy lifestyle, or lack support that is necessary to treat their chronic diseases, and this may lead to worsening in health status. In our study, social participation was found to be negatively associated with loneliness, but surprisingly positively linked to living alone. One explanation could be that those who live alone may attempt to maintain their level of social engagement and social relationships with others by increasingly participating in social activities or events that are outside their households. Most importantly, our results also underline that, although low social participation could cause loneliness (Pettigrew & Roberts, 2008), the relationship between living alone and social participation is not the same as that of loneliness, hence living alone should not be considered synonymous with loneliness. Loneliness is an unpleasant experience that occurs when there is a mismatch between a person's desired and perceived availability and quality of social interactions/relationships (Peplau & Perlman, 1982), and a systematic review of loneliness interventions (Ma et al., 2020) has emphasized that simply increasing social opportunities or social participation is not an effective approach in improving loneliness. Therefore, our study further highlights that living alone and loneliness should not be used interchangeably in the literature, and strategies that specifically address loneliness are warranted.
The association between multimorbidity and lower social participation was also stronger in men than women. This sex difference was unlikely to be explained by the different patterns of chronic conditions among men and women, as conditions that are particularly strongly associated with lower levels of social participation (e.g., hearing problems, visual impairment, stroke) were not more prevalent among men. Although the reasons for this finding can only be speculated, several mechanisms may be suggested. First, compared to men, women in LMICs may be more confined to their traditional gender role, for example, taking care of (grand-)children and domestic chores (Kuper & Marmot, 2003), and be more likely to have had social interaction at home throughout their life. In this case, developing diseases that hinder leaving the house may have less impact on their level of social participation. Men in LMICs, on the other hand, may be more likely to work full-time, even at an older age, and chronic health conditions may therefore limit their opportunities to engage in regular work
activities or interact with people at work, further affecting their level of social participation. An alternative explanation for this pattern could be that women are less likely to lose their social contacts even when their mobility is impacted by chronic physical conditions, as they tend to have a larger social network and more social contacts with their children and friends compared to men (Beach & Bamford, 2014). This greater social connectedness of women may be reflected in our finding that social participation was negatively linked to being separated/divorced/widowed and never married in men, but such a relationship was not significant in women. Finally, men and women may cope with life stressors differently. When facing decreased social participation in their social environment or stressors from the workplace, men may be more likely to engage in prolonged high-effort coping in order to overcome these perceived barriers in their lives (Subramanyam et al., 2013), and this type of coping strategy may lead to negative health outcomes (James et al., 1992). Another interesting sex difference that we found was that higher education levels were significantly associated with greater levels of social participation only among women but not men. This may be because women with low levels of education are more likely to engage in their traditional caring role at home, while this may not be the case in men. This suggests that education among women may protect them from being socially isolated, as this may enable them to increase their chances of obtaining a job and consequently have a more diverse social network consisting of friends, family, clients, and colleagues at work.
Policy Implications
The current study reports several significant findings on multimorbidity and social participation, with important implications for future research and clinical practice. In particular, these results have crucial implication in LMICs, where expenditure for health care may be highly burdensome especially in countries without universal health insurance schemes, even leading to catastrophic health expenditure (Kinfu et al., 2009). Facilitating social participation should be recognized as an ultimate goal at a national level, in order to buffer against increased income disparities and health inequalities in LMICs (Hu et al., 2014). Multidimensional initiatives, including those focusing on social (e.g., cultural recreation, volunteering opportunities), psychosocial (e.g., well-being, quality of life), and material (e.g., access to public transportation) could also be broadly introduced, all of which could be significant contributors to successful ageing (WHO, 2002).
Another key implication is that health care providers should be mindful about those populations with a high likelihood of poor social participation, for example, those who suffer from stroke, visual impairment, hearing problems, and most importantly, people with multimorbidity. By collaborating with the government, health care providers should recognize any difficulties that may hinder social participation (e.g., reduced physical function, financial difficulties, housing problems, poor transportation) in people with multimorbidity and certain physical conditions and refer them to relevant social services or introduce them to community-based programs during routine care (e.g., peer support groups and befriending programs), with the potential to improve social participation among patient groups.
Strengths and Limitations
This is the first study addressing the existing knowledge gap in terms of the association between social participation and multimorbidity among older people from LMICs. This is in line with the WHO Commission on Social Determinants of Health framework (2009), which emphasizes the inadequacy of the current focus on biological or physical factors as singular determinants of health. The strengths of the study include the large sample size and the use of nationally representative data sets. However, several limitations should be considered. First, the evaluation of chronic conditions was mostly based on self-reported measures, which may potentially lead to reporting bias. Second, although this study included a list of chronic conditions common in old age, we lacked data on diseases such as cancer. Thus, the results may differ with more chronic diseases being included. Third, there is no conventional way of assessing social participation, but it is a common method to construct participation variables from summary participation indices. Lastly, the cross-sectional nature of the current study hampered our interpretation of causality and temporality between social participation and physical multimorbidity. Therefore, future longitudinal studies are warranted to further investigate temporal associations, as well as the mechanisms by which social participation may impact physical multimorbidity or vice versa.
Conclusions
In summary, the results of our study on older adults from six LMICs suggest that low social participation is associated with multimorbidity. Although the temporal association could not be established in our study, the mere fact that people with multimorbidity are more likely to have lower levels of social participation is problematic, as both multimorbidity and low social participation are associated with adverse outcomes, while it is possible that low levels of social participation may exacerbate multimorbidity. Enhancing social participation in people with multimorbidity may create a sense of belonging and resilience (Choi & Matz-Costa, 2018), enhance access to leisure activities and health care services (Fone et al., 2007), strengthen emotional and instrumental support, make older people feel that they are loved and being cared for (Jen et al., 2010), and ultimately promote successful and healthy ageing (Choi & Matz-Costa, 2018). Future studies should investigate how social participation can be promoted among people with multimorbidity, while studies on whether the promotion of social participation may lead to a reduction in multimorbidity are also warranted.
Supplementary Material
Supplementary data are available at The Journals of Gerontology, Series B: Psychological Sciences and Social Sciences online.
Welcome to Silence
Editorial

Welcome to Silence [1], a new open-access journal devoted to RNA silencing and other pathways directed by non-coding RNAs. Silence springs from the extraordinary, yet brief, history of RNA silencing. In just two decades, we have seen the anomalous properties of plant and fungal transgenes connect with a series of amazing experiments in which injected double-stranded RNA triggered silencing in worms. These diverse lines of research revealed the essence of RNA interference (RNAi), and the importance of these discoveries has been recognized through numerous awards and accolades including a Nobel Prize for Fire and Mello [2,3].

Our current understanding of RNA silencing derives from experiments performed in organisms from three kingdoms, experiments that directly inspired billion-dollar investments by biotechnology and pharmaceutical companies to use RNA silencing both to diagnose and to treat disease in humans. Both small interfering RNAs and microRNA-blocking antisense oligonucleotides are now in human clinical trials [4]. Some of the first GM plants to be produced exploited RNA silencing, although the mechanisms were not well understood at the time [5,6]. The study of RNA silencing produced enabling technology that allows each gene in a sequenced genome--even cultured human cells--to be knocked out or knocked down, providing a lifeline to functional genomics. There can be no question that RNA silencing research has had an impact!

RNA silencing has excited scientists and non-scientists alike: witness front-page headlines in the American and British press [7-10], even before the Nobel Prize. Such interest, of course, reflects the power of RNA silencing as biotechnology. But equally important is that RNA silencing exemplifies the elegant creativity of natural selection.
Just as we might gaze in awe at a blue whale in the ocean (how can such a creature exist?), we marvel at the simple principles and complex molecular machines that underlie RNA silencing pathways. The role of silencing as an antiviral defence in plants and invertebrates illustrates this point: it uses the sequence of the invading virus itself to define the targets to be repressed and so has infinite specificity [11]. As a defense system RNA silencing is unsurpassed.

The study of RNA silencing has now travelled far from its posttranscriptional roots. The link between RNA and epigenetic silencing by chromatin modification, for example, is well established in many organisms [12]. In other developments, the discovery of novel families of small silencing RNAs continues to expand the universe of guides far beyond the original microRNA and small interfering RNA pioneers [13]. This diversity is not mere molecular icing on the RNAi cake, because silencing underpins biological phenomena as diverse as virus resistance, control of chromosome architecture, transposon activity, genome rearrangement, and development, as well as responses to biotic and abiotic stimuli [14].

In parallel, other types of RNA-mediated mechanisms have been discovered, from CRISPR RNAs [15] in bacteria to unexpectedly large families of non-coding RNAs derived from the intergenic regions of animals and plants [16]. These discoveries have been informed by, and in turn enrich, the intellectual framework of RNAi. Thus, Silence will enthusiastically publish papers on these and other RNA-based mechanisms in addition to studies of the canonical RNA silencing pathways.

Papers with (RNA) AND (silence OR silencing) in their titles or abstracts first appeared in the mid 1990s; there are now more than 1,400 each year and the trend is increasing (source: Web of Science) [17]. So why introduce a new journal if these papers are already finding a home? Two answers explain our motivation in founding Silence.

First, the history of silencing is one of extensive cross-fertilization among different research communities. Such inter-organism as well as inter-disciplinary collaboration and discussion explains the remarkable productivity of our field. Unfortunately, the expansion and diversification of RNA silencing research threatens to fragment our intellectual community. Increasingly, the opportunity for a plant researcher, for example, to read a paper on genomic rearrangements in protozoa, or for a fly geneticist to learn of the discovery of a novel mechanism of transposon control in fungi, is being lost. Of course, existing journals devoted to the study of RNA will always publish some RNA silencing papers, but it is unreasonable to think that these journals can allocate a high proportion of their pages to our field. As RNA silencing research diversifies, we risk losing the excitement generated by the common enthusiasm for RNA silencing that unites distinct research communities. Silence seeks to nurture that enthusiasm by sustaining the interdisciplinary flavor of our field.
Second, Silence is a response to the rise of genomics and high throughput sequencing. These developments challenge molecular biologists because they demand a new, computational outlook on biological data. There is an enormous opportunity for bioinformaticians, mathematicians and statisticians to work together with experimental biologists to meet this challenge. They will extract useful and interesting information from genome sequences and large datasets and integrate them with similarly large datasets dealing with various other "omic" analyses of experimental systems. Modeling as a basis for hypothesis generation and testing will become increasingly important.
Silence can help molecular biologists and geneticists communicate effectively with computational scientists. We would be pleased, for example, to publish computational tools and research papers that use "dry science" to investigate RNA silencing or non-coding RNAs. We welcome reviews and commentaries in which computationalists introduce novel ideas, approaches and concepts in a style that is accessible to experimentalists.
This inaugural issue of Silence presents a selection of articles on different topics and an insightful review to illustrate the type of paper that we would like to include as the journal grows. Our renowned and diverse editorial advisory board [18] ensures fair but rigorous peer review, and our open access publication pipeline provides an efficient and easy-to-use system run by the well-established BMC team. All BioMed Central journals are included in PubMed Central [19] and other freely accessible full-text repositories. This complies with the open access policies [20] of many funders, including those of the Howard Hughes Medical Institute, NIH, and Wellcome Trust [21][22][23].
We look forward to receiving your manuscripts for publication and your feedback about the journal. Silence is meant to be yours; your comments and submissions will ensure it succeeds as the hub of the RNA silencing field.
Author Details
Editors-in-Chief, Silence, BioMed Central, London, UK
|
v3-fos-license
|
2023-07-11T15:29:49.868Z
|
2023-07-01T00:00:00.000
|
259535120
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/1996-1944/16/13/4858/pdf?version=1688640753",
"pdf_hash": "c07d0c16497437e4e1b2d2f3bd28f8299d5eb5d5",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2371",
"s2fieldsofstudy": [
"Engineering",
"Materials Science"
],
"sha1": "d3b6a82bfede7226def010aef8f92b313b2693d4",
"year": 2023
}
|
pes2o/s2orc
|
Crack Size and Undermatching Effects on Fracture Behavior of a Welded Joint
Crack size and undermatching effects on fracture behavior of undermatched welded joints are presented and analyzed. Experimental and numerical analysis of the fracture behavior of high-strength low-alloyed (HSLA) steel welded joints with so-called small and large cracks in undermatched weld metal and the base metal was performed, as a part of more extensive research previously conducted. The J integral was determined by direct measurement using special instrumentation, including strain gauges and a CMOD measuring device. Numerical analysis was performed by the 3D finite element method (FEM) with different tensile properties in BM and WM. Results of J-CMOD curve evaluation for SUMITEN SM 80P HSLA steel and its weld metal (WM) are presented and analyzed for small and large cracks in tensile panels. This paper is focused on some new numerical results and observations on crack tip fields and constraint effects of undermatching and crack size, keeping in mind previously performed experiments on the full-scale prototype. In this way, a unique combined approach of experimental investigation on the full-scale prototype and tensile panels, as well as numerical investigation of mismatching and crack size effects, is achieved.
Introduction
Welded joint heterogeneity has an important role in the behavior of steel welded joints, particularly if crack-like defects are present, causing local plastic strains. Even in the case of the filler metal being the same class as the base metal, a welded joint has different tensile properties, toughness, fracture toughness, and fatigue crack growth rate as a consequence of heterogeneous microstructure, at least in four zones of the joint (base metal-BM, weld metal-WM, coarse-grain heat-affected zone-CGHAZ, fine-grain heat-affected zone-FGHAZ), [1][2][3][4][5][6][7][8][9][10][11]. Different tensile properties are analyzed and evaluated in recent papers [1][2][3], where the digital image correlation (DIC) technique was used to measure strains, and the finite element method (FEM) was used to calculate stress distribution in specimens with a rectangular cross-section to evaluate true stress-strain curves more precisely. The effect of material heterogeneity on tensile properties and fracture toughness is presented in paper [4], indicating WM as the weakest zone of the welded joint made of SUMITEN SM 80P HSLA steel, while different aspects of fracture toughness were analyzed for welded joints made of different HSLA steels and presented in papers [5][6][7][8][9]. Charpy toughness and fracture toughness in different zones of a welded joint were analyzed in paper [10], indicating a strong effect of material heterogeneity and HAZ as the weakest link in SUMITEN SM 80P HSLA steel. Also, the fatigue crack growth rate in different zones of two HSLA steel welded joints was evaluated experimentally by using the Paris law, as presented in papers [11,12].
Fracture behavior of cracked undermatched welded joints made of HSLA steel was analyzed and presented in a number of papers, where so-called strength mismatching was defined as the ratio between WM and BM yield strength (YS). In [13,14], HSLA steel in a quenched and tempered condition, corresponding to the grade HT 80, was investigated. The flux cored arc welding process (FCAW), with CO2 as shielding gas, was used, and two different tubular wires were selected as filler metals. Three differently undermatched welded joints were analyzed using results of testing the notched specimens with a through-thickness crack front positioned partly in WM, partly in HAZ, and partly in BM. It was shown that the presence of different microstructures along the fatigue pre-crack front had an important effect on the critical crack tip opening displacement (CTOD), indicating that the fracture behavior strongly depends on the proportion of ductile base material, as well as on the size and distribution of the mismatching factor along the vicinity of the crack front. In paper [15], the fracture mechanics analysis of specimens with surface notch tips completely embedded in the heat-affected zones was performed. The results showed that the strength mismatching of a welded joint caused a redirection of the crack propagation towards the low-strength region of the welded joint. It was also shown that even in the case of overmatched welded joints, but with a soft root layer, it was possible to achieve satisfying crack resistance, proving that such a type of welded joint is preferable for the welding of HSLA steels, because it enables the manufacturing of a welded joint without preheating.
More recently, full-scale experimental investigation was conducted on welding joints made of APL X80 wide plates [16]. Tensile tests were performed on Ø1422 mm × 25.7 mm X80 pipeline with original and repaired welding joints, equipped with strain gauges and using digital image correlation (DIC) method to measure strains and evaluate difference in loading capacity. In paper [17], effects of multiple defects on an overmatched welded joint fracture behavior under static loading were investigated numerically, by FEM, and experimentally, by DIC. It was shown that even in the case of a ductile structural steel (S235), fracture can occur at a relatively low stress level. Another study with DIC was performed to obtain the strain distribution in undermatching X80 pipe weld joints under uniaxial tensile loading, [18]. The results showed that the maximum strain was in the WM.
The yield strength mismatch in X80 pipeline steel welds, obtained by gas metal arc welding (GMAW) process, was estimated using instrumented indentation [19]. All three different levels of WM yield strengths (even, over, and undermatched) were investigated. In [20], a method for testing the local properties of girth welded joints in pipelines is proposed based on DIC measurement to identify the true stress-strain curves and local mechanical properties. Also, FEM, based on the GTN model, was used to verify the local mechanical properties of girth welded joints obtained by using DIC.
The focus here is on crack size and undermatching effects on fracture behavior of a welded joint of SUMITEN SM 80P HSLA steel. From a design point of view, strength overmatching is preferable, so that the weld metal (WM) has higher YS compared to the base metal (BM), but this is not always a good idea from a structural integrity point of view, as explained in papers [21,22]. As a general rule, HSLA steel's sensitivity to cracking increases with increasing level of strength, so undermatching is a more likely design solution for YS above 700 MPa [21,22]. Anyhow, it is not as simple as just avoiding cracking, since eventual plastic strain (due to stress concentration and low YS) would be localized in the weld metal until its strain hardening capacity is partly or fully exhausted, before the base metal would even start to yield [21][22][23][24].
In welded pressure vessels, stress concentrations caused by geometrical changes, including inevitable weldment imperfections, such as angular distortion or misalignment, can produce local plastic strains, possibly exhausting a portion of the strain hardening capacity. In these circumstances, the question arises of how cracks would behave [21][22][23][24]. As an example of such a problem, one can use the penstock in Reversible Hydro Power Plant "BAJINA BASTA" (RHPP BB), designed with a reduced safety factor [21,22] to fulfill the basic requirement-to make one instead of two penstocks. Consequently, HSLA steel was used, SUMITEN 80P, with YS around 700 MPa, but only after extensive experimental research of the prototype, as shown in Figure 1, to prove its fitness-for-purpose, as described in [21,22]. Later on, this approach was named the structural integrity [25]. Although this issue was a topic of a number of papers decades ago, only recently some of the most intriguing results have been explained in detail by using the finite element method for precise analysis of the stress-strain state, both for the prototype [26][27][28] and for tensile panels with large and small cracks, as shown in [29,30] for the BM. Here, the attention is focused on some new numerical results for the WM and observations on crack tip fields and constraint effects due to undermatching and crack size effects, obtained by comparison with the previous results for the BM. The novelty in this approach is the unique combination of experimental investigation on the full-scale prototype and tensile panels, as well as numerical investigation of mismatching and crack size effects in the case of an undermatched welded joint with a crack in the weld metal. A similar approach was applied in [6,31,32] but focused on cracks in HAZ and constraint effects. 
Mismatching and constraint effects in a different HSLA steel (Niomol 490) with differently positioned cracks in weld metal were analyzed in [33] by using a micromechanical approach to simulate crack growth. Such an approach requires determination of parameters, which are beyond the scope of this investigation but could be of interest for a future work.
Materials 2023, 16, x FOR PEER REVIEW
Base Metal and Welding
The base metal in this research was SUMITEN SM 80P HSLA steel produced in Japan, used for construction of a large penstock in RHPP BB in Serbia, as well as for the full-scale prototype, as shown in Figure 1. Chemical composition of BM and WMs is given in Table 1, while the tensile properties are given in Table 2, indicating an undermatched welded joint both in the case of shielded manual arc welding (SMAW) and submerged arc welding (SAW), which were used alternatively to produce the full-scale prototype. The mismatching ratio was 0.91 for SAW and 0.95 for SMAW, in accordance with the high YS of the BM. Both welding processes, SMAW and SAW, were used for penstock welding, and were also applied under the same conditions to produce the full-scale prototype, which was used for extensive testing to prove fitness-for-service [21,22]. The basic coated low-hydrogen electrode LB 118 for SMAW and core wire US 8013 with M38F flux for SAW welding, produced by "Kobe Steel", Kobe, Japan, were used. Post-weld heat treatment was applied to release residual stresses.
Tensile Panels
Tensile panels (TP) were made from the base metal (SM 80P) and also from welded joints of different mismatching levels, with the so-called large surface crack (LSC), 5 × 24 mm, and small surface crack (SSC), 2.5 × 16 mm, as shown in Figure 2. They were tested in the scope of the fitness-for-service experimental investigation, to obtain better insight into mismatching effects on stress-strain behavior, as shown in [21,22].
Numerical Analysis-FEM
Three-dimensional FE models were developed to simulate the behavior of tensile panels with SSC and LSC. The effects of crack tip fields, mismatching, and constraints were carefully studied using Abaqus, as described in [29,30,34]. Base and weld metal were assumed to behave in an isotropic elastic-plastic manner. The finite element mesh was made of regular elements and refined in the vicinity of the crack tip with 0.2 × 0.2 mm elements. As an example, such a mesh is shown in Figure 3 for TP with SSC. Crack growth was not simulated, i.e., the analysis was performed for stationary cracks.
Only a quarter of the specimen is modeled due to symmetry conditions. The 20-node quadratic isoparametric elements, C3D20R, were used-26,932 of them for TP with SSC in weld metal (WM SSC model) and 19,176 for TP with LSC in weld metal (WM LSC model). Some details of the FE mesh for TP with LSC and SSC are shown in Figure 4.
The CMOD is obtained by tracking the positions of the two nodes located at the crack mouth, while the values of the J integral are obtained by the domain integral method. The domain was sufficiently distant from the crack front to ensure the convergence of the J integral values.
Stress-Strain Curves
The most intriguing part of FEM simulation is how to evaluate stress-strain curves for all zones of a welded joint (BM, WM, and HAZ, with both coarse and fine grain subzones). It was shown in [1][2][3]9] how it can be done for true stress-strain curves, based on the iteration procedure originally introduced for engineering stress-strain curves in [35,36]. In the case analyzed here, a slightly simplified procedure was adopted, since the stress-strain curves were evaluated for BM and WM only, having in mind that the mismatching between BM and WM was in our focus, so the effect of HAZ and its subzones was neglected. Both cracks, SSC and LSC, positioned in WM, grew only through WM, i.e., they did not enter into the HAZ, and the same holds for the BM, which means that bi-material modeling approach can be applied.
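The conversion from engineering to true stress-strain values that underlies this kind of FEM material input can be sketched as follows. This is a minimal illustration of the standard conversion step only (valid up to the onset of necking); the strain and stress points below are invented for illustration, not the measured SM 80P or WM data, and the iterative fitting procedure of [35,36] is not reproduced:

```python
import math

def true_curve(eng_strain, eng_stress):
    """Convert an engineering stress-strain curve to true stress-strain
    (valid up to the onset of necking): eps_t = ln(1 + eps_e),
    sig_t = sig_e * (1 + eps_e)."""
    true_strain = [math.log(1.0 + e) for e in eng_strain]
    true_stress = [s * (1.0 + e) for s, e in zip(eng_stress, eng_strain)]
    return true_strain, true_stress

# Hypothetical engineering strain [-] and stress [MPa] points for a
# ~700 MPa-class steel -- illustrative only, not the measured data.
eps_eng = [0.004, 0.02, 0.05, 0.10]
sig_eng = [700.0, 720.0, 760.0, 800.0]
eps_true, sig_true = true_curve(eps_eng, sig_eng)
```

Pairs of (true plastic strain, true stress) in this form are what an elastic-plastic FE code such as Abaqus accepts as hardening input for each material zone.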
True stress-strain curves, as used in this research, are shown in Figure 5, indicating better agreement between numerical and experimental values for BM than for WM.

Numerical Results of BM and WM with SSC and LSC

Von Mises Stress Distribution

Figure 6 shows the distribution of von Mises stress in the WM for LSC (Figure 6a) and in the BM for LSC (Figure 6b), whereas Figure 7 shows its distribution for SSC in the same way. Based on the procedure for CMOD and J calculations described in #2.3, the J-CMOD curves are obtained for BM SSC and WM SSC (Figure 8), as well as BM LSC and WM LSC (Figure 9). One can see that differences between experimental and numerical values increase with increasing J, which is a consequence of numerical modeling without taking crack growth into account. Anyhow, crack growth was not in the focus of this research.
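For reference, the equivalent (von Mises) stress compared in Figures 6 and 7 is computed from the six Cauchy stress components at each integration point; a minimal sketch of that standard post-processing step (the stress values in the example are hypothetical, not results from the models above):

```python
import math

def von_mises(sx, sy, sz, txy, tyz, tzx):
    """Equivalent (von Mises) stress from the six Cauchy stress
    components, as post-processed at each FE integration point."""
    return math.sqrt(
        0.5 * ((sx - sy) ** 2 + (sy - sz) ** 2 + (sz - sx) ** 2)
        + 3.0 * (txy ** 2 + tyz ** 2 + tzx ** 2)
    )

# Uniaxial tension (hypothetical 700 MPa): the equivalent stress equals
# the applied stress; a purely hydrostatic state gives zero.
uniaxial = von_mises(700.0, 0.0, 0.0, 0.0, 0.0, 0.0)
hydrostatic = von_mises(100.0, 100.0, 100.0, 0.0, 0.0, 0.0)
```

Because the hydrostatic part drops out, contour plots of this quantity highlight where yielding is driven, which is why the stress redistribution between WM and BM is visible in such plots.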
Discussion
Fracture behavior of the undermatched welded joint was analyzed regarding the effect of BM and WM mismatching on the crack tip fields, as well as the effect of crack size (SSC vs. LSC). Figure 6 shows significant difference in stress distribution around LSC due to the mismatching effect, as shown in Figure 6a (WM) compared with Figure 6b (BM). Namely, contrary to the BM, where the maximum stress is located at the crack tip, maximum stress in the undermatched welded joint appears both at the crack tip in the WM and in the BM, next to it. Such a redistribution of stresses is beneficial for welded joint resistance to cracking, since it provides reduced crack driving force in WM.
The same comparison can be made for the SSC, Figure 7. As one can see from Figure 7a, the maximum stress in the undermatched welded joint appears both at the crack tip in the WM and in the BM, next to it, whereas the maximum stress in the BM is located at the crack tip. Such a redistribution of stresses indicates more favorable fracture behavior of the WM, as in the case of LSC. One should notice that in both cases, SSC and LSC, the beneficial effect of mismatching is possible only if the WM is capable of sustaining at least a small amount of plastic strain. This condition is fulfilled in the analyzed case, as shown in Table 2, since the WM elongation is at least 22%.
On the other hand, one can see from Figures 6 and 7 that differences in stress fields for the LSC and SSC are not significant, both for the WM and BM. In the case of WM, the maximum stress is the same, while in the case of BM, the maximum stress is somewhat higher for the SSC compared to the LSC. Obviously, the effect of mismatching is dominant in the case analyzed here.
As one can see from Figure 8a, experimental and numerical values for maximum J in the case of WM with SSC are at the same level, ca. 1000 N/mm, with a small difference in maximum CMOD values (experimental value 1.8 mm, numerical 1.6 mm), probably due to the pop-in effect. In the case of WM with LSC, numerical and experimental results for the maximum CMOD value agree well, but the maximum J value is significantly higher when calculated. One can also notice from Figure 8a,b that FEM values for CMOD (at the same level of J) are consistently lower than the experimental ones, probably due to crack growth effects, as already mentioned. Obviously, for a shorter crack one obtains smaller values of CMOD. Nevertheless, the differences are not significant.
Agreement between experimental and numerical results is better in the case of the BM, Figure 9, which was expected since the modeling of BM tensile behavior is simpler and thus more precise than the modeling of WM, as already shown in Figure 5. One should notice that in the case of BM, both for SSC and LSC, numerical CMOD values are higher than experimental ones for the same level of J integral, contrary to the WM behavior. Obviously, crack growth does not play an important role in the case of BM, as also shown in [5], indicating only 1 mm of crack growth, compared with more than 5 mm in the case of WM.
From Figures 8 and 9 it is also clear that the agreement between experimental and numerical results is better for BM than for WM, as one could expect due to better agreement of BM stress-strain curves than of WM ones, Figure 5.
Another important aspect of fracture behavior of an undermatched welded joint is the comparison with the overmatching effect, which was analyzed and presented in [37] for two cases-the crack tip positioned in the coarse-grain (CG) HAZ and in the fine-grain (FG) HAZ. For both cases, it was shown that the overmatching effect was beneficial for the overall welded joint resistance to crack growth, even though local crack growth was promoted by a high tri-axial stress state in the case of the crack tip in the FG HAZ.
One should notice that in both cases, under- and overmatching effects are favorable for fracture behavior, since BM acts as a barrier to crack growth. Actually, heterogeneity in this case is beneficial, since the welded joint behaves better than WM and/or HAZ would behave as homogeneous structures.
Conclusions
Experimental and numerical methods have been used to characterize fracture behavior of undermatched welded joints, made of HSLA steel SM 80P. Based on this research, the following conclusions are obtained:
• Mismatching effects play a more significant role in fracture behavior of undermatched welded joints than crack size, since the crack tip fields are influenced mostly by mismatching, and to a smaller extent by crack size.
• Crack tip fields in the case of an undermatched welded joint are favorable, since high stresses are re-distributed from the crack tip to the BM.
• Numerical results agree well with the experimental ones, with increasing differences in the case of WM due to crack growth, which was not taken into account in numerical modeling.
• Differences between numerical and experimental results in the case of WM are larger than in the case of BM, which is attributed to the modeling of stress-strain curves, being less precise in the case of WM.
|
v3-fos-license
|
2021-12-23T14:23:39.394Z
|
2021-12-01T00:00:00.000
|
245425834
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://progressinorthodontics.springeropen.com/track/pdf/10.1186/s40510-021-00391-3",
"pdf_hash": "77ea73400c9f702852b2c4a6d3b60a8e06b5efa2",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2374",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "f33cfd2ef4409b3eec86136848d9d76d9d465397",
"year": 2021
}
|
pes2o/s2orc
|
Three-dimensional oropharyngeal airway changes after facemask therapy using low-dose computed tomography: a clinical trial with a retrospectively collected control group
Aims This study aimed to evaluate the short-term oropharyngeal airway volumetric changes in growing Class III maxillary-deficient patients treated by facemask without expansion compared with untreated Class III controls, using low-dose computed tomography. Methods Eighteen maxillary-deficient children (nine boys, nine girls) with a mean age of 7.81 ± 0.84 years were treated with a maxillary bonded bite block and facemask (FM). Pre- (T1) and post-treatment (T2) low-dose CT images were acquired. Sixteen untreated Class III patients with a mean age of 7.03 ± 0.56 years had previously undergone two low-dose CT scans within one year of follow-up. Volumetric and minimal cross-sectional area measurements were obtained to assess the oropharyngeal airway changes. Quantitative mean, minimum, and maximum displacements of superimposed 3D models were estimated from a point-based analysis. Paired-samples t-tests were used for the intragroup comparisons, and an independent-samples t-test and the Mann–Whitney U test were carried out for the intergroup comparisons. Results Statistically significant increases in the total and retropalatal oropharyngeal airway volumes were observed in the control group (302.23 ± 345.58 and 145.73 ± 189.22 mm3, respectively). In the FM group, statistically significant increases in the total and retropalatal volumes were observed (738.86 ± 1109.37 mm3 and 388.63 ± 491.44 mm3, respectively). However, no statistically significant differences were found between the two groups, except for the maximum part analysis, which was significantly greater in the FM group (p = 0.007). Conclusions FM therapy appeared to have no additional effects on the oropharyngeal airway other than those induced by growth.
Introduction
Although the effects of the protraction facemask (FM) on the upper airway have been evaluated previously in many studies, the results remain controversial and unclear [1][2][3][4][5]. This can be attributed partially to the use of traditional lateral cephalograms [6,7], and to the probable effects of rapid maxillary expansion (RME) in increasing the oropharyngeal airway dimensions when used in conjunction with the FM [8][9][10][11]. The isolated effects of the RME in increasing the nasopharynx and oropharynx volumes have been well established [12]. In contrast, the results of studies evaluating FM-only treatment have been conflicting [1][2][3]13]. While Baccetti et al. [1] demonstrated no significant changes in the sagittal upper airway dimensions, Hiyama et al. [3] and Kaygısız et al. [4] showed an increase in the superior upper airway space. Lee et al. conducted a meta-analysis [5] and stressed the need for more 3D cohort studies with untreated Class III controls to determine the potential effects of FM on the upper airway. *Correspondence: myhajeer@gmail.com
Newer imaging techniques, such as low-dose computed tomography (CT), are powerful tools for evaluating the upper airway, especially in the transverse dimension [6,14]. Moreover, volumes, surfaces, and cross-sectional areas extracted from 3D radiographic imaging using commercial software offer the possibility of a more precise evaluation of the upper airway [6,7]. Beyond these measures, the superimposition of 3D models generated from 3D images and point-based analysis can explain the changes in size and shape of the structures involved in the treatment [15].
To the best of our knowledge, there is no study that has evaluated the volumetric changes of the oropharyngeal airway space following FM-only treatment. Accordingly, the purpose of this clinical study was to evaluate and compare the changes of oropharyngeal airway dimensions after FM-only therapy with those changes induced by growth in a control group of untreated patients using conventional 3D measurements and 3D-model superimposition analysis.
Study design
This study employed a non-randomized controlled clinical trial (CCT) design. The study protocol was reviewed and approved by the Regional Ethical Committee on Research of Damascus University (UDDS-1091-1900PG). Informed consent was obtained from each patient's family prior to participation. Funding was provided by the University of Damascus Postgraduate Research Budget (Ref no: 7301144710DEN).
Sample size calculation
The current study assumed a 1,000 mm³ difference between the two groups to be clinically significant, taking into account the standard deviation of the lower-pharyngeal airway volume (1,104.04 mm³) reported in a previous publication [16]. We estimated the need to recruit a sample of 32 children (in the two groups) using Minitab® v.18.1 software (Minitab, Inc., State College, PA, USA), with a power of 80% and a significance level of 0.05. We added 10% to the experimental group to address the risk of sample attrition; therefore, its number was raised to 18 children.
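A rough cross-check of this calculation can be done with the normal approximation for a two-sided two-sample t-test. Note that the paper used Minitab, whose exact settings (sidedness, exact noncentral-t solution) are not stated, so this approximation need not reproduce the reported 16 per group:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(delta, sd, alpha=0.05, power=0.80):
    """Per-group sample size for a two-sided two-sample t-test,
    normal approximation: n = 2 * ((z_{1-alpha/2} + z_{power}) / d)^2."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # two-sided critical value
    z_beta = z.inv_cdf(power)
    d = delta / sd                      # standardized effect size
    return 2 * ((z_alpha + z_beta) / d) ** 2

# Inputs taken from the paper: 1,000 mm^3 difference, SD = 1,104.04 mm^3
n = n_per_group(1000, 1104.04)
print(ceil(n))  # → 20 per group under this approximation
```

The approximation gives about 20 per group for a two-sided test; a one-sided assumption or software-specific exact computation would bring the figure closer to the 16 per group reported.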
Patients' recruitment and follow-up
Initially, growing patients aged between 7 and 9 years were screened from those seeking orthodontic treatment at the Department of Orthodontics and Dentofacial Orthopedics, Faculty of Dental Medicine, Damascus University. The clinical inclusion criteria were as follows: anterior cross-bite or edge-to-edge incisor relationship, Class III relationship of the permanent first molars, normal or deep overbite, straight or concave profile, no temporomandibular joint disorders, and absence of severe maxillary transverse constriction. Twenty patients who met the clinical inclusion criteria were selected and their parents/guardians were approached. The information sheet was given to them and the need for two low-dose CT images was elaborately explained before taking their informed consent. Consequently, maxillary retrognathism was confirmed by lateral cephalograms (N perpendicular to A point < −1 mm; 0° ≤ ANB ≤ 4°), as was a normal or horizontal growth pattern (Björk's sum ≤ 401°). Two patients were excluded, one because of a vertical growth pattern and the other because he had mandibular prognathism only. Thus, the final group consisted of 18 patients, 9 boys and 9 girls.
The control group was collected retrospectively and consisted of 16 untreated Class III patients (7 boys and 9 girls) whose low-dose CT records were collected from the database of CT images at the Departments of Orthodontics and Pedodontics. For each patient, the two low-dose CT images had been taken an average of 12 months apart. The rationale for taking these CT images was related to reasons other than their classification as skeletal Class III patients, e.g. the presence of a supernumerary or misplaced tooth, assessment of the status of the alveolar support, evaluation of airway competency, or assessment of the nasal cavity structures. The second CT image was captured to assess the progress of the case approximately one year after the first image. The sample was matched closely to the inclusion criteria of the experimental group, primarily regarding age, sex, and the radiological inclusion criteria.
All experimental group participants were treated with a modified maxillary bite block and a Delaire-type (M0774-00, Leone, Firenze, Italy) facemask (Fig. 1a). The individually fabricated splint consisted of a metal framework of buccal and palatal wires double-soldered to pediatric bands, with vestibular hooks distal to the lateral incisors for elastic traction, covered posteriorly with a 2 mm acrylic cap (Fig. 1b). A total force of 400 g per side was applied in a direction approximately 30° inferior to the occlusal plane, for at least 16 h per day [17]. The active treatment with facemask (FM) therapy was considered complete when three conditions were met: (1) correction of the overjet to achieve 3 to 4 mm of positive horizontal overlap between the incisal edges, (2) achievement of a Class I or slightly Class II molar relationship, and (3) an active treatment duration of not less than 8 months. The removable mandibular retractor (RMR) appliance was used in the second phase to preserve the achieved results. The RMR is a removable appliance that rests on the upper jaw and has an inferiorly extended labial bow that touches the cervical regions of the lower anterior teeth. It has been shown to be effective in the early correction of skeletal Class III malocclusions in patients aged 5-8 years [18] and 9-12 years [19] (Fig. 2).
Computed tomography acquisition
Two sets of low-dose CT images were acquired, one prior to treatment (T1) and one at the end of active treatment (T2). The CT scans were taken by one certified radiologist using a Philips Brilliance 64-detector scanner (Philips Medical Systems, Best, The Netherlands), with the patients in a supine position and their Frankfort horizontal (FH) plane perpendicular to the floor. They were instructed to keep their teeth in maximum intercuspation and their tongue behind the upper incisors during the exposure. The CT parameter values used were those suggested by Ballanti [20], as follows: 80 kV, 100 mAs, pitch 1, 2.5 mGy (CTDIvol), and 1.25 mm slice thickness. All data were stored in the Digital Imaging and Communications in Medicine (DICOM) format and then transferred into MIMICS 21.0 software (Materialise, Leuven, Belgium) for preliminary 3D geometry creation.
Each image was oriented as follows: a new reslice plane (RP) was built so that the Frankfort horizontal (FH) plane (defined by the right Porion and the inter-orbital line in the sagittal and frontal views, respectively) was parallel to the floor, and the midsagittal line in the axial view (defined from the anterior nasal spine to the posterior nasal spine) was perpendicular to the floor (Fig. 3). The image was resliced in the anterior-posterior direction parallel to the FH plane with a 150 mm × 150 mm field of view using the software's reslice function.
Segmentation of oropharyngeal airway
Semiautomatic segmentation was applied. The upper airway mask was built using the threshold tool of the software. A region of interest (ROI) was selected to cover the upper airway anatomy. The threshold value was adjusted individually between −1024 and 200 Hounsfield units (HU) to improve the accuracy of the segmentation. Generally, no manual segmentation was implemented; the only manual editing was applied to evident artefacts or structures of no interest, such as the oral cavity. Once an airway mask and a high-quality 3D model were created, the upper airway was segmented in the midsagittal plane in the sagittal view of the software using the orthogonal-to-screen tool (Fig. 4). As in the Chang et al. [21] study, the superior boundary of the upper airway was determined by a plane passing from the posterior nasal spine (PNS) to Basion (Ba) [P plane], and the inferior boundary by a plane parallel to the P plane passing through the most superior point of the epiglottis [EP plane]. The upper airway was then divided into a retropalatal (upper segment) and a retroglossal (lower segment) airway using a plane parallel to the P plane passing through the posteroinferior point of the soft palate [SP plane] (Fig. 4). Finally, the 3D models were smoothed and warped using a voxel-based technique with an existing MIMICS algorithm. The software automatically calculated the total, retropalatal, and retroglossal airway volumes. The minimal cross-sectional area was obtained manually and computed using the software's area measuring tool.
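The HU-window thresholding step can be illustrated with a minimal sketch. This is not MIMICS's internal algorithm, merely the generic principle of assigning each voxel to the airway mask if its value falls inside the window:

```python
# Illustrative airway segmentation by HU thresholding (not MIMICS's
# internal algorithm): voxels whose value falls inside the window
# [-1024, 200] HU are assigned to the airway mask.
HU_MIN, HU_MAX = -1024, 200

def airway_mask(slice_hu):
    """Return a boolean mask for one CT slice (list of lists of HU values)."""
    return [[HU_MIN <= v <= HU_MAX for v in row] for row in slice_hu]

slice_hu = [[-1000, -400, 250],   # air, airway lumen, soft tissue
            [-1024,  150, 900]]   # air, mucosa-adjacent voxel, bone
print(airway_mask(slice_hu))  # → [[True, True, False], [True, True, False]]
```

Voxels above 200 HU (soft tissue, bone) are excluded; the remaining connected air-filled region is then trimmed by the anatomical planes described above.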
3D superimposition and comparison analysis
Landmark-based registration (LBR) was used in this study. The serial resliced projects of each patient, containing the CT data set and the 3D models of the upper airway, were registered using the image registration tool in MIMICS. The six landmarks used in the registration are given in Table 1 and shown in Fig. 5. These landmarks were validated in a previous study [22]. The registered 3D models were then exported to 3-matic software (3-matic 13.0, Materialise NV, Leuven, Belgium).
The 3D comparison between time points (T2-T1) was performed using the part comparison analysis tool in 3-matic. Details of the tool used are described in the work of Alsufyani et al. [22]. A color-coded map for each comparison was produced with the threshold set at 2 mm: green areas indicated differences within 2 mm (between −2 and 2 mm), red surfaces indicated outward (positive) displacement of more than 2 mm between the two 3D models, and blue surfaces indicated inward (negative) displacement of more than 2 mm (Fig. 6). Mean, minimum, and maximum values of the part analyses were reported.
Table 1. Definition of the anatomical landmarks used in the registration of the pre- and post-treatment images (according to Alsufyani et al. [15]).
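The color-coding rule of the part comparison analysis can be sketched as a simple classification of the signed surface displacements (values here are hypothetical, for illustration only):

```python
# Sketch of the part-comparison color coding (threshold = 2 mm):
# green = |d| <= 2 mm, red = outward displacement d > 2 mm,
# blue = inward displacement d < -2 mm.
def classify(displacements, threshold=2.0):
    colors = []
    for d in displacements:
        if d > threshold:
            colors.append("red")      # outward displacement
        elif d < -threshold:
            colors.append("blue")     # inward displacement
        else:
            colors.append("green")    # within tolerance
    return colors

print(classify([0.5, 2.4, -3.1, -1.9]))  # → ['green', 'red', 'blue', 'green']
```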
Statistical analysis
Statistical analyses were carried out using IBM SPSS Statistics for Windows version 26.0 (IBM Corp., NY, USA). The Shapiro-Wilk test showed a normal distribution for all parameters except age in the control group. Accordingly, the paired-samples t-test was used for the intragroup comparisons, and an independent-samples t-test was used for the intergroup comparisons of normally distributed parameters. The Mann-Whitney U test was used for the intergroup comparison of age. The significance level was set at 5%.
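The test-selection logic above can be sketched with SciPy (assumed available); the data below are synthetic placeholders, not the study's measurements:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
t1 = rng.normal(5000, 1100, size=16)      # hypothetical T1 volumes (mm^3)
t2 = t1 + rng.normal(300, 350, size=16)   # hypothetical T2 volumes

# Intragroup change: paired-samples t-test (used when data are normal)
t_stat, p_paired = stats.ttest_rel(t2, t1)

# Normality check of the kind that guided the choice of test
_, p_shapiro = stats.shapiro(t2 - t1)

# Non-normal variable (e.g. age): Mann-Whitney U between two groups
group_a = rng.normal(7.0, 0.6, size=16)
group_b = rng.normal(7.8, 0.8, size=18)
u_stat, p_mwu = stats.mannwhitneyu(group_a, group_b)

print(p_paired, p_shapiro, p_mwu)
```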
Error of the method
All measurements of 17 (25%) randomly selected patients were repeated after one week by the same examiner (AH), and an intraclass correlation coefficient (ICC) (two-way mixed with absolute agreement) was used to assess intra-rater reliability. Errors of the measurements were analyzed with Dahlberg's formula [23].
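Dahlberg's method error is the root of the mean squared difference between the two readings, ME = sqrt(Σd_i² / 2n). A minimal sketch with hypothetical repeated measurements:

```python
from math import sqrt

def dahlberg_error(first, second):
    """Dahlberg's method error: sqrt(sum(d_i^2) / (2n)) over n pairs of
    repeated measurements (first vs. second reading)."""
    diffs = [a - b for a, b in zip(first, second)]
    return sqrt(sum(d * d for d in diffs) / (2 * len(diffs)))

# Hypothetical repeated linear measurements (mm) for four patients
first = [12.3, 10.1, 14.8, 11.0]
second = [12.1, 10.2, 14.5, 11.0]
print(round(dahlberg_error(first, second), 4))  # → 0.1323
```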
Results
The ICCs showed a high level of agreement, ranging from 0.93 to 0.99. The error of the method was 6.09 mm² for the area measurement, ranged from 0.11 to 0.23 mm for the linear measurements, and from 52.24 to 54.7 mm³ for the volumetric measurements (Table 2). The two groups' characteristics and treatment/observation periods are given in Table 3. The control group consisted of 7 boys and 9 girls; the mean age was 7.03 ± 0.56 years, and the average observation time was 12.25 ± 1 months. The facemask group comprised 9 boys and 9 girls with a median age of 8 ± 0.84 years; the active treatment period was 11.17 ± 2.18 months. No statistically significant differences between the two groups were found at the beginning of treatment (T1), as shown in Table 4.
The results of intra-group comparisons indicated a significant increase in the total oropharyngeal airway volumes by a mean of 302.23 ± 345.58 mm 3 (p = 0.003) and 738.86 ± 1109.37 mm 3 (p = 0.012) in the control and facemask groups, respectively. The retropalatal region of the control and the facemask group increased significantly, p = 0.008 and p = 0.004, respectively. However, no statistically significant differences were found between the T2-T1 changes for the remaining parameters of both groups (p > 0.05), as shown in Table 5.
No statistically significant difference was observed between the two groups regarding the volumetric changes in the oropharyngeal airway and the minimal cross-sectional area measurements, as shown in Table 6. The average maximum part analysis of the point-based analysis was significantly greater in the FM group than in the control group (p = 0.007). However, there was no statistically significant difference in the average mean and minimum part analyses between the two groups (Table 6).
Discussion
Previous studies of the effects of a protraction facemask on the upper airway dimensions provided weak evidence and conflicting outcomes because they used 2D imaging to evaluate this complex anatomical region and lacked control samples [3,5,9,13,20]. Addressing this limitation requires the use of a 3D imaging technique together with an untreated control group.
Obtaining ethical approval to recruit patients into a control group in which no treatment would be provided was deemed difficult. Therefore, Class III patients in the control group were selected from the archives of the Departments of Orthodontics and Pediatric Dentistry, among those who had been referred to the Radiographic Department for CT imaging. On the other hand, patients in the experimental group were intentionally imaged using the same low-dose imaging apparatus. In this context, the low-dose CT protocol has been used in previously published papers and has been advocated as an alternative to conventional CT scanning, with a mean absorbed dose similar to that of conventional radiographic exams for an ordinary orthodontic patient [14,20]. Therefore, in the current study, the CT radiation exposure was deemed acceptable and below the threshold for harm [24]. In this study, the FM was used only in patients without maxillary transverse constriction, given that RME has shown no improvement in maxillary protraction results [25]. Moreover, RME has been established to increase the size of the upper airway [12], and this could affect the accuracy of the results. The end point of our evaluation was the end of the active treatment (i.e., T2). Of course, it would be more beneficial to the treating orthodontist if there were some measures of the post-traction changes in the short and long run following FM therapy; however, this was not the objective of the current study. The main changes produced by the RMR when used in the early treatment of skeletal Class III patients include: (1) an anterior morphogenetic rotation of the mandible, (2) an increase in the maxillary length, and (3) a decrease in mandibular dentoalveolar protrusion [18]. Therefore, the use of the RMR following the active phase may have affected the oropharyngeal airway space in one way or another, but this issue requires additional research.
The definition of the upper airway plays a foundational role in the volumetric assessment. Many methods have been suggested previously [6,11,16,21], but the points and planes used in this study were the least affected by the patients' supine position and neck posture. Moreover, the nasopharynx was not included in this study because of the potential bias in volumetric assessment due to the particular reaction and growth pattern of the nasopharyngeal adenoid tissue [13,26].
There are three main methods for serial image registration: landmark-based, surface-based, and voxel-based techniques. Every technique has inherent limitations, advantages, and disadvantages; furthermore, all of them have been shown in the literature to work properly [27]. The superimposition method used in this study has been validated in a previous study [22]. The landmarks used in the superimposition are considered anatomically stable structures by the age of five years, as 85% of growth is completed in this area [28]. This method has been used in previous studies of the upper airway regions [15,22]. Moreover, the reliability test of the measurements extracted from the fused 3D models (i.e., mean part analysis (mm), minimum part analysis (mm), and maximum part analysis (mm)) showed a high level of agreement (ICCs greater than 0.959) in the current investigation. Additionally, the error of the method for these parameters was very small, less than 0.25 mm (ranging between 0.11 and 0.23 mm). The superimposition offers further benefits, such as identifying the location and distribution of the upper airway changes induced by growth and by the facemask [15]. Therefore, superimposition and part comparison analysis (point-based superimposition analysis) were used in this study in addition to the volumetric and cross-sectional area measurements.
Although there was a significant difference between the two groups regarding the mean age (p = 0.006), the other baseline characteristics of the included patients, such as the sex distribution and the observation period, were generally homogeneous (Table 3). It is worth mentioning that the comparisons made in the current study focused on evaluating changes (T2-T1) in the first group against changes (T2-T1) in the second group. In other words, no comparison was made between T2 values in the second group and T2 values in the first group. By comparing change versus change, any discrepancy that may have existed between the two groups at T1 would not affect the validity of the comparisons made. Moreover, the comparison of baseline data regarding the severity of the sagittal discrepancy revealed no significant differences between the two groups (Table 4). This meant that the intervention and control arms were comparable in the current study.
The average active treatment time in the treatment group was 11.17 months (standard deviation of 2.18). Some patients had a minimum of 9 months of treatment, whereas 12 patients had a full year of active treatment. On the other hand, the average observation time in the control group was 12.25 months. The difference between the two groups regarding the observation period was statistically insignificant (p = 0.069). Generally, the mean observation period in the control group can be considered long enough to detect possible small changes attributable to growth. However, growth effects could have been better demonstrated if a longer observation period had been implemented in the current study.
The mean increase in the airway volume produced by growth was 302.23 mm³, which was less than the expected growth of 897.2 mm³ mentioned in a previous study [29]. Pamporakis et al. [11] attributed such a difference to inhibited airway development in Class III patients compared with normal ones. Another reason may relate to the limitations of that previous estimate, which was derived from a cross-sectional study of airway volume by age.
After FM treatment, the total upper airway and retropalatal volumes increased significantly (by a mean of 738.86 mm³ and 388.63 mm³, respectively), whereas the volume of the retroglossal region increased insignificantly. Unfortunately, the previous studies on non-expansion FM therapy assessed the airways two-dimensionally using cephalograms [1,4,9,13]. The sample without expansion from the Mucedero et al. [9] study and the study of Baccetti et al. [1] were the closest to ours in terms of mean age and mean treatment time. They reported that the FM caused no sagittal changes in the oropharyngeal and nasopharyngeal regions, which contrasts with our results. The results of Baloş et al. [13] and Danaei et al. [2] indicated a positive alteration in the nasopharyngeal space after FM treatment, similar to the current findings. Hiyama et al. [3] reported that the superior upper airway dimension was probably influenced by the maxillary protraction, although that study found non-statistically significant results. In fact, direct comparison between 2D and 3D measurements is difficult [5].
Interestingly, Pamporakis et al. [11] showed no statistically significant difference in the lower total airway volume (the "total volume" in the current study) after RME/FM treatment. The segmentation method of the airway may be one reason for the difference. Another possible reason is the effect of the respiration phase, given the wide range of capture times from 7.8 to 40 s [21]. In the current study, no statistically significant differences were observed between the growth-induced changes and the FM-plus-growth-induced changes regarding the total upper airway and its regions. The current results are in agreement with the previously mentioned studies with untreated Class III patients [1,9].
Fig. 7. Frontal, lateral and back views of the point-based analysis color maps.
Qualitatively, some changes were noticed in the upper airway shape indicated by the red and blue areas in the color mapping from the superimposition of pre-and post-treatment models shown in Fig. 7.
Quantitative results of the point-based analysis showed a positive mean displacement in both the control and FM groups, of 0.25 ± 0.30 mm and 0.57 ± 0.61 mm, respectively. In other words, most triangle nodes of the tested model at T2 lay outside the matched triangle nodes of the reference model at T1. According to the results of the point-based analysis, the changes due to growth (i.e., the control group) or growth plus facemask therapy (i.e., the experimental group) increased the oropharyngeal airway by similar amounts; however, these amounts were clinically insignificant. Intergroup comparison of the maximum part analysis showed a significant difference between the two groups. This difference might be explained by the effects of neck flexion on this measure. It has been shown that head position is reproducible over a 2-year period [30]. On the other hand, Yagci et al. [31] demonstrated a significant cranial flexion of about 6.4 degrees after 1 year of FM/RME treatment in 45 patients aged 9.6 ± 1.4 years. Moreover, Alsufyani et al. [22] found a strong positive correlation between the minimum/maximum part analysis results and the distance from the second cervical vertebra to the third, whereas the volumetric measures showed weak correlations. Accordingly, a potential difference in the registration of T1 and T2 between the two groups might be the reason for this difference.
The limitations of this study were the absence of randomization, the lack of long-term observation after the retention phase with the RMR, and the absence of a respiratory functional examination. However, the volumetric measurements and the quantitative superimposition data clarified the actual effects of the FM on the upper airway space.
Conclusions
• FM therapy appeared to have no effects on the upper airway beyond those induced by growth.
Comparison between electropositive and electronegative cold atmospheric-pressure plasmas: a modelling study
Cold atmospheric-pressure He + N2 and He + O2 plasmas are chosen as representatives of electropositive and electronegative plasmas, respectively; their discharge characteristics are studied and compared using fluid models. As the impurity (N2 or O2) fraction increases from 0 to 10%, for He + N2 plasmas the electron density and ion density increase, while the spatiotemporal distributions of electron density, ion density, electron temperature, and electron generation rate change little. In contrast, for He + O2 plasmas the electron density decreases, the ion density first increases and then decreases, the electron temperature increases in the bulk region but decreases in the sheath region, and the plasmas transform from the γ mode to the α mode as the electron generation rate distributions change significantly. A larger electric field is needed in the bulk region to sustain the electronegative plasma, so the electrical characteristics of He + O2 plasmas transform from capacitive to resistive with increasing O2 fraction. Meanwhile, the ion-coupling power increases dramatically, which can be estimated by a formula based on the electronegativity. A new criterion for determining the sheath boundary, |∇E| = 5 kV/cm², is put forward, which is found suitable for both electropositive and electronegative plasmas.
Introduction
Cold atmospheric-pressure plasmas (CAPs) have great application prospects in the fields of environmental protection [1], biomedicine [2], nanotechnology [3], and so on. Most CAPs are operated in a noble gas, but with a small amount of molecular gases such as N2 and O2 [4]. The molecular gases are sometimes artificially mixed into the noble gas to make the plasmas more reactive and hence beneficial for various applications. For example, a typically small fraction of O2, between 0.5 and 3%, is added into helium to optimise the production efficiency of reactive oxygen species [5]. On the other hand, molecular gases are inevitably present due to the impurity of industrial noble gases, as well as the inclusion of air when the plasmas are not well sealed [6][7][8][9][10]. The electropositive nature of noble gases allows the plasmas to be sustained in a relatively low electric field, which is one of the main reasons that the plasmas remain cold, diffusive, and stable [11]. However, molecular gases such as O2 and H2O are strongly electronegative and hence inhibit plasma generation by attaching electrons. The plasma characteristics, such as the volt-ampere characteristics, can change significantly even when the fraction of electronegative gas is as low as 0.1% [4]. The electropositive CAPs and their electronegative counterparts have markedly different characteristics, which have not yet been well understood. This motivates us to compare the characteristics of these plasmas at a quantitative level.
In this paper, He + N2 and He + O2 CAPs are chosen as the representatives of electropositive and electronegative plasmas, respectively. The impurity (N2 or O2) fraction in the working gas is varied from 0 to 10%, covering most cases of practical applications. The electronegativity of He + O2 CAPs keeps increasing with the O2 fraction, allowing the plasma characteristics to be studied over a wide range of electronegativity. A fluid model is used for this study, which has been applied to He + N2 and He + O2 CAPs as reported previously [12][13][14][15][16]. The spatiotemporal evolution of the electron density, the electron temperature, the electron generation rate, as well as the dissipated power density is obtained as a function of the impurity fraction, and the results are compared between the electropositive and electronegative plasmas. Moreover, the sheath dynamics is found to be much different for the two kinds of plasmas, and a new criterion is suggested for determining the sheath edge of these plasmas.
The paper is organised as follows: the computation model is described in Section 2, the simulation results are presented and discussed in Section 3, and at last conclusions are given in Section 4.
Description of the computational model
The discharge considered in this study is generated between two circular electrodes with a narrow separation of 0.2 cm and a large electrode width [to facilitate the use of a one-dimensional (1D) model], similar to those used in an experimental study [17]. One electrode is connected to a sinusoidal voltage at a radio frequency of f = 13.56 MHz; the other is grounded. The dissipated power density is kept constant at 40 W/cm³. For the purpose of comparing the He + N2 and He + O2 CAPs, all discharge conditions are kept the same, and the impurity (N2 or O2) fraction is varied from 0 to 10%.
1D fluid models are used for He + N2 and He + O2 CAPs, with their details previously reported [12][13][14][15][16], so they are only briefly described here. Nine species and 18 chemical reactions are incorporated for He + N2 plasmas, while 17 species and 60 chemical reactions are incorporated for He + O2 plasmas. The plasma chemistry used in this study was recommended by the authors [6,13], and the plasma species are listed in Table 1.
The fluid models solve the mass conservation equation for each species (1), Poisson's equation (2), and the electron energy conservation equation (3). Given the high collisionality of the discharge, the particle inertia is neglected and the drift-diffusion approximation is adopted for the fluxes:

∂n_i/∂t + ∇·Γ_i = S_i, with Γ_i = sgn(q_i) μ_i n_i E − D_i ∇n_i (the drift term vanishing for neutral species) (1)

∇·(ε_0 E) = Σ_i q_i n_i (2)

∂(n_e ε)/∂t + ∇·Γ_ε = −e Γ_e·E − Σ_j ΔE_j R_j − 3(m_e/m_k) k_B n_e R_el (T_e − T) (3)

where n_i, Γ_i, μ_i, D_i, q_i, S_i, and m_i are the density, flux, mobility, diffusion coefficient, charge, net gain/loss rate, and mass of species i, respectively. E is the electric field, ε is the mean electron energy, ε_0 is the vacuum permittivity, and k_B is the Boltzmann constant. R_el is the momentum-transfer collision rate between electrons and background gases, and T is the temperature of the plasma species. ΔE_j and R_j are the electron energy loss due to inelastic collision j and its corresponding reaction rate, respectively. Subscripts e, +, −, and k represent electrons, positive ions, negative ions, and background gas species (He, O2 and/or N2), respectively. The gas temperature is set to 350 K.
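As a minimal numerical illustration of the drift-diffusion flux Γ = sgn(q) μ n E − D ∇n (not the paper's COMSOL finite-element discretization, and with purely hypothetical profile values), the flux can be evaluated on a 1D grid with central differences:

```python
# Minimal 1D drift-diffusion flux, Gamma = sign(q)*mu*n*E - D*dn/dx,
# evaluated with central differences at interior grid points.
# Values are illustrative only, not the paper's COMSOL discretization.
def drift_diffusion_flux(n, E, mu, D, sign_q, dx):
    flux = []
    for i in range(1, len(n) - 1):
        dndx = (n[i + 1] - n[i - 1]) / (2 * dx)   # central difference
        flux.append(sign_q * mu * n[i] * E[i] - D * dndx)
    return flux

n = [1.0, 2.0, 4.0, 2.0, 1.0]    # density profile (arbitrary units)
E = [0.0, 1.0, 0.0, -1.0, 0.0]   # field pointing outward from the centre
print(drift_diffusion_flux(n, E, mu=0.5, D=0.1, sign_q=+1, dx=0.05))
```

For a positive ion (sign_q = +1) the drift and diffusion terms compete: diffusion pushes the flux down the density gradient while the drift follows the field.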
Regarding fluxes to the electrodes, the following boundary conditions are used for the charged species:

Γ_e·n = −σ μ_e (E·n) n_e + 0.25 v_th,e n_e − γ Σ_p Γ_+,p·n (5)

Γ_+·n = σ μ_+ (E·n) n_+ + 0.25 v_th,+ n_+ (6)

where n is the normal vector pointing towards the wall, γ is the secondary emission coefficient, and v_th is the thermal velocity. γ is set to 0.03 for positive ions and zero for other species, following the simplistic approach previously used by Shi and Kong [18]. The switching function σ takes a value of one when the drift velocity is directed towards the electrode and zero otherwise [19]. For neutral species, the electrode losses are determined by the incoming fluxes and surface reactions on the electrodes. These reactions, however, are difficult to predict and the reaction rates are often unknown. We assume here that species reaching the electrodes are adsorbed with a certain probability p_i, regardless of what reaction they may undergo [13]. The boundary conditions are then

Γ_n·n = 0.25 v_th,i n_i p_i

Although the value of p_i is rarely known, it is predicted that the electrode loss is almost independent of p_i when p_i > 0.01, and even in that case the electrode loss has little influence on the plasma dynamics [13]. So, in this paper p_i is set to 0.01 for the modelling study. The electron energy flux to the electrodes is given by [20]

Γ_ε·n = (5/3)(0.25 v_th,e n_e ε) − γ ε_g Σ_p Γ_+,p·n

where ε_g is the energy of the secondary electrons emitted from the electrodes, fixed at 5 eV [20]. The electron mobility and diffusivity are calculated as functions of the mean electron energy using Bolsig+ [21], a Boltzmann solver. As to the transport coefficients for the other species, please refer to our previous publications [13,15]. The set of equations described above is solved using a time-dependent finite-element partial differential equation solver, COMSOL Multiphysics, and the results have been post-processed with MATLAB.
The electron energy flux to the electrodes is given by [20] G 1 · n = 5 3 where ɛ g is the energy of secondary electron emitted from the electrodes and fixed at 5 eV [20]. The electron mobility and diffusivity are calculated as a function of mean electron energy using Bolsig+ [21], a Boltzmann solver. As to the transport coefficients for other species, please refer to our previous publications [13,15]. The set of equations described above is solved using a time-dependent finite-element partial differential equation solver, COMSOL Multiphysics, and results have been post-processed with MATLAB. Fig. 1 shows the spatiotemporal distributions of electron density, electron generation rate, electron temperature and ion (including He + , He + 2 , N + 2 and N + 4 ) density in He + N 2 CAPs with respect to the N 2 fraction ([N 2 ] for abbreviation). The white curves in the sub-figures indicate the sheath boundaries of the plasmas, which are defined by ∇E | | = 5 kV/cm 2 . This criterion for defining the sheath boundary will be discussed below. The sheath boundary benefits the following discussions on the plasma characteristics. For example, it helps to distinguish the α mode or g mode of a plasma, because in α mode the electron generation rate dominates in the plasma bulk region, but in g mode it dominates in the sheath region [18].
Results and discussions
It can be seen from Figs. 1a-d that the electron density increases with the N₂ fraction, as reported in [6]. There are two main pathways for electron generation in He + N₂ CAPs: one is electron-impact ionisation of the feeding gases (He and N₂); the other is excitation of helium followed by electron generation through Penning ionisation and/or electron-impact ionisation of the helium metastables (He* and He₂*).
With increasing N₂ fraction, electron production is dominated first by R1 and R5, then by R6 and R7, and finally by R2, because the density of the helium metastables drops sharply when [N₂] > 0.01%. The electrons oscillate between the two electrodes, and their density boundary nearly overlaps with the sheath boundary (see Figs. 1a-d). Although the electrons mainly reside in the plasma bulk, their generation is concentrated in the sheath region regardless of the N₂ fraction (see Figs. 1e-h), indicating that the plasmas remain in the γ mode. This is because the electron energies needed for ionisation of helium and nitrogen in pathway 1 and for excitation of helium in pathway 2 are high, at least 15.4 eV for the ionisation of nitrogen (R2), so the electrons in the sheath region are the ones most involved in the two generation pathways: according to the electron temperature distributions shown in Figs. 1i-l, the average electron energy in the sheath region is much larger than that in the bulk region.
The ion density increases with the N₂ fraction, as shown in Figs. 1m-p. This is consistent with the trend of the electron density because of the electropositive and quasi-neutral nature of the He + N₂ CAPs, i.e. the electron density roughly equals the ion density. The ion densities peak at the interface of the bulk and sheath regions, because the ions are mainly generated in the sheath and remain there almost regardless of the oscillation of the sheath boundary. The distribution of the ions is nearly invariant in time, quite unlike that of the electrons, because the ion mobilities are typically smaller than the electron mobility by a factor of ∼50 [22].
Compared to the electropositive CAPs in He + N₂ mixtures, the plasma characteristics of He + O₂ CAPs are very different owing to their electronegative nature, as shown in Fig. 2. Some sub-figures in Fig. 2 are similar to our previous reports in [4], but the sheath boundary curves differ because of the change in the criterion; the sub-figures are plotted here to facilitate the comparison between He + O₂ and He + N₂ CAPs.
As shown in Fig. 2, the spatiotemporal distributions of the charged species (including the anions, e.g. O⁻) are very different from those of the electropositive plasmas, especially the ion density, which peaks at the centre of the discharge gap (see Figs. 2n-p) rather than at the bulk-sheath interface (see Figs. 1n-p). This is mainly because the anions are confined in the bulk region by the ambipolar electric field. The ion density first increases and then decreases with increasing O₂ fraction, but the electronegativity keeps increasing owing to the continuous drop of the electron density.
The spatial distributions of the half-cycle-averaged electric fields are shown in Figs. 3a and b for He + N₂ and He + O₂ CAPs, respectively; each curve corresponds to an impurity (N₂ or O₂) fraction of 0, 0.1, 1, or 10%. The spatial distribution of the averaged electric field in the He + N₂ discharge is relatively independent of the impurity fraction. In contrast, in the He + O₂ discharge it decreases in the sheath region and increases in the bulk region with increasing impurity fraction. The dependence of the electric field on the impurity fraction is similar to that of the electron temperature for both the electropositive and the electronegative CAPs, as shown in Figs. 1 and 2. Determining the boundary of the plasma sheath is very important for characterising a plasma, and several criteria have been reported, such as 14% of the maximal electric field [23], E = 1 kV/cm [24], and n_e = 0.3(n₊ − n₋) [4]. These criteria are suitable for electropositive CAPs, but perhaps not for electronegative CAPs such as those reported here. For example, when the O₂ fraction is 10% the averaged electric field in the plasma bulk is ∼2.5 kV/cm, more than one-third of the maximal electric field (see Fig. 3b). One typical feature of CAPs is that the electric field varies little in the bulk region but keeps rising or falling from the bulk-sheath interface to the electrode surface, as shown in Figs. 3a and b. This suggests that the gradient of the electric field can be used as a criterion for determining the sheath boundary. In this paper, the absolute value of this gradient, |∇E| = 5 kV/cm², is chosen as the criterion; it captures the turning point of each electric field curve (see the inset images in Figs. 3a and b) and is hence suitable for both electropositive and electronegative CAPs.
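The gradient criterion is easy to apply to a sampled field profile. The sketch below builds a synthetic half-cycle-averaged profile (the gap, sheath thickness and field values are illustrative assumptions, not simulation output) and locates the bulk-sheath interfaces where |dE/dx| crosses 5 kV/cm²:

```python
# Synthetic averaged-field profile: flat at E_bulk in the bulk, rising
# linearly to E_peak across each sheath of thickness l_s (all illustrative).
N = 2001
gap = 0.2                 # electrode gap, cm
l_s = 0.03                # assumed sheath thickness, cm
E_bulk, E_peak = 1.0, 8.0 # kV/cm

dx = gap / (N - 1)
x = [i * dx for i in range(N)]

def E_field(xi):
    if xi < l_s:                 # left sheath
        return E_bulk + (E_peak - E_bulk) * (l_s - xi) / l_s
    if xi > gap - l_s:           # right sheath
        return E_bulk + (E_peak - E_bulk) * (xi - (gap - l_s)) / l_s
    return E_bulk                # bulk

E = [E_field(xi) for xi in x]

# Sheath boundary criterion: |dE/dx| = 5 kV/cm^2 (central differences)
grad = [abs((E[i + 1] - E[i - 1]) / (2 * dx)) for i in range(1, N - 1)]
in_sheath = [g > 5.0 for g in grad]

# Positions where the criterion flips mark the bulk-sheath interfaces
interfaces = [x[i + 1] for i in range(len(in_sheath) - 1)
              if in_sheath[i] != in_sheath[i + 1]]
```

For this profile the gradient inside the sheaths (∼230 kV/cm²) is far above the threshold while the bulk gradient vanishes, so the detected interfaces sit at the assumed sheath edges.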
Based on this sheath boundary criterion, the sheath thicknesses of both He + N₂ and He + O₂ CAPs are found to decrease with increasing impurity concentration, with a larger decrement for the He + O₂ CAPs (see Fig. 3c). The same trend applies to the voltage drops across the sheaths, as shown in Fig. 3d. As the oxygen fraction in He + O₂ CAPs increases, the decrease of the electron density in the bulk region (see Figs. 2a-d) raises the plasma resistance, while the decrease of the voltage drop across the sheath region indicates a decrease of the plasma capacitance; the electrical character of the discharge therefore transforms from capacitive to resistive as the plasma changes from electropositive to electronegative. A similar difference in electrical character has been reported for low-pressure argon (electropositive) and SF₆ (electronegative) plasmas [25].
In low-pressure plasmas, the total current in the bulk region is carried mostly by electrons [25,26], mainly because the electron mobility is larger than the ion mobilities by a factor of ∼50 while the electron density is comparable to the ion density. The situation changes in atmospheric-pressure plasmas, because most of the voltage drops across the sheath, where the ion density is much larger than the electron density, so a non-negligible amount of energy is coupled to the ions. In particular, CAPs become electronegative much more easily than their low-pressure counterparts owing to the frequent collisions between electrons and the working gases; in that case the ion density may be much larger than the electron density, and an even larger portion of the discharge energy is coupled to the ions. In order to elucidate the energy dissipation characteristics of the different kinds of CAPs, we plot in Fig. 4 the spatiotemporal distributions of the electron- and ion-coupling power densities for He + N₂ and He + O₂ CAPs, with respect to impurity fractions of 0, 0.01, 0.1, 1, and 10%. P_e and P_i are the numerical results for the electron-coupling and ion-coupling power densities, respectively.
For He + N₂ CAPs, most of the input power is coupled to the electrons, but in the sheath a non-negligible portion of the input power is coupled to the ions (see Fig. 4). The same holds for the He + O₂ CAPs when the oxygen fraction is low (electronegativity < 1). However, when the electronegativity is high, much more power is coupled to the ions, and at [O₂] = 10% the ion-coupling power even dominates.
The spatiotemporally averaged power dissipations on electrons and ions in He + N₂ and He + O₂ CAPs are shown in Fig. 5 as a function of the impurity fraction from 0.01 to 10%. Besides the numerical results (P_e and P_i, solid curves), two dashed curves, P_e,ohm and P_i,es, are also plotted in Fig. 5 for the purpose of theoretical analysis. P_e,ohm represents the power dissipated in ohmic heating of the electrons, which can be calculated as follows [27]
P_e,ohm = 0.5 m_e ν_m l_B Γ_e² / n_e (11)
where ν_m is the electron-neutral collision frequency (∼10¹² s⁻¹ at atmospheric pressure) and l_B is the plasma bulk length. P_i,es represents the power dissipation on the ions, estimated from their densities and mobilities. In the bulk regions of the plasmas, the electron and ion densities are assumed to be invariant with position along the electrode gap, so their relationship can be roughly estimated from the quasi-neutrality of the plasmas as
(n₊ + n₋)/n_e = 2χ + 1 (12)
where χ represents the electronegativity. The ions are assumed to share the same mobility, μ_ion = 20 cm²/(V s), while for the electrons it is 1056 cm²/(V s) [28]. Since, in a common field, the power coupled to each species scales with the product of its density and mobility, the ion-coupling power density in the bulk region can be calculated as
P_ion,b = P (2χ + 1) μ_ion / [(2χ + 1) μ_ion + μ_e] (13)
where P_ion,b represents the ion-coupling power density in the bulk region and P the input power density, which is assumed invariant with location. In the sheath region, the relationship between the ion and electron densities cannot be estimated from the electronegativity alone; however, the ion current should equal the electron current owing to the quasi-neutral nature of the plasmas, and hence the ion-coupling power in the sheath can be roughly taken to equal the electron-coupling power there. For the entire plasma, the spatiotemporally averaged power coupled to the ions is then
P_i,es = [P_ion,b l_B + 0.5 P (l − l_B)] / l (14)
where l_s is the sheath thickness, l is the total gap between the electrodes and l_B = l − 2 l_s.

It can be seen from Fig. 5 that about 90% of the input power is coupled to the electrons in He + N₂ CAPs, relatively independently of the N₂ fraction. In contrast, the electron-coupling power decreases with increasing O₂ concentration in He + O₂ CAPs; in particular, it falls below the ion-coupling power when [O₂] ≳ 0.5%. The calculated electron-coupling power densities are similar to the numerical ones, suggesting that ohmic heating is the main electron heating mechanism. Moreover, the calculated ion-coupling power densities agree well with the numerical ones for He + N₂ CAPs, and for He + O₂ CAPs they are slightly smaller but follow a similar trend. Formulas (11) and (14) for calculating the electron- and ion-coupling powers are therefore reliable and can be used for estimating the power dissipation characteristics of CAPs.
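A minimal numerical sketch of this power-partition estimate, assuming the bulk power splits in proportion to density times mobility with (n₊ + n₋)/n_e = 2χ + 1, and that the ions take half of the sheath power; the sheath-to-gap ratio used below is an illustrative assumption:

```python
def ion_power_fraction_bulk(chi, mu_ion=20.0, mu_e=1056.0):
    """Fraction of the bulk power coupled to ions, assuming all ions share
    one mobility and (n+ + n-)/n_e = 2*chi + 1, chi = electronegativity.
    In a common field the power to each species scales with n * mu."""
    ion_weight = (2.0 * chi + 1.0) * mu_ion
    return ion_weight / (ion_weight + mu_e)

def ion_power_total(chi, l_s_over_l, mu_ion=20.0, mu_e=1056.0):
    """Space-averaged ion-coupling power fraction: bulk part weighted by
    the bulk length fraction, plus half of the sheath power (ion and
    electron currents are taken equal in the sheath)."""
    f_bulk = ion_power_fraction_bulk(chi, mu_ion, mu_e)
    return f_bulk * (1.0 - l_s_over_l) + 0.5 * l_s_over_l

# Electropositive limit (chi = 0): ions take only ~2% of the bulk power,
# so with a modest sheath fraction the electrons keep roughly 90% overall.
f_pos = ion_power_total(0.0, 0.2)
# Strongly electronegative case (chi = 25): the ion share rises sharply.
f_neg = ion_power_total(25.0, 0.2)
```

With χ = 0 the bulk ion fraction is 20/1076 ≈ 2%, consistent with the ∼90% electron coupling seen for He + N₂; raising χ drives the ion share towards parity, as observed for the high-[O₂] cases.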
Concluding remarks
In this paper, He + N₂ and He + O₂ CAPs are chosen as representatives of electropositive and electronegative plasmas, and their discharge characteristics are studied and compared using fluid models. As the impurity (N₂ or O₂) fraction increases, for He + N₂ CAPs the electron and ion densities increase while the spatiotemporal distributions of the electron density, ion density, electron temperature and electron generation rate change little: the ion density keeps two peaks at the bulk-sheath interfaces, and the electron generation rate remains dominant in the sheath, indicating that the plasmas stay in the γ mode. In contrast, for He + O₂ CAPs the electron density decreases, the ion density first increases and then decreases, the electron temperature increases in the bulk region but decreases in the sheath region, and the plasmas transform from the γ mode to the α mode as the distribution of the electron generation rate changes markedly.
The He + N₂ CAPs are capacitive in nature, but the He + O₂ CAPs become more and more resistive as the O₂ fraction increases. This is because the electron density in the bulk region decreases sharply, so a larger electric field is needed to sustain the plasma. The increase of the electric field in the bulk region makes the sheath boundary more difficult to determine, and several criteria reported in the literature are found not to be applicable. A new criterion for the sheath boundary, |∇E| = 5 kV/cm², is put forward and found suitable for both electropositive and electronegative CAPs.
Most of the input power is dissipated into the electrons in He + N₂ CAPs via ohmic heating, but for He + O₂ CAPs more and more of the input power is coupled to the ions with increasing O₂ fraction; the ion-coupling power even dominates when [O₂] > 0.5%. A formula is put forward to estimate the ion-coupling power, and its results are similar to the numerical ones.
Probing relic neutrino decays with 21 cm cosmology
We show how 21 cm cosmology can test relic neutrino radiative decays into sterile neutrinos. Using the recent EDGES results, we derive constraints on the lifetime of the decaying neutrinos. If the EDGES anomaly is confirmed, there are two solutions, one with a lifetime much longer and one much shorter than the age of the universe, showing how relic neutrino radiative decays can explain the anomaly in a simple way. We also show how to combine the EDGES results with those from radio background observations, finding that the ARCADE 2 excess can potentially be reproduced together with the EDGES anomaly within the proposed non-standard cosmological scenario. Our calculation of the specific intensity at the redshifts probed by EDGES can also be applied to the case of decaying dark matter, and it corrects a flawed expression that appeared in previous literature.
Introduction
With 21 cm cosmology we are entering a new and exciting phase in the study of the history of the universe and of how it can be used to probe fundamental physics [1,2]. Observations of the redshifted 21 cm line of neutral hydrogen, from the emission or absorption of the cosmic microwave background radiation (CMB) by the intergalactic medium, can test the cosmic history at redshifts z ∼ 5-1100. This range covers the three periods after recombination on which we have only fragmentary information: the dark ages, from recombination at z_rec ≃ 1100 to z ≃ 30, when the first astrophysical sources start to form; the cosmic dawn, from z ≃ 30 to the time when reionisation begins at z ≃ 15; and the Epoch of Reionisation (EoR), from z ≃ 15 to z ≃ 6.5, when reionisation ends. In this way, observations of the cosmological 21 cm line global signal can test the standard ΛCDM cosmological model during a poorly charted period of the cosmic history, considering that the most distant known galaxy is located at z = 11.1 [4].
Intriguingly, the EDGES (Experiment to Detect the Global Epoch of Reionisation Signature) collaboration claims to have discovered an absorption signal in the CMB radiation spectrum corresponding to the redshifted 21 cm line at z ≃ 17.2, with an amplitude about twice the expected value [5]. This represents a ∼3.8σ deviation from the predictions of the ΛCDM model, and for this reason the EDGES anomaly has drawn great attention. It should be said that another group [6], re-analysing the publicly available EDGES data with exactly the same procedures, finds almost identical results but claims that 'the fits imply either non-physical properties for the ionosphere or unexpected structure in the spectrum of foreground emission (or both)', concluding that their results 'call into question the interpretation of these data as an unambiguous detection of the cosmological 21-cm absorption signature.' Therefore, more observations will be necessary to confirm not only the anomaly but even the absorption signal itself.
In the light of these recent experimental developments, it is in any case interesting to think of possible non-standard cosmological scenarios that can be tested with 21 cm signal observations at high redshifts and that might either explain the EDGES anomaly (if confirmed) or at least be constrained. The EDGES anomaly can be expressed in terms of a value of the photon-to-spin temperature ratio T_γ(z)/T_S(z) at redshifts z = 15-20, where the absorption profile is observed, that is about twice what is expected in a standard cosmological scenario. This can of course be due either to a larger value of T_γ(z) or a smaller value of T_S(z), or to some combination of the two. In this Letter, we show how radiative decays of the lightest relic neutrinos can explain the EDGES anomaly by producing, after recombination, a non-thermal early photon background able to raise T_γ(z) above the CMB value. A similar scenario, recently revisited in [8], where heavier relic neutrinos decay radiatively into lighter ordinary neutrinos [9][10][11], is ruled out, since it requires degenerate neutrino masses, now excluded by the Planck upper bound Σ_i m_i ≲ 0.17 eV (95% C.L.) [12], and since it requires a too large effective magnetic moment responsible for the decay. In our scenario the lightest relic neutrinos decay radiatively into sterile neutrinos, which allows both bounds to be circumvented. The paper is organised as follows. In Section 2 we briefly review 21 cm cosmology and the EDGES results. In Section 3 we discuss how radiative decays of the lightest relic neutrinos can explain the EDGES anomaly. Finally, in Section 4, we draw the conclusions.
21 cm cosmology and EDGES results
The 21 cm line is associated with the hyperfine energy splitting between the two levels of the 1s ground state of the hydrogen atom, characterised by different relative orientations of the electron and proton spins: anti-parallel for the lower-energy singlet level, parallel for the higher-energy triplet level. The energy gap between the two levels, and therefore of the photons absorbed or emitted at rest, is E_21 = 5.87 μeV, corresponding to a 21 cm line rest frequency ν_21^rest ≃ 1420 MHz.
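The quoted energy and frequencies are easy to cross-check (Planck constant in eV·s from CODATA):

```python
H_PLANCK_EV_S = 4.135667696e-15   # Planck constant, eV*s (CODATA)

nu_rest = 1420.405751e6           # 21 cm rest frequency, Hz
E_21 = H_PLANCK_EV_S * nu_rest    # photon energy, eV

z_E = 17.2                        # EDGES absorption-minimum redshift
nu_obs = nu_rest / (1 + z_E)      # redshifted frequency observed today, Hz
```

This reproduces E_21 ≃ 5.87 μeV and the ≃78 MHz frequency at which the EDGES absorption minimum is observed.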
A shell of neutral hydrogen at a given redshift z ≪ z_rec, after recombination, can then act, through 21 cm transitions, as a detector of the background photons produced at higher redshifts; in standard cosmology this background is just the CMB. This possibility relies on the brightness contrast between the intensity of the 21 cm signal from the shell of neutral hydrogen gas at redshift z and the background radiation at the observed (redshifted) frequency ν_21(z) = ν_21^rest/(1 + z). The brightness contrast can be expressed in terms of the 21 cm brightness temperature (relative to the photon background) [14]
T_21(z) ≃ 27 x_HI (1 + δ_B) (Ω_B h²/0.023) √[(0.15/Ω_m h²)((1 + z)/10)] [1 − T_γ(z)/T_S(z)] mK, (1)
where Ω_B h² = 0.02226 and Ω_m h² = 0.1415 [15] are respectively the baryon and matter abundances, δ_B is the baryon overdensity, x_HI is the fraction of neutral hydrogen, T_γ(z) is the effective temperature, at frequency ν_21(z), of the photon background radiation (coinciding with T_CMB(z) in standard cosmology) and T_S(z) is the spin temperature, parameterising the ratio of the population of the excited state n_1 to that of the ground state n_0 as
n_1/n_0 = (g_1/g_0) e^(−E_21/T_S), (2)
where g_1/g_0 = 3 is the ratio of the statistical degeneracy factors of the two levels. Clearly, if x_HI vanishes there is no signal, since in that case all hydrogen would be reionised and there could be no 21 cm transitions. The spin temperature is related to T_gas, the kinetic temperature of the gas, by
T_S^(−1) = [T_γ^(−1) + x_α T_α^(−1) + x_c T_gas^(−1)]/(1 + x_α + x_c), (3)
where x_α and x_c are coefficients describing the coupling between the hyperfine levels and the gas (T_α ≃ T_gas is the colour temperature of the Lyα radiation). In the limit of strong coupling, x_α + x_c ≫ 1, one has T_S ≃ T_gas, while in the limit of no coupling, x_α = x_c = 0, one has T_S = T_γ and there is no signal.
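A minimal numerical sketch of the brightness-temperature estimate, using the standard approximate expression common in the 21 cm literature (the 27 mK normalisation and the 0.023/0.15 reference abundances are assumptions of this sketch, not values taken from the text):

```python
import math

def t21_mK(z, x_HI=1.0, delta_B=0.0, Tgamma_over_TS=1.0,
           omega_b_h2=0.02226, omega_m_h2=0.1415):
    """Approximate 21 cm brightness temperature in mK as a function of
    redshift and of the photon-to-spin temperature ratio; negative values
    correspond to an absorption signal."""
    return (27.0 * x_HI * (1.0 + delta_B)
            * (omega_b_h2 / 0.023)
            * math.sqrt(0.15 / omega_m_h2 * (1.0 + z) / 10.0)
            * (1.0 - Tgamma_over_TS))

# Strongly coupled gas at the EDGES redshift: T_gamma/T_S ~ 8.3 gives an
# absorption signal of a few hundred mK; T_S = T_gamma gives no signal.
t_absorb = t21_mK(17.2, Tgamma_over_TS=8.3)
t_none = t21_mK(17.2)
```

The sign convention makes the no-coupling limit (ratio = 1) vanish automatically, as described in the text.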
The evolution of T_21 with redshift can be schematically described by five main stages [1,2]: (i) In a first stage after recombination, during the dark ages, the gas is still coupled to the radiation thanks to a small but non-negligible amount of free electrons that still interact via Thomson scattering with the photon background. In this case one has T_γ = T_gas = T_S and consequently T_21 = 0, i.e., there is no signal (this conclusion is approximate: a very small signal is present even at high redshifts, mainly because the colour temperature deviates slightly from T_gas; this has been studied recently in detail in [16], where it was found that −T_21 ≲ 2.5 mK at z ≃ 500). This stage lasts until the gas starts decoupling from the radiation around z_gas,dec ≃ 150; from this time the gas temperature cools more rapidly than the CMB radiation, with T_gas ∝ (1 + z)². (ii) In a second stage, approximately for 250 ≳ z ≳ 30, still during the dark ages and with the precise boundary values depending on cosmological details, one has approximately T_S ≃ T_gas, since gas collisions are efficient enough to couple T_S to T_gas. In this case T_21 < 0 and an early absorption signal is expected. (iii) At z ≃ 30 the gas becomes so rarefied that the collision rate is too low to enforce T_S ≃ T_gas, and one enters a regime where x_α + x_c ≪ 1 and T_S ≃ T_γ. In this stage, during the cosmic dawn, T_21 ≃ 0 and the 21 cm global signal is again suppressed (how suppressed it is depends on various astrophysical parameters [3]). (iv) At z ≲ 30, the gas also starts collapsing under the action of dark matter, and the first astrophysical sources form, emitting Lyα radiation that is able, through the Wouthuysen-Field effect [17], to gradually couple T_S to T_gas again. In the redshift range z_h ≲ z ≲ 25, where z_h ≃ 10-20 is the redshift at the heating transition [1] (this stage starts during the cosmic dawn and can last until the epoch of reionisation has begun at z ≃ 15), one can again have T_21 < 0, implying an absorption signal. This is within the range tested by EDGES, whose results seem to confirm the existence of the absorption signal.
(v) In a fifth stage, for z ≲ z_h (depending on the precise value of z_h, this stage can either start during the cosmic dawn and end during the epoch of reionisation, or occur entirely during the latter), the gas is reheated by the astrophysical radiation and T_S ≃ T_gas > T_γ, so that T_21 turns positive and one has an emission signal from the regions that are not fully ionised (in this stage the signal crucially depends on astrophysics, and it should be said that not in all scenarios does T_gas become larger than T_γ; in that case the emission signal is missing [18]). Eventually all the gas gets ionised, the fraction of neutral hydrogen vanishes, and the signal switches off again.

The EDGES High and Low band antennas probe the frequency ranges 90-200 MHz and 50-100 MHz respectively, overall measuring the 21 cm signal between redshifts 6 and 27, which corresponds to an age of the universe between 100 Myr and 1 Gyr and includes the epochs of reionisation and cosmic dawn, when the first astrophysical sources form and a second stage of absorption signal is predicted (the fourth and fifth stages in the description above). The EDGES collaboration found an absorption profile approximately in the range z = 15-20 with its minimum at z_E ≃ 17.2, corresponding to ν_21(z_E) ≃ 78 MHz, with a 21 cm brightness temperature
T_21(z_E) = −500 (+200/−500) mK at 99% C.L. (4)
On the other hand, at the centre of the absorption profile detected by EDGES one expects, assuming T_S = T_gas, a gas temperature T_gas(z_E) ≃ T_gas,dec [(1 + z_E)/(1 + z_gas,dec)]² ≃ 6 K, with z_gas,dec ≃ 150 and T_gas,dec ≃ 410 K respectively the redshift and the temperature at the time when the gas decoupled from the radiation. From Eq. (1) one then immediately finds T_21(z_E) ≃ −0.2 K. Therefore, the best fit value for T_21(z_E) is about 2.5 times lower than expected within ΛCDM; even at 99% C.L. it is still 50% lower.
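The expected temperatures at the EDGES redshift follow from adiabatic cooling of the gas after decoupling; a quick check using the decoupling values quoted in the text:

```python
T_CMB0 = 2.725                 # present CMB temperature, K
z_E = 17.2                     # EDGES absorption minimum
z_dec, T_dec = 150.0, 410.0    # gas-radiation decoupling values from the text

# After decoupling the gas cools adiabatically, T_gas proportional to (1+z)^2
T_gas = T_dec * ((1 + z_E) / (1 + z_dec)) ** 2
T_gamma = T_CMB0 * (1 + z_E)   # CMB temperature at z_E

ratio = T_gamma / T_gas        # photon-to-spin ratio if T_S = T_gas
```

This gives T_gas(z_E) ≈ 6 K against T_γ(z_E) ≈ 50 K, the ratio that fixes the expected absorption depth.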
If this anomalous result is confirmed and astrophysical solutions are ruled out, then, very interestingly, it can be regarded as the effect of some non-standard cosmological mechanism. For example, it has been proposed that a (non-standard) interaction of the baryonic gas with the much colder dark matter component would cool T_gas, and consequently T_S, below the ΛCDM prediction [7]. Another possibility is that T_gas is lower because the gas decouples earlier, so that z_gas,dec > 150; for example, for z_gas,dec ≃ 300 one has T_gas(z_E) ≃ 3.5 K, i.e., about half the value predicted within ΛCDM, which would reconcile the tension between the ΛCDM prediction and the EDGES result. Models of early dark energy have been proposed to this end, but they are strongly ruled out by observations of the CMB temperature power spectrum [19]. A third possibility is that some non-standard source produces an additional non-thermal component of soft photons, effectively increasing T_γ above T_CMB at frequencies around ν_21(z_E). For example, these could be produced by dark matter annihilations and/or decays [20,21], which could also give a signal at other frequencies, for example addressing the ARCADE 2 excess at higher (∼GHz) frequencies [22]; that excess, however, has not been confirmed by another group using ATCA data [23]. Conversion of dark photons into soft photons has also been proposed as a solution to the EDGES anomaly [24].
In the next section we present a mechanism for the production of a non-thermal soft photon component relying on relic neutrino radiative decays into sterile neutrinos. Even if the EDGES anomaly is not confirmed, we show that the EDGES results tighten the existing constraints [10,25] on the parameters of the scenario.
Relic neutrino radiative decays
The 21 cm CMB photons absorbed at z_E fall well within the Rayleigh-Jeans tail, since E_21 ≪ T(z_E). In this regime the specific intensity depends linearly on temperature, explicitly
I_γ(E, z) ≃ E² T_γ(z)/π² (5)
in natural units. Only photons with energy E_21 at z ≃ z_E can be absorbed by the neutral hydrogen, producing a 21 cm absorption global signal. The EDGES results can be explained by an additional non-thermal photon component with specific intensity I_nth, conveniently parameterised by the ratio
R ≡ T_γ,nth(z_E)/T_CMB(z_E), (6)
where T_γ,nth is defined in terms of I_nth in the same way as T_γ is defined in terms of I_γ in Eq. (5). We consider the radiative decay of active neutrinos ν_i with mass m_i and lifetime τ_i into a sterile neutrino ν_s with mass m_s, i.e., ν_i → ν_s + γ. For definiteness we refer to the case of lightest-neutrino decays, corresponding to i = 1, and comment at the end on how our results change if one considers the heavier neutrinos. If the decays occur after matter-radiation decoupling, the photons produced in the decays will not distort the CMB thermal spectrum but will give rise to a non-thermal γ background [10] contributing to R. For a given m_1 there are two limits for m_s: a quasi-degenerate limit, m_1 ≃ m_s, and the limit m_s ≪ m_1. For m_s ≪ m_1 the bulk of the neutrinos, with E ∼ T_CMB, necessarily decay while they are relativistic. This is easy to understand. Let us introduce the scale factor a = (1 + z)^(−1) and its value a_E ≡ (1 + z_E)^(−1) at z_E. In the matter-dominated regime we can write a(t) ≃ a_E (t/t_E)^(2/3), where t_E ≃ 222 Myr is the age of the universe at z = z_E [26]. For neutrinos that decay at rest at time t one has to impose m_1 = 2 E_21 a_E/a(t) in order to have photons with the correct energy at t_E.
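The allowed m_1 window for the m_s ≪ m_1 case, and the relic neutrino temperature at z_E, follow directly from these relations; a quick numerical check:

```python
E_21 = 5.87e-6          # eV, 21 cm transition energy
z_E, z_rec = 17.2, 1100.0

# For m_s << m_1 each photon carries E = m_1/2 at decay; redshifting it to
# E_21 at z_E requires m_1 = 2 E_21 (1 + z)/(1 + z_E), with z_E < z < z_rec
m1_min_meV = 2 * E_21 * 1e3                            # decay right at z_E
m1_max_meV = 2 * E_21 * (1 + z_rec) / (1 + z_E) * 1e3  # decay at recombination

# Relic neutrino temperature at z_E: T_nu = (4/11)^(1/3) T_CMB
K_TO_EV = 8.617333e-5   # Boltzmann constant, eV/K
T_nu_zE_meV = (4.0 / 11.0) ** (1.0 / 3.0) * 2.725 * (1 + z_E) * K_TO_EV * 1e3
```

This reproduces the 0.012-0.71 meV window quoted below and T_ν(z_E) ≃ 3 meV.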
Imposing that the decays occur after recombination, since otherwise the non-thermal component would thermalise or produce unacceptable distortions of the CMB spectrum, and of course before the time when the photons are absorbed by the neutral hydrogen, corresponding to the condition z_E < z(t) < z_rec ≃ 1100, one finds 0.012 meV ≲ m_1 ≲ 0.71 meV, showing that the ν_1's are too light to be treated non-relativistically for m_s ≪ m_1. (This also shows that radiative decays of the two heavier neutrinos would produce photons at too high frequencies, considering that m_2 ≥ m_sol ≃ 9 meV and m_3 ≥ m_atm ≃ 50 meV, where the lower bounds are saturated in the normal hierarchical limit. One could consider the radiative decays ν_2,3 → ν_1 + γ, and in the quasi-degenerate limit m_1 ≃ 0.12 eV photons with the correct energy would be produced; however, the upper bound m_1 ≲ 0.07 eV placed by the Planck collaboration now rules out this possibility [12], and moreover these processes need values of the effective magnetic moment ruled out by the current experimental upper bound [11,8].) Concerning the sterile state itself: the origin and properties of neutrino masses and mixing would be related to extensions of the SM (e.g., grand-unified theories), and the simplest models usually require very heavy sterile neutrinos (m_s ≳ 100 GeV) in the form of right-handed neutrinos; however, the existence of light sterile neutrinos cannot be excluded, and many models have been proposed, especially in connection with various neutrino mixing anomalies (for a review see for example [27]). On the other hand, the non-relativistic case can be realised in the quasi-degenerate limit m_1 ≃ m_s, since one can then have m_1 ≫ T_ν(z) ≃ (4/11)^(1/3) T(z) at the time of decay; indeed at z = z_E one has T_ν(z_E) ≃ 3 meV. Since the current upper bound on the sum of the neutrino masses implies m_1 ≲ 50 meV, one can well have m_1 ≃ m_s ≫ 3 meV. This implies 50 ≳ m_1/meV ≳ 10, a window that will be fully tested by near-future cosmological observations [28]. (The lower bound m_1 ≳ 10 meV corresponds to m_1 ≳ 3 T_ν(z_E), quite a conservative condition to enforce that the bulk of the neutrinos are non-relativistic when they decay, since for a Maxwell-Boltzmann distribution ⟨v²⟩ = 3T/m; in this way the bulk of the neutrinos have a kinetic energy negligible compared to their rest energy.) Moreover, in this (testable) non-relativistic and quasi-degenerate case not only is it easy to calculate R, as we will see, but one also obtains the most conservative constraints on τ_1 and Δm_1 ≡ m_1 − m_s since, as we will
Let us now calculate R removing the instantaneous assumption. Writing the fluid equation for the energy density of non-thermal photons produced by ν 1 decays [10,25,29], where H ≡ȧ/a is the expansion rate, one easily finds a solution in terms of a Euler integral The integral is done over all times t when photons are produced by neutrino decays with energy m 1 that is redshifted to an energy E(t, t E ) = m 1 a(t)/a E at t E . Photons with the correct energy E 21 at t E are produced at a specific time t 21 such that a 21 /a E = E 21 / m 1 , where a 21 ≡ a(t 21 ). Of course notice that imposing z rec > z 21 ≡ a −1 21 − 1 > z E , one would find E 21 m 1 0.35 meV, analogously to the range found for m 1 in the case m s m 1 . However, since we are assuming that neutrinos are non-relativistic at decays, and this implies T ν (z 21 ) 0.18 eV (1 + z 21 )/(1 + z dec ) m 1 50 meV, one finds z 21 275, implying an even more restrictive range E 21 m 1 0.9 × 10 −4 eV . (11) We can now easily switch from time to energy derivative finding where . From the definition of R (see Eq. (6)) and R (see Eq. (8)), one immediately obtains The condition R ≤ R E , where the equality corresponds to the condition to explain the EDGES anomaly and the inequality implies constraints on τ 1 and m 1 , can be put in the simple form 12 where we defined There are clearly two solutions. A first one (referred to as 'EDGES A' in Fig. 2) for τ 1 t E is simply given by x = 2.2 +4 −1 × 10 −6 , from which one finds where t 0 = 13.8 Gyr = 4.35 × 10 17 s is the age of the universe.
For this second solution one has to require that decays occur mainly after the matter-radiation decoupling time, in order to produce a non-thermal photon background, so that one has to impose τ_1 ≳ t_dec ≃ 3.71 × 10^5 yr. Moreover, though the photon energies are much below the thermal bath temperature, they might produce too large deviations of the CMB from a thermal spectrum. Even though this second solution is less appealing and likely not viable, it is also interesting that one could in principle expect a number of neutrino species at recombination lower than three if only a fraction of the decays is allowed to occur before recombination.

¹² Of course one should also not forget that Δm_1 is constrained within the range in Eq. (11).

We now also have to consider whether photons produced from neutrino decays might give visible (wanted or unwanted) effects at other frequencies. First of all, one should worry about the CMB spectrum tested by the COBE-FIRAS instrument in the range of frequencies (2-21) cm^{-1}, corresponding to (60-600) GHz or to energies (0.25-2.5) meV [30]. However, since Δm_1 < 0.09 meV (see Eq. (11)), in this non-relativistic scenario one completely circumvents the constraints from CMB thermal spectrum measurements.
Radio background observations at GHz frequencies can also test the scenario, either constraining it, as the ATCA data do [23], or even providing, with the ARCADE 2 excess [22], another signal to be explained together with the EDGES anomaly.¹³ Let us see how they can be combined with 21 cm observations to test relic lightest-neutrino decays. In this case the results are given in terms of an effective temperature T_rb(E_rb) of the radio background compared to the Rayleigh-Jeans tail of the CMB spectrum. This time the detection of the produced photons is made directly at the present time, while in the case of EDGES, as we discussed, the photons produced by the decays are absorbed by the intergalactic medium at the time t_E. Therefore, we now have to impose an analogous condition, where this time a_rb = E_rb/Δm_1. If we focus on the solution with τ_1 ≫ t_0, then the exponential can be neglected and, using the matter-dominated expansion rate H(a_rb), one obtains the condition in Eq. (19), where again the equality holds in the case one wants to explain the ARCADE 2 excess, and the inequality in the case one imposes the constraint from the ATCA data. The ARCADE 2 collaboration claims

¹³ Notice that in [31] the existence of the ARCADE 2 excess is questioned and it is proposed that a more realistic galactic model can reconcile the measurements of uniform extragalactic brightness by ARCADE 2 with the expectations from known extragalactic radio source populations.
an excess with T_rb = (62 ± 10) mK at a frequency of 3.2 GHz, corresponding to E_rb = 13.2 × 10^{-6} eV, and from the condition (19) one obtains Eq. (20), a solution shown in Fig. 2 ('ARCADE A') in orange (at 99% C.L.), together with the corresponding allowed range for Δm_1, found similarly to Eq. (11) with the difference that now the energy at production has to be redshifted to z = 0 instead of z = z_E.
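The quoted energies follow directly from E = hν. A quick check of our own (not from the paper), using the Planck constant in eV·s:

```python
# Photon energy corresponding to a radio frequency, E = h * nu.
H_PLANCK_EV_S = 4.135667e-15  # Planck constant in eV*s

def photon_energy_eV(nu_Hz):
    return H_PLANCK_EV_S * nu_Hz

E_rb = photon_energy_eV(3.2e9)     # ARCADE 2 frequency, ~13.2e-6 eV
E_21 = photon_energy_eV(1.4204e9)  # 21 cm line (1420.4 MHz), ~5.9e-6 eV
print(round(E_rb / E_21, 2))       # the two energies differ by a factor ~2.25
```

This makes explicit that the ARCADE 2 band probes photon energies only a factor of a few above the 21 cm energy, which is why the two observables constrain overlapping ranges of Δm_1.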
If this is compared with the condition we found to explain the EDGES anomaly, one obtains the constraint
shown in Fig. 2 in light green. Notice that this constraint does not apply in the narrow range E_21 < Δm_1 < E_rb ≃ 7 × 10^{-6} eV (so that EDGES allows the constraints to be extended to slightly lower values of Δm_1).¹⁴ Another interesting observation is that in the second stage of the evolution of T_21 that we outlined in Section 2, for redshifts 250 ≳ z ≳ 30, one expects an early absorption signal at z ≃ 100. If we extend the definition of R to a generic redshift z_absorption, one can easily see from our expressions that it scales as ∝ √(1 + z_absorption). Therefore, the scenario predicts a roughly doubled value of R(z ≃ 100) compared to the one measured at z_E. This would be a powerful test of the scenario, though in order to have a signal also in the early absorption stage, the upper bound on Δm_1 in Eq. (11) becomes more stringent: from the requirement z_21 ≲ 275, one now obtains Δm_1 ≲ 3 E_21.
¹⁴ Recently, a study of the radio background data from the LWA1 Low Frequency Sky Survey (LLFSS) at frequencies between 40 MHz and 80 MHz [33] has found an excess well described by a power law T_rb ≃ T_rb,0 (ν/ν_0)^β, with ν_0 = 310 MHz and β ≃ −2.58, also fitting the ARCADE 2 results at much higher frequencies. For example, at ν = 80 MHz the survey finds T_rb = (1188 ± 112) K. This excess cannot be explained by our model, since from Eq. (19) one can see that it predicts T_rb ∝ E^{−0.5}.
If we fit the ARCADE 2 results, we then have a signal at ν = 80 MHz, approximately the same frequency tested by EDGES, that is about 100 times smaller than what the LLFSS finds. Of course, the LLFSS results do not exclude our model; they simply require an alternative explanation. More generally, they can hardly be reconciled with the EDGES anomaly within a realistic model, since one would need a mechanism in which the intensity of the produced radiation increases by about 20 times between z ≃ z_E and today, and this despite the fact that the expansion dilutes a matter fluid number density, such as that of primordial black holes, by a factor (1 + z_E)^3. Even if one finds a model in which this huge enhancement of the intensity is realised, it has to be strongly fine-tuned to match both results, and this without even considering the ARCADE 2 excess.
The derivation of the constraints could be further extended by going beyond the quasi-degenerate limit m_1 ≃ m_s, which necessarily implies going beyond the non-relativistic regime. In this case one has to take into account the thermal distribution function of the neutrinos and from it derive the non-thermal distribution of the photons, solving a simple Boltzmann equation [32]. The factor R gets reduced for fixed τ_1, since the photon energy spectrum spreads to higher energies and at the energy E_21 at z_E there are fewer photons; the lifetimes required to explain the EDGES anomaly therefore become shorter, and this tends to generate a conflict with the constraints from radio observations, and likely with the FIRAS-COBE data as well, since the photon energies can be much higher.
Finally, let us comment that though we have considered for definiteness decays of the lightest neutrinos, the results are also valid for the heavier neutrinos, of course with the replacement (τ_1, m_1) → (τ_{2,3}, m_{2,3}). The only difference is that the heavier neutrinos automatically respect the condition m_{2,3} ≳ 3 meV required to be non-relativistic, and in this case the lower bound m_1 ≳ 10 meV does not hold, so that the lightest neutrino mass can be arbitrarily small, since the lightest neutrinos then do not play any role.
We should also say that, even though for definiteness we considered radiative decays into sterile neutrinos, our results are valid for any other decay mode involving a light exotic particle. Our results can also be easily exported to the case of the quasi-degenerate dark matter recently proposed in [21], though notice that the correct way to calculate the specific intensity is Eq. (12) (replacing of course the neutrino with the dark matter number density), which takes into account that only those photons produced before t_E can be responsible for the signal, while the authors of [21] incorrectly use an expression valid for photons detected at the present time. Notice, however, that in the case of decaying dark matter the fact that the intensity of the non-thermal photons has to be comparable to that of the CMB photons, as required by EDGES, is a coincidence. On the other hand, in the case of decays of active to light sterile neutrinos, the abundance of relic active neutrinos is fixed by thermal equilibrium, and this naturally produces a non-thermal photon intensity comparable to that of the CMB photons. One can think of a simple model, for example, in terms of a singular seesaw [34] extended with a type II contribution [35]. In this case an active-sterile neutrino mixing is expected, and one can have interesting phenomenological consequences that can help test the scenario.¹⁵ For example, in addition to obvious possible effects in neutrino oscillation experiments, and in particular in solar neutrinos, the fact that m_s < m_1 makes possible a mechanism of generation of a large lepton asymmetry in the early universe [36], with possible testable effects in big bang nucleosynthesis and in the CMB acoustic peaks [37].¹⁶
Conclusion
We discussed a scenario where relic neutrinos can radiatively decay into sterile neutrinos. This can be probed with 21 cm cosmology and, from the EDGES results, we derived constraints on the mass and lifetime of the decaying active neutrino and on the difference of masses between the active and sterile neutrinos in the quasi-degenerate case. Interestingly, the scenario can explain the EDGES anomaly, if this is confirmed. The scenario could also potentially have other testable phenomenological effects, such as the excess at higher radio frequencies claimed by the ARCADE 2 collaboration. Our results can also be straightforwardly extended to the case of decaying quasi-degenerate dark matter. Additional results on the global 21 cm signal from experiments such as SARAS [40] and LEDA [41] might provide independent tests of the EDGES anomaly. If it is confirmed, a precise determination of the dependence of the absorption signal on redshift could be used to test our proposed scenario even more strongly. Certainly, 21 cm cosmology opens new fascinating opportunities to test models of new physics and might, in the not too distant future, finally provide evidence of non-standard cosmological effects.

¹⁵ Radiative decays would still generate an effective magnetic moment for the active neutrinos, but if the mixing with the sterile neutrino is sufficiently small, a condition easily realised especially for the A solution with very long lifetime, this can be well below the upper bound from stellar cooling.

¹⁶ One could investigate whether such a dynamical generation of the asymmetry might suppress the thermalisation of a ∼eV sterile neutrino [38].
Functional Dissection Identifies a Conserved Noncoding Sequence-1 Core That Mediates IL13 and IL4 Transcriptional Enhancement*
Conserved noncoding sequence (CNS)-1 has been shown to coordinately regulate the expression of the Th2 cytokine genes IL4, IL13, and IL5. We have used the interaction between CNS-1 and the human IL13 and IL4 promoters as a model to pursue the molecular mechanisms underlying CNS-1-dependent regulation of Th2 cytokine gene transcription. CNS-1 potently enhanced the activity of IL13 and IL4 promoter reporter vectors upon full T cell activation. Analysis of CNS-1 deletion mutants mapped enhancer activity to a short core (CNS-1-(270–337)) that contains three closely spaced cyclic AMP-responsive elements (CRE). CRE site 2 bound CRE-binding protein (CREB) and activating transcription factor (ATF)-2 in vitro and was essential for CNS-1-dependent up-regulation of IL13 transcription. Cotransfection of an IL13 reporter construct with expression vectors for wild type or mutant CREB and ATF-2 showed that CREB, but not ATF-2, regulates CNS-1 enhancer activity. Notably, chromatin immunoprecipitation analysis showed T cell activation recruits CREB and the coactivator CREB-binding protein (CBP)/p300 to the endogenous CNS-1. Moreover, CBP/p300 activity was essential for CNS-1-mediated enhancement of IL13 transcription. Collectively, these data define the region within CNS-1 responsible for enhancement of IL13 and IL4 transcription and suggest CREB/CBP-dependent mechanisms play an important role in facilitating Th2 cytokine gene expression in response to T cell receptor signaling.
The Th2 cytokine genes, closely arrayed within 150 kb of human chromosome 5q31 and the syntenic region of mouse chromosome 11, typically demonstrate coordinated expression (1, 2), a feature critical for the emergence of a bona fide allergic phenotype in experimental and clinical models. However, the molecular mechanisms underlying the concerted expression of Th2 cytokines remained elusive despite intense investigation.
A breakthrough came as a result of comparative genomics analyses to identify noncoding regions highly conserved (≥70% identity) between humans and evolutionarily distant mammalian species (3). These elements, abundant in the human genome, display characteristics indicative of regulatory function. In particular, they tend to demonstrate higher selective constraint than genomic regions that encode translated or noncoding RNAs (4, 5) and contain short, alternating stretches of sequence with high or low divergence, a pattern typical of protein-binding sites (5).
The search for highly conserved noncoding sequences in ∼1 Mb of human chromosome 5q31 identified several elements (3). The largest of these, conserved noncoding sequence (CNS)-1, mapped within the IL4/IL13 intergenic region of the Th2 cytokine locus. Deletion of CNS-1, either from a transgene or from the native murine locus, led to a marked decrease in the expression of all three cytokine genes (3, 6), establishing CNS-1 as a vital regulatory element for coordinated Th2 cytokine expression. Consistent with this role, epigenetic changes in the endogenous CNS-1 chromatin, including changes in levels of histone acetylation (7) and DNA methylation (8, 9) and the appearance of DNase I-hypersensitive sites (10), were found to accompany Th2 cytokine expression.
More recently, analysis of long-range intrachromosomal interactions within the murine Th2 cytokine locus highlighted events that accompany the coordinated transcriptional regulation of Th2 cytokine genes and provided clues about the role CNS-1 may play in this process (11). This element was found to come into close spatial proximity with all three Th2 cytokine promoters in both T and non-T cells, suggesting it may be important for the acquisition of the initial "pre-poised" chromatin configuration of the Th2 cytokine locus. Of note, the physical interaction between CNS-1 and the Th2 cytokine promoters persisted through the T cell- and Th2 cell-specific stages of Th2 locus reorganization, pointing to an involvement of this element throughout the regulatory process. The interactions between CNS-1 and the Th2 cytokine promoters need to be further dissected to understand how CNS-1 contributes to their transcriptional regulation.
We chose the interaction between CNS-1 and the IL13 and IL4 promoters as a model to characterize the molecular mechanisms by which CNS-1 regulates Th2 cytokine gene expression in human CD4 T cells. Among the genes targeted by CNS-1, IL13 is essential to mediate the Th2 effector functions critical to the pathogenesis of allergic inflammation (12-14), and IL4 is critical to initiate Th2 cell differentiation (reviewed in Ref. 15). Furthermore, expression of IL-13 and IL-4 was strongly decreased in CNS-1−/− mice (6). We show here that CNS-1 is a potent T cell activation-dependent enhancer of the human IL13 and IL4 promoters. CNS-1 enhancer activity mapped to a short (68-bp) core that bound cyclic AMP-responsive element-binding protein (CREB) and the coactivator CREB-binding protein (CBP)/p300 in activated T cells and required these factors to enhance Th2 cytokine gene transcription.
EXPERIMENTAL PROCEDURES
DNA Constructs—p2.7IL13luc was created by PCR amplification of a 2666-bp region encompassing the human IL13 promoter (−2672 to −6, relative to the IL13 ATG; GenBank™ accession numbers AC004041 and L42080) using genomic DNA as a template. We selected this region based on the analysis of a panel of human IL13 promoter reporter constructs.³ The PCR primers (IL13pro2.7F and IL13proR; all primer sequences are provided in supplemental Table 1) contained KpnI and NheI sites that were used to clone the IL13 promoter fragment upstream of the firefly luciferase gene in pGL3Basic (Promega). p369IL13 was created by amplification of the −369 to −6 region using primers IL13pro369F and IL13proR and p2.7IL13luc as template, followed by cloning into pGL3 Basic. p800IL4 contains 800 bp of human IL4 promoter sequence (−800 to −1 relative to the IL4 ATG) amplified by PCR using primers IL4pro800F and IL4proR with the human P1 clone H11 (GenBank™ accession number AC004039) as template. The amplified fragment was cloned into the SacI and NheI restriction sites of pGL3 Basic.
To generate the CNS-1 constructs, we initially amplified a 965-bp fragment (+5604 to +6568 relative to the IL13 ATG) of the human IL4/IL13 intergenic region encompassing CNS-1, and we cloned it into the SalI site located downstream of the luciferase gene in p2.7IL13luc. The boundaries of the human CNS-1 element were defined based on a sequence alignment with the murine IL4/IL13 intergenic region (GenBank™ accession number AC005742). The full-length 372-bp CNS-1 element (GenBank™ accession number AC004039; nucleotides 42330-42701) was amplified by PCR (primers CNS1 1F and CNS1 372R) and cloned 3′ of the luciferase gene in p2.7IL13luc in both genomic and reverse orientations. Additionally, full-length CNS-1 was cloned downstream of the reporter gene in the SalI site of p369IL13luc and p800IL4luc.
Cell Culture and Transfections—Jurkat T cells (ATCC clone E6-1) were cultured in RPMI 1640 supplemented with fetal calf serum (10%, HyClone), penicillin (100 units/ml), streptomycin (100 μg/ml), and L-glutamine (2 mM). Jurkat T cells (1 × 10⁷) in log phase of growth were transfected with endotoxin-free plasmid preparations by electroporation (1 pulse, 240 V, 50 ms). Cells were transfected with either p2.7IL13luc (20 μg) or with equimolar amounts of the indicated reporter vectors along with pRL-TK (20 ng; Promega) to control for transfection efficiency. Following electroporation, cells (5 × 10⁶) were cultured in the presence or absence of phorbol 12-myristate 13-acetate (PMA; 20 ng/ml, Sigma) and ionomycin (1 μM; Sigma) or plate-bound anti-CD3 (2.5 μg/ml; R&D Systems) and soluble anti-CD28 antibody (1.25 μg/ml; R&D Systems) for 16 h. When indicated, Jurkat cells were transfected with expression vectors for WT or mutant CREB (30 ng), ATF-2 (30 ng), and E1A 12S (1 μg) or equimolar amounts of empty pcDNA3 to control for total DNA content. Firefly and Renilla luciferase activity was determined using the dual luciferase assay system (Promega). In addition, the protein concentration of each cell lysate was quantitated with a BCA protein assay (Pierce). The relative luciferase activity (RLA) for each sample represents luciferase counts corrected for transfection efficiency and total protein content. Fold induction represents the ratio of RLA values between stimulated and unstimulated cells.
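The RLA and fold-induction arithmetic described above can be written out explicitly. This is an illustrative sketch with made-up numbers; the function and variable names are ours, not from the paper:

```python
# Relative luciferase activity: firefly counts corrected for transfection
# efficiency (Renilla counts) and for total protein content of the lysate.
def rla(firefly, renilla, protein_mg):
    return firefly / renilla / protein_mg

def fold_induction(stimulated, unstimulated):
    """Each argument is a (firefly, renilla, protein_mg) tuple."""
    return rla(*stimulated) / rla(*unstimulated)

# Hypothetical counts giving the kind of 16-fold induction seen for p2.7IL13:
print(fold_induction((3.2e6, 1.0e4, 0.5), (2.0e5, 1.0e4, 0.5)))  # 16.0
```

Because both samples are divided by their own Renilla and protein values, well-to-well differences in transfection efficiency and cell number cancel out of the fold-induction ratio.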
Nuclear Extract Preparation—Nuclear extracts were prepared from Jurkat T cells (1.5 × 10⁷) cultured in the presence or absence of PMA (20 ng/ml) and ionomycin (1 μM) for 3 h. Cells were resuspended in buffer A (3 mM MgCl₂, 10 mM HEPES, 40 mM KCl, 5% glycerol, 0.2% Nonidet P-40) supplemented with protease and phosphatase inhibitors (1 mM dithiothreitol, 1 mM phenylmethylsulfonyl fluoride, 10 μg/ml aprotinin, 10 μg/ml leupeptin, 10 μg/ml antipain, 10 μg/ml pepstatin, 5 mM β-glycerophosphate, 1 mM NaF, 1 mM NaV, and 1 mM benzamidine) and incubated on ice for 5 min. Following centrifugation, the nuclear pellets were resuspended in buffer C (1.5 mM MgCl₂, 20 mM HEPES, 420 mM NaCl, 25% glycerol, 0.2 mM EDTA) supplemented with protease and phosphatase inhibitors as indicated above. After a 30-min incubation on ice, the nuclear lysis solution was centrifuged, and the supernatant fractions were flash-frozen in liquid nitrogen and stored at −80°C. The protein concentration of each preparation was quantitated with a BCA protein assay (Pierce).
Electrophoretic Mobility Shift Assay (EMSA)—Single-stranded complementary oligonucleotides were annealed and PAGE-purified. Annealed oligonucleotides were end-labeled with [γ-³²P]ATP with T4 polynucleotide kinase. EMSA were performed with 10 μg of nuclear extract in binding buffer (100 mM NaCl, 10% glycerol, 200 ng/μl bovine serum albumin, 50 ng/μl poly(dI-dC), 10 mM HEPES (pH 7), 0.1 mM EDTA, 0.25 mM dithiothreitol, 0.6 mM MgCl₂). For competition or supershift assays, the indicated unlabeled oligonucleotide competitor (100-fold molar excess) or antibody (2 μg) was added 30 min prior to addition of the radiolabeled probe. Following addition of the radiolabeled probe, the samples were incubated for 30 min at room temperature and loaded onto a 5% (w/v) polyacrylamide gel. Electrophoresis was performed at a constant 19 mA for 6 h at 4°C, and the gels were dried prior to autoradiography. Antibodies used for supershift analysis included a polyclonal and a monoclonal anti-CREB (C-21 and X-12, respectively), a polyclonal anti-Jun (D), and a monoclonal anti-ATF-2 (F2BR-1), all from Santa Cruz Biotechnology. Normal rabbit IgG (Upstate Biotechnology, Inc.) or a monoclonal anti-STAT1 antibody (C-136, Santa Cruz Biotechnology) was used as control. The DNA sequences corresponding to the oligonucleotide competitors and probes are provided in supplemental Table 3.
Chromatin Immunoprecipitation (ChIP)—Jurkat T cells, unstimulated or treated with PMA (20 ng/ml) and ionomycin (1 μM) for 3 h, were incubated with formaldehyde (1%) for 10 min at 37°C. Glycine was added to a final concentration of 125 mM to halt the cross-linking. Cells were harvested, washed with 1× PBS supplemented with protease inhibitors (1× EDTA-free Complete protease inhibitor mixture (Roche Applied Science) and 1 mM phenylmethylsulfonyl fluoride), and lysed in ChIP lysis buffer (Upstate Biotechnology, Inc.) supplemented with protease inhibitors as above. Chromatin was sheared by sonication (five times with 10-s pulses, 30% maximum; Microson XL200, Misonix) and diluted 10-fold in ChIP dilution buffer (Upstate Biotechnology, Inc.) supplemented with protease inhibitors. An aliquot of chromatin (∼5-6 × 10⁶ whole-cell equivalents) was set aside and used as input DNA. The remaining sample was precleared with salmon sperm DNA/protein A-agarose slurry (Upstate Biotechnology, Inc.) and divided into aliquots (5 × 10⁷ whole-cell equivalents) for immunoprecipitation. Immunoprecipitation reactions were performed with 10 μg of antibody specific for CREB (C-21), CBP (A-22), or p300 (N-15) (Santa Cruz Biotechnology), or with normal rabbit IgG (Upstate Biotechnology, Inc.), overnight at 4°C with rotation.
The chromatin-antibody complexes were collected with salmon sperm DNA/protein A-agarose slurry and washed sequentially with low salt wash, high salt wash, LiCl wash, and TE (ChIP Assay kit; Upstate Biotechnology, Inc.). The chromatin-antibody complexes were eluted (1% SDS, 0.1 M NaHCO₃), and the DNA-protein cross-links were reversed at 65°C overnight. All samples were recovered by phenol/chloroform extraction and ethanol precipitation using glycogen (20 μg) as a carrier. Real time PCR was performed with the QuantiTect SYBR Green PCR kit (Qiagen) on an ABI Prism 7900 sequence detection system. PCRs were performed in triplicate under the following cycling conditions: 15 min at 95°C followed by 40 cycles of 15 s at 95°C, 30 s at 57°C, and 30 s at 72°C. Dissociation curve analysis and agarose gel electrophoresis confirmed amplification of a single 152-bp product using primers (CNS1ChIPF, 5′-CACAGCGTCGTTCAGAAACAC-3′; CNS1ChIPR, 5′-CAGCCCCCGCACAGTTGT-3′) that target nucleotides 221-372 of CNS-1. For each experiment, serial dilutions of input DNA were used to generate a standard curve. Results were expressed as the ratio between the number of targets immunoprecipitated with specific antibodies and the number of targets immunoprecipitated with control IgG.
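The standard-curve quantification described above amounts to fitting Ct against log₁₀(input amount) and reading relative target amounts off the fit. A minimal illustration of our own (made-up Ct values, not data from the paper):

```python
import math

# Fit Ct = slope * log10(amount) + intercept from serial input dilutions
# by ordinary least squares.
def standard_curve(dilutions, cts):
    xs = [math.log10(d) for d in dilutions]
    n = len(xs)
    mx, my = sum(xs) / n, sum(cts) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, cts))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def amount(ct, slope, intercept):
    """Invert the standard curve to get a relative target amount."""
    return 10.0 ** ((ct - intercept) / slope)

# Enrichment = targets pulled down by the specific antibody / control IgG.
def enrichment(ct_specific, ct_igg, slope, intercept):
    return amount(ct_specific, slope, intercept) / amount(ct_igg, slope, intercept)

# A perfect-efficiency assay shifts Ct by ~3.32 cycles per 10-fold dilution:
slope, intercept = standard_curve([1.0, 0.1, 0.01], [22.0, 25.32, 28.64])
print(round(enrichment(24.0, 27.32, slope, intercept), 3))  # 10.0
```

With these toy numbers, a specific-antibody Ct of 3.32 cycles below the IgG Ct corresponds to a 10-fold enrichment at the CNS-1 amplicon.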
Comparative DNA Sequence Analysis-Genomic DNA sequence for the human IL13/IL4 locus and the syntenic regions in eight additional mammalian species were obtained either from current whole genome reference sequences (Pan troglodytes, Bos taurus, Canis familiaris, Rattus norvegicus, and Mus musculus) or the National Institutes of Health Intramural Sequencing Center (Papio anubis, Callithrix jacchus, and Otolemur garnettii; www.nisc.nih.gov). Accession numbers are provided in Supplemental Table 4. Multiple sequence alignments were generated using the MUltiple sequence Local AligNment and conservation visualization tool (MULAN) (17), which uses a local alignment strategy with the threaded block set aligner and utilizes the phylogenetic relationships of the sequences provided to build the multiple sequence alignment. The alignment was exported in FASTA format to GeneDoc for visualization.
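A conservation call like the ≥70%-identity criterion used to define CNS elements ultimately reduces to percent identity over aligned columns. This toy helper is our own illustration, not the MULAN algorithm:

```python
# Percent identity between two pre-aligned sequences; columns in which either
# sequence carries a gap ('-') are excluded from the denominator.
def percent_identity(a, b):
    pairs = [(x, y) for x, y in zip(a, b) if x != '-' and y != '-']
    return 100.0 * sum(x == y for x, y in pairs) / len(pairs)

# 7 of 8 aligned bases match between these two toy CRE-like motifs:
print(percent_identity("TGACGTCA", "TGACTTCA"))  # 87.5
```

Real aligners score long windows rather than single motifs, but the same per-column bookkeeping underlies the human-rodent identity figures quoted in the Introduction.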
CNS-1 Is a Potent Enhancer of IL13 and IL4 Promoter Activity-
CNS-1 is strategically positioned at the heart of the Th2 locus (Fig. 1), consistent with its ability to physically interact with the Th2 cytokine gene promoters throughout the region (11). To start defining the molecular mechanisms underlying the CNS-1-dependent regulation of Th2 cytokine transcription, the CNS-1 element (372 bp) was cloned 3′ of the luciferase reporter gene in a construct driven by a 2.7-kb fragment encompassing the promoter for human IL13. This construct (p2.7IL13/CNS-1-(1-372)) was transiently transfected into Jurkat T cells, and its activity was compared with that of an IL13 promoter construct lacking CNS-1 (p2.7IL13). Transfected cells were cultured for 16 h in the presence or absence of PMA (20 ng/ml) and ionomycin (1 μM), a combination of stimuli that results in a >100-fold increase in IL13 mRNA levels in Jurkat T cells (data not shown). Fig. 2A shows that under basal conditions the IL13 promoter and CNS-1 were essentially inactive. Stimulation with PMA and ionomycin resulted in strong (16-fold) activation of IL13 transcription, which was amplified dramatically when CNS-1 was linked to the IL13 promoter. Although the activity of a shorter, 369-bp, human IL13 promoter fragment was comparable with that of p2.7IL13, the CNS-1-dependent up-regulation of transcription was less than half that observed with the full-length promoter (data not shown), indicating additional IL13 5′ regulatory elements are required for optimal interactions with CNS-1. Fig. 2A also shows CNS-1 activity was orientation-independent, because CNS-1 enhanced IL13 transcription to comparable extents when cloned in the genomic and reverse orientations. Collectively, these results demonstrate that in our model CNS-1 acts as a robust, bona fide IL13 enhancer.
Of note, optimal synergism between CNS-1 and the IL13 promoter required both PMA- and ionomycin-delivered signals. Fig. 2B shows that although CNS-1 did enhance IL13 transcription in response to PMA alone, activity remained marginal compared with that induced by the two stimuli in combination. T cell activation with anti-CD3 and anti-CD28 antibodies resulted in CNS-1-driven enhancement of IL13 promoter activity similar to that observed with PMA and ionomycin (Fig. 2C). These data show that the main signaling pathways leading to full T cell activation converge on CNS-1.
Multiple lines of evidence support a role for CNS-1 in the regulation of IL4 expression (3, 6, 18, 19). In particular, addition of a 2.7-kb region surrounding CNS-1 to a luciferase transgene driven by an 800-bp murine IL4 promoter markedly enhanced luciferase production (19). We tested whether CNS-1 could directly influence the activity of a human 800-bp IL4 promoter. Fig. 2D shows p800IL4 was modestly (8-fold) induced upon T cell activation but was strongly up-regulated by CNS-1. These results indicate that CNS-1 enhances transcription of two major genes within the Th2 cytokine locus.

Identification of a 68-bp Enhancer Core within CNS-1—Analysis of CNS-1 deletion mutants localized the enhancer activity of the element. Fig. 3B shows that, remarkably, the entire enhancer activity of CNS-1 mapped to a single fragment, CNS-1-(221-372), which encompasses the IL13-distal region of the element. We note that CNS-1-(221-372) was nearly twice as active as full-length CNS-1, suggesting the CNS-1 enhancer core may lie within a region that constrains its activity. To define more closely the boundaries of the CNS-1 enhancer core and guide our subsequent analysis of DNA/protein interactions, we tested a further set of CNS-1 subfragments (Fig. 3C). Fig. 3D shows that virtually all of the IL13 enhancer activity resided within CNS-1-(270-337). A highly similar pattern was observed when we studied the interactions between discrete CNS-1 domains and the IL4 promoter (Fig. 3D). These results define a 68-bp region of CNS-1, encompassing nucleotides 270-337, as a potent enhancer for two distinct Th2 cytokine gene promoters.
The CNS-1 Enhancer Core Contains Binding Sites for CREB, ATF-2, and Jun Proteins—To identify the trans-acting factors involved in the enhancer activity of CNS-1-(270-337), we analyzed patterns of DNA/protein interactions by EMSA. Nuclear extracts prepared from Jurkat T cells cultured with or without PMA (20 ng/ml) and ionomycin (1 μM) for 3 h were incubated with ³²P-labeled probes corresponding to nucleotides 270-303, 287-320, or 318-337 of CNS-1 (Fig. 4A). Fig. 4B shows that competition experiments with unlabeled self-related or unrelated oligonucleotides identified four specific nucleoprotein complexes (lanes 1-3 and 19-21), three of which (complexes I-III) bound the 270-303 region (Fig. 4B, left panel). Complexes I and II, but not complex III, also bound the overlapping nucleotides 287-320. No additional interactions were detected in this region (Fig. 4B, center panel). Complex IV was detected using the 318-337 probe (Fig. 4B, right panel).
Comparative analysis of the CNS-1-(270-337) nucleotide sequence across distant species (Fig. 4C) and prediction of putative transcription factor-binding sites identified three motifs (CRE 1-3; Fig. 4B), each partially homologous to a CRE (TGACGTCA) (20) and the related AP-1 family consensus sequence (TGACTCA) (21). Antibody supershift experiments were therefore performed to test whether the complexes binding to CNS-1-(270-337) contained CRE-interacting proteins (CREB and ATF) and/or AP-1 family members. Fig. 4B shows that complex I, a faint band up-regulated by stimulation, contained ATF-2 because it was supershifted by an ATF-2-specific antibody (lane 13) but not by an anti-CREB (lane 11) or an anti-Jun antibody (lane 12). The constitutively expressed complex II was formed by CREB because addition of a polyclonal anti-CREB antibody supershifted the complex completely (Fig. 4B, lanes 4 and 11). A monoclonal anti-CREB antibody, which does not cross-react with other CREB family members, also altered the mobility of this complex (data not shown). Complex III appeared to contain CREB because an anti-CREB antibody (Fig. 4B, lanes 4 and 11) reduced the intensity of this complex. Jun proteins may also be part of complex III, because the corresponding band became fainter upon addition of a Jun-specific antibody (lanes 5 and 12) and was competed by an AP-1 consensus oligonucleotide (data not shown). Migration of none of these complexes was affected by control IgG or STAT1 antibodies (Fig. 4B, lanes 7, 14, 15, and 24). Finally, the constitutive complex IV also contained CREB. Indeed, preincubation with the polyclonal or monoclonal anti-CREB antibody supershifted this complex completely (Fig. 4B, lanes 22 and 23), whereas antibodies to STAT1 (lane 24), Jun, or ATF-2 (data not shown) did not. Collectively, these data demonstrate that the CNS-1 enhancer core contains binding sites for CREB, ATF-2, and Jun proteins.
To position the observed complexes more closely within the CNS-1 enhancer core and test their interactions with the CRE motifs, each CRE site was mutated, alone or in combination (Fig. 5A). EMSA analysis revealed that each CRE site supported the formation of distinct CREB-containing complexes (Fig. 5B). Mutation of CRE 1 resulted in the loss of complex III (Fig. 5B, lane 2). Both complex I and complex II were lost when the second CRE motif was mutated (Fig. 5B, lane 3). Despite the striking proximity of the first two CRE sites and their organization on opposite DNA strands, mutation of either one of these two sites did not appear to affect the formation of complexes on the other site. Notably, no residual binding was detected when both CRE 1 and CRE 2 were mutated (Fig. 5B, lane 4). Finally, mutation of CRE 3 abolished the binding of complex IV (Fig. 5B, lane 6). Collectively, our results show CREB interacts with each of the three CRE sites in CNS-1-(270 -337), whereas Jun and ATF-2 bind selectively to CRE 1 and CRE 2, respectively (Fig. 5C).
To evaluate the contribution of each CRE motif to CNS-1 enhancer function, the CRE site mutations already characterized by EMSA were introduced in the p2.7IL13/CNS-1-(270 -337) reporter construct. Fig. 6 shows that mutation of the CRE 1 and CRE 3 sites reduced CNS-1 activity substantially but only partially (54.5 and 79%, respectively), whereas mutation of CRE 2 was sufficient to abolish CNS-1-dependent IL13 enhancement. In view of the topology of the CRE motifs, these results suggest that although each CRE site contributes to CNS-1 activity, CRE 2 may be required to coordinate the formation of a supramolecular complex critical for optimal CNS-1-induced enhancement of IL13 transcription.
CREB, but Not ATF-2, Regulates CNS-1 Enhancer Activity-Because CRE 2 and, to a lesser extent, CRE 3 were essential for CNS-1-dependent transcriptional enhancement, we then investigated the role of the CRE 2/3-binding proteins CREB and ATF-2 in the regulation of CNS-1 activity. For this purpose, the p2.7IL13/CNS-1-(270-337) reporter construct was cotransfected with expression vectors encoding either WT or mutant forms of these factors. Specifically, pCMV-CREB133 encodes a dominant negative CREB variant containing a Ser-to-Ala substitution at position 133, a residue vital to CREB-mediated transactivation (22). pATF-2Δ2-107 encodes a truncated ATF-2 protein that lacks the N-terminal trans-activation domain (23) but retains the ability to bind CNS-1 CRE 2 (data not shown). Fig. 7 shows that expression of CREB133 markedly (47%) reduced T cell activation-dependent CNS-1 enhancer activity, whereas cotransfection of ATF-2, either WT or mutant, failed to affect IL13 transcription. CREB133-dependent inhibition was CNS-1-specific because IL13 promoter activity was virtually unaltered (data not shown). These results support a role for CREB, but not ATF-2, in the molecular events underlying CNS-1-mediated transcriptional enhancement.
T Cell Activation Recruits CREB and CBP/p300 to the Endogenous CNS-1-Our in vitro dissection identified CREB as a factor that binds functionally critical sites in the CNS-1 enhancer core and directs robust up-regulation of IL13 and IL4 transcription. Chromatin immunoprecipitation assays were therefore performed to test whether CREB-containing complexes dock onto the endogenous CNS-1. CREB protein-DNA complexes were immunoprecipitated from Jurkat T cells, resting or activated with PMA and ionomycin. Real time PCR was performed to detect a 152-bp region of CNS-1 (nucleotides 221-372) that spans the enhancer core. Although only low levels of target were immunoprecipitated with an anti-CREB antibody under basal conditions, Fig. 8 shows that T cell activation strongly increased CREB binding to CNS-1.
Activation-dependent CREB phosphorylation at serine 133 is known to foster the recruitment of the coactivator CBP (24) and its paralogue p300 (25), which augment CREB-mediated gene transcription. The inhibitory effect of the CREB133 mutant on CNS-1 activity (Fig. 7) raised the possibility that these coactivators may contribute to CREB-mediated CNS-1 regulation. We therefore used chromatin immunoprecipitation to test whether CBP/p300 was recruited to CNS-1 in vivo. Fig. 8 shows that occupancy of CNS-1 by CBP- or p300-containing complexes was limited in unstimulated T cells. However, T cell activation resulted in robust recruitment of both factors to CNS-1.
DISCUSSION
Progression of naive CD4 T cells along the Th1 or Th2 differentiation pathway is a multistage process contingent upon antigenic T cell stimulation in an instructive cytokine milieu. In vivo studies defined CNS-1 as a regulatory element critical for optimal expression of Th2 cytokine genes and Th2 differentiation (3, 6) but did not dissect the molecular mechanisms underlying the role of CNS-1 in these processes. Our results characterize CNS-1 as a potent enhancer of human IL13 and IL4 transcription. Surprisingly, activity mapped to a discrete, short domain at the IL13 distal end of the element, raising the possibility that the regulatory properties of CNS-1 may be compartmentalized. CNS-1 appears to control several facets of Th2 cytokine locus expression, from modulation of chromatin accessibility (3,6) to positioning of the locus within a repressive nuclear domain in Th1 cells (28). It remains to be determined whether any of these specialized functions of CNS-1 also reside(s) in the enhancer core or if they map elsewhere, a possibility supported by the finding that the IL13 proximal region of CNS-1 is comparatively even more conserved across mammalian species than the enhancer core itself (data not shown). Surprisingly, the CNS-1 enhancer core does not encompass previously described binding sites for GATA3 (29) or Ikaros (28), even though ectopic overexpression of GATA3 in murine Th1 cells was sufficient to establish DNase I hypersensitivity within the endogenous CNS-1 chromatin (29,30). The relationship between GATA3-induced chromatin remodeling and the activity of the CNS-1 enhancer core remains to be determined.
Our experiments identified CRE sites 2 and 3 and the CRE 2/3-binding protein CREB as major regulators of CNS-1 enhancer activity. This finding was somewhat unexpected because a recent analysis of mice in which CREB and ATF-1 had been deleted selectively in T cells showed IL-4 expression to be preserved (31). However, a significant proportion of spleen and lymph node T cells (24 and 12%, respectively) in the knock-out mice still expressed CREB, raising the possibility that residual CREB activity was sufficient to support IL4 mRNA expression in that experimental model. Furthermore, and perhaps more importantly, deletion of CNS-1 delayed and reduced, but did not abrogate, expression of IL-13 and IL-4 (6), suggesting that CNS-1 contributes to, but is not absolutely required for, Th2 cytokine expression.
CREB is a ubiquitous transcription factor that resides constitutively in the nucleus (20). CREB activation is known to be necessary for transcriptional regulation and contingent upon phosphorylation of Ser-133, which occurs typically, although not exclusively, in response to elevated intracellular cAMP (20). In line with these requirements, our experiments showed stimulation of T cells with cAMP resulted in vigorous (4.4-fold) CNS-1-dependent enhancement of IL13 promoter activity (data not shown) and expression of a dominant negative CREB variant, which cannot be phosphorylated on Ser-133, reduced activity by nearly half.
Although T cell receptor engagement induces CREB phosphorylation, maximal CREB trans-activation potential is achieved only if the antigen-dependent signal is coupled with CD28-mediated costimulation (32, 33). Consistent with the two-signal requirement for CREB trans-activation, our data revealed that full T cell activation was essential for maximal CNS-1 activity. Signaling via the T cell receptor and the costimulatory pathway is required for a productive association between CREB and the transcriptional coactivator CBP (33), which recruits the RNA polymerase II holoenzyme (34). Fostering the assembly of this complex may represent a mechanism by which CREB contributes to CNS-1-dependent transcriptional enhancement in activated T cells. This possibility is strongly supported by our finding that CREB, CBP, and p300 were recruited to the endogenous CNS-1 in response to T cell stimulation, and CBP/p300 activity was necessary for CNS-1-mediated enhancement of IL13 transcription. In addition, CBP/p300 can mediate transcriptional activation through intrinsic histone acetyltransferase activity (35, 36). Because the latter is required for CREB-mediated transcriptional activation in several models (37-39), the role of CREB in CNS-1 function may also involve CBP/p300-mediated histone modifications. Indeed, deletion of CNS-1 abrogated the basal acetylation of histone H3 at the IL4 and IL13 promoters in naive CD4 T cells (28), a permissive modification associated with rapid gene transcription (15).
Recent work on long range intrachromosomal interactions within the murine Th2 locus demonstrated CNS-1 behaves as a versatile element that interacts with distinct Th2 cytokine promoters in vivo (11). This essential property was captured by our experimental model, which revealed CNS-1-dependent enhancement of both IL13 and IL4 transcription in T cells. That the CNS-1 enhancer core relied on ubiquitously expressed proteins such as CREB and CBP/p300 for its activity suggests the role of this element in Th2 cytokine transcription may be a permissive one, which links Th2 locus regulation with antigenic T cell stimulation and complements Th2-specific regulatory mechanisms. The CRE sites clustered within the CNS-1 enhancer core could serve as a platform to recruit basal machinery and coactivators to confer transcriptional competence to and/or augment transcription from its target Th2 cytokine promoters. In vivo analysis of CNS-1 will be required to further elucidate the interactions between CNS-1 and Th2-specific transcriptional regulators and dissect the contribution of specific CNS-1 domains to the complex molecular events that orchestrate Th2 cytokine gene expression.
Knowledge-based digital soil mapping for predicting soil properties in two
The estimation of soil physical and chemical properties at non-sampled areas is valuable information for land management, sustainability and water yield. This work aimed to model and map soil physical-chemical properties by means of a knowledge-based digital soil mapping approach, as a case study in two watersheds representative of different physiographical regions in Brazil. Two watersheds with contrasting soil-landscape features were studied regarding the spatial modeling and prediction of physical and chemical properties. Since the method uses only one value of a soil property for each soil type, the way of choosing typical values, as well as the role of land use as a covariate in the prediction, was tested. Mean prediction error (MPE) and root mean square prediction error (RMSPE) were used to assess the accuracy of the prediction methods. Knowledge-based digital soil mapping by means of fuzzy logic is an accurate option for spatial prediction of soil properties considering: 1) a less intensive sampling scheme; 2) scarce financial resources for intensive sampling in Brazil; 3) suitability for properties with non-linear distributions, such as saturated hydraulic conductivity. Land use seems to influence the spatial distribution of soil properties and was therefore applied in the soil modeling and prediction. The way of choosing typical values for each condition varied not only according to the prediction method, but also with the nature of the spatial distribution of each soil property.
Introduction
The estimation of soil physical and chemical properties at non-sampled areas is valuable information for land management, sustainability and water yield. Different interpolation techniques have been used with varying degrees of success in order to create more accurate soil property maps (McBratney et al., 2003). From the pedometric approach, most techniques have high sampling density as the main driver for interpolation. In Brazil, where areas with intensive field observations are scarce, another quantitative procedure for spatial prediction should be considered. One approach with the advantage of low density of sampling (Shi et al., 2009) is the knowledge-based digital soil mapping technique, based on similarity vectors and parameters of fuzzy logic in an expert system (Zhu and Band, 1994; Zhu et al., 1997).
As in conventional soil survey, knowledge of soil-landscape relationships is crucial for accurate prediction of soil types and properties (Menezes et al., 2013); this knowledge is established and formalized by means of fuzzy membership curves (Shi et al., 2009). Spatially continuous soil property maps can be generated from only one representative value per soil type (Zhu et al., 1997). Besides its low cost, the approach overcomes a limitation of conventional soil survey, in which each soil-mapping unit assumes a unique value based on a described soil profile, which does not necessarily reflect the variability and continuous nature of soil properties within and between polygon mapping units (Menezes et al., 2014).
Two watersheds were chosen for this study according to their representativeness of two different physiographical regions of southern Minas Gerais: the Mantiqueira Range and the Vertentes Fields. Both study sites are located in the Rio Grande watershed, an important water source for hydroelectric energy production, where environmental concern is associated with native forest that has been replaced by extensive pasture or crops on degraded lands (Viola et al., 2014; Beskow et al., 2013).
This work aimed to model and map soil physical and chemical properties by knowledge-based digital soil mapping, as a case study in two watersheds in contrasting physiographical regions. The role of land use, representing organisms as a soil-forming factor, and its influence on predictions of soil physical and chemical properties was also tested. The way of choosing typical values to spatialize each condition and the role that land use plays as an environmental covariate were assessed in the spatial prediction.
Study sites
This study was conducted at Lavrinha Creek Watershed (LCW) and Marcela Creek Watershed (MCW), located in the state of Minas Gerais, southeastern Brazil. Both watersheds are representative of the Rio Grande watershed, but they are located in different physiographical regions: the Mantiqueira Range region (LCW) and the Vertentes Fields region (MCW). LCW is located between latitudes S 22º6'53" and 22º8'28" and longitudes W 44º26'21" and 44º28'39". The average annual temperature is 15 °C and annual precipitation is 2,000 mm, with native vegetation of Atlantic Forest (Tropical Forest) and geology of gneiss. MCW is located between latitudes S 21º14'27" and 21º15'51" and longitudes W 44º30'58" and 44º29'29", with an area of 470 ha and altitudes varying from 958 m to 1,059 m. The average annual temperature is 19.7 °C and annual precipitation is 1,300 mm, with native vegetation of Cerrado (Brazilian savanna) and geology of mica schist. Both areas are located in the Cwb domain of the Köppen classification (Alvares et al., 2013), with cold, dry winters and hot, humid summers.
Soil-landscape relationship
Considering the soil-landscape relationships at LCW, the alteration of gneiss resulted in a predominance of Inceptisols (moderately developed and well-drained soils). The relief is steep with concave-convex hillsides and a predominance of linear landforms and narrow floodplains. Endoaquents occupy the toeslope position, where the water table is near the surface during most of the year (Menezes et al., 2014).
MCW has gently undulating relief with extreme soil development. Oxisols are the most geographically expressive soil type, formed on stable and very old surfaces conducive to intense weathering-leaching under a warm and moist climate, where organisms are very active (Motta et al., 2002). Inceptisols occupy the more dissected positions and more linear portions inside a convex macrolandform (Pelegrino et al., 2016). Endoaquents occupy the youngest surface on the toeslope (Silva et al., 2014).
Acrudox (hue 2.5YR or redder) occupies flatter and convex summit positions. Hapludox (hue 5YR) and Hapludox (hue 7.5YR or 10YR) occur from summit to footslope in the landscape. These colors reflect a past hydrological influence, in which the orientation of the parent material layers, by conditioning different moisture regimes in the two systems, influenced the pedogenesis of the Acrudox and Hapludox. The horizontal orientation of the layers conditioned the genesis of Hapludox, with a higher goethite/hematite ratio and, consequently, yellowish colors, as the result of former soil moisture conditions that differed from those of the redder soils. The inclined orientation of the layers conditioned, under similar topographic conditions, the formation of Acrudox, with better drainage, higher weathering-leaching intensity, a higher hematite/goethite ratio and, consequently, reddish colors. Nowadays, under the current climate conditions, both soils are well drained.
Soil property analyses
The physical and chemical properties analyzed were bulk density, by the volumetric ring method; soil organic matter, according to Walkley and Black (1934); drainable porosity, calculated as the difference between saturation moisture and soil moisture at field capacity; saturated hydraulic conductivity (Ksat), determined in situ with a constant flow permeameter; and total porosity (TP), calculated according to the equation TP = 1 − (bulk density/particle density), in which particle density was determined by the volumetric flask method (Embrapa, 1997).
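The total porosity relation above can be sketched as a short numerical example; the function name and input values below are illustrative, not measurements from the study.

```python
# Hedged sketch of the total porosity equation described above:
# TP = 1 - (bulk density / particle density), densities in g cm^-3.
# The sample values are illustrative only.
def total_porosity(bulk_density, particle_density):
    """Return total porosity as a fraction of the soil volume."""
    return 1.0 - bulk_density / particle_density

# e.g., bulk density 1.20 g cm^-3 and a typical particle density of 2.65 g cm^-3
tp = total_porosity(1.20, 2.65)
print(round(tp, 3))  # -> 0.547
```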
Knowledge-based digital soil mapping technique
All steps from the creation of the base maps to the soil property maps are presented in Figures 1A, B, C and D, which show the different function types or curve shapes. The knowledge on soil-landscape relationships was qualitatively modeled using ArcSIE (Soil Inference Engine, version 9.2.402) (Shi et al., 2009). The Rule-Based Reasoning (RBR) inference method was used to define the relationship between values of environmental variables (soil forming factors) and a given soil type. Considering the scale of variation of the studied sites, relief and organisms are the main drivers of soil variability, and the other soil forming factors are considered constant. Additionally, terrain derivatives are strongly related to soil properties and, for this reason, have been useful in digital soil mapping (Akumu et al., 2015). Digital elevation models (DEMs) with 20 m pixel resolution were generated from contour lines at 1:50,000 (IBGE) scale. DEM derivatives (digital terrain models - DTMs) (slope, altitude above the channel network, plan curvature, profile curvature, and wetness index) were calculated using ArcGIS (ESRI, version 10) and SAGA GIS (System for Automated Geoscientific Analysis, version 2.1.0). DTMs have been frequently used as a proxy of current relief conditions (Heuvelink and Webster, 2001). The DEM and DTMs, as well as the land use raster maps, are presented in Figure 2 (A - altitude, B - slope, C - wetness index, D - plan curvature, E - profile curvature and F - land use) and Figure 3 (A - altitude above the channel network, B - slope, C - wetness index, D - plan curvature, E - profile curvature and F - land use). DTMs and the ranges associated with each soil type in the maps were used to define membership or optimality functions (curves), which, in turn, define the relationship between the values of an environmental feature and a soil type. The bell, S, and Z curve shapes were used for soil-landscape modelling, as presented in Figures 1A, B, and C, respectively. The Y axis shows the
optimality value varying from 0 to 1, and the X axis the variation of DTM values. The initial output from the inference process is a series of fuzzy membership maps in raster format, one for each soil type under consideration (Shi et al., 2009), representing the similarity of each pixel in the landscape to the soil types. From those maps, the spatially continuous soil property maps derived from similarity vectors are generated according to the formula (Zhu et al., 1997): V_ij = Σ(k=1..n) S_ij,k × V_k / Σ(k=1..n) S_ij,k, where V_ij is the estimated physical or chemical property at location (i,j), S_ij,k is the fuzzy membership value of location (i,j) in soil type k, V_k is a typical value of soil type k (e.g., Udepts1), and n is the total number of prescribed soil types for the area. The typical value consists of the central concept of the soil property value for each soil type, generally obtained from a soil profile in a polygon from conventional soil survey. Different land uses were considered in the prediction, since they represent organisms as a soil-forming factor, which, in turn, can influence the distribution of soil physical and chemical properties. In order to assess whether soil properties are significantly influenced by different land uses, analyses of variance (ANOVA) were performed by the F test (p < 0.01 or p < 0.05). Land uses at LCW are native forest (Atlantic Forest), natural regeneration forest, pasture and wetland (Figure 2F), while land uses at MCW are Cerrado (Brazilian savanna), pasture, maize and eucalyptus crops (Figure 3F). The Box-Cox procedure was carried out to determine the suitable type of transformation for ANOVA. Ksat was log transformed. The statistical analyses were performed in SAS (Statistical Analysis System Institute, version 9.2). If the ANOVA test pointed out an influence of land use, the typical value V_k came from the combination of soil (modelled using S-, Z-, and bell-shaped optimality curves) and land use k, e.g., Udepts1 under pasture. In ArcSIE, the land use raster map was used as categorical data (the data have no quantitative
meaning; values are only for labeling or categorizing different land uses) and overlaid with all soil types, using the Nominal function type (Shi et al., 2009); the shape of the optimality curve is presented in Figure 1D. The maximum fuzzy membership value specified for each land use is 1. Since there are four different types of land use at each watershed, the maps were reclassified into integer values that represent land use types.
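The membership curves and the similarity-vector prediction formula above can be illustrated with a minimal sketch. The S-, Z-, and bell-curve equations below are one common parameterization, not necessarily ArcSIE's exact forms, and all numbers are illustrative.

```python
import numpy as np

def s_curve(x, lo, hi):
    # S-shaped optimality: 0 below lo, 1 above hi, smooth cosine ramp between
    # (one common form; ArcSIE's exact parameterization may differ).
    t = np.clip((np.asarray(x, dtype=float) - lo) / (hi - lo), 0.0, 1.0)
    return 0.5 - 0.5 * np.cos(np.pi * t)

def z_curve(x, lo, hi):
    # Z-shaped curve is the mirror image of the S curve.
    return 1.0 - s_curve(x, lo, hi)

def bell_curve(x, center, width):
    # Bell-shaped optimality peaking at `center`.
    return np.exp(-0.5 * ((np.asarray(x, dtype=float) - center) / width) ** 2)

def predict_property(memberships, typical_values):
    """Similarity-vector prediction: V_ij = sum_k S_ij,k * V_k / sum_k S_ij,k."""
    S = np.asarray(memberships, dtype=float)     # shape (n_types, n_pixels)
    V = np.asarray(typical_values, dtype=float)  # shape (n_types,)
    return (S * V[:, None]).sum(axis=0) / S.sum(axis=0)

# Two soil types, three pixels; typical values 10 and 20 (arbitrary units)
S = [[1.0, 0.5, 0.0],
     [0.0, 0.5, 1.0]]
V = [10.0, 20.0]
print(predict_property(S, V))  # -> [10. 15. 20.]
```

A pixel fully inside one soil type inherits that type's typical value, while mixed pixels receive a membership-weighted average, which is what produces the continuous property surfaces described above.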
In this study, except for organic matter, most of the studied soil properties (bulk density, total porosity, drainable porosity, and saturated hydraulic conductivity) are not frequently analyzed in soil profiles of soil surveys. Thus, the sampling scheme started from a dense grid design, but only a few points were used in the prediction, in which different ways of choosing representative values for spatial prediction were tested. The sampling scheme is a current discussion in the digital soil mapping community, since it is one of the main drivers of costs and prediction accuracy (Silva et al., 2015). The full data set comprises the pre-defined topsoil sampling (0-15 cm) at both watersheds. A total of 198 points were sampled at LCW, following 300 × 300 m regular grids as well as refined scales of 60 × 60 m and 20 × 20 m, and two transects with a distance of 20 m between points (comprising 54 and 14 sampled points per transect). A total of 165 points were sampled at MCW, following 240 × 240 m regular grids and a refined scale of 60 × 60 m. This high-density sampling scheme was required to test the ways of choosing typical values in this study. Figures 4A, B, C, D, E, F, G and H show the different ways of choosing typical values according to the sampled points, as follows: a) mean soil property value in each polygon of a soil type from the hardened map (Figures 4A and B). The data set for prediction was plotted onto the soil type hardened map, the mean value was calculated for each soil type and then used as V_k. In this study, we refer to this method generically as the mean method. In Figure 4A, the example shows the hardened or defuzzified map, where x_n and y_n are sampled soil property values, whose mean within the polygon was used as V_k; b) mean soil property value in each polygon resulting from the soil type hardened map overlaid on the land use raster map, if the ANOVA test shows that land use influences soil properties (Figures 4C and D). It was
generically referred to in this study as the mean and land use method; in comparison with the mean method aforementioned, it promotes more stratification, and more typical values were used to generate the predicted soil property maps (Figure 4B); c) point geographically located on the pixel with the highest membership value for the corresponding soil type, referred to in this study as the landscape method (Figures 4E and F). In this case, the typical value V_k is taken from the sampled point located at the pixel with the highest membership among all points for that soil type; d) point geographically located on the pixel with the highest membership value for the corresponding soil type, but overlaid on the land use raster map, if the ANOVA test showed that land use influenced soil properties (Figures 4G and H). It was referred to in this study as the landscape and land use method. Thus, membership maps were obtained from the soil-landscape modelling, but in this case land use was also considered for each soil type, as shown by the black line in Figures 4G and H. The typical value in this case is again taken from the sampled point with the highest membership value among all sampled points.
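The contrast between the "mean" and "landscape" ways of choosing a typical value can be sketched as follows; the soil type labels, property values, and membership scores are hypothetical, not data from the study.

```python
# Sketch of two ways of choosing the typical value V_k for a soil type,
# mirroring the "mean" and "landscape" methods described above.
# All records are illustrative: (soil type, property value, membership at pixel).
samples = [
    ("Udepts1",  1.10, 0.62),
    ("Udepts1",  1.25, 0.95),
    ("Udepts1",  1.18, 0.40),
    ("Fluvents", 1.05, 0.88),
]

def mean_typical(samples, soil_type):
    # "Mean" method: average of all sampled values within the soil type polygon.
    vals = [v for t, v, _ in samples if t == soil_type]
    return sum(vals) / len(vals)

def landscape_typical(samples, soil_type):
    # "Landscape" method: value at the sampled point with the highest
    # fuzzy membership for that soil type.
    return max((s for s in samples if s[0] == soil_type), key=lambda s: s[2])[1]

print(mean_typical(samples, "Udepts1"))       # average of 1.10, 1.25, 1.18
print(landscape_typical(samples, "Udepts1"))  # -> 1.25 (membership 0.95)
```

Adding the land-use overlay described above would simply stratify `samples` further, so that each soil type × land use combination gets its own typical value.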
Comparison of methods
In order to create an independent validation data set to evaluate the performance of the prediction methods, the total data set was divided into interpolation and validation sets. Of the total number of places sampled, 25 points were used for validation at LCW and 20 points at MCW, both randomly chosen. The validation data set was not used in the models to develop predictions. Two indices were calculated from the observed and predicted values: the mean prediction error (MPE) and the root mean square prediction error (RMSPE). The MPE was calculated by comparing estimated values ẑ(s_j) with the validation points z*(s_j): MPE = (1/l) Σ(j=1..l) [ẑ(s_j) − z*(s_j)], and the root mean square prediction error: RMSPE = √{(1/l) Σ(j=1..l) [ẑ(s_j) − z*(s_j)]²}, where l is the number of validation points. The MPE measures the bias of prediction, and the RMSPE measures the prediction accuracy.
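The two validation indices above can be computed directly from their definitions; the predicted and observed values below are illustrative, not the study's validation data.

```python
import math

def mpe(predicted, observed):
    """Mean prediction error (bias): average of (predicted - observed)."""
    l = len(observed)
    return sum(p - o for p, o in zip(predicted, observed)) / l

def rmspe(predicted, observed):
    """Root mean square prediction error (accuracy)."""
    l = len(observed)
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(predicted, observed)) / l)

# Illustrative validation pairs
pred = [1.2, 1.0, 1.4, 1.1]
obs  = [1.1, 1.1, 1.3, 1.1]
print(round(mpe(pred, obs), 3), round(rmspe(pred, obs), 3))  # -> 0.025 0.087
```

A positive MPE indicates systematic over-estimation and a negative MPE under-estimation, while RMSPE penalizes large individual errors, which is why both indices are reported together.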
Results and Discussion
The descriptive statistics of the full, interpolation and validation data sets (mean, median, skewness, coefficient of variation, minimum and maximum) of the soil properties can be viewed in Menezes et al. (2016). The validation and interpolation data sets showed quite similar statistical characteristics. Among the soil properties, Ksat showed the highest coefficient of variation and skewness. Skewness quantifies how symmetrical the distribution is; values far from zero indicate long tails (left or right) and an asymmetrical distribution. Thus, Ksat has a non-normal distribution at both watersheds (Menezes et al., 2016).
Knowledge formalization by means of optimality curves
The soil-landscape relationships described above in the Materials and Methods section were quantified and formalized by a set of rules related to raster maps. The process of knowledge formalization in the ArcSIE Rule-Based Reasoning method means establishing optimality or membership curves, setting the parameters to build S-, Z-, and bell-shaped curves. Threshold values related to DTMs were identified and assigned to each soil-mapping unit, according to soil scientists' knowledge and to a soil map from a previous soil survey (Menezes et al., 2014). This is the basis for establishing the membership maps for each soil type. Details on the shapes of the optimality curves, as well as the parameters to establish them from DTMs, are presented in Table 1.
At LCW, higher values of WI (wetness index) and lower values of slope were used to map Fluvents in flat alluvial areas (footslope). Inceptisols occupy the well-drained portions of the landscape with lower values of WI (summit, shoulder, and backslope), formed by different combinations and ranges of slope, plan and profile curvatures.
In MCW, Acrudox usually occupies flat summit positions in a more convex landform, expressed by higher values of Altitude Above the Channel Network (AACHN), lower values of slope and negative values of plan curvature. AACHN describes the vertical distance between each cell of a raster grid and the elevation of the nearest drainage channel cell connected with the respective grid cell of a DEM. Hapludox is present on shoulder, backslope, and footslope positions (intermediate values of AACHN and gentle slopes). Two instances were applied to Inceptisols: one considering steeper slopes, and another for plan and profile curvatures. Two instances were necessary to formalize the knowledge on Inceptisols in this watershed, as they occupy the more dissected positions and more linear portions inside a convex macrolandform. ArcSIE provides a general inference equation that allows integrating optimality values, so that optimality values are generated for the whole instance based on individual features (Shi, 2013). In this case, the integration is done through the multiplication function, since there are two different soil-landscape relationships for the Inceptisol instance. Endoaquents are located at lower AACHN and higher WI values. The ranges of DTMs are presented in Table 2. These instances are related only to terrain and soil types and have been frequently used to map soil properties (Brown et al., 2012; Adhikari et al., 2013; Vaysse and Lagacherie, 2015). Thus, whether land use maps could improve the accuracy of mapping is discussed further.
ANOVA test
The ANOVA test was used to support the decision to apply land use as categorical information to map soil properties. In other words, the test was run to verify whether there were differences between the different types of land use (categorical map) according to the soil physical properties. Abrupt changes at boundaries provide valuable categorical information to interpret soil property variation, and the variation between and within polygons is frequently assessed by the ANOVA test (Oberthür et al., 1996; Liu et al., 2006; Molin and Castro, 2008). Except for Ksat in MCW, the variance between land uses was statistically significant in both watersheds, meaning that land use affected the physical properties. Soils in MCW are mainly Oxisols, whose structure helps to explain the variability pattern. The adequate physical properties of Oxisols for soil management and intensive uses are mainly influenced by their high aggregate stability (Ajayi et al., 2009). For the other soil properties, not only was soil type modelling considered, but also land use as a categorical optimality curve.
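The logic of the one-way ANOVA F test used above can be sketched in a few lines. The study performed its analyses in SAS; this pure-Python version with invented bulk-density values for two land uses is only an illustration of the test's mechanics.

```python
# Minimal one-way ANOVA F statistic: between-group variance over
# within-group variance. Group values are hypothetical, not study data.
def f_statistic(groups):
    k = len(groups)                       # number of land uses
    n = sum(len(g) for g in groups)       # total number of samples
    grand = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

pasture = [2.1, 2.3, 2.2, 2.4]  # illustrative property values under pasture
forest  = [4.8, 5.1, 4.9, 5.2]  # illustrative property values under forest
F = f_statistic([pasture, forest])
print(F > 5.99)  # 5% critical value of F(1, 6) is about 5.99 -> True here
```

A significant F, as in this toy example, is what triggered the stratification by land use described above; a non-significant F (as for Ksat at MCW) means the soil-type model alone is retained.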
Summary statistics of the soil physical properties for the data stratified into four land uses are listed in Table 3. These results guided the way land use maps were used in ArcSIE, in which different types of land use were joined or treated separately based on the mean separation test. For example, the soil organic matter mean test at LCW showed that native vegetation is statistically different from the other land uses. Thus, the raster map was reclassified into two different nominal categories for each soil type with crisp boundaries: one nominal value for native vegetation and another for natural regeneration, pasture and wetland. A soil unit was thus created by the combination of soil type and land use. However, the issue here is whether the categorical maps of land use can indeed improve the accuracy of predicting physical properties.
Assessment of prediction methods
Table 4 presents the statistical accuracy indices for the predictions, considering the different ways of choosing typical values at LCW and MCW. For each soil property, a method with suitable accuracy was found, with MPE and RMSPE close to zero. However, comparing methods within the same soil property, their performance contrasted, with extremely high values of MPE and RMSPE in some cases. Considering results from the literature related to soil physical properties, most studies showed suitable accuracy indices, similar to those presented in this work, but they are mostly related to soil texture mapping (Akumu et al., 2015; Qi et al., 2006; Zhu et al., 2010). Thus, it is not possible to make any specific comparison between MPE and RMSPE values.
The use of similarity vectors and fuzzy logic for mapping soil texture in a study by Ashtekar et al. (2014) resulted in rather biased model and validation sets, and failed to capture the spatial variability of the properties. Such results highlight the importance of choosing a representative data set; in that case, sampling was constrained to places near roads due to limited access throughout the study site. A weighted average of the fuzzy membership values and the typical soil property values of the soil types is computed pixel by pixel. This fact highlights the importance of choosing sampling places that represent the central, representative value of the soil type, in order to avoid populating the pixels of each instance with unrealistic values. Representative values obtained from the mean of sampled points with outlying property values may cause over- (positive MPE) or under- (negative MPE) estimation of the predicted soil properties (Ashtekar et al., 2014). Fuzzy membership maps represent the uncertainty of the prediction: the higher the value, the closer the pixel is to the central concept used for modelling the soil type in the landscape. In this sense, if a knowledge-based method postulates that a representative value should be chosen, membership maps have potential for guiding sampling in the field campaign. For that, deep knowledge of the area is necessary to create accurate models and, consequently, accurate membership maps.
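The pixel-by-pixel weighted average described above can be sketched as follows. The function name and array shapes are our assumptions for illustration, not ArcSIE's actual API: each pixel's predicted value is the sum of typical values weighted by fuzzy memberships, normalized by the total membership.

```python
import numpy as np

def predict_property(memberships, typical_values):
    """Knowledge-based prediction: per-pixel weighted average of typical values.

    memberships: array of shape (n_soil_units, rows, cols), fuzzy values in [0, 1]
    typical_values: length n_soil_units, one representative value per soil unit
    """
    m = np.asarray(memberships, dtype=float)
    v = np.asarray(typical_values, dtype=float)
    weighted_sum = np.tensordot(v, m, axes=(0, 0))  # sum_k v_k * m_k, per pixel
    return weighted_sum / m.sum(axis=0)             # normalize by total membership

# Toy 2x2 map with two soil units: a pixel fully in unit 0 takes that unit's
# typical value; a 0.5/0.5 pixel takes the mid-point of the two values.
m = np.array([[[1.0, 0.5], [0.8, 0.0]],
              [[0.0, 0.5], [0.2, 1.0]]])
print(predict_property(m, [2.0, 4.0]))
```

Because each pixel is a membership-weighted blend of a few typical values, an outlying typical value biases every pixel where that soil unit has non-zero membership, which is why representative sampling points matter so much.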
Not only the way of choosing typical values should be considered when comparing the accuracy of predictions, but also the nature of the variation of each soil property. Bulk density showed a low coefficient of variation and lower values of RMSPE and MPE at both watersheds (Menezes et al., 2016). In this case, the lower variation might result in better agreement between predicted and observed (validation) values for all the prediction methods tested. The opposite trend was found for drainable porosity.
At LCW, the general trend was that applying land-use maps in the modelling improved accuracy. This could be related to the number of points used in the modelling, which is higher in models that use both soil type and land-use information, improving the representativeness of the spatial variability. In this case, one typical value is required for each combination of soil type and land use, with the number of combinations dictated by the ANOVA, whereas the models developed considering only soil types required just 5 points for the spatial prediction (one typical value per soil type). The types of land use are very contrasting when comparing pasture to tropical Atlantic Forest, which has a dense canopy and higher soil organic matter content. At MCW, where the relief is gentle with a predominance of Oxisols under pasture, the prediction accuracy was overall better, and the use of mean typical values yielded some of the most accurate predictions.
Ksat values at LCW and MCW (Menezes et al., 2016), as well as in other studies (Moustafa, 2000), are known for their high spatial variability, skewed frequency distributions and non-normality. Data normality can influence the estimation of certain spatial interpolation methods that assume the input data are normally distributed around the mean (Li and Heap, 2011), e.g., kriging or linear regression. In these cases, data transformation is required and back-transformation returns the predictions to the original scale. However, back-transforming the estimated values can be problematic because exponentiation tends to exaggerate any interpolation-related error (Goovaerts, 1999). In this study, as already pointed out by Zhu et al. (2010), similarity vectors have an inherent non-linearity and can be used to describe and model the non-linear variation of Ksat, overcoming the limitation of some interpolation methods. MPE and RMSPE values close to zero show the high accuracy of the presented method for mapping Ksat in both watersheds.
Prediction maps
The best prediction maps of each soil property are presented in Figures 5A-J. Figures 5A, C, E, G and I show, respectively, landscape and land use used in the prediction of soil organic matter, mean and land use for bulk density and total porosity, landscape and land use for drainable porosity, and landscape for Ksat at LCW. Figures 5B, D, F, H and J show, respectively, landscape and land use for soil organic matter, mean for bulk density, mean and land use for total porosity, and mean for drainable porosity and Ksat at MCW.
Since the knowledge-based technique incorporated the pedologist's knowledge into the modelling, it is possible to observe the continuous nature of the spatial distribution within each soil type (Tables 1 and 2) and/or land use (when it was used in the prediction), providing a realistic portrayal of the variation without a smoothing effect. This continuous variation is an advantage for spatial prediction, since soil property maps generated from conventional (polygon-based) soil survey maps are not sufficient due to their generally low level of detail (Zhu et al., 2010; Menezes et al., 2014).
There is a tendency for the predicted soil property values to be stratified by soil type, especially where a polygon raster map is used. In some cases, this polygon artefact in the spatial prediction is clear, as in Figure 5A, due to the use of the eucalyptus land-use polygon (Figure 3F). Because the knowledge-based digital soil mapping technique uses only one typical value per soil type for spatial prediction, the range of predicted properties is rather different from the range of the interpolation data set (Menezes et al., 2016), which can compromise prediction accuracy.
At LCW, the relief seems to influence the vegetation cover indirectly, since pasture is preferentially established in flatter and lower areas. In addition, the higher soil organic matter content detected at higher altitudes (Figure 2A) was probably due to lower temperatures. Soil organic matter has been identified as a major controlling factor of aggregate stability (Angers et al., 1997). Vegetation distribution influences soil organic matter (Gessler et al., 2000), which, in turn, may explain the lower bulk density and higher total porosity, drainable porosity and Ksat in the same portions of the landscape, where the land use is native forest or natural regeneration (Figure 2F). The opposite situation happens in pasture areas.
At MCW, the soil organic matter prediction might be influenced by land use, revealing higher values in the Cerrado and eucalyptus areas on the eastern side of the watershed (Figure 3F). Lower values of soil organic matter were found under pasture, the predominant land use in this watershed. Water distribution in landscapes strictly controls soil carbon dynamics (Gessler et al., 2000), even though the floodplain did not show higher values of soil organic matter, which may be due to the very high vertical and lateral spatial variability typical of these lowland environments. Soil organic matter maps showed higher values on the convex summit. Gessler et al. (2000) highlight that the combination of intense weathering-leaching, very low natural fertility, lower temperatures in the past, and limited microbial activity might have contributed to organic matter accumulation in this landscape position.
Unlike the other physical properties studied, Ksat values are also influenced by soil properties at depth. Therefore, the spatial variability of this property may be related to properties better expressed in the B horizon. Higher values of Ksat were found in Oxisols, where the adequate physical properties are mainly influenced by aggregate stability, as mentioned before. This trend was not followed by total porosity and drainable porosity (topsoil). In the topsoil, even for Oxisols, frequent wetting and drying cycles could be responsible for a decrease in aggregate stability (Caron et al., 1992), where the granular structure behaves as a blocky structure (Ajayi et al., 2009). Lower values of bulk density and higher values of total porosity were found in areas with relatively higher soil organic matter, as well as in the Cerrado area. Total porosity also seems to be related to land use, as observed in areas under maize crops.
Conclusions
Knowledge-based digital soil mapping is an accurate option for the spatial prediction of soil properties considering: 1) a less intense sampling scheme; 2) scarce resources for high-density sampling in Brazil; and 3) its adequacy for properties with non-linear distributions, such as Ksat.
Land use influences the spatial distribution of soil properties and was therefore applied in soil modelling and prediction. The way of choosing typical values varied not only with the prediction method, but also with the nature of the spatial distribution of each soil property.
Figure 1 -
Figure 1 - Flowchart showing all the steps accomplished to generate soil property maps. DTMs = digital terrain models; RBR = rule-based reasoning. Function types: A) bell shape curve, B) S shape curve, C) Z shape curve, D) nominal or categorical.
Figure 2 -
Figure 2 - Digital terrain models and land use map of Lavrinha Creek Watershed. A) digital elevation model; B) slope; C) wetness index; D) plan curvature; E) profile curvature; F) land use.
Figure 3 -
Figure 3 - Digital terrain models and land use map of Marcela Creek Watershed. A) AACHN (altitude above the channel network); B) slope; C) wetness index; D) plan curvature; E) profile curvature; F) land use.
Figure 4 -
Figure 4 - Schematic representation of sampled points distribution and the way of choosing typical values (Vk). A) mean value sampling points, B) mean typical values, C) mean and land use sampling points, D) mean and land use typical values, E) landscape sampling points, F) landscape typical values, G) landscape and land use sampling points, H) landscape and land use typical values.
Figure 5 -
Figure 5 - The best prediction maps of soil physical and chemical properties. SOM = soil organic matter; Ksat = saturated hydraulic conductivity; A) landscape and land use used for SOM, C) mean and land use for bulk density, E) mean and land use for total porosity, G) landscape and land use for drainable porosity, and I) landscape for Ksat prediction at Lavrinha Creek Watershed; B) landscape and land use for SOM, D) mean for bulk density, F) mean and land use for total porosity, H) mean for drainable porosity, J) mean for Ksat at Marcela Creek Watershed.
Table 1 -
Ranges of optimality curves of soil types at Lavrinha Creek Watershed.
Table 2 -
Ranges of optimality curves of soil types at Marcela Creek Watershed.
Table 3 -
Statistics of soil properties at Lavrinha Creek Watershed (LCW) and Marcela Creek Watershed (MCW).
Table 4 -
Comparison of interpolation methods at Lavrinha Creek Watershed (LCW) and Marcela Creek Watershed (MCW). RMSPE = root mean square of prediction error; SOM = soil organic matter; Ksat = saturated hydraulic conductivity.
|
v3-fos-license
|
2023-11-15T17:46:18.683Z
|
2023-09-01T00:00:00.000
|
265202634
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://drpress.org/ojs/index.php/fcis/article/download/12442/12111",
"pdf_hash": "128a23e6833b37bdbc5bbfe3d59d22912e2d58b9",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2385",
"s2fieldsofstudy": [
"Computer Science",
"Economics"
],
"sha1": "abafa9c79b078ded4fd21db9953e92ba16641a16",
"year": 2023
}
|
pes2o/s2orc
|
Research on Knowledge Creation and Innovation Method based on Deep Learning and Human-machine Collaboration Model
Knowledge creation and innovation have reached a stage of high-quality development since the widespread application of deep learning and human-machine collaboration models. It is necessary to propose better theoretical propositions to meet the knowledge creation and innovation needs of the knowledge economy by adhering to knowledge-driven innovation ideas. Based on the dynamic evolution of deep learning and human-machine collaboration models, a theoretical analysis framework for the sustainable development of the knowledge industry is constructed according to the inherent logic of knowledge creation and innovation, which can explain the development mechanism jointly generated by the knowledge generation mechanism and the knowledge circulation mechanism involving deep learning and the human-machine collaboration mode. The paper also explores the possibility of moving towards high-quality knowledge development from the perspective of the challenges, changes, and practical deductions in the development of the knowledge industry. Knowledge creation and innovative development aim to provide content that meets expected standards for the knowledge economy and are committed to continuously improving knowledge quality and enhancing knowledge satisfaction. Therefore, it is necessary to strengthen knowledge control based on the internal quality cycle of deep learning, build an interaction and feedback mechanism between the human-machine collaboration mode and knowledge quality perception, and establish a knowledge and economic evaluation system to achieve high-quality development of knowledge creation and innovation, promote the knowledge economy, and truly meet social needs.
Introduction
Knowledge creation and innovation are among the main responsibilities of the knowledge economy and are also a collective name for the knowledge industry, which can be divided into basic and non-basic knowledge creation and innovation, composed of deep learning and human-machine collaboration models, respectively. In order to improve knowledge quality and satisfaction, the knowledge industry also entrusts knowledge evaluation systems to evaluate knowledge content. Based on deep learning and human-machine collaboration models, the knowledge generation mechanism is the key to knowledge creation and innovation, and the knowledge circulation mechanism has become an evaluation indicator. In contrast to traditional knowledge creation and innovation, deep learning and human-machine collaboration models emphasize diversity, flexibility, and intelligence. Therefore, the issue of sustainable development of the knowledge industry is raised, and high-quality development provides a new goal for the industry.
High-quality development originates from knowledge-driven innovative ideas. Its connotation contains value reversion, and it is also a tool for the knowledge economy. From the perspective of knowledge structure, high-quality development pursues balance and realizes knowledge modernization through the combination of deep learning and the human-machine collaboration model. However, this is only in theory. Today, the knowledge industry has practiced a unique path of two-way optimization. Promoting high-quality development not only rewrites knowledge content and reflects the quality of knowledge but also changes the knowledge system to meet social needs. Consequently, a global perspective and pattern are necessary when discussing high-quality development; therefore, a common sharing proposition is proposed for the deep learning and human-machine collaboration mode. In short, high-quality development is the necessary condition and guarantee for achieving sustainable development of the knowledge industry. From a practical perspective, the knowledge industry has made progress, but there are also shortcomings. Deep learning has not yet fully succeeded in eliminating knowledge challenges, and work in this area continues. Therefore, high-quality development requires strengthened measures, which are both requirements for deep learning and human-machine collaboration models and expectations for the knowledge economy.
Based on the above background analysis, this paper proposes a knowledge creation and innovation method based on deep learning and a human-machine collaboration model to promote high-quality development. At the same time, the problem of sustainable development of the knowledge industry is addressed through theoretical analysis and practical deduction. The main content is to analyze the knowledge challenges of deep learning and the human-machine collaboration model. Corresponding response strategies are then proposed to effectively deal with the emergence of derivative risks under the rule of algorithms, which has both theoretical significance and practical value.
The "Knowledge Creation and Innovation" of Deep Learning and the Human-machine Collaboration Model Realize a New Change in the Knowledge Economy
The Creation and Innovation of Knowledge Content
Knowledge content creation and innovation have developed in parallel with the knowledge economy; they are "soaked" in innovation concepts, representing the value orientation of the knowledge industry and reflecting the knowledge content generation strategy since deep learning and human-machine collaboration models were first developed. However, it is still difficult to reach consensus when defining the essence of knowledge content against particular quality criteria. The creation and innovation of knowledge content involve multiple levels and dimensions, such as the form, structure, function, attributes, and value of knowledge. Different perspectives and roles may understand and evaluate it differently [1]. This paper explores the connotation, characteristics, mechanisms, and impacts of knowledge content creation and innovation from the perspective of deep learning and human-machine collaboration models, to provide theoretical reference and practical guidance for the sustainable development of the knowledge industry.
Knowledge, Innovation and Economy
Knowledge quality is an important criterion for knowledge content and an objective expression of knowledge value. Different definitions of knowledge quality are discussed from the perspectives of knowledge form, structure, function, and attributes. Some scholars believe that knowledge quality is the degree of knowledge validity or satisfaction [2]. It is precisely because knowledge quality is, to a certain extent, measurable that it belongs to applied science aimed at high-quality development. Research on knowledge quality dates back to ancient Greece, and its main activities included knowledge classification, logical reasoning, and dialectics. However, the concepts and goals of high-quality development are closely related to the rise of the knowledge economy. Through deep learning and human-machine collaboration models, knowledge creation and innovation have become important responsibilities of the knowledge economy. The main contribution of 20th-century knowledge quality theory was to propose concepts such as the knowledge generation mechanism and the knowledge circulation mechanism. Thus, high-quality development was initially defined primarily in terms of knowledge quality measures based on deep learning and the standardized attributes of human-machine collaboration models.
Knowledge Challenges Brought by Deep Learning and Human-machine Collaboration Model
Chaos of Knowledge: The Excessive Generation of Deep Learning Causes the Imbalance of Knowledge System
Compared with traditional knowledge creation and innovation, deep learning emphasizes the relationship between data and algorithms and is characterized by automation, intelligence, and diversification. Most scholars maintain that deep learning can rationally evaluate knowledge content, even if some doubt the direct correlation between deep learning and knowledge creation and innovation [3]. Goodfellow et al. proposed the classic generative adversarial network, a deep learning model containing two elements: a generator and a discriminator. Since then, this model has become a typical tool for knowledge generation mechanisms, thus developing the concept of knowledge generation. These scholars believe knowledge generation is creative and "creates something from scratch." Only when the generator and discriminator reach a Nash equilibrium will the knowledge content be of high quality. Thus, knowledge generation is the result of deep learning. Some scholars have also summarized knowledge generation as a two-way model: a data-based and a goal-based knowledge generation model. The former is data-driven, while the latter is goal-oriented, that is, knowledge circulation. From the perspective of high-quality development, although deep learning has experienced some practical failures, it can improve the diversity, flexibility, and intelligence of knowledge content. As a result, knowledge generation has gradually become a consensus in research and practice on the sustainable development of the knowledge industry [4].
The diagram of the deep learning and human-machine collaboration model is shown in Figure 1.
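The generator-discriminator equilibrium cited above can be made concrete with the closed-form result from Goodfellow et al.: for a fixed generator, the optimal discriminator is D*(x) = p_data(x) / (p_data(x) + p_g(x)), so at the Nash equilibrium (where the generator matches the data distribution) it outputs 0.5 everywhere. The following sketch (function name ours) illustrates that published result; it is not part of this paper's proposed method.

```python
import numpy as np

def optimal_discriminator(p_data, p_gen):
    """Optimal GAN discriminator for a fixed generator (Goodfellow et al.):
    D*(x) = p_data(x) / (p_data(x) + p_gen(x))."""
    p_data = np.asarray(p_data, dtype=float)
    p_gen = np.asarray(p_gen, dtype=float)
    return p_data / (p_data + p_gen)

# At the Nash equilibrium the generator reproduces the data distribution,
# so the discriminator can do no better than outputting 0.5 everywhere.
p = np.array([0.1, 0.4, 0.3, 0.2])
print(optimal_discriminator(p, p))  # -> [0.5 0.5 0.5 0.5]
```

When the discriminator is pinned at 0.5 for every input, generated content is statistically indistinguishable from real content, which is the sense in which equilibrium corresponds to "high-quality" generation here.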
Knowledge Prison: The Knowledge Operation under the Human-machine Collaboration Model
The conceptual essence of the knowledge prison is its focus on the problem of knowledge control. Knowledge control is the application of knowledge management thinking in the knowledge industry. In order to overcome the shortcomings of traditional knowledge creation and innovation, the human-machine collaboration model framework has entered the research field as a new alternative model. According to this framework, the human-machine collaboration model should ensure the effective realization of knowledge content, establish professional standards for knowledge creation and innovation output, "capture" knowledge quality through technologies such as knowledge assessment systems, and measure knowledge satisfaction using knowledge cycle methods. The human-machine collaboration model framework reconstructs the knowledge generation mechanism, emphasizing enhanced human-machine interaction and building reliability, sustainability, scalability, and adaptability for high-quality development.
The Birth of Derivative Risks under Algorithmic Domination
Algorithmic domination is the main risk of deep learning, mainly reflected in the impact of algorithms on knowledge content; that is, the actual state of knowledge quality and satisfaction is directly reflected through the knowledge generation mechanism. Some factors in the development of algorithmic domination are gradually taking shape, such as algorithm transparency, algorithm responsibility, and algorithm fairness, and various evaluation systems are gradually receiving attention. Nevertheless, from the perspective of high-quality development, some deep learning practices are still at the data-driven stage and do not align with the logical framework and generation mechanism of the human-machine collaboration model, leading to knowledge barriers, knowledge distortions, knowledge biases, etc.
Two-way Optimization: Deep Learning Gets Rid of Knowledge Challenges
From the perspective of high-quality development, knowledge generation is integral to knowledge creation and innovation, and it also represents the core of deep learning. Therefore, knowledge generation treats data-driven processing as its main generation logic. Knowledge control is the main guarantee of knowledge quality and the executive subject of the human-machine collaboration model [5]. At this stage, knowledge control strengthens deep learning control from the perspective of knowledge content in three main forms. The first is to optimize the generator and discriminator: a Nash equilibrium between knowledge generation and knowledge circulation can be achieved with explicit generators and discriminators. The second is to formulate knowledge quality standards: establishing data quality standards, algorithm quality standards, and evaluation standards to achieve standardized control of deep learning. The third is internal process reengineering of deep learning: in recent years, Google and others have used artificial intelligence to improve deep learning efficiency and knowledge satisfaction. However, in contrast to the human-machine collaboration model, current deep learning requires further improvement in terms of sustainability [6].
Shared Responsibility: Human-machine Collaboration Model for System Optimization and Governance
The fundamental difference between the human-machine collaboration model and deep learning lies in their target attributes. The quality standards and evaluation criteria of deep learning are data-driven, and knowledge generation mainly reflects diversity and flexibility. In the logical framework of the human-machine collaboration model, accurate feedback, interactive collaboration, goal orientation, and value reversion are the core values and highest standards of high-quality development [7]. Due to the diversity of current knowledge content types and differences in social needs, knowledge circulation has become increasingly complex. Although the human-machine collaboration model can improve knowledge satisfaction, the knowledge evaluation system remains imperfect, and there is no effective mechanism for knowledge control. This produces the "short board" of knowledge, affecting knowledge quality [8].
Value Reversion: Correct Algorithm, Restore Knowledge
From the perspective of value reversion, deep learning cannot accurately provide the knowledge content required by social needs. The main form of social demand for knowledge content is satisfaction evaluation, but deep learning lacks relevant information and feedback mechanisms for social demand. The core of this problem may be algorithmic domination. Algorithms are described as "black boxes" in knowledge generation mechanisms, and their impact on knowledge content directly influences knowledge quality. While algorithmic domination is primarily concerned with information such as data quality and algorithm quality, social needs are relatively rarely considered; usually, social needs are difficult to obtain or measure. Information asymmetry and imperfect evaluation systems directly lead to the obstacles of the knowledge prison.
Deep learning cannot avoid the "black box" of the algorithm when generating knowledge. Algorithms are standard and effective tools in the knowledge generation mechanism and play an important role in knowledge content. As a result, algorithmic domination is not only a technical concept but also a social one. Therefore, "data-driven" deep learning has become the decisive mechanism for knowledge quality. Deep learning is generally interpreted as a knowledge path gradually developed on the basis of data, although it may also contain innovative approaches. From data quality to algorithm quality, deep learning is closely centered on knowledge quality from beginning to end. Although deep learning should be committed to improving knowledge satisfaction to adapt to social needs, when algorithmic domination is amplified, it also brings about a dilemma: the phenomenon of the knowledge prison. In general, there is still room for improvement in knowledge circulation and other aspects of deep learning, and its value reversion needs further improvement, which is also an important task of high-quality development.
Conclusion
Knowledge creation and innovation have entered a stage of high-quality development, posing new challenges and requirements to the knowledge industry. High-quality development is a symbol of the "quality" of knowledge content and an important means of the knowledge economy; it is also an urgent need for realizing social needs and maintaining value reversion, thus embodying the inherent requirements of knowledge-driven innovative ideas. Under the guidance of this idea, and based on deep learning and human-machine collaboration models, a theoretical analysis framework and practical mechanism for the sustainable development of the knowledge industry are constructed. In recent years, modern information technologies such as artificial intelligence have promoted knowledge generation, empowered knowledge circulation through knowledge control, and improved the accuracy and scientificity of knowledge quality and satisfaction. This value is in line with the inherent logic of high-quality development. Therefore, deep learning and human-machine collaboration models also provide new paths for knowledge creation and innovation. In general, the sustainable improvement and development of the knowledge industry will help better meet social needs and promote the knowledge economy.
Figure 1 .
Figure 1. Diagram of the deep learning and human-machine collaboration model
|
v3-fos-license
|
2023-05-24T15:09:23.293Z
|
2023-05-01T00:00:00.000
|
258860199
|
{
"extfieldsofstudy": [],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://doi.org/10.1016/j.childyouth.2023.107022",
"pdf_hash": "d1a2a577998ff6211b20a3b1ffee7fbf33fdc555",
"pdf_src": "ScienceParsePlus",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2387",
"s2fieldsofstudy": [
"Sociology",
"Education"
],
"sha1": "358ca3789d08e0dd4e46614e00d03fd64350a3e4",
"year": 2023
}
|
pes2o/s2orc
|
The child – Object or subject of child care?
The term "subject" and its theoretical implications are essential to the German tradition of social pedagogy and social work. Looking back in history, there is a sharp contrast with the practices of social work, especially in the field of child and youth welfare. This applies to Switzerland, Germany, and internationally. In Swiss history, objectification is most clearly expressed in the German term "Verdingkinder", which terminologically indicates the active process of objectification (Ding = object). Informed by historical research, questions can be addressed to the current practice of child and youth welfare. Against the background of current research results, it seems too easy to consign these critical questions to the past. The article looks in the historical rear-view mirror to assess the Swiss "state as a parent" and to develop conclusions for contemporary questions of child and youth welfare. The analytical framework focuses on the categories of objectification and subjectification, informed by Martha Nussbaum's theories and the theories of German-speaking social pedagogy by Michael Winkler.
Introduction
Current discourses on children's rights, including participation, reflect the changed cultural approach to children in many societies around the globe. In these discourses, children appear as bearers of rights and as subjects equal to adults. However, the processes of change, which seem positive at first glance, conceal fundamental subject-theoretical questions of growing up from a socio-pedagogical perspective. In distinction from political or legal questions of the implementation of children's rights, this paper understands the orientation towards the subject as a fundamental prerequisite of educational processes (Gabriel & Tausendfreund, 2019). More radically formulated, subject orientation is the necessary precondition of any child care intervention that works.
The paper takes a historical perspective based on knowledge of the history of children in care settings and on empirical material from the Swiss National Science Foundation (SNSF) study "Life trajectories after residential care placements in the canton of Zurich 1950-1990" (2014-2018). This historical retrospective reveals patterns of reification that raise central questions about the practice of child and youth care today. These patterns are critically contrasted with the results of a recent study on "Placement Breakdown in Foster Care" (funded by the Jacobs Foundation, 2015-2018), which included the children's perspectives on breakdown. In addition, the paper discusses theoretical approaches that represent subject-theoretical premises of German-speaking social pedagogy (Winkler, 2021) as well as writings on subject and recognition theory (Honneth, 1992; Ricoeur, 2006) and questions of integrity (Pollmann, 2018), and links them to the empirical data. Against this backdrop, the paper is a first approach to pursuing questions of subjectification and objectification in the context of children and youth growing up in care settings.
History of child care in Switzerland: a frame for current practice
The history of children in care and of care interventions worldwide provides numerous examples of children who were exploited for economic, political, colonialist, or religious reasons. The so-called "Home Children" scheme (Harrison, 1979), for instance, was a relocation programme in which more than 100,000 children were sent by governmental foundations for the poor and by charities from the United Kingdom to Canada, Australia, New Zealand, and South Africa as cheap migrant workers. This often happened without their parents' consent. Although the programme was largely discontinued in the 1930s, it was not entirely terminated until the 1970s (Harrison, 1979).
De Mause (1974) stated on the history of childhood in general that it "is a nightmare from which we have only recently begun to awake. The further back in history one goes, the lower the level of child care and the more likely children are to be killed, abandoned, beaten, terrorized and abused" (de Mause, 1974, opening paragraph). Considering various studies from diverse countries, this is true for the history of children in care and the history of care settings for children as well. For more than three decades, there has been a public debate in various countries about the fate of children in residential and foster care in the last centuries. These debates were often triggered by cases of abuse that came to light.
In Switzerland, similar debates began as early as the 1980s, initially in connection with the charitable foundation Pro Juventute¹. Over the past century, tens of thousands of children and young people in Switzerland have been placed in foster care and residential care. Swiss research situates placements in child care in the context of poor relief and guardianship systems. Well into the 20th century, the responsible authorities regarded the dissolution of families and the placement of children in residential or foster care as an effective remedy against poverty. It was also a means of establishing social conformity. From this perspective, placing children in out-of-home care was part of a social welfare policy that valued discipline or costs more highly than participatory rights and equal opportunity (Hauss, Gabriel & Lengwiler, 2018). The high degree of pressure to adapt and conform that both public and private welfare institutions exerted on socially marginal groups and on individuals living in precarious circumstances has been extensively documented in diverse studies touching on systems for children in care, e.g. guardianship, child protection, and welfare practices (Galle & Meier, 2009; Hauss & Ziegler, 2010), administrative containment (Rietmann, 2012), compulsory education (Furger, 2008), and eugenics and the medicalization of social deviance (Dubach, 2013; Wecker & Braunschweig, 2012; cf. Hauss et al., 2018).
Much attention in Swiss research on the history of children in care was placed on the involved actors of state and church. The experience of children in the regular residential care placement process thus fell somewhat out of academic focus in Switzerland, even though many of them suffered from lifelong vulnerability due to their experiences. Little attention was paid to their personal integrity and development.
Research also shows that they were often subject to social isolation, forced labour, or even sexual or physical abuse: current research shows that the rights, safety, and well-being of children in care were unimportant or only of marginal importance once the placement decisions had been taken (Hauss et al., 2018; Lengwiler et al., 2013). Due to their experiences, many of those children suffered from lifelong vulnerability (Gabriel, Keller & Bombach, 2021). Between 1950 and 1990, many child protection measures ended in penal institutions, and sometimes in the adult penal system, a common administrative practice in Switzerland (UEK, 2019). Research has shown that the children's needs and perspectives did not play a crucial role in placement decisions (Businger & Ramsauer, 2019). More emphasis was placed on maintaining social order and conformity and the established power balance, entirely following the logic of those within the system who had the power, authority, and the right to act on behalf of the state (Ammann & Schwendener, 2019).
Beside these findings on residential institutions, one particularly striking key example from the history of children in care in Switzerland is the so-called "Verdingkinderwesen". The German term "Verdingkinder" terminologically indicates the active process of objectification of the child (German: Ding = English: object). Due to widespread poverty in large parts of the Swiss population, children of poor families were placed mainly in rural areas. "Verdingung" refers to the placement of children as a cheap labour force (contract children), especially in agriculture. Until 1910, many of these children were even auctioned off at fairs to those who demanded the lowest boarding fees. Many of these children died of mistreatment or even of starvation and thirst. Others survived but suffered throughout their lives from the violence, contempt, and lack of affection they had experienced (Zatti, 2005; Leuenberger & Seglias, 2008). The contract children scheme was common practice in Switzerland up to the mechanisation of agriculture at the end of the 1960s, and in some cases even later. Since the 1950s, these child labourers were mostly called "foster children". The Swiss government officially apologized to those affected in 2013.
The contradictory relationship between economic and social disciplinary measures on the one hand and the child welfare-related mandate of child care on the other can clearly be derived from history. A look at historical research also shows that the orientation toward the child and its needs is recent in Switzerland (Lengwiler et al., 2013). In recent years, the discussions have revolved primarily around foster children and institutionalised children, as well as other victims of coercive measures (UEK, 2019).
Looking at today's child care, we have known since 2008 that the degree of children's participation in placement decisions is low in Switzerland: 53.3% of six- to 12-year-olds and 23.6% of 13- to 18-year-olds in out-of-home placements say they have not been informed about the reasons for the placement (Arnold et al., 2008, p. 106). From an international perspective, this finding is no exception. Similar results can also be found in other countries (Balsells, Fuentes-Peláez & Pastor, 2017; Križ & Roundtree-Swain, 2017; Cossar, Brandon & Jordan, 2016; ten Brummelaar et al., 2018). Overall, it can be concluded that the existence of children's rights alone does not ensure that they will be implemented in the placement process or in the day-to-day life of residential care institutions. Furthermore, current studies conducted in various national contexts raise doubts about the safety and protection of children in care today. The Samson Commission (2012, p. 147) on "Sexual Abuse of Minors in Institutions under the Authority of the Government" stated that in Dutch residential childcare homes the probability of being abused is more than 2.5 times higher than for other children of the same age. Fifty percent of the perpetrators of violence are peers, and only two percent of the cases are known to the professionals (Commissie Samson, 2012, p. 146). The lack of knowledge concerning the remaining 98 percent of cases raises the question, beyond the Netherlands, of what growing up in an out-of-home placement means for young people and how safe it is in any country. This question is particularly important to answer in child protection cases where children are placed from a non-safe environment into a residential care facility. Several more studies internationally show a significantly higher risk of abuse in residential childcare homes, e.g., for Germany (Rau et al., 2019), Norway (Greger et al., 2015) or the USA (U.S. Department of Justice, 2010). Currently, there are general data on incidents, agency responses, and political implications of child protection (UBS Optimus Foundation, 2012), but no representative findings are available that focus on the risk of abuse in today's childcare institutions in Switzerland. If children are removed from their family for reasons of child protection, the central question seems to be whether their integrity is protected in the new place or not.
Methods and studies the paper is based on
The paper refers to two research projects conducted at the Institute of Childhood, Youth and Family of the Zurich University of Applied Sciences (ZHAW). Ethical approvals were reviewed according to Swiss cantonal and national standards for both studies. (1) The research project entitled "Life trajectories after residential care placements in the canton of Zurich 1950-1990". The study was part of the research network "Placing Children in Care 1940-1990", funded by the Swiss National Science Foundation (Sinergia Programme, funding no. 14769).
Besides an archive study (n = 606 files), the project comprised biographical interviews (n = 39 former residents of children's homes in the canton of Zurich). The interviewees experienced residential care between 1950 and 1990. The earliest date of an interviewee leaving care was 1951, and the last left in 1989. This means that at the time of the interviews the interviewees were between 25 and 85 years old. The distribution of interviewees was gender balanced (Businger & Ramsauer, 2019). The reasons for entering the children's home and the age on entry varied. A frequent reason for leaving was to start vocational training at the age of 16 to 18 years. The research aimed to trace, analyse, and interpret the patterns of people's lives, the crises they experienced, and their coping mechanisms. It particularly aimed to understand how the life trajectories of adults relate to their experiences in residential care. The research approach took into account that there are complex interactions between resilience and vulnerability (Gabriel, Keller & Bombach, 2021). Following the Werner and Smith (1982) quote, "Not all development is determined by what happens early in life" (p. 2), the project was broadly geared towards positive and negative developments. Children "who (…) swim when all known predictors say they should sink" (Cowen & Work, 1988, p. 593) were of special interest. Another objective of the research was to determine whether and where the formative experience of growing up in residential care between 1940 and 1990 resulted in similar outcomes for different individuals. The reconstruction of individual courses of life was the central goal of the study, which was conducted in the tradition of phenomenological, ethnomethodological, and interactionist science. Through qualitative analysis based on grounded theory (Strauss & Corbin, 1990), central themes and questions were reconstructed from the data on the basis of testimonies and differentiated in a circular manner. The interest in knowledge is not limited to the individual case or to the descriptive retelling of individual life stories; it focuses on intersubjective experiences and on recurring contexts of meaning.

¹ A Swiss youth welfare foundation established in 1912 under the auspices of the "Schweizerische Gemeinnützige Gesellschaft" (Swiss Society for the Common Good).
(2) The research project entitled "Foster Care Placement Breakdown" (funded by the Jacobs Foundation, 2015-2018). An international team of researchers from the ZHAW School of Social Work in Switzerland, the University of Siegen in Germany, and the University of London in England conducted the study. The aim of the study was to evaluate the reasons why foster care placements in England, Germany, and Switzerland are disrupted. To fulfil these objectives, the project team conducted narrative interviews with foster children and parents who had experienced a breakdown of the foster placement (n = 60) and analysed files relating to foster care placement breakdowns (n = 200). The analysis of the individual case structure allowed an ensemble of factors, embedded in concrete life experience and biographical processes, to be analysed hermeneutically (Bombach, Gabriel & Stohler, 2018; Gabriel & Stohler, 2020). This procedure was supplemented by qualitative biographical methods capable of distinguishing between universal, generation-typical, and biography-typical case structures (cf. Garz, 2000; Hildenbrand, 2005; Loch & Schulze, 2012).
The two studies refer to different points in time. However, they share a common topic and a common approach. Both are qualitative, narrative approaches in survey and analysis, which focus on the perspective of the (former) children and their experiences of care in a biographical perspective over time.

Selected results from "Experiences of residential child care in Switzerland (1940-1990)"
The strong impact of residential care experiences on a person's life manifested itself in turning points and critical life events, as well as in certain life domains, even decades after the person had left the care facility. Individuals often report, for example, having great difficulty engaging in social relationships with colleagues, friends, partners, and children: "[It's] very difficult […] because you don't really trust anyone […]. You lack that sense of basic trust that children normally have" (Adrian).
Analyses show that these impacts are closely associated with experiences in the care setting. Memories of institutional care evoke feelings of loneliness, isolation, and a sense of being left on one's own. Jonas, who also spent his childhood in residential care, expresses feeling out of place or superfluous: "Yes, sure. My God, they might just as well have thrown us away. […] You were simply superfluous, like a piece of meat. We were kept alive, nothing more." (Jonas) This quote reveals how these individuals perceived themselves as children; they were only one of many children in residential care, and they saw little evidence that they were valued as individuals. Of course, not all children who were placed in residential care are ultimately psychologically burdened by their care experience. However, care experiences can manifest themselves unexpectedly, having an impact on care leavers for the rest of their lives. There is a recurring pattern in the interviews that can be described as the experience of objectification. This feeling of being objectified is often linked to a violation of integrity and a lack of agency (Bombach, Gabriel & Stohler, 2018).
The experiences of numerous formerly institutionalised children showed that they were denied vital dimensions of human recognition (in the sense of Honneth, 1992) during their childhood. In addition to physical violence, they qualified experiences of contempt as incisive: this type of experience refers, for example, to family interactions that violate needs and claims for attention, respect, and appreciation. Especially for those who were placed in a residential institution in early childhood, the question of the legitimacy or illegitimacy of their own birth carries lifelong significance. The intergenerational connections of recognition and disregard become clear in the following quotation: "I relied on my mother, but she ran away to Spain, I can't rely on my father, he said that a friend of his was also involved in group sex and that's when we conceived you as an accident, so that was my father, at 18 he also told me that, that's when I knew I didn't have a father" (Paul). This case is exemplary of many life courses of former children in care. The knowledge of their origins and the absence of recognition by their biological parents often cause lifelong vulnerabilities.
Many reports of residential care experiences describe 'invasive encroachments' on the integrity of children and young people in care by peers and adults.
«When you're locking up a bird for such a long time and then all of a sudden you say: Fly away, fly away now! And the bird is not flying away and you are wondering why the bird isn't flying away.» (Frank).
Feelings of 'shame' and 'guilt' for being placed in a residential children's home are indicators that their integrity was violated, most perspicuously among people who have kept their experiences in the children's home secret from their children and partners to the present day. One aspect which seems crucial in this regard is the social dimension of integrity in the context of reappraising and publicly addressing the history of residential care. A lack of understanding, or a failure to recognise experiences in care that harmed their integrity, can cause suffering and further undermine the integrity of the people affected (Gabriel, Keller & Bombach, 2021, p. 5).
Selected results from "children's experiences of today's child care: foster placement breakdown experiences"
The file analysis showed that the professionals saw little reason to document the children's perspective. The children's point of view, or even their perspective on the professionals' decisions, was only marginally and not systematically included:
• Children's opinions, consent, and assessments hardly existed;
• Documents of the children were hardly available;
• The history of the child was seldom written down and often incomplete;
• Descriptions and assessments from third parties were copied and adopted.
In addition, the children's testimonies revealed a lack of concern for their experiences or feelings. Louis, a 16-year-old boy, put the experienced objectification in a nutshell. His placement in a residential care facility started in 2010 with an experience of objectification. Louis' biographical theme of "…being run into the ground" reflects his father's repeated violent abuse. The unpredictability of his father's violence is crucial for Louis, "(…) because I had to be afraid all the time that he would hit me for some little thing". His mother does not react nor protect him; according to her own statements, she already "hated Louis in the womb". The situation of helplessness is aggravated by the fact that Louis' cries of pain can be heard in the whole block of flats without anybody intervening. The Swiss child care and juvenile justice systems only react later, reinforcing his experience of powerlessness and of being at the mercy of others. The experience of reification and of being controlled by others is directly linked to the institutionalisation of Louis and his sister.
"yes and then they pulled me out of school and I was delivered to the children's home … that was a huge shock" (Louis, 16).
It is not the violent father who is removed from the family, but Louis who is placed in a children's home. If we listen to children's testimonies about their experiences with child protection, we find multiple forms and more examples of objectification. One young person compared the legal guardians to birds of prey circling above in times of manifest crises: "and they have to do extreme things to get their attention" (Sarah, 14).
In many cases, biographical themes of powerlessness and objectification were amplified by experiences in child and youth welfare: "Well, actually I didn't want to live there, that was already the case before 2014. I mean, many things happened that you don't need to experience. […] You can't really do much on your own. And I mean, in my case now, or if it's really a crap situation, you're not taken seriously anyway. You're just a foster child anyway. You have nothing to say, even though it's your life and you can't help it that you're a foster child." (Kim, 13 years) The self-perception of children as having no rights in out-of-home care often leads to feelings of powerlessness instead of agency. Therefore, it is indispensable to acknowledge and include the child's perspective in decisions regarding their lives, primarily to avoid experiences of objectification and reification. Professionals often mention the child's age and maturity, as referenced in the UN Convention on the Rights of the Child, as reasons for not doing this systematically. Besides the inadmissibility of this argument, it indicates that there are major deficits, especially in dealing with younger children. This is documented, as an example, at the start of the foster care placement of a four-year-old girl. When the girl asked where her mother was, she was told that her mother had just gone to the bathroom. In reality, she had already left. This was the beginning of a foster child placement in 2015 which subsequently led to several placement changes (Gabriel & Stohler, 2020).
Linking empirical results to a first outline of a theoretical framework of subjectivity for children in care
With the strengthening of the professional orientation toward the best interests of the child and children's rights, not all questions of past and present child care are answered. What we can see in the history of children in care might be called objectification (or reification) of children. Objectification is commonly examined at the level of society, but as a type of dehumanisation, it can also refer to institutions or to individuals. Nussbaum found the common understanding of objectification too simplistic. Objectification, or reification more broadly, means treating a person as an object without regard to their personality and dignity. Although Nussbaum (1995) specifically addresses the sexual objectification of women, she coincidentally covers the questions raised by the history of child care. According to Nussbaum, a human being is objectified if one or more of the following properties are applied to them in an analytic sense (Nussbaum, 1995, p. 257):
• "Instrumentality": treating the person as a tool for another's purposes
• "Denial of autonomy": treating the person as lacking in autonomy or self-determination
• "Inertness": treating the person as lacking in agency or activity
• "Fungibility": treating the person as interchangeable with (other) objects
• "Violability": treating the person as lacking in boundary integrity and violable, "as something that it is permissible to break up, smash, break into"
• "Ownership": treating the person as though they can be owned, bought, or sold
• "Denial of subjectivity": treating the person as though there is no need for concern for their experiences or feelings
Some of these dimensions can be traced in the empirical findings mentioned above. They affect personal integrity. Experiencing such contempt can lead to an impairment of self-confidence and trust in the world, as voiced above, among others, in the quotes by Adrian and Kim, which affects not physical but psychological and social integrity.
The socio-philosophical concept of "reconnaissance" (Ricoeur, 2006) seems ideal for examining the relations of recognition between generations. Ricoeur (2006) adds to a passive dimension, "(demander à) être reconnu": "(to) be recognised, to demand to be recognised" (Ricoeur, 2006, p. 39), an active dimension of "reconnaître". This means "to (re)recognise something, objects, persons, oneself, another, one another" (Ricoeur, 2006, p. 39), as was the case with Paul when he was denied the knowledge of having a distinct father. Ricoeur's addition is a dialogical and interactive component of "reconnaissance" as the basis of socialisationally acquired abilities to recognise oneself and others. These experiences of not being "recognised" or "accepted" by parents often play a central role in the biographies of formerly institutionalised children. Both nationally and internationally, studies indicate that the mortality rate is higher among people who were in residential care (Gabriel, Keller & Bombach, 2021, p. 5). Suicide and life-threatening, risky behaviour can be understood as a radical answer to the central, basic question on integrity posed by Pollmann (2018): 'Is my own life worth living?' If the answer is negative or ambiguous, this can be a sign of fundamental disruptions in a person's integrity, or even its total loss. 'Fear' and 'depersonalisation' are emotional indicators that a person's integrity may have been disrupted. According to Pollmann's (2018) definition, people have integrity if, in a manner relatively free from internal and external constraints, they are able to live their life (i) in accordance with their own, firm will, (ii) within the limits of the morally tolerable, (iii) based on an integrated ethical and existential self-understanding, and (iv) with a general feeling of wholeness, which at the very least requires them to be mentally and physically unscathed (Pollmann, 2018, pp. 77-126).
The socio-pedagogical concept of the subject is fundamentally anchored in German idealism and German educational philosophy. It sets itself apart from the Anglo-American tradition of thought, which was more concerned with empiricism than with mind and consciousness. The need to include the experiences and perspectives of children is, from this point of view, a basic requirement for the success of professional social pedagogy in out-of-home placements. Addressing the child as a subject is understood as a fundamental prerequisite of education and social pedagogy and of the «restart of education» (Gabriel & Tausendfreund, 2019; Lüpke, 2004) in an out-of-home placement. The following remarks are embedded in the tradition of German-speaking social pedagogy, which considers the subject a core category of social work and defines «subjectivation» as its central professional aim (Winkler, 1999, 2021). From this perspective, the child always "remains to be recognised as a subject". No matter how small the expressed subjectivity may seem, it is irrelevant whether it is "infringed and damaged, dependent and controlled". This implies that "a suffering subject is not addressed merely as a victim, but as an acting and responsible individual". The problematic situation is part of the biographical reality in which the subject is virtually entangled (Winkler, 2021, pp. 148-150). Subjectivity is not just a theoretical construct; it must substantiate itself through action (Winkler, 2021, pp. 141-148):
• Subject status cannot be separated from the notion of action and activity; it is inextricably linked to creativity.
• Subjects are autonomous and responsible beings, whereby the concept of dignity "marks the minimum condition that must be fulfilled in social interaction".
• Subject status presupposes experience and history: history as its own product and not only as past and present; the subject has its own time and can create its own future.
• Subjectivity implies self-reference in three ways: self-reflection, development of one's own identity, and environmental change "for the sake of its own humanisation".
In summary, this means: "The subject is the (…) mode in which modern humans can endure the contradictions of the world and at the same time take the initiative, find new foundations and change them. In it - first conceptually, but then also as a motive guiding action in real terms - the disposition over the world is conquered" (Winkler, 2021, p. 138). If we regard children's homes as places that support the development of children as human subjects in the tradition of German social pedagogy, the following central requirements can be formulated (Winkler, 1999, pp. 321-322):
• They must provide existential security and protection.
• They must have an error-friendliness that allows the individual actors to playfully try out and adopt social rules. Socio-pedagogical places must provide space and time to allow for testing, failure, and retesting.
• They must open perspectives for the future which can build on the existing life story, but also allow for a break with this life story.
• They must provide opportunities for development and learning processes by allowing for a new arrangement so that the subjects can tailor it to their needs.
• They must not be enclosed spaces but must offer the possibility to visit other places as well as to allow for a return.
• They must provide an environment where social and cultural rules and norms of society can be experienced, tested, and shaped.
If we want to avoid children becoming objects of social, political, or religious interests, it is essential to strengthen the children's perspective and their status as a "subject" of care.
Conclusions: The perspective of the child
The dichotomy between the child as subject or object of education (Oelkers, 2010) is more than a philosophical German concept. The empirical findings of the cited studies show that, historically, one can speak of an objectification of the child, and that even today the fulfilment of the subject-theoretical claim in child and youth care remains a desideratum whose implementation must be scrutinised. This is especially true when professionals disregard the children's perspective.
From a socio-pedagogical point of view, the orientation towards the subject is a fundamental prerequisite for all professional claims or models of child care. The results of the discussed historical and current research lead to the question of whether Article 12 of the Convention on the Rights of the Child requires a more radical interpretation: "1. States Parties shall assure to the child who is capable of forming his or her own views the right to express those views freely in all matters affecting the child, the views of the child being given due weight in accordance with the age and maturity of the child. 2. For this purpose, the child shall in particular be provided the opportunity to be heard in any judicial and administrative proceedings affecting the child, either directly, or through a representative or an appropriate body, in a manner consistent with the procedural rules of national law" (UN General Assembly, 1989). Even when opinions are freely expressed by children, their consideration is wide open to relativization by professionals due to the reference to "maturity". It does not seem surprising that younger children in particular are deprived of information about important decisions in their lives and of co-decision-making (Arnold et al., 2008). Recent studies also show that the degree of participation varies according to age and correlates with the knowledge of one's rights (Andresen, Willems & Möller, 2019; Tausendfreund et al., 2020). Basically, it must be kept in mind that the child's expression alone, also independent of his or her maturity, does not yet mean that it will be heard or even taken into account by professionals. Taking the situation of the Netherlands (Commissie Samson, 2012) as an example, we can ask: how can it be that 98 percent of all sexual assaults in residential care facilities remain undetected? The finding of a "lack of child-centeredness in child protection" (Alberth & Bühler-Niederberger, 2015) could potentially be part of an internationally valid answer. Patterns of intervention and professional perspectives must be called into question as to whether they hear and take notice of children's voices. This goes far beyond legal or political questions of participation.
When the empirical findings on the perspective of children in care are connected with theories of social pedagogy, participation, in the sense of radical subject orientation, can be understood as a basic prerequisite of all educational processes in the fields of social work. From the perspective of social pedagogy, successful growing up and development is fundamentally a dialogical and thus also a mutually cooperative process: without the willingness of adolescents to be educated, all efforts of adults are in vain. However, to make the participation of children and adolescents pedagogically effective, more is needed than the simple implementation of political procedures. In this socio-pedagogical sense, the radical recognition of the subject status of adolescents is a central demand that goes far beyond the implementation of children's rights. In this respect, participation should not be understood exclusively in legal or political terms; it requires a subject-theoretical basis if it is not to undermine or conceal the questions outlined above. However, the benchmark for child and youth welfare, now and in the future, must be not the claim in itself but its realisation.
Funding
One cited study was funded by the Swiss National Science Foundation (Sinergia Programme, funding number 14769; 01.11.2013 to 28.02.2018), and the other was funded by the Jacobs Foundation. Ethical approvals were reviewed according to Swiss cantonal and national standards.
T. Gabriel
|
v3-fos-license
|
2022-10-27T15:06:28.501Z
|
2022-05-13T00:00:00.000
|
253150415
|
{
"extfieldsofstudy": [],
"oa_license": "CCBYSA",
"oa_status": "GOLD",
"oa_url": "https://talenta.usu.ac.id/dentika/article/download/6581/5022",
"pdf_hash": "e700a2ac5d78259c9ccf8dfe8de09ac536185ef8",
"pdf_src": "Anansi",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2388",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "b7b52a8bd4f5ca9a145d9f9f32c21a51311cd87c",
"year": 2022
}
|
pes2o/s2orc
|
Growth and Development Factors of Infants and Maternal Conditions During Pregnancy for the Eruption of the First Deciduous Teeth (Literature Review)
Tooth eruption is the condition in which the cusp or incisal edge of the tooth emerges through the gingiva, not exceeding 3 mm above this level, after the corona is formed. The first human tooth to erupt is the mandibular primary central incisor, which serves as the reference for the eruption of the others, including the primary and permanent teeth, and supports the growth of the jaw, face, mastication, swallowing, speech, and aesthetics. Furthermore, tooth eruption is influenced by the growth and development of the fetus during pregnancy. Maternal conditions during pregnancy such as age, level of education, physical condition, and nutritional intake affect fetal nutrition, which is reflected in the level of growth and development in the form of head circumference, birth weight, and height, and these in turn affect the eruption time of the mandibular deciduous central incisor. During pregnancy, the maternal preparations to be considered are an age in the range of 20-35 years, adequate nutritional intake of carbohydrates, folic acid, protein, vitamin C, vitamin D, and minerals, prevention of physical fatigue, intelligence in choosing nutrition, and abstaining from alcohol and caffeine consumption. This study aims to provide information/education on the preparation of pregnant women for the eruption of the mandibular primary central incisor, which is part of the infant's growth and development.
INTRODUCTION
Humans as diphyodonts have two sets of teeth during their life. 1,2 The first is the deciduous teeth, also called primary, milk, infant, or lacteal teeth. [1][2][3][4] The second is the permanent teeth, also known as secondary, replacement, or adult teeth. [1][2][3] The total number of primary teeth is 20, which includes the central and lateral incisors, canine, as well as the first and second molar. The total number of permanent teeth is 32, comprising the central and lateral incisors, canine, first and second premolar, as well as the first, second, and third molar. [1][2][3] The teeth play a role in the mastication process, speech, predicting age, and supporting aesthetics. In addition, deciduous teeth provide a place for the eruption of the permanent teeth and maintain and stimulate the development of the jaw and head. 5,6 The first deciduous tooth to erupt is the mandibular central deciduous incisor. Hence, it can be used as a reference for the eruption of other teeth to avoid the risk of disrupting the growth of the jaw, face, mastication, swallowing, and speech, and of malocclusion that can interfere with aesthetics. 7 The eruption process for both primary and permanent teeth occurs after the dental corona is formed through the odontogenesis process consisting of the bud, cap, and bell stages. Tooth eruption occurs due to the movement that pushes the teeth into the oral cavity, originating from the formation of the root, the role of the periodontal ligament, fibroblast contraction, and vascular pressure, thereby displacing the incisal and occlusal surfaces of the primary teeth towards the alveolar bone; this process is known as the pre-eruptive stage.
Subsequently, in the pre-functional stage, the crown erupts into the oral cavity through the epithelium-lined pathway due to oral epithelium fusion, reaching occlusal contact with the opposite tooth where the incisal edge of the lower incisor meets the cingulum of the maxillary incisor; this is known as the functional stage. 8 Primary teeth are often overlooked because of their short time in the oral cavity, which is approximately 6 years. However, they have an important role in maintaining as well as stimulating jawbone and head growth and development. 5,6 Therefore, this study aims to examine the factors that influence the eruption time of the first deciduous teeth to prevent delays in optimal jaw and head growth as well as to support the survival of children to adulthood.
DISCUSSION
According to Kiran et al. (2011) (in Massignan, 2016), tooth eruption is defined as the presence of the clinical crown in the oral cavity, not exceeding 3 mm of the gingiva level. 9 The first deciduous teeth to erupt are the mandibular central incisors, specifically when the infant is 6 months old. Tooth eruption can be influenced by various factors, such as the infant's head circumference, birth weight, and length, as well as maternal conditions such as maternal age and education level. 6,7,9,11 Ntani et al. (2015) conducted a study on 2,915 children born in Southampton and reported a strong correlation between the size of the infant's head circumference and the eruption time of the mandibular deciduous central incisors. This is because head circumference is a skeletal measurement variable and is more likely to be associated with dental development. 12 Motamayel (2017) also performed a study on 126 infants at the Hamadan Health Center, Iran, while Vejdani et al. (2015) examined 648 infants aged 3-15 months at the Rasht Health Center, Iran. Both reported a significant correlation between head circumference and tooth eruption: infants with a normal head circumference (>33 cm) experienced a tooth eruption time consistent with the pattern of the first deciduous teeth, namely 6 months of age, whereas infants with a small head circumference (<33 cm) experienced delayed eruption of the deciduous teeth. 7,13 This is presumably related to nutritional intake, which is one of the factors that affect the growth of various organs in the body. Although the growth rates of each organ system are different, there is a harmony of overall proportions. For example, the teeth begin to grow appropriately when the jaw is large enough to accommodate them; therefore, a large head circumference is associated with a rapid eruption time. This is because a large head circumference is in line with the growth of the jaw, which acts as a place for the teeth to erupt.
In contrast, infants with a small head circumference, caused by premature birth and severe malnutrition, might experience delayed tooth eruption. 14,15 A study conducted by Zarabadipour et al. (2019) on 160 infants at the Tabriz Medical Center reported a significant relationship between birth weight and the eruption time of the mandibular deciduous central incisors. 6 Another study (2017) revealed a difference in tooth eruption time between infants with normal and low birth weight (LBW): infants with a normal birth weight ranging from 2,500-3,500 g experienced tooth eruption times consistent with the pattern of deciduous teeth, namely 6 months of age, whereas infants with a low birth weight (<2,500 g) experienced delayed eruption of the deciduous teeth after 6 months. 7 This is related to the infant's nutritional intake, such as vitamin B12, iron, folic acid, and essential fatty acids, 18 where nutritional deficiencies lead to low infant weight and delayed tooth eruption. This is because nutrition plays an important role in the formation of teeth; hence, a typical infant weight is associated with a normal tooth eruption time, while low body weight leads to a delay in tooth eruption. 7,19,20 Vitamin B12 has a potential role in elevating plasma homocysteine levels in pregnancy and is implicated in adverse outcomes such as low birth weight. Only a small amount of 3 micrograms is needed daily. Meanwhile, iron deficiency anemia is associated with low birth weight and preterm babies, and might also affect immune function as well as increase susceptibility to infections. Supplementation of 60 mg ferrous iron and 250 μg folic acid twice a day is recommended. DHA (docosahexaenoic acid) and arachidonic acid are essential long-chain PUFAs (polyunsaturated fatty acids) that are important structural components of the lipid membrane of the central nervous system and are very critical for normal growth and development.
18,21 Aside from these nongenetic mechanisms in the form of nutritional intake, the association between maternal height and birth outcomes consisting of the infant's weight and length can be attributed to genetics, given that genetic polymorphisms which influence maternal height might also have direct functional effects on pregnancy outcomes in the fetus. 22 Ntani et al. (2015) stated that taller infants tend to erupt the mandibular deciduous incisors earlier. 12 Meanwhile, Vejdani (2015) reported that the infant's length has a relationship with tooth eruption time: infants with a normal length of 48 cm experienced a pattern consistent with the deciduous teeth, namely 6 months, whereas infants born with a length below 48 cm experienced delayed eruption of the deciduous teeth (>6 months). 13 This is associated with body length, which is closely related to skeletal growth. In general, skeletal growth is in line with the development of the skull and jawbones; hence, normal body length facilitates a fast tooth eruption time. 7 Samalisto describes the type, symbol, locus, and function of genes that affect human height in Table 1. Based on the table, mutations in genes such as GHR, GHRHR, IGFBP3, IGF1, IGFIR, IGFALS, STAT5b, and SHOX can trigger growth disorders that might lead to short stature. In contrast, mutations in the ESR1 gene potentially cause heights that are higher than normal. Other genes such as JAK2, VDR, and DRDD2 are also involved in growth hormone signaling. 24 Genes that affect tooth eruption are RANKL and ALK-2.
The Receptor Activator of Nuclear Factor Kβ Ligand (RANKL) and the activin receptor-like kinase 2 (ALK-2) gene are known to bind bone morphogenetic protein-2 (BMP-2) growth factor which plays a role in the process of tooth eruption. The RANKL gene controls osteoclastogenesis for alveolar bone resorption and the expression also causes bone resorption in the coronal follicle to form the course of tooth eruption. BMPs act through transmembrane serine and threonine protein kinase receptors that have multiple functions during cell morphogenesis and differentiation. They are also considered to be part of the epithelial-mesenchymal network signaling molecules that regulate the initiation of dental crown formation. 25 Meanwhile, BMP-2 has greater expression at the basal site of dental follicles, causing bone formation at the base of the alveolar bone which indicates tooth eruption. This morphogen binds to the BMP receptor, activin receptor-like kinase 2 (ALK-2) for transactivation and forms a heterodimeric complex which is then translocated to the cell nucleus and acts directly with other molecules to regulate the transcription of target genes.
The coordination between bone formation and resorption is subsequently maintained by several combined mechanisms between osteoblasts and osteoclasts. 3,25,26 For the process of tooth eruption, more osteoclasts are needed to cause bone resorption at the top of the alveolus, which forms the path of tooth eruption.
Wu et al. (2019) conducted a study on 1,296 mothers at the Affiliated Obstetrics and Gynecology Hospital of Nanjing Medical University and reported that the reproductive age of women is associated with delay in eruption of the mandibular deciduous central incisors. 27 The recommended age for women to undergo pregnancy is 20-35 years, because the maturation of the reproductive organs is complete at this age. 28,29 Women aged <20 years are at risk of experiencing nutritional deficiencies due to competition for nutrients between mother and fetus. Meanwhile, women aged >35 years tend to experience complications such as sclerosis of the small arteries and myometrial arterioles, which inhibits the delivery of nutrients to the fetus. 15 Additionally, at ages <20 years and >35 years, complications such as preeclampsia can occur. This is a syndrome characterized by high blood pressure and protein in the urine that usually appears at the end of the 2nd or 3rd trimester. It is often accompanied by edema, sudden weight gain, headaches, and vision changes. The consequences of preeclampsia for the fetus include premature birth, growth retardation, and death. This condition is caused by reduced blood flow to the placenta, which interferes with the supply of nutrients to the fetus and potentially leads to nutrient deprivation. 17,29,30 Based on these conditions, pregnancies at ages <20 years and >35 years can disrupt fetal nutritional intake, thereby causing infants to have low birth weight. Seow (1996), as cited in Alshukairi (2019), reviewed tooth development and identified delays in tooth maturation in infants with a weight less than 1,000 g and a gestational age less than 30 weeks. 31 The mother's education level indirectly contributes to her knowledge about food or drinks that are good for consumption during pregnancy.
Maternal nutrition during pregnancy also has a major role in supporting the growth and development of the fetus as well as the timing of the infant's tooth eruption. Some of the nutrients needed include carbohydrates, folic acid, minerals, and protein, as well as vitamins C and D. 32 Carbohydrates are needed in fetal growth as a source of energy in the form of glucose. Meanwhile, folic acid not only contributes to the infant's weight but also minimizes disturbances in the formation of the neural tube, which is the forerunner of the neural crest that supports craniofacial growth and the development of teeth. Minerals consist of calcium, phosphorus, and fluorine, which play a role in the formation of bones and teeth. Protein provides the basic ingredients for the formation of enzymes, antibodies, muscles, and collagen. Moreover, protein and vitamin C are essential nutrients for the synthesis of collagen, which plays a role in the formation of the tooth matrix. Vitamin D is needed to help build and maintain strong bones and teeth; hence, it is highly needed during fetal development. 15,20,32 The correlation between maternal education level and knowledge about food or drinks that are good for consumption during pregnancy was reported by Bech et al. (in Marisiantini, 2018), who stated that high caffeine consumption can cause uteroplacental vasoconstriction, leading to low birth weight. This is also in line with a study conducted by Mardiawati (2011, in Marisiantini, 2018) in Bengkulu City, Indonesia, which showed a significant relationship between caffeine consumption and birth weight. 33 Low body weight has a relationship with the timing of tooth eruption; hence, infants with low body weight tend to experience delayed tooth eruption. 7,20 A similar statement was also presented by Demelash et al. (2015), who conducted a study on 387 mothers in Southeast Ethiopia.
The results showed that mothers with no formal education had a higher risk of giving birth to infants with low body weight compared to those with higher education. 35 Similarly, another study carried out in Sumenep City, Indonesia by Festy (2011, in Nuryani, 2017) found that mothers with low education had a 4.4 times higher risk of giving birth to infants with low birth weight. 36 Education generally has a relationship with the mother's socioeconomic level. Mauludyani et al. (2012, in Nuryani, 2017) stated that a good socioeconomic status helps pregnant women to live in a better environment, far from exposure to cigarette smoke, and to abstain from heavy work. 36 Furthermore, Marisianiti et al. (2015) stated that women exposed to cigarette smoke during pregnancy are at risk of giving birth to infants with low birth weight. This will affect the eruption time of the deciduous teeth, as previously described. 33 Based on previous studies, mothers with low socioeconomic backgrounds tend to work longer hours. Meanwhile, there is a relationship between small head circumference and long maternal working hours during pregnancy. A small head circumference is more common in infants born to women who work for >40 hours per week. There is also an increased risk with standing or walking for 4 hours per day during the second trimester of gestation and with lifting a weight of 25 kg by hand in the last trimester. 37 A study conducted by Peoples Sheps (in Marisiantini, 2018) showed that pregnant women who stand for too long every day give birth to infants whose head size is 1 cm smaller than those who do not. A small head circumference is associated with a delayed tooth eruption time, as described in the previous discussion. 7,20,33 The size of the infant's head circumference, weight, and height, as well as the mother's age during pregnancy and education level, affect the eruption time of the first deciduous teeth.
Women or mothers need to pay more attention to age during pregnancy, regulate nutritional intake, and improve lifestyles such as reducing caffeine intake, as well as avoiding exposure to cigarette smoke and alcohol consumption, to obtain optimal infant growth and development in the eruption of deciduous teeth.
Further studies are needed in Indonesia regarding the factors that can affect the eruption of deciduous teeth, especially those related to the infant's head circumference, weight, height, as well as the mother's age during pregnancy, and level of education.
|
v3-fos-license
|
2023-08-19T15:05:23.067Z
|
2023-08-17T00:00:00.000
|
260980262
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://doi.org/10.1097/as9.0000000000000313",
"pdf_hash": "962a1c3799e4c4b6185e8565fbb1c0f4b3715d86",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2389",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "eb1f9c2017fa2f82cde1f0f9bdbcd52f2ebadb00",
"year": 2023
}
|
pes2o/s2orc
|
Disposal of Unused Postoperative Opioids: A Real-World Demonstration of Surgeon-initiated Strategies Using an Activated Charcoal Bag System
Excessive opioid prescribing following surgery creates a reservoir of unused medications available for diversion and abuse. We conducted a cohort study examining the impact of clinic-based, surgeon-initiated strategies using an activated charcoal bag (ACB) system on disposal of unused opioids. Among patients undergoing a variety of general surgery procedures, 67% of those with unused opioids disposed of them using the ACB. Our findings demonstrate practical ways to incorporate opioid disposal into surgical practice as a complement to judicious opioid prescribing.
The national opioid epidemic continues to worsen, leading to over 80,000 deaths in 2021 and costing the US $1.5 trillion in 2020 alone. 1,2 Despite extensive efforts to address this, deaths from prescription opioids have not decreased since the start of the epidemic. 1 Excess opioid prescribing remains the primary contributor to prescription opioid abuse, as this practice creates a reservoir of unused opioids available for community diversion and illicit use. 3 Although improving, excess prescribing following surgery remains prevalent, making this a significant component of the current opioid epidemic. 4,5 Much of the attention to date has rightly focused on decreasing excess surgical prescribing, but increasing safe disposal of unused opioids offers another path to reducing the supply of opioids available for diversion.
Current opioid disposal strategies, like drug take-back programs and "flush" lists, are perceived as inconvenient and are limited by poor access and environmental concerns. 6 Use of a drug disposal system consisting of a biodegradable activated charcoal bag (ACB) 7 distributed perioperatively was associated with a nearly four-fold increase in self-reported disposal of unused opioids. 8 Although effective, this approach requires significant coordination at the hospital level, which is a barrier to widespread community adoption. We examined the impact of clinic-based, surgeon-initiated strategies using the ACB system in the preoperative and postdischarge settings on disposal of unused opioids.
METHODS
Patients were recruited at three surgical clinics: Door County Medical Center (DCMC), a community general surgery practice; University of Wisconsin (UW) Breast Center, an academic breast cancer clinic; and UW 1SP General Surgery, an academic-affiliated clinic with regional catchment. In an effort to maximize generalizability of the findings, consecutive patients presenting for a qualifying appointment at each individual clinic were enrolled. Qualifying appointments were either postoperative or preoperative, based on the site-specific disposal strategy. Two disposal strategies were implemented. DCMC patients were instructed to bring unused opioids to their postoperative appointment, were offered the ACB (Deterra; Verde Environmental Technologies, Minneapolis, MN; cost $4.28 per disposal bag), 7 and were encouraged to dispose of unused opioids in clinic. UW patients received the ACB and an informational pamphlet regarding opioid safety at their preoperative appointment. Postoperative opioid prescription information was collected through the Electronic Health Record. The opioid dose prescribed was converted to morphine milligram equivalents (MME) to allow comparison across prescription types. 9 Patient-reported opioid use and disposal was collected at the postoperative appointment or by phone 4-6 weeks postoperatively. This study was IRB exempt under the category of quality improvement by the University of Wisconsin.
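The MME conversion described above can be sketched in code. This is an illustrative example only: the function and variable names are hypothetical, and the equivalency factors shown are commonly published CDC conversion factors, not necessarily those used by the study.

```python
# Illustrative sketch of a morphine-milligram-equivalent (MME) conversion.
# The factors below are commonly published equivalency factors (assumed,
# not taken from the paper); names are hypothetical.
MME_FACTORS = {
    "morphine": 1.0,
    "oxycodone": 1.5,
    "hydrocodone": 1.0,
    "hydromorphone": 4.0,
    "tramadol": 0.1,
}

def prescription_mme(drug: str, mg_per_tablet: float, tablets: int) -> float:
    """Total MME = tablets x tablet strength (mg) x drug-specific factor."""
    return tablets * mg_per_tablet * MME_FACTORS[drug]

# Example: 10 tablets of 5 mg oxycodone -> 10 * 5 * 1.5 = 75 MME
print(prescription_mme("oxycodone", 5, 10))
```

Summing this quantity across prescriptions is what allows totals such as the cohort-level MME figures in the Results to be compared across different opioid types.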
RESULTS
One hundred nineteen patients undergoing elective surgical procedures (46 DCMC, 73 UW) were enrolled in the study between January and August 2020. Patient acceptability of the study was high, as measured by both patient comments in the phone interviews and the low number of patients declining to participate. No patients declined to participate at the DCMC or UW 1SP General Surgery clinics, and only 3 patients at the UW Breast Center clinic declined, all citing the sensitivity of their cancer diagnosis as the reason. A total of 116 patients were prescribed 1,146 opioid tablets (7,053 MME, Table 1). The median prescription size was 10 tablets at DCMC and 9 tablets at the UW sites. All DCMC patients received prescriptions and 41 (89%) had unused opioids totaling 302 tablets (1,715 MME). At UW sites, 70 patients received prescriptions, with 34 (47%) having unused opioids totaling 177 tablets (1,072 MME).
Overall, 52 of 75 (69%) patients with unused opioids disposed of them, 50 of these using the ACB. Disposal rates were similar between sites (66% DCMC, 74% UW), resulting in the destruction of 322/479 tablets (67%, 1,906 MME). Patients rated the experience of using the ACB as "very easy" to "easy," and 91% described their sentiment as "very positive" or "positive." Clinics rated the disposal system as highly feasible with negligible disruption to workflow. Due to COVID-19 restrictions, postoperative appointments at DCMC transitioned to virtual visits in March, limiting patients from receiving the planned intervention.
DISCUSSION
This study demonstrates the feasibility of clinic-based, surgeon-initiated distribution of an activated charcoal disposal bag system for safe disposal of unused opioids using two techniques. Both led to significant disposal, were well received by patients and clinical staff, and integrated well with clinical workflow. Patients with unused opioids overwhelmingly chose the activated charcoal bag system over other disposal methods.
Our disposal rates were well above those previously reported in the absence of an intervention and comparable to the prior randomized trial. 8 These results further validate the use of the ACB as an effective means for disposing of unused opioids following surgery. Addition of the ACB is an excellent complement to judicious opioid prescribing practices in the prevention of diversion for nonmedical use.
The novel approaches initiating distribution of the ACB in the surgical clinic as opposed to the hospital setting facilitate broader community dissemination. These approaches require minimal resources to implement and rely on fewer staff and less coordination compared with hospital-based interventions. The cost of this intervention is low, arising primarily from the price of the ACB, which is inexpensive, and the brief added counseling time by the nurse or medical assistant. These approaches are particularly advantageous in settings where surgeons operate at multiple sites. Further, they directly leverage the trust of the patient-surgeon relationship, an important factor contributing to the high disposal rates observed. Finally, this study was conducted at the height of COVID-19 restrictions in our state, further demonstrating the feasibility of our approaches and the desire by both patients and surgeons for simplified approaches to disposal of unused opioids.
|
v3-fos-license
|
2022-09-14T13:40:36.191Z
|
2022-09-14T00:00:00.000
|
252213492
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s00259-022-05964-w.pdf",
"pdf_hash": "72f941a94ec2982da5c4371e5a6af54ba96f8da0",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2390",
"s2fieldsofstudy": [
"Medicine"
],
"sha1": "72f941a94ec2982da5c4371e5a6af54ba96f8da0",
"year": 2022
}
|
pes2o/s2orc
|
Additive value of [18F]PI-2620 perfusion imaging in progressive supranuclear palsy and corticobasal syndrome
Purpose Early after [18F]PI-2620 PET tracer administration, perfusion imaging has potential for regional assessment of neuronal injury in neurodegenerative diseases. Meanwhile, standard late-phase [18F]PI-2620 tau-PET is able to discriminate the 4-repeat tauopathies progressive supranuclear palsy and corticobasal syndrome (4RTs) from disease controls and healthy controls. Here, we investigated whether early-phase [18F]PI-2620 PET has an additive value for biomarker-based evaluation of 4RTs. Methods Seventy-eight patients with 4RTs (71 ± 7 years, 39 female), 79 patients with other neurodegenerative diseases (67 ± 12 years, 35 female) and twelve age-matched controls (69 ± 8 years, 8 female) underwent dynamic (0–60 min) [18F]PI-2620 PET imaging. Regional perfusion (0.5–2.5 min p.i.) and tau load (20–40 min p.i.) were measured in 246 predefined brain regions [standardized-uptake-value ratios (SUVr), cerebellar reference]. Regional SUVr were compared between 4RTs and controls by an ANOVA including false-discovery-rate (FDR, p < 0.01) correction. Hypoperfusion in resulting 4RT target regions was evaluated at the patient level in all patients (mean value − 2SD threshold). Additionally, perfusion and tau pattern expression levels were explored regarding their potential discriminatory value of 4RTs against other neurodegenerative disorders, including validation in an independent external dataset (n = 37), and correlated with clinical severity in 4RTs (PSP rating scale, MoCA, activities of daily living). Results Patients with 4RTs had significant hypoperfusion in 21/246 brain regions, most dominant in the thalamus, caudate nucleus, and anterior cingulate cortex, fitting the topology of the 4RT disease spectrum. However, single region hypoperfusion was not specific regarding the discrimination of patients with 4RTs against patients with other neurodegenerative diseases.
In contrast, perfusion pattern expression showed promise for discrimination of patients with 4RTs from other neurodegenerative diseases (AUC: 0.850). Discrimination by the combined perfusion-tau pattern expression (AUC: 0.903) exceeded that of the sole tau pattern expression (AUC: 0.864) and the discriminatory power of the combined perfusion-tau pattern expression was replicated in the external dataset (AUC: 0.917). Perfusion but not tau pattern expression was associated with PSP rating scale (R = 0.402; p = 0.0012) and activities of daily living (R = − 0.431; p = 0.0005). Conclusion [18F]PI-2620 perfusion imaging mirrors known topology of regional hypoperfusion in 4RTs. Single region hypoperfusion is not specific for 4RTs, but perfusion pattern expression may provide an additive value for the discrimination of 4RTs from other neurodegenerative diseases and correlates closer with clinical severity than tau pattern expression. Supplementary Information The online version contains supplementary material available at 10.1007/s00259-022-05964-w.
Introduction
The identification of specific biomarkers that allow for early detection of tau pathology in four-repeat tauopathies (4RTs) will become crucial for target engagement in tau targeting treatment trials. In this regard, imaging with the next-generation tau-PET tracer [ 18 F]PI-2620 facilitated discrimination of patients with a clinical diagnosis of the 4RTs progressive supranuclear palsy (PSP) [1] and corticobasal syndrome (CBS) [2] from healthy controls, non-tauopathy Parkinson syndromes and Alzheimer's disease (AD). [ 18 F]PM-PBB3 also has the potential to differentiate 4RTs in vivo [3]. Neurodegeneration of cortical and subcortical brain regions is a common feature of 4RTs [4,5], comprising a relevant objective parameter of disease progression [6] in AD as the most frequent tauopathy [7]. For AD, it has been proposed to classify the disease according to biomarkers for amyloid, tau, and neurodegeneration by the A/T/N scheme [8]. In this classification scheme, neurodegeneration on a biomarker level can be determined in vivo by different diagnostic approaches: (i) atrophy in structural magnetic resonance imaging (MRI), (ii) levels of total tau in cerebrospinal fluid, (iii) hypometabolism in [ 18 F]fluorodeoxyglucose-(FDG)-PET, (iv) or hypoperfusion with several imaging techniques such as single-photon-emission-computed-tomography (SPECT). Brain atrophy in MRI as well as region-specific hypometabolism patterns in FDG-PET are well established in the diagnostic work-up in 4RTs [9]. However, an equivalent concept to A/T/N of combining simultaneous visualization of tau pathology and neurodegeneration in 4RTs has not yet been established. Recently, we reported that early-phase imaging of β-amyloid PET closely matches the pattern of glucose uptake in CBS [10] and we found that early-phase imaging of [ 18 F]PI-2620 tau-PET could also serve as a surrogate of brain perfusion in mixed neurodegenerative disorders [11]. 
With respect to cost, radiation exposure, and patient burden, such "one-stop shop" protocols provide the opportunity to examine two important diagnostic and potentially also prognostic biomarkers simultaneously with one procedure. We hypothesized that early-phase imaging with [ 18 F]PI-2620 PET mirrors the known neurodegeneration pattern in the brain of 4RT patients when compared to controls. Furthermore, we hypothesized that 4RT-related perfusion expression patterns may facilitate the discrimination of 4RT patients against other neurodegenerative diseases. Since the utility of imaging biomarkers for diagnosis and disease progression may well differ [12], we directly correlated perfusion patterns as well as the amount and pattern of tau pathology with clinical and functional scores in 4RTs.
Patient enrolment and study design
Seventy-eight patients with possible or probable 4RTs (71 ± 7 years, 39 female) were enrolled at the departments of neurology and psychiatry of the LMU Munich. Tau-PET data of PSP [1] and CBS [2] patients were previously published elsewhere. The 4RT cohort consisted of 30 patients with PSP Richardson syndrome (PSP-RS), 23 cases with predominant corticobasal syndrome (PSP-CBS), five cases with predominant Parkinsonism (PSP-P), two cases with predominant frontal cognitive dysfunction (PSP-F), and one case each with predominant speech/language disorder (PSP-SL), with primary lateral sclerosis (PSP-PLS) and with pure akinesia with gait freezing (PSP-PGF), and 15 patients fulfilled possible CBS criteria. Clinical diagnoses were based on MDS PSP and Armstrong criteria [9,13]. All included patients fulfilling CBS criteria had a negative amyloid-PET scan or negative Aβ in CSF (Aβ 42 and Aβ 42 /Aβ 40 ratio) to rule out AD pathophysiology [2]. Seventy-nine patients with suspected neurodegenerative diseases other than 4RT movement disorders (PSP, CBS) were enrolled in the same time period (Oct 2018-Apr 2021) at LMU Munich and used as a comparative dataset. This cohort underwent an equal clinical workup and it was composed of patients belonging to the AD continuum (n = 47; all amyloid-PET positive or positive Aβ in CSF (Aβ 42 and Aβ 42 /Aβ 40 ratio)), α-synucleinopathies (n = 12), FTD (n = 10), and other neurodegenerative diseases (i.e. anti-IgLON5 syndrome or Down syndrome; n = 10). Twelve amyloid-negative healthy controls without cognitive decline or motor disability that matched in age and sex were used from published datasets [1,2] and additional recruitment in Munich. Furthermore, an independent dataset composed of 21 patients with 4RTs and 16 patients with other neurodegenerative disorders was used from centers in Leipzig, Cologne, and New Haven [1]. An overview on all used data is provided in Table 1.
Regional PET quantification of perfusion imaging was compared between 4RTs and healthy controls. Target region positivity was evaluated at the individual patient level in 4RTs and in the comparative dataset of other neurodegenerative disorders. Additionally, perfusion and tau pattern expression levels were used for discrimination of 4RTs and other neurodegenerative disorders, including validation in the independent external dataset.
All patients and controls provided informed written consent. The study was conducted in accordance with the principles of the Declaration of Helsinki, and approval for scientific data analysis was obtained from the local ethics committee (application numbers 17-569, 19-022).
Radiosynthesis
Radiosynthesis of [ 18 F]PI-2620 was achieved by nucleophilic substitution on a BOC-protected nitro precursor using an automated synthesis module (Synthera, IBA, Louvain-la-Neuve, Belgium). The protecting group was cleaved under the radiolabelling conditions. The product was purified by semipreparative HPLC. Radiochemical purity was ≥ 97%. Non-decay-corrected yields were about 30%, with a molar activity of 3 × 10^6 GBq/mmol at the end of synthesis.
PET acquisition and preparation
The main cohort of this study was scanned at the Department of Nuclear Medicine, LMU Munich, with a Biograph 64 or a Siemens mCT PET/CT scanner (both Siemens, Erlangen, Germany). A low-dose CT scan preceded the PET acquisition and served for attenuation correction. [ 18 F]PI-2620-PET was performed in a full dynamic 0-60 min setting initiated upon intravenous injection (~ 10 s) of 185 ± 10 MBq of the tracer. Dynamic emission recordings were framed into 6 × 30 s, 4 × 60 s, 4 × 120 s, and 9 × 300 s. PET data were reconstructed iteratively (4 iterations, 21 subsets, 5.0 mm Gauss/5 iterations, 24 subsets, 5.0 mm Gauss) with a matrix size of 336 × 336 × 109/ 400 × 400 × 148, a voxel size of 1.018 × 1.018 × 2.027/ 1.018 × 1.018 × 1.500 mm 3 /and a slice thickness of 2.027/1.500 mm. Standard corrections with regard to scatter, decay, and random counts were used. Data from Hofmann phantoms were used to obtain scanner-specific filter functions which were then consequently used to generate images with a similar resolution (FWHM: 9 × 9 × 10 mm), following the ADNI image harmonization procedure [14].
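The dynamic framing scheme above (6 × 30 s, 4 × 60 s, 4 × 120 s, 9 × 300 s) should tile the full 0-60 min acquisition exactly; a minimal sketch to verify this (the helper name `build_frames` is illustrative, not from the study's software):

```python
# Sketch: expand the dynamic framing scheme into frame start/end times and
# check that it covers the full 0-60 min acquisition (frame counts and
# durations taken from the protocol described above).

def build_frames(scheme):
    """Expand (count, duration_s) pairs into (start_s, end_s) tuples."""
    frames, t = [], 0.0
    for count, duration in scheme:
        for _ in range(count):
            frames.append((t, t + duration))
            t += duration
    return frames

scheme = [(6, 30), (4, 60), (4, 120), (9, 300)]  # 6x30s, 4x60s, 4x120s, 9x300s
frames = build_frames(scheme)
total_s = frames[-1][1]                  # 3600 s = 60 min
mid_times = [(a + b) / 2 for a, b in frames]  # frame mid-times for modeling
```

The 23 frames sum to exactly 3600 s, matching the stated 0-60 min dynamic window.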
Controls and the external validation dataset were scanned at different imaging units (Leipzig: Siemens Biograph mMR, Siemens, Erlangen, Germany; New Haven: Siemens ECAT EXACT HR + , Siemens, Erlangen, Germany; Melbourne: Philips Gemini TF 64 PET/CT, Eindhoven, The Netherlands; Cologne: Siemens mCT PET/CT, Siemens, Erlangen, Germany) using the same established scanning protocol. Details on all scanners, as well as acquisition and reconstruction parameter, are provided in the Supplement of our previous study [1].
Image processing
All image data were processed and analysed using PMOD (version 3.9, PMOD Technologies Ltd., Zurich, Switzerland). For spatial normalization, tracer-specific templates in the MNI space were created for early-phase (0.5-2.5 min [11]) and late-phase (20-40 min [15]) [ 18 F]PI-2620 data as described previously [16]. Based upon our previous work [1], we created optimized templates by use of 35 randomly selected individuals with a structural high-resolution 3D T1-weighted image (MPRAGE). Early-phase and late-phase [ 18 F]PI-2620 images were spatially normalized to MNI space by applying a non-linear transformation (brain normalization settings: nonlinear warping, 8 mm input smoothing, equal modality, 16 iterations, frequency cutoff 3, regularization 1.0, no thresholding). The cerebellum (excluding the dentate nucleus and superior layers) was used as a reference region for scaling of early- and late-phase [ 18 F]PI-2620 images. Standardised uptake value ratios (SUVr) of all 246 regions of interest of the Brainnetome atlas [17] were extracted and used for data analysis. A subset of n = 42 patients with 4RTs and n = 8 healthy controls was processed including partial volume effect correction (PVEC), and the perfusion pattern differences of 4RTs and healthy controls were compared between uncorrected and PVE-corrected data. For PVEC, we used the geometric transfer matrix (GTM) method [18] as implemented in the PETPVC toolbox, a pre-established software package designed for partial volume correction of PET data. We specifically chose the GTM approach due to its suitability for partial volume correction of region-of-interest-based data [18]. The exact mathematical approach of the GTM as implemented in the PETPVC toolbox has been described in detail previously [19] and can be accessed online (https://github.com/UCL/PETPVC). For PVEC, we used data of participants with both 3 T T1w MRI and PI-2620 tau-PET data available.
T1w MRI images were warped to MNI space using the non-linear high-dimensional warping implemented in the Advanced Normalization Tools (ANTs) package (http://stnava.github.io/ANTs/). PET images were rigidly co-registered to the native-space T1w images using ANTs, and the Brainnetome atlas was subsequently warped from MNI to PET space by combining the reversed non-linear warping (i.e. T1w to MNI) and linear registration parameters (i.e. PET to T1w). In native PET space, we then used the PETPVC toolbox and applied the GTM algorithm with the scanner's point spread function to determine partial-volume-corrected PET data for each ROI of the Brainnetome atlas. Pons scaling was used for the PVEC subset in order to minimize methodologically induced effects on the reference tissue.
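The intensity scaling described above — regional mean uptake divided by the mean uptake of a reference region (cerebellum for the main analysis, pons for the PVEC subset) — can be sketched as follows; ROI names and values are toy stand-ins, not study data:

```python
# Sketch: SUVr computation per ROI, scaling regional means by a reference
# region. The study extracted 246 Brainnetome ROIs; three toy ROIs shown.
import numpy as np

def regional_suvr(roi_means, reference_rois):
    """Scale each ROI's mean activity by the mean of the reference ROI(s).

    roi_means: dict mapping ROI name -> mean activity in that ROI
    reference_rois: list of ROI names forming the reference region
    """
    ref = np.mean([roi_means[r] for r in reference_rois])
    return {name: value / ref for name, value in roi_means.items()}

# Toy example (illustrative names and numbers).
uptake = {"thalamus_L": 1.2, "thalamus_R": 1.1, "cerebellum_cortex": 1.0}
suvr = regional_suvr(uptake, ["cerebellum_cortex"])
```

By construction, the reference region itself gets an SUVr of 1.0, so values below 1.0 in a target region indicate uptake lower than the reference.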
Statistical analysis
All statistical analyses were performed using SPSS (version 26.0, IBM, Armonk, NY, USA).
Hypoperfusion pattern: Regional early-phase SUVr of all 246 Brainnetome regions were compared between 4RTs and healthy controls by an ANOVA including false-discovery-rate (FDR, p < 0.01) correction for multiple comparisons as well as adjustment for age and sex.
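The multiple-comparison step can be illustrated with a Benjamini-Hochberg procedure, one standard way to control the false-discovery rate across the 246 region-wise tests (the study's exact implementation may differ; toy p-values shown):

```python
# Sketch: Benjamini-Hochberg FDR control over region-wise p-values.
import numpy as np

def fdr_bh(p_values, q=0.01):
    """Return a boolean array marking p-values significant at FDR level q."""
    p = np.asarray(p_values, dtype=float)
    m = p.size
    order = np.argsort(p)
    # BH thresholds: q * k/m for the k-th smallest p-value.
    thresholds = q * (np.arange(1, m + 1) / m)
    passed = p[order] <= thresholds
    # Reject all hypotheses up to the largest k whose p-value passes.
    k = np.max(np.nonzero(passed)[0]) + 1 if passed.any() else 0
    significant = np.zeros(m, dtype=bool)
    significant[order[:k]] = True
    return significant

p = [0.0001, 0.004, 0.03, 0.5]   # toy p-values (the study had 246)
sig = fdr_bh(p, q=0.01)          # first two survive at q = 0.01
```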
Single region classifier: The resulting 4RT target regions were entered into an individual-subject classifier. Regional SUVr ≤ mean value (MV) − 2 standard deviations (SD) of the healthy controls was defined as significant regional hypoperfusion. Here, one affected target region defined the subject as positive (dichotomous) for a 4RT-like hypoperfusion scan. This classification was performed in all patients with 4RTs and other neurodegenerative diseases scanned in Munich. Sensitivity, specificity, and positive and negative predictive values were calculated for identification of patients with 4RTs in this cohort. This procedure was repeated for the presence of three and five affected target regions and with MV − 2.5 SD and MV − 3.0 SD thresholds in order to assess robustness against altered sensitivity.
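A minimal sketch of this threshold classifier, assuming per-region control means and SDs are available (all arrays and values below are illustrative toy data):

```python
# Sketch: dichotomous hypoperfusion classifier over 4RT target regions,
# flagging a scan if >= min_regions fall below control mean - n_sd * SD.
import numpy as np

def hypoperfusion_positive(patient_suvr, control_mean, control_sd,
                           n_sd=2.0, min_regions=1):
    """Return 1 if the scan is 4RT-like under the threshold rule, else 0."""
    threshold = control_mean - n_sd * control_sd
    return int(np.sum(patient_suvr < threshold) >= min_regions)

def sens_spec(labels, predictions):
    """Sensitivity and specificity for binary labels/predictions."""
    labels, predictions = np.asarray(labels), np.asarray(predictions)
    tp = np.sum((labels == 1) & (predictions == 1))
    tn = np.sum((labels == 0) & (predictions == 0))
    return tp / np.sum(labels == 1), tn / np.sum(labels == 0)

# Toy data: 3 target regions, controls with mean 1.0 and SD 0.05,
# so the MV - 2 SD threshold is 0.90 in every region.
ctrl_mean, ctrl_sd = np.full(3, 1.0), np.full(3, 0.05)
scans = np.array([[0.85, 1.00, 1.02],   # one region below 0.90 -> positive
                  [0.98, 1.01, 0.99]])  # no region below      -> negative
labels = [1, 0]
preds = [hypoperfusion_positive(s, ctrl_mean, ctrl_sd) for s in scans]
sens, spec = sens_spec(labels, preds)
```

Raising `n_sd` or `min_regions` reproduces the stricter variants (3/5 regions, 2.5/3.0 SD) that trade sensitivity for specificity.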
Discrimination by pattern expression: A principal component analysis (PCA) was performed to test pattern expression levels of perfusion and tau signal for discrimination of 4RTs from other neurodegenerative disorders [20]. The PCA was performed separately for early-and late-phase of [ 18 F]PI-2620 imaging. Prior to the PCA, the linear relationship of the data was tested by a correlation matrix and items with a correlation coefficient < 0.3 were discarded. The Kaiser-Meyer-Olkin (KMO) measure and Bartlett's test of sphericity were used to test for sampling adequacy and suitability for data reduction. Components with an Eigenvalue > 1.0 were extracted and a varimax rotation was selected. Resulting principal components were subject to a regression analysis to calculate their estimation value for 4RTs. Weighting factors obtained from the regression were then multiplied with each principal component to compute a single 4RT-related pattern expression score (4RTRP) per patient [20]. Validation in the independent external dataset was performed by applying the obtained weighting factors to the PCs of single cases. The 4RTRP expression scores were subject to a receiver operating characteristics (ROC) analysis to explore their potential for discrimination of 4RTs from other neurodegenerative disorders. Resulting area under the curve (AUC) values were compared for discriminatory performance by perfusion 4RTRP expression, tau 4RTRP expression, and the summation of both expression scores.
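The pattern-expression pipeline — PCA over regional SUVr, regression-derived component weights, and a single summed score per subject — can be sketched as below. This is a simplified stand-in (no varimax rotation, the eigenvalue criterion replaced by an explained-variance cutoff, logistic regression for the weighting, random toy data), not the study's SPSS procedure:

```python
# Sketch: PCA-based 4RT-related pattern expression score (4RTRP-like).
# Toy random data stand in for subjects x regional SUVr values.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 20))        # subjects x regions (toy SUVr)
y = rng.integers(0, 2, size=60)      # 1 = 4RT, 0 = other disorder (toy)

# Component extraction (study: eigenvalue > 1 with varimax; here: 90% var).
pca = PCA(n_components=0.9).fit(X)
pcs = pca.transform(X)

# Regression supplies a weight per component; the weighted components are
# summed into one expression score per subject.
reg = LogisticRegression(max_iter=1000).fit(pcs, y)
score = pcs @ reg.coef_.ravel()      # one pattern-expression score each

# Validation transfer: apply the *trained* PCA and weights to new cases.
X_new = rng.normal(size=(5, 20))
score_new = pca.transform(X_new) @ reg.coef_.ravel()
```

The last two lines mirror how the trained components were applied to the independent external dataset without refitting.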
Correlation with clinical severity: Perfusion 4RTRP expression, tau 4RTRP expression, and the summation of both expression scores of patients with 4RTs were correlated with the clinical severity scores PSP rating scale, Montreal cognitive assessment (MoCA), and Schwab and England activities of daily living (SEADL) using a partial regression corrected for disease duration (time between symptom onset and PET), age and sex.
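The partial regression can be approximated by residualizing both the expression score and the clinical scale on the covariates before correlating the residuals; a sketch with random stand-in data (one common formulation, not necessarily the exact SPSS routine used):

```python
# Sketch: partial correlation of a pattern-expression score with a clinical
# scale, controlling for age, sex, and disease duration. Toy data only.
import numpy as np

def partial_corr(x, y, covariates):
    """Pearson correlation of x and y after regressing out `covariates`."""
    C = np.column_stack([np.ones(len(x)), covariates])  # add intercept
    rx = x - C @ np.linalg.lstsq(C, x, rcond=None)[0]
    ry = y - C @ np.linalg.lstsq(C, y, rcond=None)[0]
    return np.corrcoef(rx, ry)[0, 1]

rng = np.random.default_rng(1)
cov = rng.normal(size=(50, 3))              # stand-ins: age, sex, duration
score = rng.normal(size=50)                 # perfusion pattern expression
psprs = 0.5 * score + rng.normal(size=50)   # toy clinical severity scale
r = partial_corr(score, psprs, cov)
```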
Demographics
A total of 157 patients with neurodegenerative disorders (age: 69.0 ± 9.9 years, 74 female) and twelve healthy controls (age: 68.5 ± 7.5 years, 8 female) were included in the main analysis. This sample consisted of 78 patients with suspected, possible, or probable 4RTs (age: 71.2 ± 7.1 years, 39 female) and 79 patients with other neurodegenerative diseases (59% AD, 13% FTD, 15% aSyn, 13% other; age: 66.6 ± 11.6 years, 35 female). The validation cohort consisted of 21 patients with suspected, possible or probable 4RTs (age: 70.7 ± 5.8 years, 8 female) and 16 patients with other neurodegenerative disorders (age: 67.7 ± 10.1 years, 7 female). For details of the study population, see Table 1. The clinical follow-up time of the cohort of patients with 4RTs and mixed neurodegenerative diseases was 18 ± 10 months; 71% of patients returned for follow-up visits and continuously fulfilled their diagnostic criteria, with no change in diagnosis.
Early-phase [ 18 F]PI-2620 PET imaging in 4RTs resembles topology of neuronal injury
When compared to healthy controls, significant (FDR, p < 0.01) regional hypoperfusion in the 4RT cohort was observed in the thalamus, caudate nucleus, anterior cingulate cortex, and cortical regions of the frontal (superior and inferior frontal gyri) and the temporal lobe (superior gyrus and mesial temporal lobe), comprising 21 out of 246 brain regions of the Brainnetome atlas (Table 2, Fig. 1). Visually discernible hyperperfused regions (i.e. putamen) did not survive FDR correction. There was no hemispheric dominance of hypoperfusion (11 right/10 left). PVEC in a subset of patients with 4RT and controls revealed a similar hypoperfusion pattern when compared to uncorrected data (Supplemental Fig. 1).
Regional hypoperfusion in individual brain regions of 4RTs is not specific in a cohort of patients with mixed neurodegenerative disorders
We tested if the regions with significant hypoperfusion in 4RTs provide the opportunity to specifically identify individual patients with 4RTs in a real-world cohort of patients with neurodegenerative disorders referred to tau-PET at a tertiary center between 10/2018 and 4/2021. Hypoperfusion in at least one of the identified brain regions was present in 93.7% of individuals with 4RTs but also in 85.9% of patients with other neurodegenerative disorders (Fig. 2). Bootstrapping with three and five affected regions as well as with MV-2.5 SD and MV-3.0 SD thresholds resulted in improved specificity but reduced sensitivity. Overall, positive and negative predictive values for a 4RT in this cohort were low, regardless of the combination of region number and threshold (PPV: 51.4 ± 2.5 / NPV: 53.2 ± 6.5; Fig. 2).
4RT-related perfusion pattern expression may facilitate discrimination of patients with 4RTs additive to the 4RT-related tau pattern expression
Next, we asked if the pattern of perfusion facilitates discrimination of 4RTs from other neurodegenerative disorders more specifically than single brain regions. Thus, we performed a PCA [20] and calculated 4RT-related pattern expression scores for perfusion and tau for the [ 18 F]PI-2620 scan of each subject, which were entered into a subsequent ROC analysis. The 4RT-related perfusion pattern expression showed potential for discrimination of patients with 4RTs from patients with other neurodegenerative disorders (AUC 0.850 [95%-CI: 0.790-0.910]; p < 0.001). Discriminatory performance of the 4RT-related tau pattern expression was similar (AUC 0.864 [95%-CI: 0.807-0.921]; p < 0.001), with no statistical difference between the discriminatory power of perfusion and tau pattern expression (p = 0.702). A combined perfusion-tau expression score increased the discriminatory power of the ROC analysis to an AUC of 0.903 (95%-CI: 0.855-0.950; p < 0.001), suggesting a statistically additive value over stand-alone perfusion (p = 0.011) or tau (p = 0.035) pattern expression (Fig. 3A). Transfer of the trained PCs to an independent external validation dataset mirrored the findings of the main cohort, indicating an AUC of 0.917 for the combined perfusion-tau pattern expression (Fig. 3B).
4RT-related perfusion pattern expression correlates with cross-sectional clinical severity
We endeavoured to determine if perfusion alterations in 4RTs could be used as a biomarker of clinical severity. Thus, we correlated 4RT-related perfusion pattern expression with clinical scales cross-sectionally and compared the findings with the associations of 4RT-related tau pattern expression with clinical severity. 4RT-related perfusion pattern expression (i.e. perfusion deficit) was positively associated with PSP rating scale (R = 0.402; p = 0.0012) after controlling for age, sex, and disease duration (Fig. 4A). A negative association was observed between 4RT-related perfusion pattern expression and activities of daily living (R = − 0.415; p = 0.0005; Fig. 4B), whereas no associations were observed between 4RT-related perfusion pattern expression and MoCA test performance (R = − 0.119; p = 0.365; Fig. 4C). There were no significant associations between 4RT-related tau pattern expression and clinical severity (Fig. 4D-F). A validation analysis using only patients fulfilling the MDS PSP criteria mirrored associations between perfusion pattern expression and clinical severity (Supplemental Fig. 2).
Discussion
In this cross-sectional study, we found that early-phase [ 18 F]PI-2620 imaging yielded a valuable surrogate biomarker for perfusion alterations in 4RTs. The observed pattern of hypoperfusion in patients with 4RTs, as compared to healthy controls, matched the known topology of neuronal dysfunction in PSP and CBS. However, our study indicated that only consideration of combined brain regions has potential to facilitate discrimination of 4RT patients from patients with other neurodegenerative disorders who underwent an equal clinical workup, since single-region hypoperfusion was not specific enough in 4RTs. Furthermore, we observed that combining perfusion and tau pattern information may have an additive value for the discrimination of 4RTs from other neurodegenerative disorders compared to each pattern alone. Finally, we observed stronger associations of 4RT-related perfusion pattern expression with clinical severity scales when directly compared to corresponding tau deposition. This implies that perfusion imaging could facilitate an objective read-out of disease progression of neurodegeneration in 4RTs and needs to be tested in longitudinal studies with the goal of validation as an endpoint for clinical trials.
The first goal of this study was to validate [ 18 F]PI-2620 perfusion imaging for detection of regional neuronal dysfunction in 4RTs. Our previous study found a strong correlation of early static SUVr and R1 of [ 18 F]PI-2620 with FDG-PET in a mixed population of neurodegenerative disorders [11]. Thus, we hypothesized that early static SUVr of [ 18 F]PI-2620 facilitates the detection of known neuronal injury patterns of 4RTs against healthy controls. We selected the 0.5-2.5 min SUVr since this methodology can be achieved by a simple dual-phase [ 18 F]PI-2620 protocol, readily providing images for clinical interpretation without highly sophisticated reconstruction and analysis methodology. In line with the known patterns of neuronal injury detected by perfusion imaging or FDG-PET in 4RTs [6, 21-23], we found fronto-temporal and subcortical hypoperfusion with predominance in the thalamus, the caudate nucleus, and the anterior cingulate cortex at the group level of 4RTs against healthy controls. The putamen and the globus pallidus indicated a non-significant hyperperfusion, which was consistent with the regionally elevated time-activity curves in the basal ganglia of patients with PSP within the perfusion phase [1]. This general pattern of perfusion in a mixed cohort of patients with 4RTs likely represents the least common denominator of perfusion alterations regardless of distinct clinical features among subgroups. Future studies should interrogate the associations between varying phenotypes of patients with 4RTs and resulting deviations from this general perfusion pattern.
On the group level, statistical analysis indicated satisfactory sensitivity of [ 18 F]PI-2620 perfusion imaging for detection of 4RTs. We challenged the methodology with a mixed sample of 4RTs and other neurodegenerative diseases and used a threshold-based multiregion classifier. Here, we found only low specificity of [ 18 F]PI-2620 perfusion imaging and very limited PPVs and NPVs for detection of 4RTs (average PPV/NPV < 60%). In line with this, low specificity of perfusion imaging and FDG-PET was consistently reported when different neurodegenerative disorders were evaluated against each other instead of against healthy controls [24]. Our findings support regional similarity of hypoperfusion among diseases with partially similar clinical phenotypes, such as PSP-F and FTD or CBS and asymmetric AD. Thus, our findings were not surprising and emphasized the need for more detailed analyses of neuronal injury patterns [25]. Indeed, several studies successfully investigated data-driven metabolic network-based classification algorithms for discrimination of atypical Parkinsonian syndromes [25-27]. Here, sensitivity, specificity, PPV, and NPV for differential diagnosis of different parkinsonian syndromes were consistently > 80% in an automated image-derived classification procedure [25]. Importantly, one of these studies found that metabolic expression patterns did not differ between patients with PSP and patients with CBS [26], which supports pooling of 4RTs [28]. This was also justified since the majority of patients with CBS in our sample also fulfilled the MDS PSP criteria [9]. Interestingly, in our clinically pre-diagnosed cohort, the perfusion 4RT-related pattern expression showed potential for discrimination of patients with 4RTs from patients with mixed neurodegenerative diseases (AUC: 0.850).
This suggests that consideration of whole-brain patterns facilitates improved discrimination compared to consideration of single regions with the strongest hypoperfusion in 4RTs, indicating the presence of a disease-specific pattern beyond the regions with severe neurodegeneration; our validation cohort substantiated the usefulness of the determined networks. Furthermore, midbrain glucose hypometabolism in FDG-PET and midbrain atrophy in structural MRI are already acknowledged as supportive imaging biomarkers for the diagnosis of PSP [9,29]. In conclusion, perfusion pattern expression shows promise for 4RT discrimination in comparison to the multi-region classifier discussed above. Interestingly, a recent [ 18 F]FP-CIT study similarly indicated that the early phase of a brain PET ligand facilitates quantification of a metabolic network expression surrogate [30].
Subsequently, we tested if the combination of early- and late-phase 4RT pattern expressions of [ 18 F]PI-2620 provides an additive value. Assuming that early-phase [ 18 F]PI-2620 imaging provides the neuronal injury pattern [11] and late-phase [ 18 F]PI-2620 imaging delivers information on tau aggregation [1,31], we hypothesized a complementary gain of information. As a limitation, it needs to be considered that [ 18 F]PI-2620 binding in patients with 4RTs [1,2] has not yet been confirmed by autopsy in patients that underwent PET. Nevertheless, our data suggest an additive value for the combination of pattern expressions in comparison to stand-alone perfusion or tau for the discrimination of 4RTs against other neurodegenerative disorders. Higher sensitivity of perfusion and higher specificity of tau pattern expression fit into the concept of "(N)" and "T" biomarker information, which is by now well established for AD [32]. A strength of this comparison is the head-to-head evaluation of perfusion and assumed tau information in a relevant number of cases with clinically diagnosed 4RTs, according to current diagnostic criteria. As a limitation, it needs to be considered that we used the 20-40-min static SUVr for assessment of tau pattern expression [15], and not the gold-standard kinetic modeling approach. This research aimed to generate data that can be used in a clinical routine setting, which is easier to accommodate with static windows. Therefore, it needs to be considered that the 20-40-min static SUVr can be influenced by altered cerebral blood flow [33].
Additionally, AUC values in this study were assessed in an already clinically diagnosed cohort, with clinical evaluation being the current standard for diagnosis; this limits the individual AUC values to hypothesis-generating data. We therefore primarily compare the additive value of the combined tau and perfusion expression pattern against each pattern on its own; prospective studies in a cohort of patients with suspected neurodegenerative disease will be needed to properly test the AUC values of tau and perfusion pattern expression against clinical diagnosis as the gold standard for identification of 4RT patients. Our assessment of AUC values in a pre-diagnosed cohort strongly supports the hypothesis that pattern expressions are valuable biomarkers in 4RTs, but this needs to be followed up and tested in a prospective study design.
We observed a significant association between 4RT-related perfusion pattern expression and clinical severity in our patient cohort with 4RTs. This association was specifically observed with PSP rating scale scores and activities of daily living (SEADL) but was not apparent for cognitive screening (MoCA). Therefore, our findings indicate that the regional networks involved in 4RT-related perfusion pattern expression have stronger associations with gait, bulbar, limb motor, and ocular motor features than with the cognitive domains captured by MoCA. We note that MoCA was not developed as a dedicated screening tool for 4RTs, which might limit its interpretation. Consistent with patterns of atrophy, we observed congruent decreases of early- and late-phase [ 18 F]PI-2620 PET in regions near the ventricles (i.e. caudate). Thus, it is likely that the detected lower tracer binding in these regions is not only related to hypoperfusion but also to partial volume effects, which were not entirely recoverable by PVEC. Based on our findings, we hypothesize that 4RT-related perfusion pattern expression could be a relevant biomarker for clinical progression in 4RTs, which deserves testing in longitudinal studies. This could be relevant for the monitoring of therapy trials since associations between different tau-related biomarkers and clinical progression in 4RTs were lower or not present in earlier cross-sectional [1,3,34] and longitudinal [35] studies. We note that an a priori 4RT-related expression pattern of the [ 18 F]PI-2620 perfusion phase was not available. Thus, it was necessary to use our large cohort as a training set, with only a small validation set available. Longitudinal studies will aid in deciphering the pathophysiology underlying the association of the detected 4RT-related perfusion pattern with clinical symptoms and symptom progression. The data presented here suggest that not tau deposition but rather the resulting neuronal cell loss, i.e. perfusion, predicts symptom development and progression. Prospective investigations will be needed to understand the interplay of tau pathology, perfusion deficits, and clinical disease presentation in the 4RT disease spectrum. As a limitation, we acknowledge that autopsy confirmation of the clinical diagnosis was only available in few patients. Thus, the analyses of this manuscript rely on clinical diagnosis, supporting biomarkers, and confirmation during clinical follow-up, which implies that a limited number of incorrect diagnoses cannot be excluded, given the nature of an observational study.
Conclusions
Our data indicate that [ 18 F]PI-2620 perfusion imaging is sensitive for detection of regional hypoperfusion in 4RTs. The perfusion pattern expression of 4RTs may provide an additive value to tau pattern expression for the discrimination of 4RTs from other neurodegenerative disorders and correlates closer with clinical severity (i.e. PSP rating scale) and everyday life function/activity (i.e. SEADL) when compared to tau pattern expression alone.
Declarations
Ethical approval All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee (LMU Munich-application number 17-569) and with the 1964 Helsinki Declaration and its later amendments or comparable ethical standards.
Informed consent Informed consent was obtained from all patients.
Conflict of interest Johannes Levin reports speaker fees from Bayer
Vital, Biogen, and Roche; consulting fees from Axon Neuroscience and Biogen; author fees from Thieme medical publishers and W. Kohlhammer GmbH medical publishers; non-financial support from Abbvie; and compensation for duty as part-time CMO from MODAG, all outside the submitted work. Andrew W. Stephens is a full-time employee of Life Molecular Imaging, GmbH. Thilo van Eimeren reports speaker/consultant fees from Eli Lilly, Shire, H. Lundbeck A/S, and Orion Corporation and author fees from Thieme medical publishers, all without conflict of interest with regard to the submitted work.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
BMC Infectious Diseases. Effectiveness of DNA-recombinant Anti-hepatitis B Vaccines in Blood Donors: A Cohort Study
Background: Although various studies have demonstrated efficacy of DNA-recombinant anti-hepatitis B vaccines, their effectiveness in health care settings has not been researched adequately. This gap is particularly visible for blood donors, a group of significant importance in the reduction of transfusion-transmitted hepatitis B.
Background
There is a wide consensus that vaccination against hepatitis B virus (HBV) is the most cost-effective measure to combat the disease [1]. In Brazil, its burden is grossly underestimated because epidemiologic surveillance primarily captures severe cases in need of hospital treatment while missing the long-term consequences of HBV infection, such as cirrhosis and hepatocellular carcinoma. It has been estimated that vaccinating children against HBV in developing countries would prevent the loss of three million lives each year, as well as 60-80% of cases of hepatocellular carcinoma [1-3]. More than 150 countries have already given high priority to vaccination programs against HBV [3], but some of them, including Brazil, have not fully implemented such programs.
Brazil has recently started producing a DNA-recombinant anti-HBV vaccine whose immunogenicity has been confirmed in several studies [4][5][6][7]. However, its effectiveness in reducing HBV incidence in health care settings has not been adequately researched. This gap is particularly visible regarding blood donors -a group of special importance in preventing HBV transmission and therefore explicitly targeted for immunization at any age according to the guidelines of the Brazilian Ministry of Health [8].
Although the residual risk of not detecting HBV by routine serologic screening in the largest blood bank in the state capital Florianopolis was reduced considerably during the 1990s, it remains approximately a hundred times higher than in developed countries [9,10], thus seriously undermining transfusion recipient safety. Another reason to strongly encourage vaccination among blood donors and to verify its effectiveness is recent evidence of a high-risk group using blood bank serologic screening to check their HIV status, owing to the guarantee of an anonymous, free-of-charge, and rapidly delivered test result in this health setting [10,11].
The aim of this study is to evaluate the effectiveness of DNA-recombinant anti-HBV vaccines among blood donors in an endemic area in Brazil. Although there are no reasons to believe that the vaccine efficacy in blood donors should be much different from that in other healthy adults, its effectiveness may not necessarily follow this logic because it is highly susceptible to selection bias within a particular setting. In the blood donor context, this bias may arise because the main protection factor (getting vaccinated) is likely to be influenced by a more general attitude towards health, which also reduces risk behavior for HBV infection. Unlike in a randomized controlled trial of vaccine efficacy, it would be ethically unacceptable to randomly allocate the anti-HBV vaccine to blood donors because its protective effect has already been proven beyond any doubt. It is therefore necessary to make post hoc adjustments for risk factor imbalances between vaccinated and non-vaccinated blood donors in order to evaluate the benefit of the vaccine achieved in a particular health care setting, i.e. the vaccine effectiveness. The results can then be compared to those achieved in randomized controlled trials of vaccine efficacy in healthy adults, which can be considered an upper bound for vaccine effectiveness.
Methods
A retrospective double-cohort study [12] was used to estimate the protective effect of anti-HBV vaccination by comparing the incidence of HBV infection in the vaccinated and the non-vaccinated cohort of repeat blood donors. For the purpose of this study, the case definition of HBV infection was the presence of either of the two serologic markers used for blood donor screening, namely HBsAg (produced by "Hepanostika-Biomerieux") or anti-HBc IgM+IgG (produced by "Ortho Diagnostics"), confirmed by at least one positive test result on subsequent serologic testing with the same markers. The cohorts were matched by sex, age and municipality of residence.
The vaccinated cohort was recruited from an earlier study on seroconversion [7]. The inclusion criteria for both cohorts were having made at least two blood donations during the study period (1998-2002) and being between 18 and 65 years of age. For the vaccinated cohort, an additional requirement was that the blood donors seroconverted after three doses of anti-HBV DNA-recombinant vaccine ("Engerix-B®" - SmithKline Beecham Biologicals; "Euvax-B®" - LG Chemical, Korea; "ButaNG®" - Instituto Butantã, São Paulo, Brazil), with the second and third dose following one and six months after the first dose, respectively. The seroconversion criterion was a titre of at least 10 IU/l of antibody to HBsAg (anti-HBs test produced by "Biomerieux") three months after receiving the last dose of the vaccine.
For each eligible vaccinated donor, non-vaccinated donors of the same sex and within the same age band (18-29, 30-44, 45-65 years) were sought in the same municipality. If no sex-by-age match could be found in the same municipality, an adjacent municipality of residence was searched to obtain this matching. All blood donations took place at the blood bank in Joaçaba, a county capital situated in an area endemic for hepatitis B.
For the vaccinated donors, the individual contribution to the person-time incidence denominator was calculated as the difference between the date of the last blood donation and the date the last dose of anti-HBV vaccine was administered. For the non-vaccinated donors, the individual person-time contribution was the difference between the dates of the last and the first donation during the study period. If HBV infection occurred, the person-time denominator was censored at the midpoint between the last seronegative donation and the positive serologic test result [13].
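The person-time bookkeeping described above can be sketched in Python; the dates below are hypothetical, and a 365.25-day year is assumed:

```python
from datetime import date

def person_years(start: date, end: date) -> float:
    """Elapsed time between two dates, expressed in years."""
    return (end - start).days / 365.25

# Hypothetical vaccinated donor: follow-up runs from the date of the
# last vaccine dose to the date of the last blood donation.
vaccinated_pt = person_years(date(1999, 3, 1), date(2002, 6, 15))

# Hypothetical non-vaccinated donor: follow-up runs from the first
# to the last donation within the study period.
nonvaccinated_pt = person_years(date(1998, 5, 10), date(2001, 11, 20))

def censored_person_years(start: date, last_negative: date,
                          first_positive: date) -> float:
    """If HBV infection occurred, the contribution is censored at the
    midpoint between the last seronegative donation and the first
    positive serologic test result."""
    midpoint_days = (first_positive - last_negative).days / 2
    return person_years(start, last_negative) + midpoint_days / 365.25
```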
The sample size calculation was based on an incidence ratio of 19:1 for the non-vaccinated against the vaccinated cohort, derived from the reported anti-HBV vaccine efficacy of approximately 95% (against the remaining 5% in the same group), the HBV incidence in the non-vaccinated blood donors of 0.67% in the Joaçaba blood bank [14], and type I and type II errors of 5% and 20%, respectively. Assuming these parameters and equal person-time denominators for both cohorts, the Breslow-Day method resulted in 2148 person-years and at least 4 HBV cases between the cohorts [15].
Statistical analysis used Stata [16] and WINPEPI [15], with mid-P method to calculate the 95% confidence intervals (CI) and Pearson's chi-square test for significance of the differences between the blood donors' baseline characteristics.
Results
The total follow-up time of the 1411 study participants was 4472 person-years, with an average follow-up of 3.17 years for all repeat donors, and 2.42 and 3.94 years for non-vaccinated and vaccinated donors, respectively.
Baseline differences between cohorts
The distribution of risk factors for HBV was similar among vaccinated donors included in and excluded from the analysis of vaccine effectiveness, except for larger percentages of donors with frequent donations prior to the study (chi-square of 535 with 3 degrees of freedom and p < 0.01) and of those residing in the county capital (chi-square of 91 with 2 degrees of freedom and p < 0.01) (Table 1).
Among repeat donors included in the analysis of vaccine effectiveness, it is worth noticing a higher percentage of vaccinated (56.2%) than non-vaccinated (37.7%) donors residing in the county capital, as well as a higher number of frequent donors among the former compared to the latter (Table 1). Consequently, the average interval between donations was significantly longer among the non-vaccinated (0.62 years, 95% CI from 0.58 to 0.65 years) compared to the vaccinated donors (0.46 years, 95% CI from 0.43 to 0.50 years).
Anti-HBV vaccine effectiveness
Anti-HBV vaccine effectiveness was calculated as one minus the incidence ratio of vaccinated to non-vaccinated donors. The vaccine was 100% effective, with 95% CI from 30.1% to 100% (Table 2).
The difference in HBV incidence between the cohorts compared was 2.33 (95% CI from 0.05 to 4.62) per thousand person-years. From the perspective of the number needed to treat (NNT), it would be necessary to vaccinate 429 (95% CI from 217 to 21422) blood donors to avoid one HBV infection in this population.
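A minimal sketch of these calculations; the case counts and person-time splits below are illustrative values chosen only to reproduce the reported magnitudes (no cases among the vaccinated, roughly 2.33 cases per 1000 person-years among the non-vaccinated, NNT near 429), not the study's actual data:

```python
# Hypothetical case counts and person-years per cohort
cases_vacc, pt_vacc = 0, 2756.0
cases_nonvacc, pt_nonvacc = 4, 1716.0

rate_vacc = cases_vacc / pt_vacc          # incidence per person-year
rate_nonvacc = cases_nonvacc / pt_nonvacc

# Vaccine effectiveness = 1 - incidence ratio (vaccinated : non-vaccinated)
effectiveness = 1 - rate_vacc / rate_nonvacc

# Rate difference and the number needed to vaccinate to avoid one case
rate_difference = rate_nonvacc - rate_vacc
nnt = 1 / rate_difference
```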
In order to verify possible confounding due to the two HBV risk factors found to differ significantly between the cohorts, a stratified analysis crossing all levels of these two factors was performed. This resulted in 12 strata (4 levels of the number of prior blood donations crossed with 3 levels of the residence area). No statistically significant difference (at the 5% level) in HBV incidence between the vaccinated and the non-vaccinated cohort was found either within any of the 12 strata analysed separately or for the Mantel-Haenszel pooled estimate (mean difference -9.82, 95% CI from -21.19 to 1.54).
Discussion
The results of this study showed excellent effectiveness of DNA-recombinant anti-HBV vaccines in blood donors, similar to the results in the general population [17][18][19][20]. The wide confidence intervals for the vaccine effectiveness are probably due to the very small incidence numerators that are natural for rare events, and to the presence of other factors influencing vaccine effectiveness in a health care setting that are better controlled for, or balanced out, in a clinical trial setting.
The countries that implemented large-scale vaccination against HBV and maintained it for at least a decade reduced HBV incidence more than tenfold during this period [17][18][19][20]. The Brazilian Ministry of Health has recently introduced a similar anti-HBV vaccination program with mandatory vaccination for children and adolescents up to 15 years of age [8]. However, only children in the first year of life have had reasonable vaccine coverage, leaving the vast majority of the population, including blood donors, unprotected against HBV. A catch-up vaccination campaign would be appropriate to accelerate the reduction of HBV in the general population. In addition, Brazilian blood donors should be systematically encouraged to get vaccinated with the DNA-recombinant anti-hepatitis B vaccine. Although this group has been clearly emphasised in the recent vaccination guidelines [8], the logistics necessary to meet this target have not been reinforced nationwide, so it is basically up to a blood donor to seek a primary care unit and request anti-HBV vaccination on an individual basis.
Several limitations of this study should be borne in mind. First, although matching by sex and age has likely removed major confounding factors and the stratified analysis confirmed the main results of the study, it is still possible that residual confounding factors such as motivation to donate blood have not been fully accounted for in the analysis. For example, the motivation to donate blood repeatedly is known to be associated with lower chances of being infected with HBV compared to first-time blood donors. A prospective cohort study would have better means of controlling for this factor. Second, the absence of an HBsAg-positive result in the early phase of HBV infection may be as frequent as 25% [13], leaving a period of several months before anti-HBc becomes detectable by routine serologic testing in blood banks for a considerable proportion of donors. During this period, neither of these two HBV markers can detect the virus, leading to underestimation of the true HBV incidence. Third, the very few cases of HBV seroconversion, as well as the fact that all of them occurred in the non-vaccinated cohort, reduced the statistical power to estimate vaccine effectiveness with precision, thus leading to wide confidence intervals. The small number of events of interest also precluded the use of time-to-event multivariate regression methods, which are capable of more precise statistical adjustment for relevant covariates than the univariate matched-group analysis used in this study. Fourth, the blood donors resided in an area endemic for HBV where the infection rate in the adult population is likely to be high, as indicated by the HBV incidence of 2.33 per thousand person-years observed in the non-vaccinated cohort. Vaccination can reduce an incidence of this order much more than it can in areas of low endemicity, which limits the generalisability of the anti-HBV vaccine effectiveness found in this study to such areas.
Despite the limitations, this is the first study of anti-HBV vaccine effectiveness in blood donors using a cohort study with matched control group to minimize the impact of confounding factors. The same methodology can be applied to other blood banks without interfering with their daily routine. A multi-centre evaluation of anti-HBV vaccine effectiveness can be set up on a regular basis with this study design, providing important information about the reduction of HBV infection in the adult population.
In a wider perspective, it should be borne in mind that the results of this study demonstrate medium-term protective effects of DNA-recombinant anti-HBV vaccines in blood donors as a means of reducing transfusion-transmitted HBV in Brazil. At present, the long-term benefits of widespread anti-HBV vaccination in Brazil cannot be evaluated. As government-sponsored mass vaccination against HBV started approximately a decade ago and has so far achieved reasonable vaccine coverage only for children in the first year of life, the blood donor population in Brazil remains exposed to HBV. This underlines the need for targeted vaccination of blood donors in addition to the vaccination of children and adolescents.
Conclusion
The results showed a very high effectiveness of DNA-recombinant anti-hepatitis B vaccination among blood donors, resulting in the enhancement of blood recipient safety in the State of Santa Catarina. The considerable variation of this estimate is likely due to the limited follow-up and the influence of confounding factors that are normally balanced out in efficacy clinical trials.
Solar Temperature Variations Computed from SORCE SIM Irradiances Observed During 2003 – 2020
NASA's Solar Radiation and Climate Experiment (SORCE) Spectral Irradiance Monitor (SIM) instrument produced about 17 years of daily average Spectral Solar Irradiance (SSI) data for wavelengths 240-2416 nm. We choose a day of minimal solar activity, August 24, 2008 (2008-08-24), during the 2008-2009 minimum between Cycles 23 and 24, and compute the brightness temperature (T_o) from that day's solar spectral irradiance (SSI_o). We consider small variations of T and SSI about these reference values, and derive linear and quadratic analytic approximations by Taylor expansion about the reference-day values. To determine the approximation accuracy, we compare to the exact brightness temperatures T computed from the Planck spectrum, by solving analytically for T, or by equivalent root finding in Wolfram Mathematica. We find that the linear analytic approximation overestimates, while the quadratic underestimates, the exact result. This motivates the search for statistical "fit" models "in between" the two analytic models, with minimum root-mean-square error, RMSE. We make this search using open-source statistical R software, determine coefficients for linear and quadratic fit models, and compare statistical with analytic RMSEs. When only linear analytic and fit models are compared, the fit model is superior at ultraviolet, visible, and near-infrared wavelengths. This again holds true when comparing only quadratic models. Quadratic is superior to linear for both analytic and statistical models, and statistical fits give the smallest RMSEs. Lastly, we use linear analytic and fit models to find an interpolating function in wavelength, useful when the SIM results need adjustment to another choice of wavelengths, to compare or extend to any other instrument. Advantages of the quadratic T over the exact T include ease of interpretation and computational speed.
Introduction
The Sun's temperature and its variations over timescales from hours to decades have been determined since 1978 from satellite measurements of associated variations in Total Solar Irradiance (TSI). Since the deployment of the SORCE satellite in 2003, the Sun's temperature has also been determined for a continuous range of wavelengths that span ultraviolet, visible and near-infrared wavelengths, from solar spectral irradiance (SSI) measurements across the peak of the SSI distribution. These are of great interest due to the fundamental role that solar variations play in understanding the variations of the Earth's climate (Harder et al., 2005;Eddy, 2009). Beyond decadal timescales, the solar irradiance is key to estimating the Sun's luminosity, and on the longest timescales it determines Earth's lifetime, since it determines when the Sun will exhaust its energy from fusion of hydrogen in the core (Bahcall, 2000).
The average radiative temperature of the Earth is determined by an approximate balance between the amount of energy it receives from the Sun, which can be calculated from the TSI and the Earth's albedo, and the amount of energy that Earth emits into space that depends on Earth's emissivity (Stephens et al., 2015). Earth's albedo is the fraction of solar energy reflected back into space, which averages about 30%, the remainder being absorbed by the atmosphere and surface (Wild et al., 2013). To determine how solar variations impact Earth's atmosphere-ocean system at various heights, SSI must be monitored in addition to TSI.
The relationship between Earth's temperature and the variability of solar irradiance was first speculated on by Herschel, and as observations have improved, so has the understanding of solar variability and its contribution to climate change (Gray et al., 2010;Bahcall, 2000). The variability of both TSI and SSI occurs due to variations in magnetic fields on the solar surface, which in turn cause the appearance of sunspots and faculae (Shapiro et al., 2015). Various models attempt to predict these changes over a wide range of timescales.
Prior to SORCE, limited observations of SSI made studying the variability of SSI and solar brightness temperature difficult, but databases spanning several years now enable these calculations, which are important for both heliospheric and Earth sciences (Rottman and Cahalan, 2002). In this article, we present a study of linear and quadratic analytic and statistical approximations of the solar brightness temperature, T, using either a single "reference day" during a solar minimum, or statistical properties over many days in the available record. Estimating values with linear and quadratic approximations, both analytic and statistical, greatly simplifies the calculation of T and the interpretation of its variability. To determine the accuracy of the approximated values, it is necessary to compare them with very nearly exact values of T, calculated from the monochromatic exact analytic equation for T, Equation D.2, derived from the Planck distribution D.1, or equivalently by applying root-finding techniques to Equation D.3, which implicitly determines T from observed values of SSI.
This paper shows that the daily values of T over the SIM wavelengths are well determined from a polynomial that is quadratic in the observed daily values of SSI. The coefficients may be expressed as analytic functions of wavelength, with small RMS errors. Even smaller RMS errors are achieved with coefficients determined from statistical fits of the observed data. Advantages of the quadratic T over the exact T include ease of interpretation, and computational speed. Interpretation of the linear term is the sensitivity of T to changes in SSI, a concept widely used in climate studies. The speed gain will become important as time-dependent models of solar variations are further developed.
The article is structured as follows: Section 2 describes the data analyzed here, provides the online link to download it, and describes the temporal and wavelength range. Section 3 summarizes the methodology used for the analysis of the SSI spectral data, the calculation of exact values of solar brightness temperatures T , as well as linear and quadratic analytic and statistical approximations of T . Section 4 discusses the results of exact computations of T , the time series of observed SSI, and the time-series comparisons of exact and approximate T values. Section 5 concludes by summarizing the key results and suggests future directions for research related to variations in solar irradiance and brightness temperature. Finally, the article contains nine appendices that derive results referenced in Sections 1 -5. Several figures and tables are discussed throughout. Readers may interact with several of the plotted results by going to the following dashboard that was coded in Microsoft Power BI. Dashboard link: http://wayib.org/solar-temperature-variations-relative-toa-quiet-sun-day-in-august-2008/.
Data
The TSI and SSI data were downloaded from the University of Colorado's LASP Interactive Solar Irradiance Datacenter (LISIRD), based on measurements made by instruments onboard the Solar Radiation and Climate Experiment (SORCE) satellite. The data is free and publicly available here: https://lasp.colorado.edu/lisird/data/sorce_sim_ssi_l3/.
The SORCE Total Irradiance Monitor (TIM) instrument provides records of Total Solar Irradiance (TSI), while the Spectral Irradiance Monitor (SIM) instrument provides records of the Solar Spectral Irradiance (SSI). Both instruments provide daily averages, with TIM beginning 2003-02-25 and SIM beginning 2003-04-14, and both ending on 2020-02-25 when the SORCE instruments were passivated (i.e., turned off). We employ throughout the latest "final" data versions, v19 for TSI and v27 for SSI, as discussed in Kopp (2020) and Harder (2020), respectively. All dates in this article are given in the format YYYY-MM-DD, in accord with https://www.iau.org/static/publications/stylemanual1989.pdf.
The SIM measures SSI as a function of wavelength over the range from 240 nm to 2416 nm. Though measurements of SSI were made prior to SORCE, for example by the UARS SOLSTICE (operating during 1991 -2001), SIM was the first to provide SSI for a continuous range of wavelengths across the peak of the solar spectrum that occurs near 500 nm, and well into the near-infrared (IR) wavelengths, with sufficient precision to determine true solar variations (see, e.g., Harder et al., 2009;Lee, Cahalan, and Dong, 2016).
Note that all irradiance data from SORCE, including all TSI and SSI values, are adjusted to the mean Earth-Sun distance of one astronomical unit, 1 AU. Doppler corrections are also made to remove any variations due to the satellite orbit. Absolute and relative calibrations are enabled by a variety of laboratory measurements carried out at both the University of Colorado's LASP (Laboratory for Atmospheric and Space Physics), and at NIST facilities. Onboard instrument degradation is monitored and corrected for. Our focus in this paper is on the day-to-day variability at near-ultraviolet, visible, and near-infrared wavelengths. For this, we rely primarily on the high precision and repeatability of TIM and SIM, more than on the absolute calibration. The high quality of TIM and SIM data has been amply documented in the literature.
Due to operational difficulties encountered, particularly after 2011 as SORCE aged, there are a limited number of days where the records are given as NA (not available) or no values were recorded. These were omitted in all calculations reported here. As an example of the SSI records measured by the SIM instrument, the time series of the solar spectrum from 2003 to 2020 is shown in Figure 4 for a fixed wavelength, 656.20 nm, which corresponds to the hydrogen alpha (Hα) transition in the Balmer series.
For much of the data analysis, open-source R and Python software was used, as well as commercial software including Wolfram Mathematica, and Microsoft Excel. Mathematica enabled precise computation of the brightness temperatures of the SSI data, using efficient interpolation and root-finding methods, and provided a check on exact values computed from the analytic equation for T derived from the Planck distribution for the spectral irradiance, shown in Appendix D.
For more details on the TSI and SSI data used here, see the "release notes" for SORCE TIM v19, and for SORCE SIM v27, available from the NASA Goddard Space Flight Center Earth Sciences Data and Information Services Center, or from the University of Colorado's LASP (Harder, 2020;Kopp, 2020).
Methodology
For the radiation from a blackbody, the irradiance spectrum may be computed theoretically using the Planck distribution. However, the Sun is not a perfect blackbody, due to wavelength-dependent processes in the Sun's atmosphere. Large deviations from the Planck distribution are observed, as we show below. However, it is very useful for interpreting irradiance observations to define a solar "brightness temperature," either for the TSI, integrating all wavelengths, or for the solar spectral irradiance, SSI, at each available wavelength. This is the temperature for which the irradiance computed from a Planck distribution coincides with the irradiance observed by an instrument outside Earth's atmosphere, for example TIM for the wavelength-integrated irradiance, the TSI, or SIM for the wavelength spectrum of irradiance, SSI.
Computation of the brightness temperature from TSI, T eff , is simply a matter of explicitly solving the Stefan-Boltzmann Law for T eff , with a result proportional to the one-quarter power of TSI. Appendices A, B and C discuss the importance of TSI and related quantities. Appendix D displays the Equations D.2 and D.3 that determine the value of the spectral brightness temperature T as an explicit function of the observed SSI for each fixed wavelength. Equations D.1 and D.3 also determine T as an implicit function of the observed SSI, by solving Equation D.3 for T as a function of SSI at each fixed wavelength using a rootfinding procedure. We employ a root-finding algorithm developed in Wolfram Mathematica, using the following initial condition T = 5770 K, where T is chosen near the effective radiative temperature computed using TSI = 1360.8 W/m 2 as provided by the SORCE TIM (Kopp and Lean, 2011). These two approaches produce the same values of T , referred to in this paper as the "exact" values, and each method provides a check on the other. SORCE SIM provides a daily SSI record for each associated wavelength from 240 nm to 2416 nm, so that over the 17 year period there is a large amount of data. To handle the large number of records, algorithms were developed in R, Python and Mathematica, to provide approximate values of T . These approximate alternatives allow more rapidly computed values of T for any date, given a fixed set of wavelengths. In this article, we investigate linear and quadratic analytic approximations as a function of the observed SSI values, derived in Appendix E. Below, it will be shown that these approximations bracket the exact values, which motivates the development of linear and quadratic fit approximations, that minimize the root-mean-square-error (RMSE) across a large range of days, which can include all available days. These fit approximations are developed in Appendix G.
For the development of the linear and quadratic analytic approximations, a Taylor expansion is used (see Appendix E). Having the derivatives of T with respect to the SSI, this expansion gives a representation of T in terms of polynomial functions of SSI. To keep the models simple, only the first and second terms of this expansion are considered.
To apply a Taylor expansion it is necessary to have a reference value around which to expand. For this, we choose the SSI on a single "reference" day during the 2008-2009 solar minimum of Cycle 23. Namely, we choose 2008-08-24, and label that day's exact values (T_o, SSI_o). With the observed value of SSI_o and the associated computed value of T_o during a solar minimum, the linear and quadratic coefficients were calculated for the analytic approximation models. The remainder of this section discusses the time series of SSI and estimated T values. Section 4 then compares the approximate values with the exact values, and also compares the analytic approximations with analogous fit approximations that use coefficients obtained by minimizing RMSE (root-mean-square error) over all days, and also over two selected ranges of days.
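A sketch of the two Taylor models about the reference point; here the derivatives are estimated by central finite differences for brevity, whereas the paper derives them analytically in Appendix E:

```python
import math

h, c, k = 6.62607015e-34, 2.99792458e8, 1.380649e-23
alpha_s = 6.79426e-5   # pi * (R_sun / AU)^2

def planck_ssi(lam, T):
    """Planck spectral irradiance at 1 AU (W m^-2 per metre of wavelength)."""
    return alpha_s * (2 * h * c**2 / lam**5) / math.expm1(h * c / (lam * k * T))

def t_exact(lam, ssi):
    """Exact brightness temperature by inverting the Planck law."""
    B = ssi / alpha_s
    return h * c / (lam * k * math.log1p(2 * h * c**2 / (lam**5 * B)))

def taylor_coeffs(lam, ssi_o):
    """T_o and the first two derivatives of T w.r.t. SSI at the reference
    point, via central finite differences (the paper uses analytic forms)."""
    dI = 1e-4 * ssi_o
    Tm, T0, Tp = (t_exact(lam, ssi_o + s * dI) for s in (-1, 0, 1))
    return T0, (Tp - Tm) / (2 * dI), (Tp - 2 * T0 + Tm) / dI**2

def t_linear(lam, ssi, ssi_o):
    T0, dT, _ = taylor_coeffs(lam, ssi_o)
    return T0 + dT * (ssi - ssi_o)

def t_quadratic(lam, ssi, ssi_o):
    T0, dT, d2T = taylor_coeffs(lam, ssi_o)
    d = ssi - ssi_o
    return T0 + dT * d + 0.5 * d2T * d * d
```

For SSI values above the reference, the tangent (linear) model lands above the exact temperature, consistent with the bracketing behavior reported in the text.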
To compare an estimate with the exact value of the brightness temperature, we compute the difference, ΔT = T_approx − T_exact, and the relative difference, or delta value, δT = ΔT / T_exact.

In addition to the linear and quadratic analytic approximations obtained with the Taylor expansion, linear and quadratic fit models are developed in Appendix G, with the help of R statistical software. The linear and quadratic fit models have coefficients that depend on a given temporal range of available data, and not only on the chosen reference day, as is the case with the analytic approximations. In Section 4 we report results for the full range of available days, as well as for two subranges of "early" and "late" days, R1 and R2, respectively.
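A minimal Python/numpy analogue of such fit models (the paper fits the real SIM record in R; the "daily" temperatures below are synthetic stand-ins):

```python
import math
import numpy as np

h, c, k = 6.62607015e-34, 2.99792458e8, 1.380649e-23
alpha_s = 6.79426e-5

def t_exact(lam, ssi):
    """Exact brightness temperature from SSI (W m^-2 per metre)."""
    B = ssi / alpha_s
    return h * c / (lam * k * math.log1p(2 * h * c**2 / (lam**5 * B)))

# Synthetic stand-in for a multi-day SIM record at one wavelength
lam = 656.2e-9
true_T = np.linspace(5740.0, 5790.0, 200)   # hypothetical daily temperatures
ssi = alpha_s * (2 * h * c**2 / lam**5) / np.expm1(h * c / (lam * k * true_T))
T = np.array([t_exact(lam, s) for s in ssi])

# Center and rescale the predictor (to per-nm units) so the polynomial
# fit is well conditioned -- analogous to expanding about a reference SSI_o
d = ssi * 1e-9 - np.mean(ssi * 1e-9)

lin_fit = np.polyfit(d, T, 1)    # linear fit model
quad_fit = np.polyfit(d, T, 2)   # quadratic fit model

def rmse(coef):
    """Root-mean-square error of a polynomial model over all 'days'."""
    return float(np.sqrt(np.mean((np.polyval(coef, d) - T) ** 2)))
```

As in the paper, the quadratic fit leaves a smaller residual than the linear fit, since it absorbs the curvature of T as a function of SSI.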
The comparison between the brightness temperatures calculated with the analytic and fit approximations is shown in the tables, along with a comparison between the linear coefficients. In order to make the computations explicit, Appendix H gives an example of the calculation of the brightness temperature for the linear and quadratic analytic approximation methods, as well as for the linear and quadratic fit approximation methods, for a randomly selected day.
In Appendix I, a method of rapid interpolation is given for the linear analytic and fit coefficients, valid over a broad range of wavelengths that satisfies 400 nm ≤ λ ≤ 1800 nm.
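A sketch of such a wavelength interpolation; the grid and coefficient values below are hypothetical placeholders, not the Appendix I coefficients:

```python
def interp_coeff(lam, lam_grid, coeff_grid):
    """Piecewise-linear interpolation of a per-wavelength coefficient
    onto a new wavelength; valid only inside the tabulated range."""
    if not lam_grid[0] <= lam <= lam_grid[-1]:
        raise ValueError("wavelength outside interpolation range")
    for i in range(len(lam_grid) - 1):
        l0, l1 = lam_grid[i], lam_grid[i + 1]
        if l0 <= lam <= l1:
            w = (lam - l0) / (l1 - l0)
            return (1 - w) * coeff_grid[i] + w * coeff_grid[i + 1]

# Hypothetical wavelength grid (nm) and linear-coefficient values;
# the actual coefficients would come from the analytic or fit models.
lam_grid = [400.0, 800.0, 1200.0, 1600.0, 1800.0]
coeff_grid = [1500.0, 2400.0, 4100.0, 6300.0, 7600.0]

c_1000 = interp_coeff(1000.0, lam_grid, coeff_grid)  # midway in 800-1200 segment
```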
Results
Before considering the temporal variations of SSI observed by the SORCE TIM and SIM instruments over the 17 years 2003-2020, we first consider the wavelength variations of SSI on our chosen "reference day" 2008-08-24. Figure 1 shows this SSI_o wavelength dependence observed on the reference day, in green, and for comparison the Planck irradiance distributions computed for temperatures T = 4500 K, 5770 K, and 6500 K using Equation D.1, in blue, tan, and red, respectively. The lower and upper Planck temperatures are seen to give computed SSI values that bracket the observations of SSI_o for this wavelength range, while the computed SSI for the intermediate 5770 K (tan) approximately follows the observed SSI_o (green). Although the observed value coincides with the computed 5770 K Planck value at a few wavelengths only, elsewhere the observed values of SSI_o lie above or below the Planck curve. The value of TSI (historically the "solar constant") measured by the Total Irradiance Monitor (TIM) instrument on the reference day is TSI_o = 1360.4704 W/m², associated with an effective radiative temperature of T_o = 5771.2685 K, close to the T = 5770 K used in computing the intermediate tan curve in Figure 1 (Kopp and Lean, 2011).
Figure 1
Solar Spectral Irradiance (SSI) vs. wavelength for reference day 2008-08-24, plotted in green, as measured by the SIM instrument onboard SORCE. For comparison, we also show Planck distributions for 6500 K in red, 5770 K in tan, and 4500 K in blue. The Planck distributions use Equation D.1 for a fixed temperature, with wavelength as the independent variable, transformed to spectral irradiance by multiplying by the factor α_s = π(R_s/AU)² = 6.79426 × 10⁻⁵, with R_s the Sun's mean radius and AU the mean Earth-Sun distance, as in Equation D.3.

Figure 2 is a zoom of Figure 1 for the wavelength range 240 nm to 680 nm. The apparently irregular bumps in this plot, and in Figure 1, are due to well-known Fraunhofer lines in the solar spectrum, smoothed to the SIM instrument's bandpass, which varies from about 1 nm width near wavelength 240 nm, up to almost 30 nm near 1000 nm, then decreases slightly (Harder et al., 2005). The width of a typical atomic Fraunhofer line is of order 1 Å, or 0.1 nm, so the observed bumps are smoothed clusters of several nearby lines. A few of the contributing atomic lines are indicated in the labels on the vertical dashed lines. For example, the green dashed line near 430 nm is labeled CaFeg to indicate that lines of calcium, iron, and oxygen (g-band) are all included within the plotted bump in the green line. For identification of g-band lines (both atomic and molecular) and their variability related to magnetic field strength, see Shelyag et al. (2004).
Effects of ionization thresholds are also seen, such as just above the Ca II H and K lines near 400 nm, which has photon energies near 3.1 eV.
TSI provides key observational data about the Sun and is needed to compute the Sun's luminosity and lifetime (see Appendix B). TSI is not a solar constant, as had been assumed prior to the satellite era. Its value varies due to turbulent magnetic processes on the Sun. TSI variations amount to about 0.1% (1000 ppm) of the mean value over the four solar cycles so far observed by satellite (Cycles 21 through 24), since 1978. The average solar luminosity, and thus the TSI, is determined by nuclear processes in the Sun's core. These change over a much longer timescale than the solar cycle, up to billions of years, as nuclear processes transform hydrogen into helium. The present value of TSI, and thus solar luminosity, provides a good estimate of the Sun's lifetime, and thus the time when the Sun's nuclear fuel will eventually run out. Such calculations are shown in Appendix B, where it is shown that the current best TSI value at solar minimum, 1360.80 ± 0.50 W/m² (Kopp and Lean, 2011), gives the overall lifetime of the Sun as approximately 10.70 billion years.

[Figure 2 caption: Zoom of Figure 1 for the wavelength range 240 nm to 680 nm, showing the smoothed clusters of Fraunhofer lines (e.g., the CaFeg bump near 430 nm) and the ionization-threshold rise near 400 nm, just above the Ca II H and K lines.]
The current estimated age of the Sun, and of our solar system, is about equal to Earth's estimated age of 4.54 billion years (±50 million years). Hence, this leaves about 6.2 billion years, more or less, before the Sun will expand into a Red Giant, leaving a white dwarf star behind.
The importance of TSI in climatic variability has been mentioned, for example in computing Earth's global average effective radiative temperature. Appendix C estimates the effective temperature of the Earth as 255.48 K, using the TSI on the reference day, and Earth's average albedo of 0.29 (Stephens et al., 2015).
TSI is the integral of SSI over all wavelengths, and SSI in turn determines the solar spectral brightness temperature T at each wavelength. Determining T as well as SSI is useful in understanding the physical and chemical processes that take place on the Sun. For example, Figure 3a, a plot of the brightness temperature T_o on the reference day, shows a broad peak above 1600 nm. This is associated with transitions in hydrogen ions H⁻ (1 proton + 2 electrons). Photons with a wavelength λ < 1644 nm are dominated by the H⁻ bound-free transitions, while photons with λ > 1644 nm are absorbed and re-emitted in H⁻ free-free transitions (Wildt, 1939). The H⁻ ion is the major source of optical opacity in the Sun's atmosphere, and thus the main source of visible light for the Sun and similar stars.

Figure 3: Plot 3b is a zoom into the same short-wavelength range as in Figure 2. As in Figure 2, several bumps are labeled with contributing atomic lines, such as the green dashed line near 430 nm, labeled CaFeg (calcium, iron, oxygen g-band). As in Figure 2, the rise due to the ionization threshold is evident near 400 nm, just above the Ca II H and K lines. In both plots, the temperature at each wavelength was computed using a Mathematica root-finding procedure to solve for T in Equation D.3, SSI = α_s B(λ, T), with SSI the observed value.

Now, we consider the temporal variations of SSI. Figure 4 shows the time series of the irradiance corresponding to a fixed wavelength, in particular for Hα (wavelength 656.2 nm), the longest wavelength in hydrogen's Balmer series. The variability of the SSI can be seen, with the deepest minimum occurring early in the record, during Oct-Nov 2003. The spike that goes below 1.523 W/m²/nm is associated with the Halloween solar storms, a series of solar flares and coronal mass ejections that occurred from mid-October to early November 2003.
This occurred during the declining phase of Solar Cycle 23. On the slower year-to-year timescale, the Sun's activity declines into the much quieter period of the solar minimum during 2008 – 2009 (Kopp, 2016). The solar minimum implies about a 0.1% decrease in the solar energy that arrives on Earth, causing the Earth's temperature to decrease slightly (Gray et al., 2010). After this solar minimum, solar activity increases again, as Cycle 24 sunspots and other solar activity increase in intensity into a solar maximum in 2014 – 2015, before declining again into a quieter minimum period of 2019 – 2020.

Figure 4: Time series of irradiance for all records of daily average data from the full 17 years of SIM data, version 27, downloaded from LISIRD. In this case, we have chosen the Hα wavelength, 656.2 nm. In this plot, it is evident that there is a minimum of solar activity in mid-2008. We choose as a reference day 2008-08-24, and consider variations about this day to approximate the temperatures on all other days.
As can be seen in Figure 5, the brightness-temperature time series for Hα is similar to the temporal variability of the SSI for Hα; it is evident that they are in phase. As the SSI record is extended beyond the end of SIM, by TSIS-1 and successor missions, the solar cycles will become more evident, as happened with TSI (Solanki, Krivova, and Haigh, 2013). It is important to emphasize that the spectral brightness temperatures are wavelength-dependent radiative temperatures of the Sun: the temperatures at which the SSI data measured by the satellite coincide with what is obtained using the Planck distribution (Trishchenko, 2005). Figure 6 shows a plot of the linear analytic approximation of the brightness temperature compared with the exact value, i.e., the value obtained from Equation D.2, or the root-finding solution of Equations D.1 and D.3. The linear analytic approximation is given by neglecting the quadratic term in Equation E.18, taking as reference the date during the solar minimum, 2008-08-24. Figure 6 shows that this approximation closely overlays the exact values.
To more clearly see the difference between the exact and the linear analytic approximation, Figure 7a shows the difference, exact − approximation, in units of mK = 10⁻³ K, and Figure 7b the delta, difference/exact (Equation 3.1), in parts per million (ppm). The negative differences in Figures 7a and b show that the linear analytic approximation overestimates the exact value of the brightness temperature. The root-mean-square error (RMSE) is 412.4545 × 10⁻⁶ K, i.e., very small, which explains why such differences are not evident in Figure 6. A significant increase in variability is seen in 2011 and afterwards, hence Figure 7a also displays the RMSE for both the earlier, quieter period and the later, noisier period. Some of this increased noise is due to Solar Cycle 24, but some is likely also due to the aging of the satellite and the SIM instrument.

Figure 5: Time series of the temperature T calculated in Wolfram Mathematica for all records of solar spectral data with fixed wavelength, Hα = 656.20 nm, using Equation D.3. We term the root-finding solution of Equation D.3 the "exact" value of the temperature, to distinguish it from the two analytic approximations (linear and quadratic) described in Equation E.18, with E.10 and E.17, and from the two statistical "fit" approximations (also linear and quadratic) described in Appendix G.

Figure 8 shows the quadratic analytic approximation compared with the exact value from root finding with D.1 and D.3. This looks nearly identical to the analogous Figure 6 for the linear analytic approximation. However, in the plot analogous to Figure 7, we plot in Figure 9 the difference between the exact and the quadratic analytic approximation, and here the results are quite different from the linear case. Figure 9a shows the difference, exact − approximation, in units of µK = 10⁻⁶ K, and Figure 9b the delta, difference/exact, in parts per million (ppm), in the quadratic case.
The positive differences in Figures 9a and b show that the quadratic approximation underestimates the exact value of the brightness temperature, though its values are much closer than the linear, with RMSE reduced to 0.3428 × 10⁻⁶ K, more than 1000× smaller than the linear case in Figure 7; Table 2 shows that the mean error (bias) is also more than 1000× smaller than the linear. Comparing Figures 7 and 9 (and Table 2) shows that the exact value lies between the linear and quadratic analytic approximations, and the opposite signs of the bias suggest there may be a better approximation that lies "in between" the two. Below, we show that the "fit" approximations typically do provide such improvements. The decrease in RMSE from Figure 7a to Figure 9a also shows, in accord with intuition, that the approximation improves as more terms of the Taylor expansion are kept; the improvement removes the most significant figures of the RMSE in Figure 7a, suggesting a rapidly converging series. This indicates that finding an improved "in between" fit approximation will be a challenge, as the quadratic analytic approximation is already excellent. Table 2 supports this last point, comparing the RMSE for the linear and quadratic analytic models with the RMSE for the linear and quadratic fit models at the same Hα wavelength used in Figures 7 and 9. Indeed, though the linear fit model RMSE is about 2.85× smaller than the linear analytic RMSE, the quadratic fit model RMSE is 2.81× smaller again than the already 1000× smaller quadratic analytic RMSE. Hence, at the Hα wavelength, the quadratic fit model is more precise even than the very precise quadratic analytic model. Tables 1, 3 and 4 extend Table 2 to wavelengths 285.5 nm, 855.93 nm, and 1547.09 nm, respectively. As noted for Hα, at these near-ultraviolet and near-infrared wavelengths the linear fit model also has smaller RMSE than the linear analytic.
Also, if we compare the two quadratic models, then again for 285.5 nm, 855.93 nm, and 1547.09 nm the quadratic fit model wins; for 285.5 nm and 855.93 nm it does so by an even larger factor than for Hα, by factors of 10.34 and 7.78, respectively, while for 1547.09 nm the quadratic fit model wins over the quadratic analytic by a factor of 2.00. If we take these four wavelengths as representative, then the quadratic fit model is preferred, and nearly reproduces the exact values, despite the high precision of the quadratic analytic model. Some applications may not require such high precision. If we restrict ourselves to linear models, the fit model is still preferred, though it is a close call at 1547.09 nm, where the linear analytic model RMSE is only 1.05× larger than the linear fit, a 5% difference. At that wavelength the linear analytic may be sufficient, and indeed an analytic approach has some advantages. For example, it may be optimized for a particular range of dates of particular interest, and the single coefficient interpreted as a "linear sensitivity" of temperature to irradiance at this wavelength.
Note that the SIM instrument registers a higher variability of spectral irradiance for shorter wavelengths, i.e., 285.5 nm and 355.93 nm. This occurs because the more energetic photons (according to the Planck-Einstein relationship E = hc/λ) allow for more transition and ionization processes than at near-infrared wavelengths, such as those shown in Figure 10, 855.93 nm and 1547.09 nm.
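The Planck-Einstein relation quoted above is easy to check numerically; the sketch below uses standard SI constants (not values from this article) and confirms the ~3.1 eV photon energy near 400 nm noted in the Figure 2 caption.

```python
# Photon energy from the Planck-Einstein relation E = h*c/lambda.
h = 6.62607015e-34    # Planck constant, J s (exact, SI 2019)
c = 2.99792458e8      # speed of light, m/s (exact)
eV = 1.602176634e-19  # J per electron-volt (exact)

def photon_energy_eV(wavelength_nm):
    """Photon energy in eV for a wavelength given in nm."""
    return h * c / (wavelength_nm * 1e-9) / eV

# Near the Ca II H and K lines (~400 nm) the photon energy is ~3.1 eV.
print(photon_energy_eV(400.0))
```

Shorter wavelengths give proportionally larger photon energies, consistent with the greater number of accessible transition and ionization processes.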
Continuing with the plan of simplifying the calculations of the brightness temperature, which is the central objective of this article, Figure 11 shows plots of the quotients of the linear analytic coefficients for certain wavelengths. From the behavior of the curve of the quotients, a polynomial interpolation was obtained, as discussed in Appendix I. This provides a simple mathematical expression for calculating the linear coefficients for any wavelength in the range from 400 nm to 1800 nm. With this, calculating the brightness temperature becomes simpler and faster than using Equation E.18 with D.3, and remains valid across that wavelength range.

To compare the linear analytic and linear fit models, Figures 12 and 13 show the differences between the coefficients of the linear analytic approximation model (Equations E.10 and E.18 omitting the quadratic term, or G.5) and those of the linear fit model (Equation G.1). Note that the fit coefficient a in Equation G.1 is computed using R software, and depends on the range of days supplied. This can range over the full set of days available from SORCE SIM (17 years of daily data). For comparison we also compute aR1 over the set of days in the first half of the data, which have the smaller RMSE values shown in Figure 7a, as well as aR2 over the late-day range, with larger RMSE. In short, the early and late year ranges are R1 = 2003 – 2010 and R2 = 2011 – 2020. All three ranges (overall, R1, and R2) are shown in Figures 12 and 13. In the figures we can see that the values obtained with Equations G.1 and G.5 (with E.10) do not vary much for wavelengths between 400 nm and 1400 nm; therefore the brightness temperature values calculated in that range of wavelengths also do not differ much between the linear analytic and linear fit models. Note that the aR1 and aR2 values lie on either side of the overall value of a, which in every case lies in between, for each wavelength.
Summary and Conclusions
Our results and conclusions may be summarized as follows:
(i) The linear and quadratic analytic approximation models (Equation E.18, with Equation E.10 for the linear term, E.17 for the quadratic term, and D.3 to compute B from SSI) simplify calculations of solar brightness temperature T on any chosen day for a fixed wavelength, with B or SSI as a single variable.
(ii) The linear analytic approximation overestimates the exact values of T, while the quadratic analytic approximation underestimates the exact values, but has much smaller RMSE (rms error) than the linear.
(iii) By using the full dataset to find coefficients that minimize the RMSE, we find linear and quadratic "fit" approximations that lie closer to the exact values for representative wavelengths, as can be seen from the "fit" RMSE values in Tables 1 to 4 being smaller than the corresponding analytic RMSEs, i.e., (fit RMSE)/(analytic RMSE) < 1 for both linear and quadratic cases, for near-ultraviolet, visible, and near-infrared wavelengths.
(iv) For wavelengths in between the tabulated ones, Equations I.1 and I.2 provide a smooth interpolating polynomial function of wavelength, which is simpler and faster to apply than Equation E.10 in the analytic case, or the R software in the fit case, and accurate for any wavelength within a broad range across the peak of the SSI, extending into near-infrared wavelengths that are of particular importance in modeling Earth's climate.
The statistical measure used to quantify the differences between values calculated by the linear and quadratic analytic approximation models and the exact values of T obtained from Equation D.2 (or root finding in Mathematica software) is the RMSE (root-mean-square error). Table 2 shows that for the Hα wavelength the RMSEs for the linear (412.455 × 10⁻⁶ K) and quadratic (0.3428 × 10⁻⁶ K) analytic approximation models are small, and therefore the deviations between the estimated and exact values are small. Table 1 shows that for a wavelength of 285.5 nm the RMSEs for both analytic models remain small, though larger than for Hα. For both these wavelengths, the quadratic analytic model is superior to the linear analytic model. Tables 3 and 4 show that for the longer near-infrared wavelengths 855.93 nm and 1547.09 nm this pattern continues, with the quadratic analytic model being superior to the linear analytic. The fact that at all four wavelengths the quadratic analytic RMSE is smaller than the linear analytic RMSE suggests that further terms in the Taylor expansion may converge towards the exact over the full wavelength range. However, we do not have proof of convergence. Even if the series does converge, there is only a suggestion, not a guarantee, that it will converge to the exact value given by Equation D.2.
Comparisons of the linear analytic coefficient (Equation E.10 or G.5) with the coefficient of the linear least-squares fit of the data, performed with the statistical packages of R software, are shown in Figures 12 and 13. The linear fit model shows the line that best represents the entire data set, whereas the linear analytic approximation model has its maximum accuracy on the chosen reference day. Gaps in the data, the primary one being that between 2013-07-20 and 2014-03-12 (Harder, Beland, and Snow, 2019), have a direct influence on the coefficient of the linear fit, because the solar spectrum measurement instruments SIM A and SIM B showed significant differences from the spectrum measured at the beginning of 2011, as can be seen for example in Figure 1 of the article of Harder, Beland, and Snow (2019).
Despite the good quality of the two analytic approximations, we find that the two fit models provide better "in between" approximations. The most accurate of the four approximations considered here is the quadratic fit model. We have seen that the brightness temperatures it produces are in most cases indistinguishable from the exact temperatures found as roots of the equation that defines the brightness temperature, SSI = α_s B(T), where B is the Planck distribution and α_s is the solid angle subtended by the Sun at the mean Earth distance.
There will soon be new opportunities to apply and extend this study. Both TIM and SIM instruments are now acquiring daily data onboard the International Space Station. The new record, begun on 2018-03-14, had sufficient overlap with SORCE to enable the prior dataset to be adjusted to match TSIS-1 (https://lasp.colorado.edu/lisird/data/sorce_sim_tav_l3b/). Currently, TSIS-1 extends to 2021-07-20 and continues to be extended. TSIS-1 will be succeeded by TSIS-2, which is expected to continue the record beyond the peak of Solar Cycle 25. We look forward to testing and applying the approximations studied here to future solar-cycle data, to enable improved understanding of the Sun's irradiance and temperature variations.
Appendix A: Total Solar Irradiance and the Sun's Effective Temperature
The Sun is not a blackbody, since the brightness temperature varies significantly with wavelength, as shown in Figure 1. However, we can define an "effective" radiative temperature T_eff, using the blackbody formula, with the Stefan-Boltzmann constant σ = 5.670374 × 10⁻⁸ W/m²/K⁴, as follows:

TSI = α σ T_eff⁴. (A.1)

Here, TSI is the total solar irradiance (historically the "solar constant"), while α is the ratio between the total area of the Sun, with radius R_s = 6.957 × 10⁸ m, and the area of a sphere centered on the Sun with radius equal to one astronomical unit, AU = 149 597 870 700.0 m, so that

α = (R_s/AU)² = 2.16268 × 10⁻⁵. (A.2)

The energy flow emitted by the Sun decreases as it diverges from the Sun's photosphere, decreasing isotropically as 1/distance². TSI and SSI values measured by satellites like SORCE are adjusted to the mean Earth-Sun distance of one AU, thus removing variations due to the satellite orbit. Earth receives a small fraction of the energy emitted by the Sun and recorded by satellites, and that fraction will be considered in the following.
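As a quick numerical check, Equations A.1 and A.2 can be evaluated directly; the sketch below uses the constants quoted above (the printed value α = 2.16268 × 10⁻⁵ appears again in Appendix F).

```python
# Effective solar radiative temperature from TSI via Equation A.1:
# TSI = alpha * sigma * T_eff**4  =>  T_eff = (TSI / (alpha * sigma))**0.25
sigma = 5.670374e-8      # Stefan-Boltzmann constant, W m^-2 K^-4
R_sun = 6.957e8          # solar radius, m
AU = 149_597_870_700.0   # astronomical unit, m

alpha = (R_sun / AU) ** 2  # Equation A.2, ~2.16268e-5
TSI = 1360.80              # W m^-2, solar-minimum value (Kopp and Lean, 2011)

T_eff = (TSI / (alpha * sigma)) ** 0.25
print(alpha, T_eff)  # T_eff comes out near 5772 K
```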
Figure 13: The relative error between the analytic linear coefficient a′ and the fit linear coefficient a, where the fit coefficient a is calculated in the same three ways as in Figure 12, namely using the full available time period 2003 – 2020, or R1 = 2003 – 2010, or R2 = 2011 – 2020.
Appendix B: Solar Luminosity and the Sun's Lifetime
Questions of how the Sun shines, and how old it is, have been objects of interest since ancient times, but it was not until the scientific revolution that there was an opportunity to give definitive answers, first from classical physics, then using ideas from relativity, quantum mechanics, and nuclear physics. With the development of modern theories, the answer became well understood (Bethe, 1939; Bahcall, 2000; Adelberger et al., 2011). The solar luminosity, L, is the total solar power, the total radiative energy emitted from the Sun per second, isotropically in all directions. The best current estimate of L relies on the measurements of TSI, which is the solar power per m² at the mean Earth-Sun distance of one AU. To obtain L from TSI, multiply by the total number of square meters on a sphere with radius equal to the Earth-Sun distance; using the TIM value from Kopp and Lean (2011) gives

L = TSI × 4π × AU² = 1360.8 W/m² × 4π × AU² = 3.82696 × 10²⁶ W. (B.1)

The energy produced by nuclear reactions in the Sun's core is determined using Einstein's E = mc², where m is the mass loss in the primary reaction, which in the Sun is the conversion of four H atoms into one He atom, as explained by Hans Bethe in his classic 1939 paper "Energy Production in Stars," for which he won the Nobel Prize (Bethe, 1939).
Assuming a constant luminosity L equal to the value in Equation B.1, dividing the Sun's available nuclear energy by L gives the overall lifetime of approximately 10.70 billion years quoted in the main text. If the constant value of L is replaced by a linearly increasing L, while the Sun is also assumed to be about halfway through its lifetime, then the above estimate is not significantly altered, since a dimmer younger Sun is compensated by a brighter older Sun.
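The lifetime estimate can be sketched numerically. Note that the 10% core-hydrogen fraction, the 0.7% mass-to-energy fusion efficiency, and the solar mass used below are standard textbook values, not quantities quoted in this article; they reproduce the ~10-billion-year order of magnitude.

```python
import math

# Solar luminosity from TSI (Equation B.1), then a rough lifetime estimate.
TSI = 1360.8                  # W/m^2 (Kopp and Lean, 2011)
AU = 149_597_870_700.0        # m
c = 2.99792458e8              # m/s
M_sun = 1.989e30              # kg (standard value; assumption, not from the article)

L = TSI * 4.0 * math.pi * AU**2          # total solar power, W (Equation B.1)

# H -> He fusion releases ~0.7% of the fused mass as energy, and only
# ~10% of the Sun's hydrogen (the core) is available: textbook assumptions.
E_nuclear = 0.007 * 0.10 * M_sun * c**2  # J
lifetime_yr = E_nuclear / L / 3.1557e7   # seconds -> years (Julian year)
print(f"L = {L:.5e} W, lifetime ~ {lifetime_yr:.2e} yr")
```

The result is of order 10 billion years, consistent with the 10.70-billion-year figure obtained in the article from the best current TSI value.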
Appendix C: Earth's Temperature from TSI
Earth intercepts a small fraction of the solar energy, casting a small shadow on the sphere of area 4π × AU². That absorbed fraction is determined by the product of the TSI, the Earth's cross section (π R_E²), and the Earth's absorptivity, 1 − α, where α is the albedo. The absorbed fraction determines Earth's global mean temperature (North, Cahalan, and Coakley, 1981; Gray et al., 2010). Earth's temperature then determines the total thermal energy that Earth emits back into space. The balance between the absorbed solar energy and the emitted thermal energy determines Earth's effective radiative temperature, T_E. This condition of radiative equilibrium at the top of Earth's atmosphere is expressed as

TSI × (1 − α) × π R_E² = σ T_E⁴ × 4π R_E². (C.1)

Dividing through by Earth's surface area, 4π R_E², gives the global average energy emitted and absorbed in the form

σ T_E⁴ = (TSI/4)(1 − α), (C.2)

where TSI/4 is the global average insolation, and α = Earth albedo = 0.29. (Note, the albedo symbol α used in Equation C.2 is not the α of Equation A.2.) Now, knowing that the energy absorbed and radiated by the Earth are equal in thermal equilibrium, the effective temperature of the Earth can be calculated as

T_E = [TSI (1 − α) / (4σ)]^(1/4). (C.3)

TSI impacts the average and long-term variability of Earth's temperature and, of course, its variations have impacted climate for millions of years (Kopp and Lean, 2011; Solanki, Krivova, and Haigh, 2013). TSI variations can be understood as a combined impact of variations in sunspots and faculae, as well as variations occurring over the entire Sun. Models based on these have been key tools in studies of Earth's climate (e.g., Kopp and Lean, 2011; Foukal and Lean, 1985).
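Equation C.3 evaluates directly to the 255.48 K quoted earlier in the text; the sketch below uses the solar-minimum TSI value quoted in this article, which lands within a few hundredths of a kelvin of that figure.

```python
# Earth's effective radiative temperature from Equation C.3:
# T_E = [TSI * (1 - albedo) / (4 * sigma)]**(1/4)
sigma = 5.670374e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
TSI = 1360.80         # W m^-2 (solar-minimum value quoted in the text)
albedo = 0.29         # Earth's average albedo (Stephens et al., 2015)

T_E = (TSI * (1.0 - albedo) / (4.0 * sigma)) ** 0.25
print(T_E)  # ~255.5 K; Appendix C quotes 255.48 K on the reference day
```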
Appendix D: "Exact" Solar Brightness Temperatures
As already mentioned in previous sections, the Sun is not a pure blackbody. The SSI (solar spectral irradiance) has evident deviations from a pure Planck distribution, due to atomic absorption and ionization processes in the solar atmosphere (see Figures 1 and 2). An especially helpful way to study these deviations is by transforming the SSI at each wavelength λ into a solar brightness temperature T. To do this, at each fixed wavelength, we solve the Planck distribution for T. That is, we solve

B(λ, T) = (k₁/λ⁵) × 1/(e^(k₂/(λT)) − 1), (D.1)

where k₁ = 10²⁰ c₁ = 1.19268 × 10²⁰ W m²/sr and k₂ = 10⁷ c₂ = 1.43877 × 10⁷ K m are constants, and the units of B are W/m²/nm. Solving for T gives the following, which we term the "exact" solar brightness temperature:

T = k₂ / [λ ln(1 + k₁/(λ⁵ B))]. (D.2)

To obtain the solar spectral irradiance SSI from the Planck distribution B requires an integral over the solid angle of the Sun at the Earth's mean orbital distance. This gives

SSI = α_s B(λ, T), (D.3)

where α_s = π × α, so from Equation A.2 we have α_s = 6.79426 × 10⁻⁵. Note the wavelength λ is kept fixed, and for each wavelength there is a corresponding brightness temperature T, determined by the value of temperature for which the satellite's SSI observation coincides with the Planck distribution for that λ and T. Equivalently to Equation D.2, to solve Equation D.3 in Mathematica software, we use the initial condition T = 5770 K and α_s = π (R_s/AU)², and apply the function FindRoot to Equation D.3, which gives the same values of T as the explicit "exact" Equation D.2. The next appendix shows how to approximately calculate the brightness temperatures T for any fixed wavelength, having only the observed SSI values (or equivalently B) as a variable, because all other parameters are defined on a single "reference day" and do not vary from day to day. It is important to remember that the wavelength is fixed, and consequently the parameters T_o, SSI_o, (dT/dSSI)_o, and higher derivatives (evaluated on the reference day) vary with wavelength.
For the SIM data used in this article to produce the plots, the wavelengths range from 240 nm to 2416 nm. For each wavelength in this range, there is a set of parameters that can be used to determine a time series of brightness temperatures T for all other days in the date range [2003-04-14 to 2020-02-26].
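Equation D.2 can be implemented directly. The sketch below uses the constants quoted above and the Appendix H reference value of SSI_o for Hα, and recovers the reference-day brightness temperature to within the rounding of those constants.

```python
import math

# "Exact" solar brightness temperature, Equation D.2:
#   T = k2 / (lam * ln(1 + k1 / (lam**5 * B))),  with  B = SSI / alpha_s  (D.3)
K1 = 1.19268e20       # = 1e20 * c1 (article's value)
K2 = 1.43877e7        # = 1e7 * c2 (article's value)
ALPHA_S = 6.79426e-5  # pi * (R_sun / AU)**2

def brightness_temperature(ssi, lam_nm):
    """Exact brightness temperature (K) for SSI in W/m^2/nm at wavelength lam_nm."""
    b = ssi / ALPHA_S  # invert Equation D.3
    return K2 / (lam_nm * math.log(1.0 + K1 / (lam_nm**5 * b)))

# Reference day 2008-08-24, H-alpha: SSI_o = 1.526558 W/m^2/nm (Appendix H)
print(brightness_temperature(1.526558, 656.20))  # ~5772.4 K
```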
Appendix E: Analytic Approximations for Brightness Temperature
This appendix derives two simple analytic representations of the daily brightness temperatures that take advantage of the fact that, at a given wavelength, the SSI values are very nearly equal from day to day, and typically vary by less than 1%. The analytic approximations express the daily temperature values on any given day, T, at each fixed wavelength, by a Taylor expansion of the exact value of T as an analytic function of SSI, as given in Equation D.2. The expansion is about the value of SSI and T on a given "reference" day (T_o, SSI_o), as follows:

T = T_o + (dT/dSSI)_o (SSI − SSI_o) + (1/2)(d²T/dSSI²)_o (SSI − SSI_o)² + ... . (E.1)

We focus on the "linear approximation" that keeps just the first derivative, and then the "quadratic approximation" that keeps the first two derivatives. Higher-order terms will be neglected, except in the discussion of convergence. Since SSI is directly proportional to B by a constant rescaling, as given in Equation D.3, we may write E.1 as

T = T_o + (dT/dB)_o (B − B_o) + (1/2)(d²T/dB²)_o (B − B_o)² + ... . (E.2)

In order to compute the first and second derivatives via the chain rule, we introduce two new variables, y and z, as follows. Let

y = ln(1 + k₁/(λ⁵ B)), (E.3)

z = e^y. (E.4)

Then,

z − 1 = k₁/(λ⁵ B), (E.5)

B = k₁/[λ⁵ (z − 1)], (E.6)

dz/dB = −k₁/(λ⁵ B²) = −(λ⁵/k₁)(z − 1)². (E.7)

Therefore,

dy/dB = (1/z) dz/dB = −(λ⁵/k₁)(e^y − 1)²/e^y. (E.8)
Equations D.2 and E.3 imply

T = k₂/(λ y). (E.9)
We may compute the derivative of Equation E.9 using the chain rule, employing Equation E.8, to obtain

dT/dB = (dT/dy)(dy/dB) = [−k₂/(λ y²)] × [−(λ⁵/k₁)(e^y − 1)²/e^y] = [k₂ λ⁴/(k₁ y²)] (e^y − 1)²/e^y. (E.10)

To evaluate E.10 on the reference day, we set B = B_o equal to the value on that day, compute y = y_o from Equation E.3, and substitute that into Equation E.10. In order to compute the second derivative, we note that Equation E.10 is already in the form analogous to E.9, namely

dT/dB = T⁽¹⁾(y(z(B))). (E.11)

Therefore, as in computing Equation E.10, we take the derivative of E.11 using the chain rule, E.8 and E.10, to obtain

d²T/dB² = (k₂λ⁴/k₁)[−(2/y³)(e^y − 1)²/e^y + (1/y²) d/dy((e^y − 1)²/e^y)] × [−(λ⁵/k₁)(e^y − 1)²/e^y]. (E.12)

On the right side we applied the product rule to compute dT⁽¹⁾/dy from Equation E.10, giving the two terms in the left square brackets, and used Equation E.8 to substitute into the right square brackets. Evaluating the first term in the left bracket of Equation E.12 allows us to factor out 1/y² from both terms. We also combine the rightmost constant −λ⁵/k₁ with the leftmost constant k₂λ⁴/k₁ to yield the following:

d²T/dB² = −[k₂λ⁹/(k₁² y²)] [−(2/y)(e^y − 1)²/e^y + d/dy((e^y − 1)²/e^y)] (e^y − 1)²/e^y. (E.13)

We apply the product rule to the remaining derivative in the second term in the left-hand brackets, and use Equation E.4, which implies dz/dy = z, to give

d/dy[(z − 1)²/z] = [2(z − 1)/z − (z − 1)²/z²] z = 2(z − 1) − (z − 1)²/z. (E.14)

In the second term within the left square brackets, we distribute the z, then factor out (z − 1)/z, to yield

−(2/y)(z − 1)²/z + 2(z − 1) − (z − 1)²/z = [(z − 1)/z][z + 1 − (2/y)(z − 1)]. (E.15)

Substituting E.15 back into E.13 gives

d²T/dB² = −[k₂λ⁹/(k₁² y²)] [(z − 1)³/z²] [z + 1 − (2/y)(z − 1)], (E.16)

or, writing z = e^y,

d²T/dB² = −[k₂λ⁹/(k₁² y²)] [(e^y − 1)³/e^(2y)] [e^y + 1 − (2/y)(e^y − 1)]. (E.17)

To evaluate Equation E.17 on the reference day, just as for Equation E.10, we set B = B_o, equal to the value observed on that day, substitute that into Equation E.3 to compute y = y_o, and substitute the value of y = y_o into Equation E.17. Substituting these first and second derivatives of T evaluated on the reference day into Equation E.2, and neglecting all higher-order terms, we obtain the quadratic analytic approximation given by

T = T_o + (dT/dB)_o (B − B_o) + (1/2)(d²T/dB²)_o (B − B_o)², (E.18)

where the linear term is computed using E.10, and the quadratic term is computed using E.17. Omitting the quadratic term in E.18 gives the linear analytic approximation.
Appendix F: The Sun's Effective Temperature
Here, we derive a linear approximation, Equation F.4, for the "effective" temperature T_eff associated with the total solar irradiance, TSI. This is a simpler case than for SSI, since for TSI the Stefan-Boltzmann equation makes the exact T_eff a simple analytic function of TSI, given in Equation F.1. As mentioned before, the Sun is not a blackbody, but we can calculate its associated effective temperature by using the Stefan-Boltzmann equation and the TSI (total solar irradiance) measured directly by satellites above the atmosphere, solving Equations A.1 and A.2 to obtain

T_eff = [TSI/(α σ)]^(1/4), (F.1)

where σ = 5.670374 × 10⁻⁸ W/m²/K⁴ is the Stefan-Boltzmann constant, and from Equation A.2 α = 2.16268 × 10⁻⁵. Taking the derivative of F.1 gives an expression for the change in effective temperature with a change in TSI as follows:

dT_eff/dTSI = (1/4)(ασ)^(−1/4) TSI^(−3/4), (F.2)

which, using F.1, may be written as

dT_eff/dTSI = T_eff/(4 TSI). (F.3)

Evaluating F.3 on the reference day gives

dT_eff/dTSI |_o = (T_eff)_o/(4 TSI_o) ≈ 1.0605 K per W m⁻². (F.4)

Since TSI typically varies by about 0.1% or less, Equation F.4 is quite accurate for most days. An extreme case is the "Halloween" event of 2003-10-29, when a large sunspot grouping dropped the temperature by about 3.6670 K below the reference-day T_eff on 2008-08-24. Equation F.4 estimates a 3.6605 K decrease from the reference day, i.e., a 0.0065 K underestimate, which is 0.1773% of the drop, or 0.0001% of (T_eff)_o = 5771.2685 K. Substituting the 2003 "Halloween" values of T_eff and TSI into Equation F.3 gives, instead of F.4, the coefficient 1.06255. This day of minimum T_eff is also the day of maximum coefficient of sensitivity over the full 17-year SORCE SIM record, and is 0.19% larger than the coefficient on the reference day, shown in Equation F.4.
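Equations F.1 and F.3 can be checked numerically; in the sketch below, the reference-day TSI is inferred by inverting F.1 from the (T_eff)_o quoted in the text, since the article does not quote that TSI value directly. The resulting sensitivity coefficient lies between the minimum and maximum coefficients (1.05947 and 1.06255) reported above.

```python
# Sensitivity of the effective temperature to TSI, Equations F.1 and F.3.
sigma = 5.670374e-8  # W m^-2 K^-4
alpha = 2.16268e-5   # Equation A.2

T_eff_o = 5771.2685                  # K, reference day (Appendix F)
TSI_o = alpha * sigma * T_eff_o**4   # invert Equation F.1

coeff = T_eff_o / (4.0 * TSI_o)      # Equation F.3 on the reference day
print(f"TSI_o ~ {TSI_o:.2f} W/m^2, dT_eff/dTSI ~ {coeff:.5f} K per W/m^2")
```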
Conversely, the minimum coefficient over the 17 years occurs on the day of maximum T_eff, 5773.1820 K, which occurred on 2015-02-26. That minimum coefficient is 1.05947, 0.10% less than the reference-day ratio in F.4. The average coefficient over all days is 1.06029. The fact that this average value is 0.02% less than the reference day's implies that the linear approximation in Equation F.4 typically slightly overestimates the changes in T_eff. The same is true for SSI: the linear analytic approximation of brightness temperature T, obtained by dropping the quadratic term in Equation E.18, also has a positive mean error, or bias, as shown for four representative wavelengths in Tables 1 – 4. Those tables also show that inclusion of the quadratic term in E.18 largely removes this positive bias, leaving a very small mean error and small RMSE, as discussed in more detail in the text.
In principle, there are two ways to determine the total solar irradiance (TSI). The first is to use the SORCE TIM instrument to obtain a direct measurement. The second is to use the SSI measured by the SIM instrument and integrate over as wide a range of wavelengths as possible. As expected, there is a shortfall in the value computed by integrating the SIM data compared to what is measured by TIM, mainly due to missing energy above the longest wavelengths measured by SIM, approximately 2400 nm. This TIM-SIM difference is shown in Figure 2 of the article by Harder, Beland, and Snow (2019), and amounts to 146.128 W/m². This must be subtracted from the value measured by TIM, or added to the integrated SIM value, for comparisons to be made between TIM and SIM. In this paper we focus on SSI, though both SSI and TSI must be considered in the study of Earth's climatic variations.
Appendix G: Linear and Quadratic Fit Model
Comparing Figures 7 and 9 shows that the linear analytic approximation, using only the first two terms in E.18, overestimates the exact T , given in D.2, while the quadratic approximation, using all three terms in E.18, though closer to the exact, slightly underestimates. To consider a possible "in between" approximation, this appendix introduces linear and quadratic "fit" models. These statistical "fit" models calculate the brightness temperature as a function of wavelength using R software. In the linear and quadratic analytic approximation models discussed in earlier appendices, estimates of solar brightness temperatures T are made based on the measured solar spectrum of the chosen reference day, and the exact brightness temperatures computed for that day, which occurs during a time of minimum solar activity. By contrast, the fit models we discuss below take into consideration the statistical properties of the full set of daily data over the 17 years of the SORCE mission.
A statistical model that provides a least-squares fit to the solar spectral irradiance (SSI) data, obtained in the R software with linear regression, may be written as

T = a × SSI + b, (G.1)

where a, b are constants for a specific wavelength and T is the brightness temperature. In the same way, R software may compute a least-squares quadratic fit of the form

T = A × SSI² + B × SSI + C. (G.2)

These two fit models express linear and quadratic dependences, respectively, between SSI and T. We obtain, using code developed in open-source R software, simple models that best fit the data, for which the mean square error is minimized.
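The article performs these fits in R; the sketch below is an equivalent illustration in Python, using NumPy's polynomial least squares on synthetic (SSI, T) pairs generated from the exact relation (Equation D.2) at Hα. The fitted linear coefficient lands near the analytic value of ~973 K per W m⁻²(nm)⁻¹ quoted in Appendix H.

```python
import math
import numpy as np

K1, K2, ALPHA_S = 1.19268e20, 1.43877e7, 6.79426e-5  # article's constants
LAM = 656.20  # nm, H-alpha

def exact_T(ssi):
    """Exact brightness temperature, Equation D.2 with B = SSI/alpha_s."""
    b = ssi / ALPHA_S
    return K2 / (LAM * math.log(1.0 + K1 / (LAM**5 * b)))

# Synthetic daily SSI values scattered ~0.1% around the reference 1.526558
rng = np.random.default_rng(0)
ssi = 1.526558 * (1.0 + 1e-3 * rng.standard_normal(500))
T = np.array([exact_T(s) for s in ssi])

a, b = np.polyfit(ssi, T, 1)     # linear fit model, as in Equation G.1
A, B, C = np.polyfit(ssi, T, 2)  # quadratic fit model, as in Equation G.2
print(f"linear coefficient a ~ {a:.1f} K per W/m^2/nm")
```

Unlike the analytic coefficient, which is tied to the reference day, the fitted coefficient depends on the range of data supplied, which is the point made with the R1 and R2 partitions below.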
We rewrite the analytic Equation E.1 (or the equivalent E.2), up to the linear term, as follows:

T = a′ × SSI + b′, (G.3)

which has a form similar to Equation G.1. This will allow us to make a comparison between the values of the constants that appear in Tables 7 and 9. Here

b′ = T_o − a′ × SSI_o, (G.4)

and the linear analytic coefficient, which appears in Table 6, is

a′ = (dT/dSSI)_o = (1/α_s)(dT/dB)_o, (G.5)

with (dT/dB)_o given by Equation E.10. The constants a′ and SSI_o do not vary from day to day, because they are evaluated for the data of the reference day that appears in Table 5. It is important to note that the constant a′ defined in this part is the same constant given in the linear analytic term in Equation E.1. We follow a similar procedure for the quadratic analytic model, rewriting Equation E.1 to obtain

T = A′ × SSI² + B′ × SSI + C′, (G.6)

which is a mathematical expression similar to Equation G.2, where the values of the constants are given as

A′ = (1/2)(d²T/dSSI²)_o,  B′ = (dT/dSSI)_o − (d²T/dSSI²)_o SSI_o,  C′ = T_o − (dT/dSSI)_o SSI_o + (1/2)(d²T/dSSI²)_o SSI_o². (G.7)

The values of constants A′, B′, C′ in the analytic model, and A, B, C in the fit model, are shown in Tables 8 and 10, along with values of T obtained with the quadratic analytic model and the quadratic fit model for certain wavelengths.
As mentioned above, the linear fit is obtained with regression techniques in the R software, using all the available spectral irradiance data for a fixed wavelength; therefore, if the range of data changes, the linear fit changes as well, because the analysis is performed on all the data. Accordingly, the available data (2003 to 2020) were partitioned into an "early period" designated R1 (2003 to 2010) and a "late period" designated R2 (2011 to 2020), to allow a comparison between the linear coefficients shown in Tables 11 and 12.
Appendix H: Example Calculations of Brightness Temperature Using the Analytic and Fit Models
This appendix illustrates the T approximations by considering the example of a randomly chosen day. For this example, results from the linear and quadratic analytic models are compared with the results of applying the linear and quadratic fit models for the randomly chosen day. In Tables 1–4, the RMSE (root-mean-square error) and the ME (mean error, or bias), computed over all the available days in the SIM v27 record, are shown for all four models: linear and quadratic, analytic and fit.
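The RMSE and ME (bias) statistics reported in these tables can be computed in a few lines; this is a generic sketch, not the authors' code:

```python
import numpy as np

def rmse_and_me(estimated, exact):
    """Root-mean-square error and mean error (bias) of a set of estimates."""
    d = np.asarray(estimated, dtype=float) - np.asarray(exact, dtype=float)
    return float(np.sqrt(np.mean(d ** 2))), float(np.mean(d))

# Toy example: three daily estimates against three exact values (K).
rmse, me = rmse_and_me([5773.1, 5773.3, 5773.2], [5773.2, 5773.2, 5773.2])
print(rmse, me)
```

A positive ME indicates a model that systematically overestimates the exact brightness temperature, as the linear analytic approximation does; the RMSE measures the overall spread.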
To better explain how the analytic and fit models work, consider the following example for the wavelength of λ = 656.20 nm (Hα). As mentioned above, the data of one particular day during solar minimum is taken as a reference, in this case 2008-08-24.

Table 12: Values of the linear fit constant a calculated using two ranges of dates, R1 and R2: a_R1 is the linear coefficient calculated with R software using data from 2003 to 2010, while a_R2 is calculated using data from 2011 to 2020. The relative errors obtained when comparing with a are also given.

Using the same B_o but varying the wavelength, and so varying y_o, Table 6 shows the linear analytic model coefficients for the wavelengths of 285.50 nm, 656.20 nm, 855.93 nm and 1547.09 nm. The linear analytic approximation is used to estimate the value of T for some other day, knowing the SSI of that day. If we choose a random day, for example 2011-10-10, the value of SSI for that day (see Table 5) at the Hα wavelength is SSI = 1.527622 W m⁻² nm⁻¹. Using Equation E.18, without the quadratic term, then yields

T = 5772.410671 K + 973.20427 × (1.527622 − 1.526558) K = 5773.44616 K.   (H.2)

This is the linear analytic approximation for the brightness temperature for the "example" date 2011-10-10. It is close to, but slightly larger than, the value computed by the "exact" Equation D.2 (or by root finding in Mathematica software), T = 5773.44598 K (see Table 5). The error of root finding is very small compared to either the analytic or statistical estimates, for the relatively smooth functions involved here, so in this article both the result of using Equation D.2 and the root-finding result are referred to as the "exact" value.
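The linear analytic calculation of H.2 can be reproduced directly from the tabulated reference-day values:

```python
# Reference-day (2008-08-24) values for H-alpha, 656.20 nm, from Table 5.
T_REF = 5772.410671      # K, exact brightness temperature on the reference day
SSI_REF = 1.526558       # W m^-2 nm^-1 on the reference day
A_LIN = 973.20427        # K per (W m^-2 nm^-1), linear analytic coefficient

def t_linear(ssi):
    """Linear analytic approximation (Equation E.18 without the quadratic term)."""
    return T_REF + A_LIN * (ssi - SSI_REF)

# Example day 2011-10-10: SSI = 1.527622 W m^-2 nm^-1 (Table 5).
t = t_linear(1.527622)
print(round(t, 5))   # 5773.44616 K, slightly above the exact 5773.44598 K
```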
If we consider the root-mean-square error (RMSE) in Table 2, the value obtained with the linear approximation, 5773.44616 ± 0.00041 K, agrees very well with the exact value, since the exact value lies within that range. The exact value is obtained by applying Equation D.2. The approximate result of H.2 is obtained when using Equation G.5 with the values of the constants in Table 7.
For the quadratic analytic approximation, we calculate the brightness temperature using Equation E.18 with all three terms; the quadratic coefficients for the various wavelengths are given in Table 6, rounded to three places. The value of the Hα temperature obtained by including the quadratic term in H.2 is therefore 5773.44597715 K, which, when rounded to five places right of the decimal, agrees well with the exact value. If we consider the RMSE, rounded to eight places, the range of the brightness temperature is 5773.44597715 ± 0.00000034 K, which also includes the exact value, and is much closer to the exact than is the linear result.
The above was for a wavelength of 656.20 nm. Table 5 shows the SSI and T values for this same wavelength, as well as for wavelengths of 285.50 nm, 855.93 nm and 1547.09 nm, and also the SSI and exact T values on 2011-10-10. Tables 7 and 8 show estimated results for 2011-10-10, using the analytic Equations G.5 and G.6, respectively.
Finally, Tables 9 and 10 show the parameters of the linear (G.1) and quadratic (G.2) fit models, together with the brightness temperature estimated for 2011-10-10. Comparing these results with the values of the RMSE and ME listed in Tables 1, 2, 3 and 4, the results are shown to be in excellent agreement with the exact values obtained from Equation D.2 or the Mathematica root-finding method.
Appendix I: Temperature Sensitivity Ratios and Rapid Interpolation
This final appendix provides a rapid method of interpolation between the measured wavelengths.
The ratio of a small change of the Sun's effective temperature divided by the associated change of the TSI is 1.06053 K/(W/m²), as given by Equation F.4. The analogous spectral relationship is the linear coefficient a of the linear analytic approximation: the ratio of the change of spectral brightness temperature divided by the associated change in the SSI, the solar spectral irradiance, from the linear term in Equation E.18, using E.10 and D.3. To match the units of this TSI sensitivity ratio, it is appropriate to divide a by the wavelength λ. This allows determination of the wavelength for which the ratio a/λ is closest to the TSI value 1.06053 K/(W/m²). The spectral values are given in Table 13, which shows a minimum value of a/λ = 1.2280, occurring near 486.3 nm, whereas otherwise a/λ > 1.2280 > 1.06053 K/(W/m²) = TSI sensitivity.
For interpolation between measured wavelengths, it is useful to obtain a simple analytic mathematical expression for the brightness temperature sensitivity ratio a/λ. An interpolation function for the ratio a/λ may be expressed as

a/λ = 6.043791 × 10⁻⁶ λ² − 6.076933 × 10⁻³ λ + 2.851032,   (I.1)

where λ is any wavelength that satisfies 400 nm ≤ λ ≤ 1800 nm. With the previous expression one can calculate a for any wavelength λ within this range, and then use the SSI value on any day to compute the associated brightness temperature T, using the linear analytic approximation, which can be written

T = T_o + (a/λ) λ (SSI − SSI_o).   (I.2)

Equation I.2 represents a method of calculating the brightness temperature that is simpler and faster than the linear term in Equation E.18, and valid for any wavelength within a broad range.
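The rapid interpolation can be sketched as follows. The reference values t_ref and ssi_ref at the wavelength of interest are assumed to be available from the reference-day spectrum; note that the interpolated coefficient is an approximation to the tabulated one, not a replacement for it:

```python
def sensitivity_ratio(lam_nm):
    """Interpolated a/lambda in K/(W m^-2), for 400 nm <= lambda <= 1800 nm (Eq. I.1)."""
    return 6.043791e-6 * lam_nm ** 2 - 6.076933e-3 * lam_nm + 2.851032

def brightness_temperature(lam_nm, ssi, t_ref, ssi_ref):
    """Rapid linear estimate: a = (a/lambda) * lambda, then
    T = t_ref + a * (SSI - ssi_ref), as in Equation I.2.
    t_ref and ssi_ref are the reference-day values at this wavelength."""
    a = sensitivity_ratio(lam_nm) * lam_nm
    return t_ref + a * (ssi - ssi_ref)

print(sensitivity_ratio(656.20))   # ~1.466 K/(W m^-2) near H-alpha
```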
|
v3-fos-license
|
2022-10-11T15:07:19.496Z
|
2022-10-06T00:00:00.000
|
252789192
|
{
"extfieldsofstudy": [
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://www.mdpi.com/2673-8244/2/4/26/pdf?version=1666834517",
"pdf_hash": "249958c2671acad9549978956098c1dd6039670f",
"pdf_src": "ScienceParsePlus",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2395",
"s2fieldsofstudy": [
"Engineering",
"Computer Science"
],
"sha1": "14a3300b75dd81411aea3b3b715ce8fe2b259016",
"year": 2022
}
|
pes2o/s2orc
|
Simulation of an Aeronautical Product Assembly Process Driven by a Metrology Aided Virtual Approach
Major aircraft manufacturers are expecting the commercial aircraft market to overcome the pre-COVID levels by 2025, which demands an increase in the production rate. However, aeronautical product assembly processes are still mainly manually performed, with a low level of automation. Moreover, the current industry digitalization trend offers the possibility to develop faster, smarter and more flexible manufacturing processes, aiming at a higher production rate and product customization. Here, the integration of metrology within the manufacturing processes offers the possibility to supply reliable data to constantly adjust the assembly process parameters, aiming at zero-defect, more digital and more highly automated manufacturing processes. In this context, this article introduces virtual metrology as an assistant of the assembly process of the Advanced Rear-End fuselage component. It describes how the assembly process CAD model is used by simulation tools to design, set up and perform the virtual commissioning of the new metrology-driven assembly methods, moving from a dedicated tooling approach to a more flexible and reconfigurable metrology-aided design. Preliminary results show that portable metrology solutions are fit-to-purpose even for hardly accessible geometries and fulfil the current accuracy demands. Moreover, the simulation environment ensures a user-friendly assembly process interaction, providing further set-up time reduction.
Introduction
The aeronautical product assembly process is complex by nature, mainly because it involves intensive manual operation and the use of a large number of assembly fixtures. Traditionally, it is performed by employing dedicated or modular fixed tooling that ensures the accurate relative location of the components and resists any assembly forces. Nowadays, the process is completely manual or only slightly supported by automatic actuators, and the advent of novel composite materials demands new approaches to avoid, or at least reduce, manual intervention. The fragile nature of the composite materials and the modification of the assembly workflow demand more digital, flexible and more highly automated assembly processes, aiming at a zero-defect assembly strategy while improving the assembly rate.
The increasing production plans defined by major aircraft manufacturers demand a reduction in the aircraft lead time (from design to delivery), but the low level of automation within the current assembly workflow is considered the main barrier toward a zero-defect assembly strategy at high assembly rates. This is where the integration of metrology within the assembly process plays a significant role. It makes it possible not only to drive the in-process quality assurance of assembled products but also to supply reliable data to constantly adjust the assembly process towards a zero-defect assembly process [1].
However, due to the close relative tolerances over large work volumes, the production and quality inspection of these components often encounter the limits of manufacturing and production metrology [2]. The trade-off between increasing component dimensions and constant or even decreasing tolerances, as well as the necessity of making measurements in uncontrolled environments, demands new concepts and innovative measurement technologies to be integrated into the manufacturing and assembly processes [2,3]. This is where the Large Scale Metrology (LSM) expertise comes in [2]. Several typical LSM challenges also arise within aeronautical product assembly processes, such as demanding tolerances, the non-negligible effect of gravity, the non-ideal environment affecting the measurement uncertainty, small-batch production and the need for a first-time-right production [2].
The use of metrology tools within the assembly process preparation stage also makes it possible to design, optimize and check in advance the suitability of certain metrology-aided assembly processes [4], similar to what the virtual commissioning concept suggests to speed up the final commissioning of manufacturing tools and processes [5,6].
The integration of metrology within the aeronautical assembly process is not new. The concept of Measurement-Assisted Assembly (MAA) suggests any method of assembly in which measurements are used to guide the assembly processes [7]. It encompasses a range of innovative measurement-assisted processes which enable rapid part-to-part assembly, increased use of flexible automation, traceable quality assurance and control, reduced structure weight and improved levels of precision across the dimensional scales [7]. The state-of-the-art review of tooling for rigid part manufacturing and assembly [7,8] describes the assembly methods and their limitations based on the use of fixed and rigid toolings, and how external metrology solutions based on photogrammetry and laser technologies can be used to solve the lack of flexibility of these assembly methods. Mei et al. present a review of flexible and measurement-assisted assembly technologies in aircraft manufacturing, in which they suggest a digital metrology system as the basis for accurate digital flexible assembly [9]. Muelaner et al. suggest that MAA provides many of the advantages of part-to-part assembly without requiring interchangeable parts [1]. Schmitt et al. introduce an optical automated measuring solution that estimates and compensates for the real deformation of aircraft shells before joining through smart tooling [10]. Zhehan et al. propose an analytical uncertainty evaluation method for resolving the uncertainty assessment of position and orientation in the aircraft components' alignment [11]. Kihlman et al. explain a reconfigurable tooling approach for airframe assembly based on the use of automatic devices fed by measuring data [12]. Hu et al. present a flexible drilling jig system that is adjustable for wing-fuselage connection and adaptable to different part sizes and forms [13]. 
In any case, although automatic assembly and flexible tooling reconfiguration driven by MAA approaches are mentioned, no meaningful bibliographic references have been found regarding the virtual metrology-aided assembly simulation concept (virtual commissioning of MAA) and its application within the aeronautical product assembly process of hardly accessible parts. This is a main novelty and contribution of this research, as previous studies deal with multiple-instrument deployment or multi-station sequential measurement strategy simulations supported by uncertainty assessment exercises, whilst this application requires a single instrument location approach in a harsh line-of-sight set-up with agile measurement procedures. The same approach has not been found in the bibliography, as most of the applications deal with the assembly of external surfaces or accessible airplane structures, whereas this paper presents the assembly of the internal components of an aeronautical product. Hence, the motivation and therefore the novelties of the research are addressing this case study regarding the metrology-aided simulation and feasibility analysis of hardly accessible assembly scenarios. The research is focused on the measurement process simulation concerning the visibility of control points from a single instrument location and the accuracy assessment of moving parts inside the external skin or the main shell of the aeronautical product. The simulation approach follows the ISO 15530-4 method, driven by a Monte Carlo simulation strategy and a digital twin of the measuring scenario and method. Therefore, this article presents a novel metrology-aided assembly method driven by automatic actuators. This is defined as a collaborative assembly tooling approach which aims to guide the assembly operator during the process of moving and positioning the parts in their design location, instead of employing the traditional manual operation.
This approach not only improves the level of automation, but it is also safer and more accurate as the large components to be assembled are always moved in a similar manner. This aspect is not ensured when the manipulation is manually executed, although the final position of every component within the assembly is established by the traditional rigid fixtures.
This article presents the introduction of metrology within the traditional assembly process of the Advanced Rear-End (ARE) fuselage component. The work presents how metrology is brought into the assembly process to improve, optimize or even take corrective measures before the assembly process ends, when correction is more expensive or even impossible. Thus, a digital twin of both the assembly process and the metrology solution is developed to run an a priori Monte Carlo simulation, supporting the development of the novel metrology-driven assembly process of composite high-load frames in a highly integrated composite skin [14,15].
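The ISO 15530-4 / Monte Carlo idea referred to above can be illustrated with a toy virtual measurement: perturb the fiducial points with instrument noise, best-fit a rigid transform, and collect the distribution of the resulting positioning error. The fiducial layout and the noise level below are illustrative assumptions, not the project's values:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical retroreflector (fiducial) locations on a moving frame, in mm.
FIDUCIALS = np.array([[0, 0, 0], [800, 0, 0], [800, 600, 0], [0, 600, 100.0]])

# Assumed per-coordinate measurement noise (1 sigma, mm) -- an illustrative
# figure, not the specification of any particular laser tracker.
SIGMA = 0.015

def best_fit_rigid(p, q):
    """Least-squares rigid transform (Kabsch/SVD) mapping points p onto q."""
    pc, qc = p.mean(axis=0), q.mean(axis=0)
    u, _, vt = np.linalg.svd((p - pc).T @ (q - qc))
    d = np.sign(np.linalg.det(vt.T @ u.T))     # guard against reflections
    r = vt.T @ np.diag([1, 1, d]) @ u.T
    return r, qc - r @ pc

# Monte Carlo loop: repeat the virtual measurement many times.
errors = []
centroid = FIDUCIALS.mean(axis=0)
for _ in range(2000):
    measured = FIDUCIALS + rng.normal(0.0, SIGMA, FIDUCIALS.shape)
    r, t = best_fit_rigid(FIDUCIALS, measured)
    errors.append(np.linalg.norm(r @ centroid + t - centroid))

print(f"mean positioning error: {np.mean(errors):.4f} mm")
```

The spread of `errors` plays the role of the simulated task-specific uncertainty, to be compared against the 0.1 mm positioning threshold discussed later.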
The article describes the assembly process pipeline in detail, to understand the cons and the pros of the suggested novel assembly process and how the external metrology framework collaborates with it to guarantee the quality requirements. The a priori simulation makes it possible to check different aspects, such as the feasibility of the measurement process, the robustness of the measuring procedure, as well as the achievable accuracy regarding part positioning requirements.
Description of the Traditional Assembly Process and the Proposed Novel Concept
Traditionally, the manufacturing and assembly processes of aeronautical structural parts involve stiff fixtures and auxiliary tooling, which ensures a precise part positioning within the emerging structure. The assembly tooling which is used to control the form of the assemblies is typically a heavy steel structure built on a concrete foundation, and it includes the mechanical references (tooling holes) that are considered as fiducial points to ensure that the final part geometry is fit-to-purpose. Thus, assembly tooling accounts for approximately 10% of the total manufacturing cost of an airframe [7,16]. The traditional assembly process starts first by locating the smaller parts (frames, ribs, beams, etc.) in the assembly tooling and then this structure is covered with the skin (several pieces). Exceptionally for this ARE product, as the skin is a single part, the traditional assembly sequence is not feasible. Thus, the skin is fixed to the main fixture and the frames (load and closing) and lateral beams are located and attached to it. Hence, the composite skin attached to the main tooling frame becomes the reference part during the assembly. In any case, this monolithic tooling is very expensive to be manufactured and it has long lead times and reduced flexibility to accommodate any product variation and design changes.
The ARE fuselage component prototype suggests a slightly different assembly process based on an external metrology framework and automatic stages. The assembly approach is driven by a metrology-aided assembly process that feeds the automatic positioning stages to locate the parts that are theoretically assumed to be fixed. The measurement instrument, in this case a laser tracker, knows from an a priori rough alignment where the fiducial points (retroreflectors) attached to the fixed and moving parts of the tooling are located. Those fiducial points are automatically measured by the laser tracker, the relative position between the parts is estimated, and a correction command for the moving stages is created. The correction value is automatically sent to the actuators. These measurements and the automatic displacements are performed iteratively until the theoretical location of the moving parts is reached and ensured under a certain positioning threshold (0.1 mm in XYZ). In summary, the employed design concept is close to a collaborative assembly tooling, as it is based on a smart mechatronic system that aids the operator through the assembly process.
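The iterative measure-and-correct sequence described above can be sketched as a simple closed loop. The target coordinates, the noise level and the stage gain below are illustrative assumptions, not the demonstrator's values:

```python
import numpy as np

rng = np.random.default_rng(7)

TARGET = np.array([1250.0, 340.0, 910.0])   # nominal XYZ of the part, mm (illustrative)
THRESHOLD = 0.1                              # positioning threshold in XYZ, mm

def measure_position(true_pos, sigma=0.02):
    """Simulated laser-tracker reading of the part's fiducial point (mm)."""
    return true_pos + rng.normal(0.0, sigma, 3)

def move_stages(pos, correction, gain=0.9):
    """Simulated XYZ stages applying a (slightly imperfect) correction command."""
    return pos + gain * correction

pos = TARGET + np.array([5.0, -3.0, 2.0])    # initial misplacement
for iteration in range(1, 21):
    measured = measure_position(pos)
    correction = TARGET - measured            # command sent to the actuators
    if np.all(np.abs(correction) < THRESHOLD):
        break                                 # part is within the threshold
    pos = move_stages(pos, correction)

print(iteration, np.abs(TARGET - pos).max())
```

The loop terminates once every axis of the measured correction falls below the 0.1 mm threshold; in practice the residual is dominated by the measurement noise rather than by the stage resolution.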
Aiming to reduce any technical risk related to the new assembly process, the real prototype shall enable both the traditional method and the metrology-driven method.
Not only does it avoid any risky manual intervention that may damage any involved part during the assembly operation, but it also makes a comparison between the traditional and the presented approaches possible. Figure 1 depicts the different methods assessed in this preliminary design stage. It shall be highlighted that the objective of the tooling set is to demonstrate that integrated technologies and approaches are suitable to produce precise parts.
Methods
This chapter describes the general elements involved in the virtual assembly process as well as the interaction among them. Initially, the components that comprise the assembly and how these components are positioned and fixed in a common mechanical structure (assembly tooling) is introduced and accompanied by technical drawings for a better understanding. After that, the assembly sequence is briefly described, focusing on how the main components are assembled. Finally, the virtual measuring strategy and the complete simulation workflow are presented.
Description of the Product
The overall dimensions of the ARE prototype are approximately 3 m × 2.2 m × 1.2 m. In all cases, except for the skin, the local reference system is defined by some specific Tooling Holes (THs) realized as measuring points. The datum plane is defined by 3 points (TH1, TH2 and TH3), the secondary axis is defined by TH1 and TH2, and the coordinate system's origin is defined by TH1. In the case of the composite skin, the reference system shall be defined with the origin (TH1), the main axis (TH1 to TH2) and 6 additional TH lugs (TH3/TH4/TH5/TH6/TH7/TH8) along the longitudinal edges that shall be used to get the best-fit position (see Figure 2). The real demonstrator integrates the following main parts where those TH points are physically considered: -One (1) high-integrated composite skin, including the co-cured omega stringers and co-cured contour frames. The composite skin is depicted in Figure 2.
-Three (3) high-load composite frames (FR70, FR72, FR74). These parts shall be assembled according to the novel assembly method and shall be compared against the traditional method. The composite frames (FRs) are shown in Figure 3.
-One (1) closing additive frame. This part is manufactured by thermoplastic tooling and it is depicted in Figure 4.
-Two (1 + 1) "H" shape composite beams. They are shown in Figure 5.
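The TH-based local reference system described above (datum plane through TH1/TH2/TH3, secondary axis TH1 to TH2, origin at TH1) can be sketched as a plane-line-point alignment; the coordinates used in the example are illustrative, not the demonstrator's values:

```python
import numpy as np

def local_frame(th1, th2, th3):
    """Local coordinate system from three tooling holes: datum plane through
    TH1/TH2/TH3, secondary axis TH1 -> TH2, coordinate origin at TH1."""
    th1, th2, th3 = (np.asarray(p, dtype=float) for p in (th1, th2, th3))
    z = np.cross(th2 - th1, th3 - th1)
    z /= np.linalg.norm(z)                       # datum plane normal
    x = (th2 - th1) - np.dot(th2 - th1, z) * z   # TH1 -> TH2 axis, in the plane
    x /= np.linalg.norm(x)
    y = np.cross(z, x)                           # completes a right-handed frame
    return np.column_stack([x, y, z]), th1       # rotation (world<-local), origin

def to_local(point, rotation, origin):
    """Express a world-coordinate point in the local TH frame."""
    return rotation.T @ (np.asarray(point, dtype=float) - origin)

# Illustrative TH coordinates in mm; the real values come from the CAD model.
r, o = local_frame([0, 0, 0], [1000, 0, 0], [0, 800, 0])
print(to_local([500, 400, 0], r, o))
```

Every part except the skin can be evaluated in such a frame; the skin instead uses the eight-lug best-fit described in the text.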
Description of the Main Tooling
Aiming to realize a metrology-driven assembly process, a novel tooling set is envisaged. The development includes the complete tooling lifetime, from the design stage to the prototype manufacturing and assembly stages. The tooling set is divided into several functionalities and operations where 3 main toolings can be distinguished as explained in Figure 6. Although all of the previously presented tooling set is needed to perform the assembly process, the article focuses on the assembly tooling which plays the role of locating and joining all the components that comprise the ARE fuselage component. It covers not only the mechanical tooling (manipulation sling and drilling templates) but also the interaction, management and control over the integrated control systems (drives, force/torque sensors, displacement sensor, external measuring system, etc.) during the assembly process.
In the following lines, the main parts and functionalities of the assembly tooling are described briefly.
-Main structure (highlighted in green): The skin is located and attached to the main structure (see Figure 7 where sling structure is included). The nominal geometry shall be ensured by the adjustment of fixing lugs (TH points) through chocks, studs and resins. This is the main frame of the assembly tooling which must remain stable during the complete assembly process to maintain the tooling stiffness.
-Main structure (highlighted in green): The skin is located and attached to the main structure (see Figure 7 where sling structure is included). The nominal geometry shall be ensured by the adjustment of fixing lugs (TH points) through chocks, studs and resins. This is the main frame of the assembly tooling which must remain stable during the complete assembly process to maintain the tooling stiffness. -Auxiliary/secondary tooling: The frames (load and closing) and the "H" shape composite beams are located and fixed to the secondary tooling (see Figure 8). Similarly to the main structure, the geometry of such fixtures (6×) will be metrology-driven to meet the nominal geometry within the ARE fuselage assembly. -Mobile fixture: It aims to move and locate the load frames within the auxiliary tooling (see Figure 9). This mobile fixture will be fed and commanded by the external metrology framework as explained later in this paper. Following this, the integrated control systems included within the tooling assembly are performed.
-Process monitoring sensors (positioning, an anticollision system, force and torque sensors): Force and torque sensors shall be integrated into the main tooling structure to -Auxiliary/secondary tooling: The frames (load and closing) and the "H" shape composite beams are located and fixed to the secondary tooling (see Figure 8). Similarly to the main structure, the geometry of such fixtures (6×) will be metrology-driven to meet the nominal geometry within the ARE fuselage assembly. -Auxiliary/secondary tooling: The frames (load and closing) and the "H" shape composite beams are located and fixed to the secondary tooling (see Figure 8). Similarly to the main structure, the geometry of such fixtures (6×) will be metrology-driven to meet the nominal geometry within the ARE fuselage assembly. -Mobile fixture: It aims to move and locate the load frames within the auxiliary tooling (see Figure 9). This mobile fixture will be fed and commanded by the external metrology framework as explained later in this paper. Following this, the integrated control systems included within the tooling assembly are performed.
-Process monitoring sensors (positioning, an anticollision system, force and torque sensors): Force and torque sensors shall be integrated into the main tooling structure to -Mobile fixture: It aims to move and locate the load frames within the auxiliary tooling (see Figure 9). This mobile fixture will be fed and commanded by the external metrology framework as explained later in this paper. -Auxiliary/secondary tooling: The frames (load and closing) and the "H" shape composite beams are located and fixed to the secondary tooling (see Figure 8). Similarly to the main structure, the geometry of such fixtures (6×) will be metrology-driven to meet the nominal geometry within the ARE fuselage assembly. -Mobile fixture: It aims to move and locate the load frames within the auxiliary tooling (see Figure 9). This mobile fixture will be fed and commanded by the external metrology framework as explained later in this paper. Following this, the integrated control systems included within the tooling assembly are performed.
-Process monitoring sensors (positioning, an anticollision system, force and torque sensors): Force and torque sensors shall be integrated into the main tooling structure to Following this, the integrated control systems included within the tooling assembly are performed.
-Process monitoring sensors (positioning, an anti-collision system, force and torque sensors): Force and torque sensors shall be integrated into the main tooling structure to monitor the reaction loads on the TH1 and TH2 lugs. Moreover, the limit positioning sensor switch shall be integrated into the mobile fixture, while the motion origin shall be adjusted during the tooling assembly set-up. In addition, the assembly machine will include a non-desired object detection system to avoid collisions of the mobile parts and components throughout the assembly process.
-Positioning drives and transmission systems (linear and turning stages): Three linear stages (XYZ) and a turning stage (Rx) are suggested to ensure the automatic tooling assembly process of every component, except for the closing frame and the "H" shape beams, which are mounted manually (see Figure 10). The linear stages shall move the mobile fixtures to locate the load frames in their nominal position while being commanded by a metrology-driven approach. A laser tracker instrument is employed to bring every component to its nominal position. In addition to the laser tracker instrument, the end limit travel range of the largest axis (the X longitudinal axis) for each frame position shall be controlled by a Linear Variable Differential Transformer (LVDT) system to guarantee that moving frames do not collide with the fixed supports while reaching their nominal position. Concerning the turning stage (Rx), it will enable the turning of the ARE fuselage to allow performing the operations that require it (mainly drilling and riveting tasks). Thus, an extra angular encoder is introduced within the turning stage to ensure an accurate angular origin (0° position) realization.
Considering the collaborative approach for the tooling, ergonomics shall also be considered within the design of the novel tooling. In this way, the maximum and minimum heights, the tooling accessibility (inner and outer sides) and the tooling operation modes shall be considered to aid the operators during the assembly process. The main dimensions and layout of the tooling cell are depicted in Figure 11.
Description of the Assembly Sequence
This chapter describes the main steps of the ARE fuselage assembly process. The presented sequence is a basic simplification of the real process, which is much more complex, aiming to highlight the virtual assembly simulation process within the overall assembly process. The temporary fixing of elements is performed by Cleco fasteners whilst the more critical joining points are ensured by accurately manufactured pins. The ARE fuselage assembly process is described in the following points:
1. Skin component positioning in the assembly tooling: This is the operation during which the skin component is manually positioned on the main structure of the assembly tooling aided by a crane. Figure 12 represents the assembly operation where the sling releases the skin component on the main structure by locating it through the reference TH points. The fixing operation is performed by the fixing pins.
2. "H" shape composite beams positioning: This is a manual operation during which the "H" beams are positioned on the main structure through the TH1, TH2 and TH3 supports. The temporary fixing is realized by Cleco fasteners and several coordination holes (CHs) are transferred into the skin, where more Cleco fasteners are employed for a correct fixing operation. Figure 13 depicts the "H" composite beams positioning operation on the skin component.
3. Load frame components positioning: This operation realizes the automatic positioning of each load frame (LF) into its supporting auxiliary tooling attached to the main structure. The mobile carriage sequentially brings every LF to its position, wherein it is fixed to a supporting fixture. Initially, frame 74 (FR74) shall be positioned and fixed, then frame FR72 and finally frame FR70. The moving stage shall be metrology-driven by an externally placed laser tracker instrument while the LVDT sensors realize the end-of-limit positioning for each frame. Figure 14 shows the mobile carriage and the assembly of a frame.
The previously mentioned supporting fixture (in green in Figure 15b), which is also manually and sequentially mounted onto the skin, plays an important role within the main tooling, enabling the coupling between the LFs and the skin. Figure 15 represents how the FRs are mounted in the flight direction as well as how they are positioned and fixed through three TH points and Cleco fasteners.
4. Positioning and drilling of the closing frame sub-assembly: This operation realizes the manual assembly and final location of the closing frame assembly. The closing frame is composed of several parts, the so-called central and lateral left-right components (LH-RH), that are joined into the main structure by the integrated fixed supports. These components are located by TH points and fixed by Cleco fasteners. Once the components are mounted, the coordination holes are transferred into the skin component and additional Cleco fasteners are used for a robust fixing. Figure 16 shows the assembly and fixing of the closing frame.
Description of the Simulation Workflow
In the following lines, the virtual measuring approach and the simulation environment are introduced.
Introduction of the Measuring and Simulation Concepts
The metrology-driven approach that guides the assembly operator during the assembly process consists of an external metrology frame and a specific measurement procedure. The proposed technology is a portable coordinate measuring machine (PCMM), the so-called laser tracker (LT) instrument. The 3D accuracy of LT instruments is defined by U (k = 2) = 15 µm + 6 µm/m according to the ISO 10360-10:2016 Standard [17]. This PCMM measures the 3D coordinates of a retroreflector located on the measurand (component or structure). By combining the measurements of at least 3 points (usually, more than 3 points are recommended), the spatial orientation and position of a body (6 degrees of freedom (dof)) can be established with respect to a specific reference (the measuring system, another part or even the world coordinate system). This measurement information shall be used to assist the assembly tasks and for guidance purposes wherein accurate part positioning and assembly are required.
For the ARE fuselage assembly process, several components, such as the composite skin and the load frames, demand an accurate assembly process in the aircraft reference system (global coordinate system). To do that, several nominal coordinates shall be defined from the very beginning within an initial manual referencing process, allowing those nominal points to be used as target points during the assembly process. Thus, the spatial deviation of each target point shall be monitored, guiding the operator during the manipulation of the ARE fuselage components to ensure the final assembly. In this way, the measurement of three or more reflectors indicates the 6 dof of the component under measurement with respect to the active reference system, aiming at the assembly of the skin with the frames. The result of those measurements is converted into commands for the automatic stages that shall bring every component to its final position within the assembly. Thus, a closed-loop assembly process shall be conducted.
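The 6-dof pose recovery from three or more measured reflector coordinates described above is commonly solved as a rigid-body least-squares fit (the Kabsch/SVD algorithm). The sketch below illustrates the idea with hypothetical reflector coordinates; it is not tied to any particular LT vendor API or to the SA© software.

```python
import numpy as np

def rigid_body_fit(nominal, measured):
    """Best-fit rotation R and translation t mapping nominal -> measured
    points in the least-squares sense (Kabsch algorithm)."""
    nominal = np.asarray(nominal, float)
    measured = np.asarray(measured, float)
    cn = nominal.mean(axis=0)                 # centroids
    cm = measured.mean(axis=0)
    H = (nominal - cn).T @ (measured - cm)    # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cm - R @ cn
    return R, t

# Hypothetical reflector nests (metres) and a "measured" set displaced by
# a small known rotation about Z plus a translation:
nominal = np.array([[0, 0, 0], [2, 0, 0], [0, 1.5, 0], [0, 0, 1.0]])
theta = np.deg2rad(0.1)
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1]])
measured = nominal @ Rz.T + np.array([0.010, -0.005, 0.002])

R, t = rigid_body_fit(nominal, measured)
print(np.allclose(R, Rz, atol=1e-9), np.round(t * 1000, 3))  # t in mm
```

With noise-free synthetic data the fit recovers the applied rotation and translation exactly; in the real scenario each measured coordinate additionally carries the LT error model described above.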
Aiming to predict the most suitable measuring approach (fast, simple and accurate enough) within the ARE assembly process scenario, an a priori metrology simulation is suggested. To do that, a digital twin of both the assembly process (components, tooling) and the measurement system is developed within a simulation framework, so the complete measurement sequence is materialized in simulation mode. In this case, the Spatial Analyzer (SA©) inspection software provides several powerful simulation tools, such as running instruments in simulation mode, planning the placement of measurement instruments or checking the visibility of target points within a digital measurement scenario, so that a fit-for-purpose measurement procedure is designed, reducing the metrology-driven set-up process to a great extent.
Design of the Measurement Set-Up
One of the most critical aspects of the metrology-driven solution designing process is to define an appropriate location of the measurement instrument and the target points aiming to (a) avoid occlusions and (b) obtain accurate measurements. The relative position among them describes the measuring scenario set-up and consequently determines the measuring procedure's performance and scope (see Section 3.4.3).
Aiming to design a virtual measurement scenario, establish a suitable measuring plan and define a specific measurement configuration, an iterative process is established as explained here:
1. A new simulation session is opened within the SA© software.
2. A computer-aided design (CAD) model of the ARE assembly is imported. The CAD model includes both the components' geometry and the tooling geometry.
3. The measuring instrument model is connected in simulation mode. It includes a virtual error model of the previously mentioned LT instrument's 3D accuracy. Figure 17 represents the simulation environment, including the ARE product, the tooling and the LT instrument.
4. According to the metrology technician's experience, an initial spatial location of the LT measuring device is defined (initial guess).
5. The number of target points and their distribution are identified, aided by the main tooling CAD. Aiming to obtain a measurement uncertainty that is as low as possible, target points are volumetrically distributed along the XYZ dimensions of the jig structure. At this point, the target points are defined for both the main tooling (fixed points) and the moving stage that brings and locates the load frames.
6. According to the measurement scenario defined within the simulation environment, an initial measurement of the target points is realized manually with the LT instrument. Thus, virtual XYZ coordinate measurements are fabricated within the simulation environment. This initial measurement allows for the updating of the initial LT instrument measurement uncertainty error model defined by the LT manufacturer. The LT error model includes both the systematic and the random error components.
7. At this moment, the line-of-sight from the LT location to the target points shall be verified to determine if every target point is reachable from the LT location. Figure 18 presents the line-of-sight simulation process wherein the target points to be measured are highlighted in red.
8. Once the LT error model is updated according to those initial measurements realized in step 6, an a priori Monte Carlo simulation shall be performed to predict the measurement uncertainty within the real assembly scenario. The Monte Carlo simulation is conducted according to the JCGM 101:2008 technical recommendation [18], which suggests materializing an iterative simulation process of the previously measured points by introducing a statistical error to the LT error model [19]. Hence, it allows an understanding of the influence of error on the defined virtual measurement scenario.
9. Simulation results are obtained and summarized aiming to (a) characterize the measurement uncertainty and (b) assess the target point visibility. If the results are under compliance (uncertainty values below 0.15 mm), the same LT configuration is considered for the rest of the load frames, which means repeating steps 2-9 for every load frame. If the results are not as good as expected, the LT location is changed and all the previous simulation steps are repeated until a suitable LT position is found.
10. Once the LT location and target point distribution are validated, the target point information (position and orientation) allows the integration of the physical retroreflector nests into the main structure and moving fixtures.
11. During the tooling commissioning stage, the real position and orientation of every target point shall be measured and recorded as nominal values for automating the data acquisition process of the following measuring tasks.
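The reachability test of step 7 can be approximated, for illustration, by a segment-versus-axis-aligned-box occlusion check (the slab method). The scene below, including the LT position, the occluding box and the target names MP1/FP3, is a hypothetical stand-in for the CAD-based visibility analysis performed in SA©.

```python
import numpy as np

def segment_hits_box(p0, p1, box_min, box_max):
    """True if the segment p0 -> p1 passes through the axis-aligned box
    (slab method); used here as a crude occlusion test."""
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    d = p1 - p0
    tmin, tmax = 0.0, 1.0
    for i in range(3):
        if abs(d[i]) < 1e-12:                    # segment parallel to slab
            if p0[i] < box_min[i] or p0[i] > box_max[i]:
                return False
            continue
        t1 = (box_min[i] - p0[i]) / d[i]
        t2 = (box_max[i] - p0[i]) / d[i]
        t1, t2 = min(t1, t2), max(t1, t2)
        tmin, tmax = max(tmin, t1), min(tmax, t2)
        if tmin > tmax:                          # slabs do not overlap
            return False
    return True

# Hypothetical scene: LT at the origin, one occluding box (e.g. an
# already-mounted frame) and two target points (metres).
lt = [0.0, 0.0, 0.0]
frame_box = ([2.0, -0.5, -0.5], [2.2, 0.5, 0.5])   # min/max corners
targets = {"MP1": [4.0, 0.0, 0.0],     # behind the frame -> occluded
           "FP3": [4.0, 2.0, 0.0]}     # off to the side -> visible
visible = {name: not segment_hits_box(lt, p, *frame_box)
           for name, p in targets.items()}
print(visible)  # {'MP1': False, 'FP3': True}
```

In the real workflow the occluding geometry comes from the imported CAD model rather than from hand-written boxes, but the pass/fail logic per target point is the same.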
Simulation of the Metrology-Driven Measuring Procedure
Once the a priori simulation is realized and the LT and target points position are validated, a similar simulation process shall be executed to estimate the real-time position and orientation of the mobile load frames within the ARE assembly process. This specific simulation enables estimating the measurement uncertainty for each of the mobile carriage poses (position and orientation) which allows evaluating the fit-for-purpose quality of the metrology-driven solution.
As previously explained, the JCGM 101:2008 guide (Evaluation of measurement data, Supplement 1 to the "Guide to the expression of uncertainty in measurement": Propagation of distributions using a Monte Carlo method) provides practical guidance on the application of Monte Carlo simulation for the estimation of uncertainty in measurement [18]. For the present work, this guide is employed to determine the measurement uncertainty within the simulation environment. Here, the specific approach for the measurement uncertainty estimation is to conduct a rigid body transformation between the simulated (real) points and the nominal reference points according to the Monte Carlo method. In total, 1,000 iterations are performed for a suitable propagation of the error distribution model, so the standard deviation of the rigid body transformation parameters is obtained and included within the expanded measurement uncertainty result. Figure 19 shows the pose measurement uncertainty estimation process.
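A minimal sketch of this JCGM 101:2008 style propagation is given below, assuming the LT specification quoted earlier (U(k = 2) = 15 µm + 6 µm/m) and hypothetical target coordinates. For brevity only the translation parameters are propagated, whereas the procedure described in the text also propagates the rotation parameters of the rigid body transformation.

```python
import numpy as np

rng = np.random.default_rng(0)

def lt_sigma(dist_m):
    """Standard uncertainty per coordinate from the LT specification
    U(k = 2) = 15 um + 6 um/m, i.e. sigma = U / 2 (illustrative model)."""
    return (15e-6 + 6e-6 * dist_m) / 2.0

def fit_translation(nominal, measured):
    # Only the translation of the rigid body fit is propagated here;
    # a full treatment would also fit the rotation parameters.
    return np.mean(measured - nominal, axis=0)

# Hypothetical target points on the mobile carriage, about 5 m from the LT.
nominal = np.array([[5.0, 0.0, 0.0], [5.0, 1.0, 0.0], [5.0, 0.0, 1.0]])
dists = np.linalg.norm(nominal, axis=1)
sig = lt_sigma(dists)[:, None]          # per-point sigma, isotropic here

# JCGM 101 style propagation: 1,000 Monte Carlo draws of the measurement.
draws = np.array([
    fit_translation(nominal, nominal + rng.normal(0.0, sig, nominal.shape))
    for _ in range(1000)
])
U = 2.0 * draws.std(axis=0)             # expanded uncertainty, k = 2
print("U (um):", np.round(U * 1e6, 1))
```

Averaging three points reduces the per-coordinate uncertainty by roughly a factor of the square root of three, which is why volumetrically distributed target point sets help keep the pose uncertainty below the 0.15 mm compliance limit.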
Results
The measurement uncertainty results simulated in Chapter 4 are presented here. Initially, the results obtained according to the location and orientation of the LT instrument and the target points with respect to the ARE assembly layout and the considered coordinate system are shown. Then, the results for the "real-time" assembly process of the load frame components are described. In particular, the results for the FR74 component are shown, as it is the most critical component during the simulation: it is placed in every position of the insertion trajectory, including the fixed locations of FR70 and FR72.
Simulation Results on the Measuring Set-Up Definition Process
As an overall accuracy request, the geometric accuracy of the main tooling must remain below 0.15 mm for the different FR positions, and therefore the metrology-driven solution shall ensure an accuracy 3 to 10 times better (20-65 µm) to meet manufacturing tolerances according to the ISO 14253-1 guide on "decision rules for proving conformity or nonconformity with specifications" [20]. To meet those accuracy requirements, the simulation tool makes it possible to run an iterative process that looks for the best configuration for the LT and the target points.
After conducting multiple simulations, it is concluded that the most suitable LT location is as depicted in Figure 20. The obtained layout enables measuring multiple target points located in the tooling, such as the fixed points (FPs), the mobile points (MPs) and the parking points (PPs). It shall be highlighted that a location of the LT coincident with the turning axis was also considered during the simulation, but the obtained results show that the visibility of several target points is considerably reduced in that case. Concerning the location of the batch of target points, 20 points are volumetrically distributed as a result of the simulation. A total of 17 out of 20 points are attached to the main structure and the remaining 3 points are fixed to the mobile carriage. Figure 20 shows the target point distribution.
Regarding the measurement visibility of those target points, Figure 21 shows the result of the performed line-of-sight analysis. Here, the most critical assembly process step is shown, wherein FR74 and FR72 are already located, which makes it more difficult to ensure the visibility of the MPs for the FR70 position. However, no limitations are foreseen, as depicted in Figure 21.
The achieved expanded measurement uncertainty (U) for the target points is about 50 µm (k = 2) (see Figure 22). Thus, the accuracy request for the tooling metrology-driven solution is met within the simulation environment.
Simulation Results on the Metrology-Driven Assembly Process
A similar simulation framework is established to develop and evaluate the metrology-driven assembly process for the frame composite components. As explained before, FR74 is the most critical component within the assembly process, since it is placed in every position of the insertion trajectory, including the fixed locations of FR70 and FR72. The obtained simulation results are presented in several tables. Table 1 gives the nominal FR poses (position and orientation). Table 2 depicts the obtained simulation results for those FR poses and their measurement uncertainty data. The data consider several FR74 positions during the insertion process through the skin component. Some of these positions coincide with the fixed locations of the FR72 and FR70 components, and therefore these positions are considered points of interest within this evaluation. The measurement uncertainty values are estimated for a 95% confidence level (k = 2). Finally, Table 3 shows the deviations between the nominal and simulated FR poses, assessing the systematic error contribution of the metrology-driven assembly process. The results are below the accuracy requirement (0.15 mm for position), demonstrating that the measuring set-up, as well as the developed metrology-driven solution, is suitable for practical implementation. The results shown in Table 3 demonstrate that the measuring approach meets the assembly demands in terms of accuracy for the different fixing locations of the FR components. The systematic deviations are below 0.017 mm for the translations and below 0.001° for the orientations, while the measurement uncertainty values, translated into rigid-transformation terms, are on average 0.1 mm (for U_Tx, U_Ty, U_Tz) and 0.0035° (for U_Rx, U_Ry, U_Rz).
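The comparison between nominal and simulated poses reported above amounts to differencing six pose parameters and checking the magnitudes against the accuracy requirement. A minimal sketch follows; the pose layout and the numeric values are illustrative assumptions (chosen to sit near the reported systematic errors), not data from the paper's tables.

```python
import math

def pose_deviation(nominal, simulated):
    """Deviation between a nominal and a simulated/measured pose.
    Poses are 6-tuples (Tx, Ty, Tz, Rx, Ry, Rz): translations in mm,
    rotations in degrees (hypothetical layout)."""
    dt = [s - n for n, s in zip(nominal[:3], simulated[:3])]
    dr = [s - n for n, s in zip(nominal[3:], simulated[3:])]
    pos_err = math.sqrt(sum(d * d for d in dt))  # total translation error
    return dt, dr, pos_err

# Illustrative FR pose pair (NOT the paper's data): deviations are kept
# below 0.017 mm / 0.001 deg, the systematic errors reported in Table 3.
nominal = (1200.000, -350.000, 80.000, 0.0, 0.0, 90.0)
simulated = (1200.012, -349.991, 80.007, 0.0006, -0.0004, 90.0008)
dt, dr, pos_err = pose_deviation(nominal, simulated)
print(dt, dr, round(pos_err, 4))
```

The total translation error is then compared against the 0.15 mm position requirement, while each rotation residual is checked separately.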
Conclusions and Future Work
This article introduces the MAA approach within the traditional assembly process of the ARE fuselage component. It shows a practical implementation of a metrology-driven solution for an aeronautical assembly, making the process more accurate, faster, repeatable, automatic and collaborative in terms of aided load handling. The article proposes a simulation environment that enables predicting the performance of a certain measuring set-up within the ARE assembly scenario. The digital twin of the complete measurement scenario, including the tooling, the aeronautical components and the metrology-driven solution, makes it possible to conduct an a priori metrology simulation. Thus, the most appropriate measuring approach is predicted in advance, considering aspects such as the most appropriate location of the LT instrument, the visibility of the target points and the achievable measurement uncertainty for the XYZ coordinates of the target points, as well as the position (XYZ) and orientation (RxRyRz) of the FR components to be assembled. The obtained measurement uncertainty values are compared with the ARE assembly specification, and the results are within specification. In this way, the metrology-driven solution is validated to be fit for purpose.
The future work currently being executed is the physical realization of the ARE prototype. In addition to all the advantages obtained from the simulation work, mainly focused on the prediction of the most appropriate metrology-driven solution, it shall be highlighted that the measurement workflow within the physical ARE prototype shall follow the measurement procedure developed within the simulation environment. In this manner, the same simulation solution can be used and updated for the real performance scenario, which means that a comparison shall be run between the a priori and a posteriori information. However, a few differences are expected within the real implementation. One aspect that may directly affect the performance of the real implementation of this metrology-driven solution is the accurate characterization of the fixed reflectors located in the main structure and the mobile support. This measurement shall be performed once the main tooling components are assembled onto the ARE fuselage. The realization of the target points shall be performed with Reflectors for Fixed Installation (RFI) for the fixed points and with so-called TBR reflectors for the mobile components. Whilst the fixed reflectors are always seen from the same perspective, the mobile reflectors change their viewing angle, which requires specific retroreflectors that accommodate such angle variability. A second possible aspect is the potential deviation of the real scenario from the simulated one, which implies a potential deviation within the retroreflector line-of-sight analysis.
Concerning future work, TEKNIKER will keep studying how to implement the real-time uncertainty assessment tool through the improvement of the simulation framework. At the moment, the measurement uncertainty assessment is estimated offline, although the tracking of the load-frame insertion operation is enabled in real time by the LT instrument. Further optimization aims to also provide this uncertainty evaluation capability in real time, which implies adapting the Monte Carlo-based iterative approach with more efficient computing approaches.
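The Monte Carlo-based uncertainty assessment mentioned above can be sketched for a single laser-tracker target point: perturb the raw spherical measurands (range plus two angles) with Gaussian noise and report k = 2 expanded uncertainties on the Cartesian coordinates. The noise magnitudes and the function interface below are illustrative assumptions, not instrument specifications.

```python
import math, random

def lt_point_uncertainty(r_mm, az_deg, el_deg,
                         sigma_r_mm=0.010, sigma_ang_deg=0.0004,
                         n=20000, seed=1):
    """Monte Carlo propagation of laser-tracker noise (range + two angles)
    to the XYZ coordinates of a target point; returns U(k=2) per axis."""
    rng = random.Random(seed)
    xs, ys, zs = [], [], []
    for _ in range(n):
        r = rng.gauss(r_mm, sigma_r_mm)
        az = math.radians(rng.gauss(az_deg, sigma_ang_deg))
        el = math.radians(rng.gauss(el_deg, sigma_ang_deg))
        xs.append(r * math.cos(el) * math.cos(az))
        ys.append(r * math.cos(el) * math.sin(az))
        zs.append(r * math.sin(el))
    def U(v):  # expanded uncertainty, coverage factor k = 2
        m = sum(v) / len(v)
        return 2.0 * math.sqrt(sum((x - m) ** 2 for x in v) / (len(v) - 1))
    return U(xs), U(ys), U(zs)

# Example: target at 5 m range; angular noise dominates at this distance.
Ux, Uy, Uz = lt_point_uncertainty(5000.0, 30.0, 10.0)
print(f"U(k=2): {Ux*1000:.0f} / {Uy*1000:.0f} / {Uz*1000:.0f} µm")
```

A real-time variant would replace the loop with vectorized or incremental sampling, which is exactly the computational bottleneck the text identifies.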
Author Contributions: U.M. contributed to the state-of-the art and definition of the introduction; G.K. contributed to the writing, proposal and definition of the novel assembly approach; J.E. contributed to the virtual implementation and simulation of the metrology aided assembly process; J.M. contributed to the definition and concurrent engineering of the tooling design and assembly sequence. All authors contributed to the editing and revision of the manuscript. All authors have read and agreed to the published version of the manuscript.
Data Availability Statement:
The submitted work is original and has not been published elsewhere before.
Yield of tumor samples with a large guide-sheath in endobronchial ultrasound transbronchial biopsy for non-small cell lung cancer: A prospective study
Background Adequate tumor tissue is required to make the best treatment choice for non-small cell lung cancer (NSCLC). Transbronchial biopsy (TBB) by endobronchial ultrasonography with a guide sheath (EBUS-GS) is useful to diagnose peripheral lung lesions. Data on the tumor cell numbers obtained with two different sizes of GS are limited. We conducted this study to investigate the utility of a large GS kit for obtaining many tumor cells in patients with NSCLC. Methods Patients with a peripheral lung lesion suspected of being NSCLC were prospectively enrolled. They underwent TBB with a 5.9-mm diameter bronchoscope with a large GS. When the lesion was invisible on EBUS, we changed to a thinner bronchoscope and performed TBB with a small GS. We compared the tumor cell numbers obtained prospectively with a large GS (prospective large GS group) with those previously obtained with a small GS (small GS cohort). The primary endpoint was the tumor cell number per sample, and we assessed the characteristics of lesions that could be sampled by TBB with the large GS. Results Biopsy with the large GS was performed in 55 of 87 patients (63.2%), and 37 were diagnosed with NSCLC based on histological samples. The number of tumor cells per sample did not differ between the two groups (658±553 vs. 532±526; estimated difference between the two groups with 95% confidence interval (CI): 125 (-125 to 376), p = 0.32). The sample size of the large GS group was significantly larger than that of the small GS cohort (1.75 mm² vs. 0.83 mm²; estimated difference with 95% CI: 0.92 (0.60–1.23) mm², p = 0.00000019). Lesion involvement of a third- or lower-generation bronchus was a predictive factor for successful biopsy with the large GS. Conclusions The sample size obtained with the large GS was significantly larger than that obtained with the small GS, but there was no significant difference in tumor cell number. The 5.9-mm diameter bronchoscope with the large GS can be used for lesions involving a third- or lower-generation bronchus.
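As a rough illustration of how an interval estimate for a difference in means can be derived from summary statistics like those in the abstract, the sketch below uses the Welch standard error with a normal approximation. The group sizes are not reported in the abstract, so n1 and n2 here are assumptions, and the result will therefore not exactly reproduce the published CI.

```python
import math

def diff_mean_ci(m1, s1, n1, m2, s2, n2, z=1.96):
    """~95% CI for a difference in means from summary statistics,
    using the Welch standard error with a normal approximation."""
    diff = m1 - m2
    se = math.sqrt(s1 ** 2 / n1 + s2 ** 2 / n2)
    return diff, (diff - z * se, diff + z * se)

# Means/SDs from the abstract (658±553 vs. 532±526 tumor cells per sample).
# n1 = 37 is the number of histologically diagnosed NSCLC patients in the
# large-GS group; n2 = 40 for the small-GS cohort is a pure assumption.
diff, (lo, hi) = diff_mean_ci(658, 553, 37, 532, 526, 40)
print(f"difference = {diff}, 95% CI ({lo:.0f}, {hi:.0f})")
```

With these assumed sample sizes the interval straddles zero, consistent with the reported non-significant result (p = 0.32).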
Introduction
- Background (Item 2): Scientific background and explanation of rationale; theories used in designing behavioral interventions.

Methods
- Participants (Item 3): Eligibility criteria for participants, including criteria at different levels in the recruitment/sampling plan (e.g., cities, clinics, subjects); method of recruitment (e.g., referral, self-selection), including the sampling method if a systematic sampling plan was implemented; recruitment setting; settings and locations where the data were collected.
- Interventions (Item 4): Details of the interventions intended for each study condition and how and when they were actually administered, specifically including: unit of assignment (the unit being assigned to study condition, e.g., individual, group, community); method used to assign units to study conditions, including details of any restriction (e.g., blocking, stratification, minimization); inclusion of aspects employed to help minimize potential bias induced due to non-randomization (e.g., matching); whether or not participants, those administering the interventions, and those assessing the outcomes were blinded to study condition assignment; if so, statement regarding how the blinding was accomplished and how it was assessed.
- Unit of analysis (Item 10): Description of the smallest unit that is being analyzed to assess intervention effects (e.g., individual, group, or community); if the unit of analysis differs from the unit of assignment, the analytical method used to account for this (e.g., adjusting the standard error estimates by the design effect or using multilevel analysis).
- Statistical methods (Item 11): Statistical methods used to compare study groups for primary outcome(s), including complex methods for correlated data; statistical methods used for additional analyses, such as subgroup analyses and adjusted analysis; methods for imputing missing data, if used; statistical software or programs used.

Results
- Participant flow (Item 12): Flow of participants through each stage of the study: enrollment, assignment, allocation and intervention exposure, follow-up, analysis (a diagram is strongly recommended). Enrollment: the numbers of participants screened for eligibility, found to be eligible or not eligible, declined to be enrolled, and enrolled in the study. Assignment: the numbers of participants assigned to a study condition. Allocation and intervention exposure: the number of participants assigned to each study condition and the number of participants who received each intervention. Follow-up: the number of participants who completed the follow-up or did not complete the follow-up (i.e., lost to follow-up), by study condition. Analysis: the number of participants included in or excluded from the main analysis, by study condition. Description of protocol deviations from the study as planned, along with reasons.
- Recruitment (Item 13): Dates defining the periods of recruitment and follow-up.
- Baseline data (Item 14): Baseline demographic and clinical characteristics of participants in each study condition; baseline characteristics for each study condition relevant to specific disease prevention research; baseline comparisons of those lost to follow-up and those retained, overall and by study condition; comparison between the study population at baseline and the target population of interest.
- Baseline equivalence (Item 15): Data on study group equivalence at baseline and statistical methods used to control for baseline differences.
- Numbers analyzed (Item 16): Number of participants (denominator) included in each analysis for each study condition, particularly when the denominators change for different outcomes; statement of the results in absolute numbers when feasible; indication of whether the analysis strategy was "intention to treat" or, if not, description of how non-compliers were treated in the analyses.
- Outcomes and estimation (Item 17): For each primary and secondary outcome, a summary of results for each study condition, the estimated effect size, and a confidence interval to indicate the precision; inclusion of null and negative findings; inclusion of results from testing pre-specified causal pathways through which the intervention was intended to operate, if any.
- Ancillary analyses (Item 18): Summary of other analyses performed, including subgroup or restricted analyses, indicating which are pre-specified or exploratory.
- Adverse events (Item 19): Summary of all important adverse events or unintended effects in each study condition (including summary measures, effect size estimates, and confidence intervals).

Discussion
- Interpretation (Item 20): Interpretation of the results, taking into account study hypotheses, sources of potential bias, imprecision of measures, multiplicative analyses, and other limitations or weaknesses of the study; discussion of results taking into account the mechanism by which the intervention was intended to work (causal pathways) or alternative mechanisms or explanations; discussion of the success of and barriers to implementing the intervention, and fidelity of implementation; discussion of research, programmatic, or policy implications.
- Generalizability (Item 21): Generalizability (external validity) of the trial findings, taking into account the study population, the characteristics of the intervention, length of follow-up, incentives, compliance rates, specific sites/settings involved in the study, and other contextual issues.
- Overall evidence (Item 22): General interpretation of the results in the context of current evidence and current theory.
Innovative methods for the removal, and occasionally care, of pressure sensitive adhesive tapes from contemporary drawings
Aged pressure sensitive tapes (PSTs) can compromise the integrity and readability of drawings and paper artworks. Typically, PSTs on contemporary artifacts are difficult to remove owing to degradation processes and to the intrinsic sensitivity of paper, inks and dyes to the solvents and tools used in traditional conservation practice. As an alternative, we provide here a critical overview and expansion on the use of two recently developed methodologies for the removal of PSTs, based on the confinement of cleaning fluids in retentive gels. Various combinations of PST backings and adhesives were addressed on paper mock-ups containing different types of artistic media (inks, dyes), monitoring the ability of a hydrogel and an organogel to gradually exchange, respectively, an oil-in-water microemulsion or diethyl carbonate through the PST backings, swelling the adhesive layers and enabling safe PST removal. It was shown that the two methodologies are complementary, as they target the removal of tapes with different components. In all cases, selective tape removal was carried out without uncontrolled bleeding of inks or transport of dissolved matter through the paper matrix, thanks to the retentiveness of the gels. The two cleaning systems were then assessed on four completely different artworks on paper, where they proved to be versatile tools to remove aged PSTs, or to re-adhere detackified tapes that were part of the original artwork. Overall, the two methodologies complement each other and allowed overcoming the limitations of traditional cleaning approaches.
Introduction
Historical evidence tells us that, after becoming commercially available in the late 1920s, Pressure Sensitive Tapes (PSTs) became a common tool used in museums, archives and libraries for the repair, identification and protection of cultural heritage collections [1]. PSTs have a multi-layered structure composed of a pressure sensitive adhesive and a backing. Minor components include a release coat, ensuring easy unrolling of the tape, and a primer that enhances adhesion between the backing and the adhesive layer. Backing materials may include paper, fabric, cellophane, cellulose acetate, and oriented polypropylene, while adhesives include natural and synthetic rubbers, acrylic copolymers, and silicones [2][3][4]. The most detrimental and frequently encountered PSTs on paper artworks are masking tape and cellophane [2], both of which have natural and/or synthetic rubber-based adhesives that over time oxidize, change in consistency and color (PSTs start to yellow), and can also become oily and penetrate the cellulose substrate. In the final degradation stages, adhesives usually turn dark brown and can crystallize, becoming hard and brittle and losing their adhesive power; besides, components such as tackifying resins and plasticizers can migrate into the underlying paper fibers, making them translucent. Occasionally these materials can affect contemporary inks such as ballpoint pen, felt-tip pen and printing inks, causing them to bleed, migrate and darken, thus modifying the readability and interpretation of the artworks. A new group of acrylic-based adhesives, which do not discolor appreciably, came on the market in the 1980s; however, they are subject to cold flow and penetrate into paper. According to 3M literature, acrylic adhesives are coated onto the backing and pre-crosslinked [2].
This causes the adhesives to be poorly soluble in most of the solvents typically used in paper conservation (listed in Sect. "Conservation treatments"), as we observed in preliminary solubility tests; only a few solvents (e.g., tetrahydrofuran) are able to solubilize the adhesives, but the solubilized adhesives migrate into the paper substrate and stain it. Besides, in all cases the solvents produced bleeding and lateral spreading of inks. Overall, time and experience have shown that PSTs on paper can be disfiguring, damaging, and difficult (sometimes impossible) to remove.
Conservators are familiar with a variety of methods for removing PSTs, including mechanical cleaning and wet methods such as immersion, poultices, cotton swabs, and the suction table. However, each method has associated risks and disadvantages that may produce undesirable changes such as skinning or bleeding of artistic media [5], tidelines, overcleaning, and further penetration of the adhesives into the cellulose fibers. In particular, limiting and controlling tidelines is crucial on paper artworks, as this type of alteration is not always instantaneously visible. Tidelines occur when introduced moisture moves laterally through the paper fibres, carrying water or dissolved materials such as dirt, media, degraded size, short-chain cellulose oxidation and breakdown products, fillers and other additives. Transported materials concentrate at the wet/dry interface and remain there as moisture evaporates, leaving a discoloration halo and, in some cases, initiating oxidative reactions that produce yellow-brown cellulose degradation products. Moreover, most removal methods involve the use of volatile and toxic organic solvents that are considerably hazardous to conservation professionals.
Given the lack of safe methods for the removal of PSTs, there is still the need for reliable cleaning systems. Research in the field of colloid and materials science has proved to be a fundamental framework to develop solutions for the cleaning of works of art [6][7][8][9][10]. We decided to tackle the complex issue of PST removal by designing two new methods based, respectively, on the use of: 1. a hydrogel as a scaffold to confine an oil-in-water (o/w) nanostructured fluid [7]; 2. a polymeric organogel to confine diethyl carbonate (DEC) [8]. The two methods are complementary in that they use cleaning fluids with different properties, so as to allow a wide range of cleaning applications. The nanostructured fluid, named "EAPC" as it contains ethyl acetate and propylene carbonate besides water (more than 70% w/w) and a surfactant, is a versatile o/w fluid able to swell and detach synthetic polymer coatings [6]. DEC is a "green" polar aprotic solvent, part of the family of alkyl carbonates that constitute a valid alternative to esters and ketones in most applications [11][12][13][14], including the softening of natural and synthetic polymeric adhesives. In our previous works we showed that both methods are able to grant controlled release of fluids on paper and safe removal of PSTs without alteration of the artifacts, thanks to the retentiveness of the confining scaffolds. In the present contribution, we provide an extensive assessment of the two methodologies in advanced cleaning scenarios with added difficulty, which served as a new testing ground for the methods. Such challenging cleaning case studies allowed us to extensively check the versatility and applicability of the two methodologies over a plethora of substrates (e.g. several different inked/dyed paper artworks), exploring the full potential and complementarity of the hydrogel and organogel loaded with different cleaning fluids.
This led to further insight into the use of these systems for the removal of PSTs, and allowed us to highlight the main applicative aspects useful to cultural heritage conservation professionals.
In particular, we aimed to: (i) assess the efficacy of the two cleaning systems, highlighting their complementary use on PSTs with different compositions; (ii) assess their safe use on highly challenging inked paper artworks, with sensitive components and difficult working conditions (e.g. presence of PSTs directly on intricate motifs of sensitive dyes, or the removal of PSTs from inner hinges, with the impossibility to flatten the artwork), monitoring the properties of the treated paper substrates. Besides the scientific data, aspects related to conservation theory and the professional code of ethics are also introduced. The methods were first assessed on paper mock-ups representative of actual paper artworks, monitoring the cleaning efficacy and possible alterations induced on the paper substrate. Then, we faced the removal of PSTs from four completely different artworks on paper, which provided new challenging case studies with added difficulty, or alternative uses of the gels.
Preparation of the cleaning systems
The nanostructured fluid "EAPC" is an o/w fluid prepared using ethyl acetate (EA) and propylene carbonate (PC) as the dispersed organic phase. 1-Pentanol (1-PeOH) and sodium dodecyl sulphate (SDS) are used, respectively, as co-surfactant and surfactant. The composition is as follows: 73.3% H2O, 3.7% SDS, 7% 1-PeOH, 8% EA, 8% PC. The EAPC preparation procedure, characterization and use for the removal of synthetic polymer coatings from artworks are reported in the literature [15]. The structure of EAPC has been fully characterized using small-angle neutron scattering with contrast variation [15], and the localization of each component in the fluid was determined. Nanosized ellipsoidal droplets (major axis 12.8 nm) [15], mainly composed of EA, are stabilized in a mixed continuous phase (water and about 20% PC) by SDS and 1-pentanol. PC is mainly located in the continuous phase, also partitioned at the micelle interface, and confers enhanced cleaning capacity thanks to its high polarity. For its use on paper artworks, EAPC was confined into hydrogels made of networks of poly(hydroxyethyl methacrylate) (pHEMA) semi-interpenetrated (semi-IPN) with polyvinylpyrrolidone (PVP). The preparation and characterization of the p(HEMA)/PVP hydrogels are reported in previous works [16]: essentially, p(HEMA)/PVP semi-IPN networks are obtained through the free radical polymerization, in aqueous medium, of HEMA, using AIBN as initiator and MBA as cross-linker, in the presence of PVP linear chains, which are physically embedded into the growing p(HEMA) network. These gels are particularly suitable for the cleaning of water-sensitive artifacts (e.g. paper) thanks to their high hydrophilicity (water content < 70%) and retentiveness, their transparency, and their mechanical stability that ensures the absence of gel residues on treated supports [16]. For applications on paper mock-ups, hydrogels were loaded with EAPC through immersion for at least 12 h.
It has been shown that these gels are able to behave as sponges, uploading EAPC without substantial alteration of the fluid and gel structure and functionality [17].
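Since the EAPC composition is given above in % w/w, scaling a preparation batch is simple arithmetic. A minimal sketch (the helper function and batch size are ours, for illustration only; this is not a preparation protocol):

```python
# w/w composition of the EAPC fluid, as stated in the text.
EAPC_WW = {"H2O": 73.3, "SDS": 3.7, "1-PeOH": 7.0, "EA": 8.0, "PC": 8.0}

def eapc_batch(total_g: float) -> dict:
    """Masses (g) of each component for a batch of `total_g` grams."""
    assert abs(sum(EAPC_WW.values()) - 100.0) < 1e-9  # composition sums to 100%
    return {name: round(total_g * pct / 100.0, 3) for name, pct in EAPC_WW.items()}

print(eapc_batch(100.0))
```

For a 100 g batch the masses simply mirror the percentages (73.3 g water, 3.7 g SDS, and so on).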
DEC was instead confined into PEMA-based organogels, which were prepared by radical polymerization of EMA solubilized in DEC; AIBN and EGDMA were added to the solution as initiator and cross-linker, respectively. The gelation procedure is described elsewhere [18,19]. After gelation, gel sheets were immersed in DEC for 24 h in order to fully swell them with the solvent. PEMA gels have a high solvent content (> 75% with DEC), high retentiveness, and good mechanical properties, which allow easy manipulation (avoiding gel residues on treated substrates) and the possibility of shaping them to exactly match the dimensions of the PSTs that need to be removed [18].
The formulations used in this contribution were specifically targeted to the requirements of the reported case studies. Multipurpose formulations of p(HEMA)/PVP-based hydrogels and o/w nanostructured fluids for the cleaning of works of art are commercially available under the brands Nanorestore Gel® and Nanorestore Cleaning®; organogels should soon be available under the same brand.
Preparation and characterization of mock-ups
The PSTs used in this study are commercially available tapes; PST types, acronyms, and supplier companies are listed in Table 1. The table also summarizes the most effective combinations of gels and cleaning fluids for the removal of the PSTs (see Sect. "Results and discussion" for details). The chemical composition of PST backings and adhesives was investigated by Attenuated Total Reflectance Fourier Transform Infrared spectroscopy (ATR-FTIR), performed directly on the tapes. A Thermo Nicolet Nexus 870 FTIR spectrometer with a Golden Gate diamond cell was used; spectra were collected with a mercury cadmium telluride detector (MCT, sampling area of 150 µm²), averaging 128 scans in the 4000–650 cm⁻¹ range, with a spectral resolution of 4 cm⁻¹.
The dynamics of the interaction between the DEC-loaded PEMA gels and model PST backings were studied by means of Confocal Laser Scanning Microscopy (CLSM). PST mock-ups were prepared by attaching the tapes onto glass slides. A Leica TCS SP2 instrument equipped with a 20× air objective was used. For fluorescent labeling, two different probes were used: DEC loaded within the gels was labeled by equilibrating the polymeric films in a 100 μM solution of Coumarin 6 (Cou6) in DEC; PST adhesives were labeled by immersion in a 10 μM aqueous solution of Rhodamine B Isothiocyanate (RhBITC). The fluorescent probes Cou6 and RhBITC were excited with the 488 and 561 nm laser lines, respectively. Fluorescence signals were collected in the ranges 498–513 nm for Cou6 and 591–616 nm for RhBITC. The experimental setup was designed to mimic a real application: labeled PSTs were attached on a glass sample holder, and gels loaded with Cou6-labeled DEC were laid on them; each gel slice was cut to match the shape of the tape.
For the preparation of inked/dyed paper artwork mock-ups, a selection of contemporary inks [20,21] (see Table 2) was applied on paper samples (A4 sheets, 80 g/m², Fabriano Leonardo). Solubility tests on the inked mock-ups were carried out using EAPC and DEC, either non-confined or uploaded in the gels. Droplets of the fluids were placed on the inks/dyes and left to dry, while the pHEMA/PVP-EAPC and PEMA-DEC gels were applied on the surface (after removing the excess of EAPC or solvent from the gels' surface with absorbent paper) for 5 min, followed by visual inspection for possible ink/dye bleeding and lateral spreading.
PSTs (reported in Table 1) were applied on the inked portions of paper mock-up samples, which were then aged for 2 months alternating cycles of thermal aging (80 °C, 4 days), aging at high relative humidity (RH = 80%, 25 °C, 2 days), and photochemical aging (4 days). Thermal aging was carried out in an oven. The RH in the humidity chamber was kept constant using a glycerol aqueous solution (51% w/w). For the photochemical aging, a Neon Light Color 765 Basic daylight Beghelli Lamp was used (160 mW/lm, 380-700 nm), placing the samples in a closed chamber at room conditions (36 °C, RH 40%), where the samples' surface was exposed to ca. 11,000 Lux of homogeneous illumination. These conditions are meant to accelerate the natural aging that would be experienced by objects on display in museums, where illuminations of 50-100 lx are typically used.
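The alternating aging protocol described above can be written out as a day-by-day schedule. A small sketch follows; the 60-day total (for the stated "2 months") and the helper function are our assumptions for illustration.

```python
# Alternating accelerated-aging cycle from the text: thermal (80 °C, 4 days),
# high RH (80% RH, 25 °C, 2 days), photochemical (4 days).
CYCLE = [("thermal 80 °C", 4), ("RH 80%, 25 °C", 2), ("photochemical", 4)]

def aging_schedule(total_days: int):
    """Expand the alternating cycle into a list of (start_day, end_day, stage)."""
    plan, day = [], 0
    while day < total_days:
        for stage, days in CYCLE:
            if day >= total_days:
                break
            run = min(days, total_days - day)
            plan.append((day, day + run, stage))
            day += run
    return plan

plan = aging_schedule(60)  # ~2 months of alternating cycles
print(len(plan), plan[:3])
```

With a 10-day cycle, 60 days yields six full cycles (18 stage entries), which matches the "2 months alternating cycles" described in the text.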
To investigate the effects of DEC on the paper substrate, Whatman® paper samples (without inks or PSTs) were treated with DEC, applying 40 μL of solvent on paper disks of 5 cm diameter. The disks were then aged following the same procedure as detailed above. The characterization of the paper mock-ups treated with DEC was performed by ATR-FTIR (see above for experimental conditions), pH measurements, and differential thermogravimetry (DTG). For pH measurements, the regulations of the Technical Association of the Pulp and Paper Industry (TAPPI) were adopted [22]. Cold extraction on paper samples was carried out by putting 125 mg of sample (cut into small pieces) in 9 mL of deionized water under stirring, at room temperature; the pH of the extracting water was read after 1 h. DTG was performed to measure the pyrolysis temperature (Tp) of cellulose as an index of the degradation of cellulosic samples; a decrease in Tp is typically observed when depolymerization and swelling of cellulosic fibers take place [23]. Analyses were performed on 5–7 mg of samples, using an SDT Q600 (TA Instruments) apparatus, increasing the temperature up to 500 °C (10 °C/min) in a nitrogen atmosphere (100 mL/min). The reported pH and DTG data are the average of three repeats for each technique.

Table 1. Description of PSTs used for the preparation of paper artwork mock-ups. (a) The composition of backings and adhesives was assessed with ATR-FTIR (see Sect. "Results and discussion"). (b) The most effective cleaning systems for each type of PST are indicated, i.e. those where the controlled release of the cleaning fluid led to penetration of the backing and swelling/softening of the adhesive, which could be easily removed with no risks for the mock-ups/artworks. (c) In these cases, penetration of the fluid through the backing produced some alteration of the backing (see Sect. "Results and discussion" for details).
Removal of PSTs
The removal of PSTs from inked portions of the paper mock-ups was carried out by direct application of the gels onto the tapes. The excess of EAPC or solvent was removed from the gels' surface as described above; then, the swollen gels were shaped with a scalpel to match the dimensions of the PST, and applied on the samples for a few minutes. After swelling/softening of the adhesive, the tape backing was peeled off using tweezers, and adhesive residues were then removed by coupling the use of gels with gentle mechanical action (see the "Results and discussion" section for more details). The same application procedure was followed for the removal of PSTs from actual paper artworks. 2D FTIR imaging was performed on the paper mock-ups to check for possible adhesive residues after PST removal. A Cary 620-670 FTIR microscope, equipped with an FPA 128 × 128 detector (Agilent Technologies), was used. The spectra were recorded directly on the surface of the samples (or of the Au background) in reflectance mode, with open aperture and a spectral resolution of 8 cm−1, acquiring 128 scans for each spectrum. A "single-tile" analysis results in a map of 700 × 700 µm2 (128 × 128 pixels), and the spatial resolution of each imaging map is 5.5 µm (i.e. each pixel has dimensions of 5.5 × 5.5 µm2).
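The quoted imaging geometry is self-consistent: dividing the tile width by the detector pixel count reproduces the stated pixel size. A quick illustrative check:

```python
# Consistency check of the FPA imaging geometry: a 700 x 700 µm "single tile"
# mapped onto the 128 x 128 pixel FPA detector.

TILE_UM = 700.0   # tile side length, micrometers
PIXELS = 128      # pixels per side of the FPA detector

pixel_size_um = TILE_UM / PIXELS
print(f"{pixel_size_um:.2f} µm per pixel")  # ≈ 5.47 µm, quoted as 5.5 µm
```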
Results and discussion
A dedicated research project offers a unique opportunity to document materials and conservation treatments of works of art beyond levels normally attainable in a conservation studio. Our research on PSTs removal involved different stages, as described before: the preparation and aging of mock-ups (inked/dyed paper artworks); the characterization of the main PSTs components; the assessment of the gels, loaded with cleaning fluids, for the removal of PSTs from the mock-ups; the assessment of the gels on real paper artworks. A range of analytical techniques was employed in the study, including pH measurements, ATR and 2D FTIR Imaging, DTG, and confocal microscopy.
Regarding the backings, Packing Brown Tape (BT) and Ordinary Tape (OT) show the typical polypropylene infrared pattern with diagnostic peaks at 2950, 2916, 2865, and 2838 cm−1 (CH stretching modes), and at 1450 and 1375 cm−1 (CH bending) [24,25]; the spectra of Filmoplast P (FPP) and Masking Tape (MKT) backings show the characteristic profile of cellulose, with bands between 2930 and 2850 cm−1 (CH stretching) and a broad band centered at about 1000 cm−1 [29]; the Magic Tape (MT) backing is made up of cellulose acetate, as evidenced by the presence of a strong peak at 1735 cm−1 (C=O stretching of the ester group) and typical bands at 1216 and 1031 cm−1 (respectively the C-O-C stretching of the acetyl group and the C-O stretching of the pyranose ring) [24,25,30]; the spectrum of the Insulating Tape (IT) backing was assigned to polyvinyl chloride owing to the presence of bands at 2915 and 2849 cm−1 (CH2 stretching modes), 1426 cm−1 (CH2 bending), 1329 and 1253 cm−1 (CH bending of the CH-Cl group), 960 cm−1 (CH2 rocking), and 690 cm−1 (C-Cl stretching) [31,32].
These PST compositions, summarized in Table 1, are in agreement with those reported in the literature.
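The assignment logic used above, i.e. matching observed absorption bands against diagnostic reference peaks, can be sketched as a simple scoring routine. The reference wavenumbers below are the ones quoted in the text; the ±8 cm−1 tolerance and the fraction-of-peaks score are illustrative assumptions for this example, not part of the authors' analytical protocol.

```python
# Illustrative sketch: identify a PST backing polymer by matching observed
# IR band positions (cm-1) against the diagnostic peaks quoted in the text.
# Tolerance and scoring rule are assumptions made for this example.

REFERENCE = {
    "polypropylene":      [2950, 2916, 2865, 2838, 1450, 1375],
    "cellulose acetate":  [1735, 1216, 1031],
    "polyvinyl chloride": [2915, 2849, 1426, 1329, 1253, 960, 690],
}

def identify_backing(observed, tol=8.0):
    """Return the polymer whose diagnostic peaks best match `observed`."""
    def score(ref_peaks):
        # Fraction of diagnostic reference peaks found among observed bands.
        hits = sum(any(abs(o - r) <= tol for o in observed) for r in ref_peaks)
        return hits / len(ref_peaks)
    return max(REFERENCE, key=lambda name: score(REFERENCE[name]))

# e.g. bands measured on a Magic Tape (MT) backing:
print(identify_backing([1737, 1214, 1033]))  # -> cellulose acetate
```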
Understanding the composition of the PSTs gave us indications on the type of cleaning fluids able to penetrate the backing and swell/soften the adhesive layer, which is essential to remove PSTs in a non-invasive way from paper artworks. Based on chemical affinity, the EAPC fluid is expected to efficiently penetrate paper and cellulose acetate backings, while poor interaction is expected with more hydrophobic backings. Instead, DEC is expected to diffuse through the hydrophobic polypropylene and polyvinyl chloride backings. In fact, using Confocal Laser Scanning Microscopy (CLSM), we demonstrated that EAPC, loaded in the pHEMA/PVP hydrogel, can penetrate across the hydrophilic paper-based backing of FPP without modifying it [33]; Fig. 3a-c shows that, after 20 min, the green Rhodamine 110-labeled EAPC penetrates the backing, which is then homogeneously fluorescently labeled (Fig. 3c). The structure of EAPC is partially disrupted in the process, and a water-rich phase of the fluid reaches the bottom layer of the backing in contact with the adhesive. In the case of MT, after 20 min, EAPC from the hydrogel penetrates the cellulose acetate backing (the pale green region between the red adhesive and the bright green gel shown in Fig. 3d). Figure 3e shows a 2D horizontal section of the MT adhesive after 20 min of interaction with EAPC: the red and the green emissions are separately displayed (while the transmission is shown in grayscale); the overlay of these images (with colocalization of the probes appearing as yellow) highlights the successful penetration of EAPC in the upper parts of the MT adhesive. It was concluded that EAPC produces chemical modifications in the cellulose acetate MT backing (ethyl acetate probably solubilizes the diethyl phthalate plasticizer in cellulose acetate) and is absorbed inside the adhesive [33].
Remarkably, the confinement of the NFS inside the gel prevents alteration of the paper substrate, as opposed to the use of non-confined aqueous fluids that causes the introduction of moisture into paper fibers, and produces distortions and cockling [33].
On the other hand, EAPC is not able to efficiently interact with the polypropylene backing of OT [33]: as shown in Fig. 3f, g, after 20 min EAPC is separated from the adhesive layer by the backing, which remains unlabeled. In this case, effective backing penetration and adhesive swelling were obtained with DEC [18], see Fig. 4. As expected, similar trends were observed for the interaction of EAPC and DEC with the backings of MKT (paper) and BT (polypropylene) considered in this study, while more insight was provided on the behavior of the polyvinyl chloride-based backing of IT. Figure 4 shows the CLSM images acquired at different times on PEMA-DEC gels applied on the top surface of Ordinary, Brown, and Insulating PSTs (i.e. respectively OT, BT, and IT) lying on glass slides. DEC was labeled with green Cou6, and the adhesives with red RhBITC, while the backing layer of the PSTs was not labeled and appears black in the images. In the CLSM instrumental setup, light is shone on the system from below; therefore, in the case of opaque PST backings (as for BT and IT), light stops at the backing level and cannot reach the DEC-loaded gel, which appears black in the images. At time t = 0, the red fluorescence of the adhesive is clearly observable, with no alteration caused by the solvent. In all cases, after 15-20 min, the solvent has penetrated through the backing, and the adhesive layers have become swollen with DEC, as also highlighted by their change in color from red to yellow-greenish hues. These results confirmed that BT, having a polypropylene backing, displays a similar behavior to OT, with swelling of the acrylic adhesive layer after ca. 20 min of gel application. The images clearly show that the polyvinyl chloride backing of IT also allows DEC penetration, with significant swelling of the styrene-butadiene rubber adhesive after 15 min of application.
It is worth noting that the polypropylene backing is not macroscopically deformed after interaction with PEMA-DEC, even though some swollen areas were previously observed on the treated backing surface with Field Emission Gun Scanning Electron Microscopy (FEG-SEM) [18]. Instead, the polyvinyl chloride backing was visibly distorted and wrinkled after the application of the gel. The results of the applications of the gels and cleaning fluids on the PST mock-ups are also summarized in Table 1.
Besides checking the ability of cleaning fluids to penetrate the PST backings, it was also fundamental to assess the behavior of the fluids in contact with graphic media typically found on paper artworks. Namely, the solubilizing power of EAPC and DEC towards a representative selection of inks and dyes (see Table 2) was tested, applying either droplets or gels loaded with the fluids (application time 5 min). As shown in Fig. 5a, the direct application of non-confined EAPC led to solubilization and spreading of the graphic media; instead, the confinement of the fluid in the highly retentive p(HEMA)/PVP semi-IPN allowed its gradual release onto the inked/dyed paper substrates, avoiding uncontrolled solubilization and lateral spreading. In some cases, extraction and migration of the dyes in the EAPC-loaded gel occurred, but without spreading of the dyes across the paper surface (Fig. 5b). DEC is inert to all the considered media, except R-BPP, as shown by the absence of tidemarks or ink spreading in the spots where droplets of the solvent were applied (see Fig. 5c); no migration of the inks in the PEMA gel was observed (Fig. 5d).

Fig. 4 caption: The DEC solvent is labeled green with Cou6, while the adhesive layers are labeled red with RhBITC. The PST backings are not labeled, and thus appear black. Because light is sent on the system from below, the DEC-loaded gel is visible only in the case of OT, where the PST is transparent and light passes through the backing, up to the gel. In the other cases, light stops at the backing and cannot reach the gel, which appears as black. At time t = 0 the adhesive layer is well defined and not affected by DEC; after penetration of the solvent through the backing (t = 15-20 min) the adhesive layers become swollen with DEC, as also highlighted by the change in color from red to yellow-green.
In the case of R-BPP, the free solvent causes spreading and tidelines (Fig. 5e), but the confinement of the solvent in the PEMA gel effectively prevents the solubilization and spreading of the sensitive ink (Fig. 5e, f).
Because DEC is an organic solvent not commonly employed in the restoration practice, some further assessment regarded possible short- and long-term drawbacks related to its use on paper. Whatman® paper samples, treated with DEC, were analyzed by means of ATR-FTIR, DTG, and pH measurements, to assess possible paper degradation. Figure 6 displays the ATR-FTIR spectra of four Whatman® paper samples, i.e. pristine, treated with DEC, aged, and treated with DEC and then aged. No alteration in the IR spectra of paper was observed following treatment with the solvent and aging. Likewise, no significant changes in the pH and cellulose pyrolysis temperature (Tp) of the Whatman® samples were observed (see Table 3): as expected, the ageing procedure induced a small decrease in both pH and Tp, indicating a slight degradation, but no additional damage was induced on the paper fibers by DEC.

Fig. 5 caption (graphic media as in Table 2): a EAPC applied either non-confined (droplet tidemarks) or loaded in pHEMA-PVP gels. b The same paper samples, after removal of the gels. c DEC applied either non-confined (the red dotted lines indicate the spots where droplets evaporated without leaving tidelines) or loaded in PEMA gels. d The same paper samples, after removal of the gels. e Application of free DEC (droplet tidemark) or PEMA-DEC on R-BPP. f The same paper sample, after removal of the gel.

Fig. 6 caption: ATR-FTIR spectra acquired on Whatman® paper mock-ups, either pristine or treated with DEC, before and after accelerated aging.

The effectiveness of pHEMA/PVP-EAPC for the removal of tapes from paper artworks has been recently shown in some case studies, where MT and FPP PSTs were addressed [33], while PEMA-DEC was assessed on OT PSTs [18]. Here, the cleaning efficacy of the PEMA-DEC gel was further checked on an inked (B-BPP) paper mock-up with a BT PST, after aging of the mock-up.
The gel was applied on top of the tape, which allowed the removal of the polypropylene backing; then, a successive application was carried out to swell, soften and remove the acrylate adhesive from the inked paper fibers. FTIR 2D Imaging was carried out on the inked surface before the application of the PST, after the removal of the backing, and after the final application of the gel. Figure 7 shows the IR maps obtained on the paper surface (the chromatic scale of the maps qualitatively shows the intensity of the band as follows: blue < green < yellow < red). The use of the gel, coupled with gentle mechanical action, allowed the substantial removal of the adhesive, as indicated by the visual inspection of the fibers under visible light, and by the strong intensity decrease of the C=O stretching band (1735 cm−1) of the acrylate adhesive in the IR maps. It must be noticed that the detection limit of FPA detectors is significantly lower than that of conventional MCT detectors for trace amounts of analytes heterogeneously distributed on a substrate; in this case, localized analyte traces can be detected thanks to the high spatial resolution of the FPA detector [34]. In the visible light images, the slightly different hue of the cleaned area is due to the fact that fibers not coated by the PST were directly exposed to light aging during the preparation of the mock-up, and thus experienced different photochemical degradation than those coated by the tape.
It must be noticed that, owing to the good mechanical properties of the pHEMA/PVP and PEMA gels, their application and removal do not leave detectable polymer residues on the treated surface, as previously verified on paper samples via ATR and 2D FTIR Imaging [17,18].
Conservation treatments
Besides tests on mock-ups, the pHEMA/PVP-EAPC and PEMA-DEC systems were assessed in the present contribution on a selection of case studies, to further corroborate their use in the field of paper artwork conservation. The results obtained were critically compared with those previously achieved using these cleaning tools, so as to provide a complete overview of their efficacy and feasibility.
As a general premise, when approaching works of art on paper, conservation treatment decision-making is naturally carried out on a case-by-case basis, as the removal of adhesive tape residues from works on paper poses ethical and aesthetic questions. PSTs are typically removed only if they have structurally damaged, or pose a risk to, the paper substrate, or if they have discoloured and therefore negatively affect the readability, interpretation and enjoyment of an object. Where possible, the examination and evaluation of the drawings involve three stages: firstly, the documentation of the techniques and materials used by the artist; then, a thorough evaluation of the systems for PST removal; finally, the characterization of the removed tapes and the observation of the removal results.
Currently, restoration practice involves the use of aqueous systems (water and a 30% ethanol-water blend) and organic solvents such as ethanol, acetone, cyclohexane, ethyl acetate, tetrahydrofuran, toluene, xylene and N,N-dimethylformamide [2]. These fluids were tested as reference cleaning tools on the aged paper mock-up samples, using cotton swabs; in fact, all of them except cyclohexane caused lateral migration of the inks and/or penetration of the adhesive into the paper fibers. Therefore, only the pHEMA/PVP-EAPC and PEMA-DEC systems were used in the case studies. Based on the data discussed in the previous paragraphs, pHEMA/PVP-EAPC was used to remove FPP (with cellulose-based backing) and MT (with cellulose acetate backing), while PEMA-DEC was used on OT and BT, which feature polypropylene backings.
We worked on four contemporary works of art on paper: a red felt-tip pen (FTP) notebook by Brazilian artist Renato Bezerra de Mello, a tempera and watercolour recto and verso drawing by Maria Helena Vieira da Silva and Helen Philips Hayter, a black ink drawing by Keith Haring, and a painted collage by Pierre Buraglio.
Renato Bezerra de Mello
This case study was selected as it represents a challenging and advanced scenario, owing to some important characteristics. This series of Renato's artworks is drawn in a Moleskine® sketchbook, where each page is covered by small red FTP strokes composing concentric circles (Fig. 8). FTPs feature reservoirs with a core of absorbent material that serves as a carrier for the ink [35], which in turn can contain different formulations of dyes, solvents (e.g. ethanol, isopropanol) and additives such as pH modifiers, humectants, antifoaming agents, surfactants, biocides, fillers and extenders [36-40]. As a result, FTPs are among the most unstable artistic media, highly sensitive to photochemical degradation and to any kind of wet treatment. Luckily, in our case, De Mello's artwork was in very good condition, but PSTs with paper backings were present on the gutter of three pages, covering part of the drawings (see Fig. 8b). The added complexity of this case study was thus the need to remove PSTs from an inked three-dimensional substrate (the inside margins of the notebook, close to the spine) without bleeding the extremely sensitive FTP ink: even minimal alteration of the FTP strokes would have caused severe aesthetic damage to the intricate network of fine red lines. Besides, flattening the notebook was impractical, and the use of heat to remove the PSTs was discouraged owing to the risk of inducing migration of the adhesive into the paper fibres, which would then require potentially invasive abrasion with eraser gums. The possibility of working with flexible and retentive gels, able to adapt to the notebook's three-dimensionality and to release cleaning fluids at a controlled rate, was key to this cleaning intervention, which served as a testing ground to probe the use of the pHEMA/PVP-EAPC hydrogels in such difficult conditions.
The PSTs were easily identified as Masking Tapes (MKT), thanks to the presence of their characteristic creped paper backing [4]; these tapes typically contain natural or synthetic rubbers as adhesives (see also the FTIR analyses in Sect. "Results and discussion"). A set of solubility tests showed that the red ink is extremely sensitive to water and solvents. The safe and effective removal of the paper-based PSTs was thus performed with the following procedure: the pHEMA/PVP-EAPC hydrogel was cut to match exactly the size of the PSTs, and then applied on top of them. After 5 min, the softening of the PST was tested with a scalpel; the softened PST was then detached with gentle mechanical action, using a scalpel, with no alteration of the red strokes or abrasion of the underlying paper support, and without leaving observable adhesive residues on the artwork. The successful application of the gels in this challenging case was deemed a significant improvement over traditional cleaning methods.
Vieira da Silva/Philips
This artwork was realized by Maria Helena Vieira da Silva (recto side, tempera) and Helen Philips Hayter (verso side, watercolor). The paint media contain plasticizers (e.g. glycerin), humectants (sugar syrup or honey), wetting agents and preservatives, which overall make the paints highly sensitive to water and solvents, discouraging the use of moisture and/or organic solvents for cleaning operations. The risk of staining and solubilising the media must be minimised, particularly in cases like this, where the paper substrate is thick and absorbent. In this case, it was necessary to remove two different types of repairs, i.e. brown paper-based and translucent PST hinges that had been used to attach the watercolor side to its old mount (see Fig. 9).
The PST was an MT tape with a cellulose acetate backing and acrylate-based adhesive, as confirmed by ATR analysis [33]. The removal of the PST hinge was realized with the pHEMA/PVP gel loaded with EAPC [33] and cut to match the tape profile. After 5 min the softening of the PST was tested with a scalpel, and then the PST was removed with gentle mechanical action with no loss of the underlying watercolour layer or abrasion of the artwork substrate.
The brown paper-based hinge represented a different challenge, which we discuss here as an expansion of the potential use of the pHEMA/PVP gels. These types of hinges have brown paper coatings and vegetal glue adhesives; in fact, they are typically applied by humidification. Therefore, it was expected that water, released at a controlled rate at the coating-adhesive interface, would be able to swell the adhesive layer, allowing the detachment of the hinge. In this case, the use of EAPC was deemed unnecessary, also considering that the fluid is able to penetrate the paper coating easily and potentially access the highly sensitive paint media. Therefore, it was decided to apply the p(HEMA)/PVP gel simply loaded with water. The hinges were removed with a 5 min application; the moisture of the hydrogel slowly swelled the vegetal glue, allowing the safe detachment of the hinges.
Keith Haring
The selected artwork (Untitled, 1983), belonging to the artist's Naples Series, had six PSTs (with polypropylene backings and either acrylate copolymers or natural rubber as adhesives) on the verso side, most likely old mounting hinges (see Fig. 10). These works, made with Haring's signature continuous line on a monochrome ground, carry a political, social and aesthetic message that would be compromised by media bleeding or local discoloration. The drawing, made using a black felt-tip pen, presents two disfiguring oxidized and yellowish areas on the upper edge, due to the penetration of the aged and discoloured PST adhesives. Preliminary micro-solubility tests had indicated that the original ink is highly sensitive to most solvents used in PST removal, but substantially inert to DEC. The removal of the six PSTs was carried out in two steps [18]. First, PEMA-DEC gels were placed onto the PSTs for a maximum of 30 min (in this case the gels were covered with a polyester film in order to further reduce solvent evaporation during the longer applications); then, the detached backings were simply removed with tweezers, with no damage or alteration of the inked artwork. After the removal of the backings, gentle mechanical action (using a crepe-rubber eraser, a common procedure for graphic arts) was combined with the use of DEC, controlled by means of a vacuum suction table, in order to remove the deeply ingrained adhesive residues.
Pierre Buraglio
In the painted collage Les très riches heures de P.B. (1982), Buraglio arranges strips of thick brown packing tape (BT, with transparent polypropylene backing and brown acrylic-based adhesive) over two adjoining spreads of newspaper, scribbled over and embellished with colored pencil (see Fig. 11). In this artwork, the artist transformed humdrum newsprint into a window view, the "scene one" of art-making and ornamented daily life. This case study represents a different conservation scenario than the artworks discussed in the previous sections. In fact, in this case there was no need to remove the PSTs, since they are an integral part of the collage; however, the aged PSTs were partially delaminated and detackified, likely owing to the high porosity of the paper substrate, to the presence of artistic media between the paper and the PSTs, and possibly to uneven pressure applied by the artist at the time of PST application. Where delamination occurred, the PSTs lifted from the substrate, and there was the need to re-adhere the tapes to the brown adhesive layer and to the paper substrate, so as to recover the original aesthetic look of the artwork.
The complete removal of the PSTs followed by their repositioning with a new adhesive was not possible because it is extremely risky to remove tapes of such dimensions (about 80 cm) without removing part of the graphic media, or without deforming the tapes. Instead, a practical and effective solution was to use a PEMA organogel loaded with DEC which, as previously illustrated, allows the swelling and reactivation of the adhesive in short application times (5 min); in this case, after contact with the PEMA-DEC gel, the "re-activated" PSTs were put under weight (rather than removed), leading to the re-adhesion and setting of the delaminated layers. This intervention represents a new applicative mode of the PEMA-DEC gels, showing that these are versatile and adaptable tools.
Conclusions
This contribution provided an overview and expanded on the application of two recently developed methodologies for the removal of PSTs from paper artifacts, which employ confining systems and "green" cleaning fluids as alternatives to the non-confined solvents traditionally used in the restoration practice. New cleaning case studies were introduced to test the versatility and applicability of the two methods in challenging conditions, which led to an extensive assessment of their full potential and to advancement in the use of these methodologies. The two methodologies are complementary, in that they are based on the confinement of, respectively, an aqueous o/w nanostructured fluid in a hydrogel (pHEMA/PVP-EAPC) and an organic solvent in an organogel (PEMA-DEC), making it possible to target different types of PSTs. Namely, confocal microscopy measurements confirmed that the PEMA-DEC system is able to release the solvent at a controlled rate through polypropylene and polyvinyl chloride backings, complementing the ability of pHEMA/PVP-EAPC to exchange the nanostructured fluid through more hydrophilic (or less hydrophobic) backings such as cellulose and cellulose acetate. In both cases, the retentiveness of the gels is fundamental to grant the penetration of fluids through the backing layer of PSTs, and the controlled swelling and softening of the adhesive layers underneath the backings, allowing the safe removal of PSTs without undesired bleeding of artistic media, transport of solubilized matter through the porous matrix of paper artworks, or fast evaporation of the cleaning fluids, i.e. typical issues when non-confined solvents are used.

Fig. 10 caption: a Keith Haring drawing before PST removal, b verso side with six PSTs, c general view of the drawing after the removal, d and e details of the application of the PEMA gels loaded with DEC on the tapes.
Namely, no adhesive residues, no tidelines, no media bleeding, no skinning of the paper substrate, nor dimensional instability were noticed at the surface of the paper models. In some cases, extraction and migration of the dyes into the EAPC-loaded gel was noticed on inked paper mock-ups, but without uncontrolled spreading of the dyes across the paper surface. 2D FTIR Imaging confirmed that the application of the PEMA-DEC gel, coupled with gentle mechanical action on the swollen/softened adhesive layer, leads to substantial removal of the adhesive down to the micron scale.
When moving from models to real artworks, it is clear that confinement of the EAPC fluid or of DEC in a gel is key for conservation treatment. Confinement enables the control of release, penetration and lateral spreading of the liquid phase, minimizing the contact with sensitive components of the artwork and limiting the possible movements of the inks. Several combinations of PST backings and adhesives were addressed, on different types of artistic media, so as to provide a complete framework. In cases where the PSTs are part of the original artistic materials, the controlled release of fluids by the gels was useful to gradually swell the aged and detackified adhesives, "reactivating" the adhesive layer, which can then be fixed to the substrate by simply applying a weight for a short time. The presented case studies demonstrate that a highly versatile tool is available to conservators, guaranteeing complete control of the removing fluids during all of the steps required for PST removal from paper artworks.
This confinement-based methodology enables unprecedentedly safe and efficient removal of PSTs from paper artworks without affecting the chemical, physical or optical properties of the drawing. Finally, it must be noticed that the reagents and materials used for the preparation of these systems are affordable and readily accessible. Besides, multipurpose formulations of p(HEMA)/PVP-based hydrogels and o/w nanostructured fluids for the cleaning of works of art are commercially available to restorers worldwide.
Symptom and Comorbidity Burden in Chronic Disease: Comparison of HIV- Infection and Diabetes Mellitus in Aging Patients
Eva Wolf1, Christian Hoffmann2, Knud Schewe2,12, Stephan Klauke3,12, Robert Baumann4, Martin Karwat5, Frank Schlote6, Franz Mosthaf7, Hans Heiken8, Axel Baumgarten9, Albrecht Ulmer10, and Hans Jaeger11,12 for the dagnae 50/2010 study group 1MUC Research, Munich, Germany 2ICH Study Center, Hamburg, Germany 3Infektiologikum Frankfurt, Frankfurt, Germany 4Private Practice Dr. R. Baumann, Neuss, Germany 5Private Practice Dr. M. Karwat, Munich, Germany 6Praxisgemeinschaft Turmstrasse, Berlin, Germany 7Private Practice for Hematology, Oncology and Infectious Diseases, Karlsruhe 8Praxis Georgstrasse, Hannover, Germany 9Medical Center for Infectious Diseases (MIB), Berlin, Germany 10Private Practice Dres. Ulmer/Frietsch/Mueller/Roll, Stuttgart, Germany 11MVZ Karlsplatz – HIV Research and Clinical Care Centre, Munich, Germany 12dagnae e.V., The German Association of Physicians Specialized in HIV Care, Berlin, Germany
Introduction
The number of people living with HIV/AIDS in the 50-year-plus age group is rising, especially in industrialized nations. With the widespread use of highly active antiretroviral therapy (HAART), physicians and patients are facing new challenges in the treatment and management of age-related comorbidities [1-5]. Several cohort studies showed that HIV-infected patients are at increased risk for individual comorbidities, i.e. for bone [1,3-5], renal [1,6,7], neurocognitive and cardiovascular disorders [1,8,9], as well as non-AIDS-defining malignancies [10]. The relative contribution of HIV-infection and HIV-related factors such as inflammation, immune activation and altered coagulation to the development of different comorbidities in aging individuals is still inconclusive. Other risk factors, lifestyle factors or the use of antiretroviral drugs may contribute to excess morbidity [2,8,11-17].
With this study, we tried to identify differences in the total burden of disease between HIV-infected patients and two HIV-negative comparator groups. The first group included patients without any severe chronic or malignant disease; the second included patients with diabetes mellitus type 2, another chronic disease representing a major global cause of premature morbidity and excess mortality [18-20].
Study design and study population
50/2010 (the 50-year-plus age group in 2010) was a non-interventional, multi-center, nationwide, prospective 18-month cohort study initiated by The German Association of Physicians Specialized in HIV Care (dagnae e.V.), comparing HIV-infected patients with two HIV-negative patient groups, i.e. patients with diabetes mellitus type 2 (DM) and control patients. Inclusion criteria were an age ≥50 years and the absence of any life-threatening or malignant disease requiring treatment. Additional inclusion criteria for HIV-infected patients were chronic HIV-infection and no acute opportunistic infection at the time of study entry.
The inclusion criterion for the DM group was a clinical diagnosis of DM, verified by its documentation in the patient file, or HbA1c levels ≥6.5%, fasting plasma glucose levels ≥126 mg/dL or the use of anti-diabetic drugs.
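The DM-group eligibility rule can be expressed as a small predicate. The sketch below mirrors the criteria as stated (documented diagnosis, HbA1c ≥6.5%, fasting plasma glucose ≥126 mg/dL, or anti-diabetic medication); argument names are illustrative and not taken from the study's case report form.

```python
# Sketch of the DM-group inclusion rule described above. Thresholds are
# from the text; argument names are illustrative assumptions.
from typing import Optional

def eligible_dm(documented: bool,
                hba1c: Optional[float] = None,
                fasting_glucose: Optional[float] = None,
                on_antidiabetics: bool = False) -> bool:
    """Return True if any of the four DM inclusion criteria is met."""
    return (documented
            or (hba1c is not None and hba1c >= 6.5)
            or (fasting_glucose is not None and fasting_glucose >= 126)
            or on_antidiabetics)

print(eligible_dm(False, hba1c=7.1))            # True
print(eligible_dm(False, fasting_glucose=110))  # False
```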
The inclusion criterion for control patients was the absence of any severe or unstable chronic disease at baseline. Health conditions and comorbidities that did not meet the criteria for exclusion in either study group were hypertension, dyslipidemia, depression, history of heart attack or cerebrovascular event (if more than one year ago) and chronic infection with hepatitis C or hepatitis B virus (if the diagnosis was obtained more than one year ago). Concomitant hepatitis C infection requiring treatment was an exclusion criterion. Patients with a history of intravenous drug use receiving replacement therapy were not excluded.
Recruitment of women into a group was stopped once their proportion exceeded 20% (in line with the gender distribution of the HIV-infected population in Germany).
Regulatory requirements
Prior to the conduct of the study, approvals of the institutional review boards, i.e. the competent ethics committees, were obtained for all study sites. All subjects had to give written informed consent before inclusion in the study.
Data collection
Biological, clinical and psychosocial parameters were collected at baseline and at months 6, 12 and 18. The case report form included sociodemographic variables, HIV-related history, vital signs, body weight, laboratory parameters, relevant comorbidities and conditions, and concomitant medication. At each visit, patients completed questionnaires covering lifestyle factors and symptoms of aging. For the male study population, the Aging Males' Symptoms (AMS) scale was used, a self-reported, 17-item questionnaire comprising a somatic, a psychological and a sexual subscale. The total AMS score ranges from 17 to 85; a total score ≥37 indicates moderate or severe subjectively perceived complaints [21]. Women were interviewed regarding the onset of menopause and related symptoms.
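The AMS scoring rule described above (17 items, total range 17-85, with ≥37 flagging moderate or severe complaints) can be written as a small helper; the function name and the assumption that each item is scored 1-5 are our own illustrative choices:

```python
def ams_total(item_scores):
    """Sum the 17 AMS item scores and flag moderate/severe complaints.

    Assumes each of the 17 items is scored 1-5, so the total ranges
    from 17 to 85; a total of 37 or more counts as moderate or severe.
    """
    if len(item_scores) != 17:
        raise ValueError("the AMS scale has exactly 17 items")
    total = sum(item_scores)
    return total, total >= 37
```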
As an accepted indicator of weakness/frailty and health status, grip strength of the dominant hand was measured using a dynamometer (Saehan Dynamometer, model SH5001) [22,23]. The prevalence of weakness was defined as grip strength of 30 kg or less for males and 18 kg or less for females [24,25]. Factors associated with weak grip strength were evaluated using logistic regression analysis. Covariables considered were HIV-infection, DM, BMI (<20 versus (vs) ≥20 kg/m²), number of documented comorbidities (≥2 vs <2), age (≥65 vs <65 years), gender and physical activity (<2x vs ≥2x 45 min/week). In sensitivity analyses, hepatitis B virus (HBV) and hepatitis C virus (HCV) infection were included as covariables in the model to adjust for potential confounding.
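The weakness cut-offs used here (grip strength ≤30 kg for males, ≤18 kg for females) amount to a simple sex-specific threshold rule; a minimal sketch (the function name is ours):

```python
def is_weak(grip_kg, sex):
    """Classify weakness from dominant-hand grip strength.

    Study definition: 30 kg or less counts as weak for males,
    18 kg or less for females.
    """
    threshold = 30.0 if sex == "male" else 18.0
    return grip_kg <= threshold
```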
The Hospital Anxiety and Depression Scale (HADS), a self-screening questionnaire, was used to assess anxiety and depression.
Health-related quality of life was evaluated using the 36-item Short Form Health Survey questionnaire (SF-36). Results will be presented elsewhere.
The 10-year risk for coronary heart disease (CHD) including myocardial infarction or coronary death was calculated using the Framingham risk equation considering age, the presence of diabetes mellitus, smoking, JNC-V blood pressure categories, and NCEP total cholesterol categories [26]. A 10-year risk ≥20% or a history of CHD were defined as high risk. Multivariate logistic regression analysis adjusting for age categories, diabetes mellitus, a positive family history for CHD and smoking was used to evaluate whether HIV-infection by itself represents an additional risk for CHD. Due to low numbers, patients aged ≥80 years were excluded from this evaluation.
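The high-risk definition above reduces to a simple rule: a calculated 10-year risk of at least 20%, or a documented history of CHD. A minimal sketch (names are ours; the Framingham risk itself is computed elsewhere from age, diabetes status, smoking, blood pressure and cholesterol categories):

```python
def high_chd_risk(ten_year_risk, history_of_chd):
    """High CHD risk: Framingham 10-year risk >= 20% or a history of CHD.

    `ten_year_risk` is expressed as a fraction (0.20 = 20%).
    """
    return ten_year_risk >= 0.20 or bool(history_of_chd)
```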
Statistics
Continuous variables were described as medians with interquartile ranges (IQR). The Mann-Whitney U test and the Kruskal-Wallis test were used for comparison of continuous variables between two and more groups. The Wilcoxon signed rank test was used to test for significant changes in continuous variables within groups. Fisher's exact test and the chi-square test were used for comparison of frequencies between two and more groups. The McNemar test was used to test for significant changes in frequencies within groups. The P-level for significance was P<0.05 (n.s. = not significant).
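As an illustration of what the Mann-Whitney U test measures, the U statistic is simply a pairwise "win count" between the two samples (a pure-Python sketch of the statistic only; the actual analyses would use a statistics package that also derives the p-value, and the sample values below are hypothetical):

```python
def mann_whitney_u(x, y):
    """U statistic for sample x against sample y.

    Counts, over all pairs (xi, yj), how often xi exceeds yj; ties
    contribute 1/2. The smaller of U(x, y) and U(y, x) is then compared
    against a critical value, or converted to an exact or
    normal-approximation p-value.
    """
    u = 0.0
    for xi in x:
        for yj in y:
            if xi > yj:
                u += 1.0
            elif xi == yj:
                u += 0.5
    return u

# Hypothetical grip-strength samples (kg) for two groups:
a = [28, 31, 26, 35, 30, 27]
b = [33, 36, 34, 38, 32, 37]
u_ab = mann_whitney_u(a, b)  # far below n1*n2/2 = 18, so a tends lower
```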
Study population
The study population consisted of 761 participants including 255 patients with chronic HIV-infection (HIV group), 249 HIV-negative patients with diabetes mellitus type 2 (DM group) and 257 HIV-negative controls, all aged ≥50 years. The study participants were recruited in 37 German clinics between July 2008 and December 2009. The last follow-up visit was in August 2011.
Socio-demographic characteristics of the study population are shown in Table 1. In total, 21.2% of patients were female. Chronic hepatitis B was present in 2.4%, 0.8% and 1.2% of HIV, DM and control patients, respectively, while chronic hepatitis C was found in 5.9%, 2.0% and 2.7%. Replacement therapy in patients with a history of i.v. drug use was used in 2.0%, 0.4% and 2.0% of HIV, DM or control patients, respectively.
Physical constitution, activity and weakness
Physical constitution, physical activity and weakness at study entry are shown in Table 2. In the HIV group, significantly more patients had a low BMI <20 kg/m² than in the HIV-negative groups (10.5% vs ≤1%); this was true for both men and women. In contrast, a high BMI was most prevalent in the diabetes mellitus group. BMI did not change significantly during follow-up in any group. Physical activity was comparable between the HIV group and the control group (about half of the patients reported physical activity at least 2 times per week for ≥45 min), but was lower in the DM group.
In male patients, the median grip strength was significantly lower in the HIV group than in the DM group or in the control group. In women, we did not observe significant differences. Grip strength did
Symptoms of aging
The Aging Males' Symptoms (AMS) questionnaire was completed by 514 male patients at baseline and 425 patients at month 12. At baseline, median AMS summary scores were 37.0, 33.3 and 31.0 in HIV, DM and control groups, respectively. There were significant differences between the HIV and the control group (P<0.001) and between the DM and the control group (P=0.0039) but not between the HIV and DM group.
AMS summary scores ≥37 indicating moderate or severe complaints were significantly more prevalent in HIV and DM groups than in the control group. The prevalences were 51% in the HIV group (47% in patients with HIV-RNA <50 copies/mL, 75% in patients with HIV-RNA ≥50 copies/mL, P=0.008), 45% in the DM group compared to 29% in the control group (pairwise comparisons with the control group, P<0.0001 and P=0.002, respectively).
HIV-infection (OR 2.2, 95% CI 1.5-3.5) and diabetes mellitus (OR 1.9, 95% CI 1.2-2.9) were independently associated with a high AMS summary score after adjusting for age categories (60-69, 70-79 vs 50-59 years), high BMI (>28 vs ≤28 kg/m²) and stable partnership. Adjusting for HBV- and HCV-infection did not affect these associations. Stable partnership (OR 0.6, 95% CI 0.4-0.8) was also associated with a lower AMS score. By month 12, there was a small but significant increase in the AMS summary score of +2 points in each group.
In women, the onset of menopause was similar across the three patient groups. Median age was 50.
Cardiovascular risk factors and risk for coronary heart disease (CHD)
In total, 566 patients had complete data for calculation of CHD risk at the baseline visit (204 HIV, 196 DM, and 166 control patients). Among the three groups, 39.7%, 58.7% and 26.5% were at high risk for CHD, respectively (Figure 1). History of CHD and low plasma HDL (high-density lipoprotein) levels were significantly more frequent in the HIV and DM groups than in the control group. In contrast, HIV and control patients had significantly higher total cholesterol and LDL (low-density lipoprotein) levels than DM patients, who were significantly more frequently on lipid-lowering drugs (Table 3). Logistic regression analysis adjusting for gender, age categories, DM, smoking and positive family history for CHD showed that HIV-infection remained independently associated with high CHD risk (OR 1.6, 95% CI 1.0-2.5, P=0.046, Table 4).
Comorbidities, malignancies and other clinical manifestations
The prevalence of comorbidities or other medical disorders at baseline was highest in the DM group, followed by the HIV group. At least two comorbidities or other medical disorders were documented in 76.3% of the DM group, in 67.8% of the HIV group (70% in patients with HIV-RNA <50 copies/mL, 63% in patients with HIV-RNA ≥50 copies/mL) and in 59.1% of the control group (P<0.001) (76.0%, 67.1% and 57.9%, respectively, when excluding HBV- and HCV-infection). Pairwise comparisons showed significant differences between groups. For certain comorbidities, we observed significantly different prevalences in the three groups, with post-hoc tests showing significantly higher prevalences in HIV-infected patients than in control patients. The difference between HIV-infected patients and control patients was significant (P<0.001), as was the difference between DM and control patients (P=0.015). The corresponding relative incidence rates were significantly different from 1. The lower 95% confidence interval limits were 2.1 and 1.2, respectively.
Discussion
This large cohort study is, to our knowledge, the first to evaluate health conditions and the disease burden in aging HIV-infected patients in comparison to patients with another age-affected condition, diabetes mellitus type 2, and to individuals without any severe chronic disease. HIV-infection was associated with a higher burden of comorbidity, i.e. cardiovascular diseases and malignancies, when compared to patients without severe chronic disease. However, the overall prevalence of all comorbidities and other medical disorders was highest in patients with DM. For certain comorbidities, including renal disorders, neurological disorders or history of myocardial infarction, HIV-infected patients were comparably affected. The calculated risk of coronary heart disease was significantly higher in HIV-infected patients (of whom 96% received antiretroviral treatment) than in HIV-negative controls without chronic disease. However, the excess risk attributed to HIV-infection was lower than that attributed to diabetes mellitus type 2. Our results are in concordance with a systematic review including a meta-analysis and a large cohort study showing that people living with HIV are at increased risk of cardiovascular disease beyond that explained by cardiovascular risk factors [27,28].
The frequency and the burden of symptoms as assessed by the Aging Males' Symptoms (AMS) scale were comparable between HIV and DM patients. Both groups had significantly higher AMS summary scores than the control group. High symptom scores were even more common in HIV-infected patients with detectable plasma viremia than in patients with viral suppression to <50 HIV-RNA copies/mL. Symptoms of aging included psychological, somatic and sexual complaints.
Weakness, defined as low grip strength, was more prevalent in HIV-infected individuals than in the two HIV-negative groups. Grip strength is widely used as an indicator of physical functioning and is considered as an independent predictor of morbidity and mortality [23,29,30]. After adjusting for age, DM, BMI, and physical activity, the association between weakness and HIV-infection remained significant.
The use of concomitant drugs for the treatment of disorders which are often associated with DM was much higher in the DM group than in the two other groups, i.e. the use of antihypertensive agents, lipid lowering drugs and anticoagulants. Interestingly, despite a higher cardiovascular risk in HIV-infected patients in comparison to control patients, the use of antihypertensive drugs was less common in HIV- infected patients, and the use of lipid lowering drugs was comparable between HIV-infected and control patients. The use of psychotropic drugs was higher in HIV-infected patients than in the HIV-negative groups.
Although the incidence of malignancies was small during the 18-month follow-up, we observed a significantly higher incidence of non-AIDS-defining malignancies (NADM) in the HIV-infected group than in the control group. These findings are in concordance with data showing a higher incidence of NADM in HIV-infected persons in the HAART era than in the general population, adjusted for age, race, and gender [31,32]. Based on nationwide cohort data, the incidence of cancer among HIV-infected patients in France was 1.4 per 100 person-years for the year 2006, with NADM representing 68% of cases and including a large variety of entities. The relative risk for HIV-infected patients when compared to the general population in France was estimated to be 3.5 for men and 3.6 for women; it was particularly elevated in younger patients [33]. In our study, the incidence of malignancies was at least 2.1 times higher in HIV-infected patients than in control patients (lower limit of the 95% confidence interval); however, the observed difference in NADM incidence between HIV-infected patients and control patients may not only be attributed to HIV-infection but also to individual lifestyle factors such as smoking or alcohol consumption.

This study has several limitations. Since this was a cohort study, patients were not evenly distributed with respect to cofactors associated with the outcomes of interest. However, the presence of certain cofactors may be attributed to the chronic disease itself or, in the case of lifestyle factors, to the specific patient group. Therefore, an even distribution of covariables may not be desirable, as it would not have reflected the specific patient populations seen in clinical routine and would hence not have been appropriate for evaluating the disease burden in these populations. Gender and age distribution were brought into line during recruitment of patient groups (using frequency matching).
For estimating associations between outcomes of interest and HIV-infection or diabetes mellitus, confounding was minimized by either stratification of analyses or by multivariable analyses.
One strength of this study is that the majority of HIV-positive and HIV-negative patients sought care at the same centers and therefore received similar screening methods, therapeutic options and treatment of comorbidities. Although the sample size was large, the study was not powered to detect differences in incident comorbidities. Apart from incident malignancies (the presence of which was excluded at baseline), we focused on prevalent diseases and disorders and were able to detect specific differences between groups. Of note, we did not adjust the level of significance for multiple testing, and some outcomes showed only a weak association with HIV-infection. Concerning the DM group, factors associated with clinical outcomes, such as the duration of the disease and the indication for treatment, were not part of the assessment. Therefore, the DM group might have been very heterogeneous.
We were able to describe the burden of disease in three aging patient groups under medical treatment or medical observation. Aging antiretrovirally treated HIV-infected patients had a higher burden of comorbidity and symptoms than controls without severe chronic disease, but not a higher burden than HIV-negative patients with diabetes mellitus. We hence emphasize the need for prevention strategies and good screening tools with regard to cardiovascular, renal, neurologic and malignant diseases in this patient population.
Histone Deacetylase Inhibitors to Overcome Resistance to Targeted and Immuno Therapy in Metastatic Melanoma
Therapies that target oncogenes and immune checkpoint molecules constitute a major group of treatments for metastatic melanoma. A mutation in BRAF (BRAF V600E) affects various signaling pathways in melanoma, including mitogen-activated protein kinase (MAPK) and PI3K/AKT/mammalian target of rapamycin (mTOR). Target-specific agents, such as MAPK inhibitors, improve progression-free survival. However, BRAF V600E mutant melanomas treated with BRAF kinase inhibitors develop resistance. Immune checkpoint molecules, such as programmed death-1 (PD-1) and programmed death ligand-1 (PD-L1), induce immune evasion of cancer cells. MAPK inhibitor resistance results from the increased expression of PD-L1. Immune checkpoint inhibitors, such as anti-PD-L1 or anti-PD-1, are main players in immune therapies designed to target metastatic melanoma. However, melanoma patients show low response rates, and resistance to these inhibitors develops within 6–8 months of treatment. Epigenetic reprogramming, such as DNA methylation and histone modification, regulates the expression of genes involved in cellular proliferation, immune checkpoints and the response to anti-cancer drugs. Histone deacetylases (HDACs) remove acetyl groups from histone and non-histone proteins and act as transcriptional repressors. HDACs are often dysregulated in melanomas, and regulate MAPK signaling, cancer progression, and responses to various anti-cancer drugs. HDACs have been shown to regulate the expression of PD-1/PD-L1 and genes involved in immune evasion. These reports make HDACs ideal targets for the development of anti-melanoma therapeutics. We review the mechanisms of resistance to anti-melanoma therapies, including MAPK inhibitors and immune checkpoint inhibitors. We address the effects of HDAC inhibitors on the response to MAPK inhibitors and immune checkpoint inhibitors in melanoma.
In addition, we discuss current progress in anti-melanoma therapies involving a combination of HDAC inhibitors, immune checkpoint inhibitors, and MAPK inhibitors.
INTRODUCTION
Melanoma arises from melanocytes in the skin or mucosa (Chodurek et al., 2014). Metastatic melanoma accounts for about 1-2% of skin cancers (Jiang et al., 2017). However, it is responsible for 90% of all mortality in skin cancer patients. Over the past decade, a better understanding of the molecular basis of melanoma has led to the development of anti-cancer drugs that target molecular signaling pathways that are activated in malignant metastatic melanoma.
The tumor microenvironment plays a major role in the proliferation of melanoma cells and anti-cancer drug resistance . The tumor microenvironment consists of cancer cells, endothelial cells, fibroblasts, and innate and adaptive immune cells. Cancer cells interact with immune cells such as natural killer (NK) cells, macrophages (M1/M2), myeloid-derived suppressor cells (MDSCs), and cytolytic T lymphocytes (CTLs). Cancer cells can evade the antitumor response of CTLs (Freeman et al., 2019). Immune checkpoint molecules, such as PD-1 and PD-L1, regulate the interactions between cancer cells and immune cells. The interaction between PD-1 and PD-L1 leads to immune evasion of cancer cells (Hei et al., 2020). Immunotherapy aims to suppress immune evasion (tumor tolerance) by targeting the interactions between cancer cells and immune cells.
Over the last decade, immune checkpoint inhibitors (nivolumab and pembrolizumab) targeting PD-1/PD-L1 interactions have been approved by the FDA. In a clinical trial of elderly patients (>75 years old) with metastatic melanoma, nivolumab (an anti-PD-1 antibody) showed clinical benefits and was well tolerated (Ridolfi et al., 2020). Pembrolizumab, an anti-PD-1 antibody, improved progression-free survival compared to BRAF inhibitors and PD-L1 inhibitors in a clinical trial of stage III melanoma (Lorenzi et al., 2019). A phase Ib trial of avelumab, an anti-PD-L1 antibody, in 51 patients with stage IV unresectable melanoma showed an objective response rate (ORR) of 21.6% (Keilholz et al., 2019). Thirty-nine patients experienced side effects, including infusion-related reactions, fatigue, and chills (Keilholz et al., 2019).
Histone acetylation/deacetylation plays a critical role in the expression of genes involved in immune evasion of cancer cells (Knox et al., 2019). Histone modification is closely associated with cancer progression (Halasa et al., 2019). High expression levels of several HDACs have been associated with poor survival in cancer patients (Dembla et al., 2017). Thus, HDACs may regulate the expression of PD-1 and PD-L1. These reports suggest that HDACs may be targets for the development of anti-melanoma therapies.
Herein, we review the roles of signaling pathways and immune checkpoint molecules in melanoma progression and anti-cancer drug resistance. We address the roles of HDACs in the regulation of oncogenic signaling pathways and immune evasion by cancer cells. We also discuss current progress in combination therapies that employ histone deacetylases inhibitors, targeted treatments, and immune therapy for treatment of malignant melanoma.
THE MECHANISMS OF ANTI-CANCER DRUG RESISTANCE IN MELANOMA
Melanoma is a common and potentially lethal type of skin cancer. Almost half of all cutaneous melanomas have the BRAF V600E gene mutation that results in activation of MAPK signaling (Feng T. et al., 2019; Rossi et al., 2019; Woo et al., 2019). BRAF V600E mutant metastatic melanomas display activation of both MAPK-dependent and -independent signaling pathways for survival under MAPK inhibitor treatment in a PDX mouse model (Feng T. et al., 2019). BRAF/MEK inhibitors have some clinical benefits. However, melanoma patients develop resistance to these inhibitors within 6-8 months (Roskoski, 2018; Fujimura et al., 2019).
Anti-cancer drug resistance can be classified into innate and acquired resistance. Innate resistance exists even before treatment, while acquired resistance develops after treatment. Innate anti-cancer drug resistance is closely related to inherent gene mutations (Shinohara et al., 2019), drug efflux (Xiao et al., 2018, Figure 1A), and selection of cancer stem cells upon treatment (Green et al., 2019, Figure 1B). DNA damage repair (Figure 1A), phenotypic switching, epigenetic reprogramming (Figure 1C), enrichment of slow cycling cells (Figure 1C), and reactivation of molecular signaling pathways also play critical roles in anti-cancer drug resistance. A high level of ABCB5 (ATP-binding cassette transporter, subfamily B, member 5) is responsible for resistance to the BRAF inhibitor vemurafenib (Xiao et al., 2018). Enhanced DNA damage repair by NF-κB confers resistance to chemotherapy.
Anti-cancer drug resistance is associated with the presence of induced drug-tolerant cells (Kim et al., 2010, 2015; Al Emran et al., 2018). These induced drug-tolerant cells resulting from exposure to chemotherapy display histone lysine modifications, which are characteristic of epigenetic reprogramming (Al Emran et al., 2018). Exposure to vemurafenib enriches slow cycling melanoma cells expressing the H3K4-demethylase JARID1B (Roesch et al., 2013). Inhibition of mitochondrial function enhances sensitivity to vemurafenib by decreasing the expression of JARID1B (Roesch et al., 2013). Rapidly proliferating cancer cells, but not slow cycling cells, are the main targets of targeted therapy. Slow cycling cancer cells are enriched by anti-cancer drugs and confer resistance by activating various signaling pathways, including the WNT5A and EGFR pathways (Ahn et al., 2017). Figure 1 shows the mechanisms of anti-cancer drug resistance.

FIGURE 1 | The mechanisms of anti-cancer drug resistance. (A) Drug efflux by ABC transporter activity, drug inactivation, and alterations in drug targets lead to anti-cancer drug resistance. Increased DNA damage repair also leads to anti-cancer drug resistance. (B) Cancer stem cells survive anti-cancer drug treatment. Mutations (point mutations, gene amplifications, etc.) in these cancer stem cells lead to anti-cancer drug resistant phenotypes. Cancer stem cells that survive anti-cancer drug treatment proliferate and lead to anti-cancer drug resistance (intrinsic resistance). CSC denotes cancer stem cell. (C) Slow-cycling drug-tolerant cells are selected during treatment by reversible epigenetic reprogramming. Further epigenetic reprogramming gives rise to re-proliferating drug-resistant cells. Genetic mutation in slow-cycling drug-tolerant cells also gives rise to permanent drug-resistant cells. HATs denotes histone acetyl transferases. (D) Mesenchymal transition is closely related to increased drug resistance and invasiveness. MET denotes mesenchymal-epithelial transition. (E) Repeated exposure to BRAF inhibitors spurs resistance. BRAF inhibitor resistance develops from gene amplification, gene overexpression, genetic mutations, activation of signaling pathways, and upregulation of HDACs.

Tumor heterogeneity and plasticity (phenotypic switching) are responsible for resistance to various anti-cancer drugs (Su et al., 2019, Figure 1D). Tumor heterogeneity includes cell type heterogeneity and genetic heterogeneity. These characteristics make it almost impossible to rely on a single therapy for cancer treatment. Melanoma cells switch between differentiated (proliferative) and de-differentiated (invasive) states during metastatic progression. Phenotypic switching toward the de-differentiated state leads to resistance to BRAF and MEK inhibitors (Granados et al., 2020). BRAF inhibitor treatment induces mesenchymal transition, which leads to BRAF inhibitor resistance (Su et al., 2019).
BRAF/MEK inhibitor resistance in melanoma is associated with increased expression of EGFR (Ahn et al., 2017; Dratkiewicz et al., 2019). Resistance to BRAF inhibitors (dabrafenib or vemurafenib) results from BRAF amplification, AKT mutation, N-RAS mutation, MEK1/MEK2 mutation, and high levels of insulin-like growth factor-1 receptor (IGF-1R) in BRAF V600E mutant melanomas (Rizos et al., 2014, Figure 1E). The AKT1(Q79K) mutation also confers resistance to BRAF inhibitors (vemurafenib or dabrafenib) via amplification of PI3K-AKT signaling (Shi et al., 2014). Resistance to BRAF inhibitors (vemurafenib or dabrafenib) also results from alterations in the MAPK pathway, such as MAP2K2, and in melanocyte inducing transcription factor (MITF) (Van Allen et al., 2014). Melanoma cells can adapt to the drugs through phenotypic switching (plasticity), which results in resistance to targeted therapies such as BRAF and MEK inhibitors (Richard et al., 2016; Hartman et al., 2020). MITF, a regulator of melanoma cell plasticity, shows heterogeneous expression in cancer cell subpopulations (Vachtenheim and Ondrusova, 2015). Low MITF expression is associated with invasion, while high MITF expression favors cellular proliferation (Vachtenheim and Ondrusova, 2015). MITF regulates invasion of melanoma cells through a negative feedback loop with Notch signaling (Golan and Levy, 2019). Therapy-resistant melanomas show low expression of MITF (Ahmed and Haass, 2018). High MITF levels are found in more than 20% of melanomas following MAPK inhibitor treatment (Van Allen et al., 2014; Smith et al., 2016). MAPK inhibition leads to increased expression of MITF, which counteracts the effect of the MAPK inhibitor (Smith et al., 2019). Reactivation of MAPK signaling leads to activation of the PI3K-mTOR signaling pathway, which confers resistance to the BRAF inhibitors vemurafenib and dabrafenib (Welsh et al., 2016).
Resistance to the BRAF inhibitor SB-590885 results from activation of IGF-1R/PI3K signaling (Villanueva et al., 2010). Resistance to PLX4720, an inhibitor of BRAF, results from upregulation of HDACs, based on the fact that pan-HDAC inhibitors overcome resistance to PLX4720 (Lai et al., 2012). Trametinib-resistant melanoma cells show increased expression of HDACs 2/5/6/10/11 (Booth et al., 2017). A combination of vemurafenib and the MEK inhibitor trametinib increases the expression of HDAC8 in melanoma cells. This increased expression of HDAC8 leads to the activation of MAPK signaling via receptor tyrosine kinases, such as EGFR and the proto-oncogene MET, which confers resistance to the combination of BRAF inhibitor and MEK inhibitor. It is therefore probable that HDAC8 is responsible for acquired resistance to the BRAF and MEK inhibitors. Figure 1E shows the mechanisms associated with resistance to BRAF inhibitors.
These reports suggest that targeting signaling pathways and/or HDACs may overcome resistance to BRAF inhibitors. Cancers are generally heterogeneous and multiclonal. An individual cancer reflects differences in mutations of various genes. Therefore, combinations of anti-cancer drugs are employed as anti-cancer therapy. Aberrant activation of the MAPK pathway is a major feature in most cases of melanoma (Dikshit et al., 2018). A combination of BRAF and MEK inhibitors has been employed for the treatment of metastatic melanomas harboring the BRAF V600E mutation. The anti-tumor effects of these BRAF inhibitors are enhanced by co-administration of MEK inhibitors (Dummer et al., 2018). The combination of dabrafenib and trametinib results in stronger inhibition of the activity of specific tyrosine kinases than does treatment with dabrafenib alone (Krayem et al., 2020). The combination of a BRAF inhibitor (dabrafenib) and a MEK inhibitor (trametinib) increases the expression of KIT, a tumor suppressor, and also induces alterations in CCND1, RB1, and MET in patients with BRAF V600E metastatic melanoma (Louveau et al., 2019). The combination of cobimetinib (a MEK inhibitor) and vemurafenib (a BRAF inhibitor) improved progression-free survival compared to vemurafenib monotherapy in patients with BRAF V600 mutant metastatic melanoma in a phase 3 clinical trial (12.3 months vs. 7.2 months; Ascierto et al., 2016). In a phase III clinical trial of patients with advanced melanoma harboring the BRAF V600E mutation, the combination of BRAF and MEK inhibitors (dabrafenib plus trametinib) increased the 3-year relapse-free survival rate compared to placebo treatment (58% vs. 39%) (Long et al., 2017).
Blockade of MAPK signaling pathway with BRAF and MEK inhibitors induces favorable responses, but most patients eventually develop resistance to these inhibitors. Melanoma patients harboring the BRAF V600E mutation display primary resistance. Prolonged treatment with BRAF/MEK inhibitors induces acquired resistance (Atzori et al., 2020). These reports suggest that targeting molecular reprogramming induced by BRAF/MEK inhibitors is necessary to treat melanomas.
THE ROLES OF HDACs IN MELANOMA GROWTH AND ANTI-CANCER DRUG RESISTANCE
HDACs deacetylate the lysine residues of histones, which prevents transcription factor access (Guan et al., 2020). The HDAC family can be subdivided into four categories: class I HDACs comprise HDAC1, HDAC2, HDAC3, and HDAC8, which are expressed in most tissues and localized in the nucleus. Class IIa HDACs (HDAC4, HDAC5, HDAC7 and HDAC9) are present in the nucleus and cytoplasm. Class IIb HDACs (HDAC6 and HDAC10) are expressed in a tissue-specific manner and localized in the cytoplasm. HDAC11, the sole class IV HDAC, is present in the nucleus (Sahakian et al., 2015). Classes I, II, and IV HDACs require Zn²⁺ in their catalytic site, whereas class III HDACs require NAD⁺ for their deacetylase activity (Figure 2). Class III HDACs comprise the seven sirtuin proteins (SIRT1-7), which are homologous with the yeast protein Sir2. Inhibitors targeting classes I, II, and IV HDACs bind to the catalytic core of the Zn²⁺-binding site. Figure 2 shows the classification, functional domains, and inhibitors of HDACs.
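The classification above can be captured as a small lookup table (an illustrative data structure; the key names are ours, and the class III localization is left unspecified because the text above does not state it):

```python
# HDAC classes as described above; "cofactor" is the catalytic requirement.
HDAC_CLASSES = {
    "I":   {"members": ["HDAC1", "HDAC2", "HDAC3", "HDAC8"],
            "cofactor": "Zn2+", "localization": "nucleus"},
    "IIa": {"members": ["HDAC4", "HDAC5", "HDAC7", "HDAC9"],
            "cofactor": "Zn2+", "localization": "nucleus/cytoplasm"},
    "IIb": {"members": ["HDAC6", "HDAC10"],
            "cofactor": "Zn2+", "localization": "cytoplasm"},
    "III": {"members": [f"SIRT{i}" for i in range(1, 8)],
            "cofactor": "NAD+", "localization": None},  # not specified above
    "IV":  {"members": ["HDAC11"],
            "cofactor": "Zn2+", "localization": "nucleus"},
}

def hdac_class(name):
    """Return the class label for a given HDAC or sirtuin name."""
    for label, info in HDAC_CLASSES.items():
        if name in info["members"]:
            return label
    raise KeyError(name)
```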
Chromatin state changes regulated by HDACs are closely associated with melanoma progression (Al Emran et al., 2018; Emran et al., 2019; Luo et al., 2020). Resistance to BRAF inhibitors results from increased expression of HDACs (Booth et al., 2017; Emmons et al., 2019). Downregulation of peroxisome proliferator-activated receptor γ coactivator 1-α (PGC1α) expression by H3K27me3 suppresses melanoma cell invasion (Luo et al., 2020). Aberrant expression or dysregulation of HDACs, or imbalances between HDACs and histone acetyltransferases (HATs), promotes cancer progression (Krumm et al., 2016). Induced drug-tolerant melanoma cells show an increased level of H3K9me3 and loss of H3K4me3/H3K27me3 (Al Emran et al., 2018). The loss of H3K4me3 in combination with increased DNA methylation of tumor suppressor genes leads to acquired anti-cancer drug resistance (Al Emran et al., 2018). Increased levels of H3K18ac and H3K27ac are responsible for multidrug resistance in renal cell carcinoma cells (Zhu et al., 2019). It is reasonable to conclude that epigenetic regulators, such as HDACs and HATs, can regulate cancer cell growth and the responses to anti-cancer drugs.
HDACs regulate the expression levels of genes involved in melanoma cell proliferation (Kim et al., 2015; Chen et al., 2019). Malignant melanoma cells display high levels of HDAC1/2/3 compared to normal cells (Krumm et al., 2016). High expression of HDAC1 is seen in prostate cancers and breast cancers (Gameiro et al., 2016; Tang et al., 2017). Apicidin, an inhibitor of HDAC2 and HDAC3, decreases the expression of Notch1 by decreasing the level of H3K27ac (Ferrante et al., 2020). Notch1 signaling suppresses anti-tumor immunity by increasing the expression of TGF-β1. Increased expression of HDAC2 is seen in human melanoma cells (Malme3MR) that have been made resistant to various anti-cancer drugs by repeated exposure to the anti-cancer drug celastrol (Kim et al., 2010). Downregulation of HDAC4 leads to apoptosis of head and neck cancer cells. HDAC5 promotes invasion of hepatocellular carcinoma cells by increasing the expression of hypoxia-inducible factor-1. HDAC5 enhances the metastatic potential of neuroblastoma cells by decreasing the expression of CD9 via hypermethylation (Fabian et al., 2016). The hypermethylation of miR-589 promotes mesenchymal transition by upregulation of HDAC5 in non-small cell lung cancer cells (Liu et al., 2017). HDAC6, which is highly expressed in various melanoma cells, is necessary for invasion and metastasis of melanoma cells (Liu et al., 2016). HDAC6 deacetylates Lys-72 of extracellular signal-regulated kinase 1 (ERK1) and promotes ERK1 activity (Wu et al., 2018). HDAC6 binds to tyrosine-protein phosphatase non-receptor type 1 (PTPN1), activates extracellular signal-regulated kinase 1/2 (ERK1/2), inhibits apoptosis, and promotes melanoma cell proliferation. HDAC7 regulates the level of acetyl-H3K27 and is necessary for maintaining cancer stem cells (Caslini et al., 2019). HDAC9 is highly expressed in most gastric cancer cells and plays an oncogenic role (Xiong et al., 2019).
HDAC10 promotes angiogenesis by increasing ERK1/2 phosphorylation (Duan et al., 2017). The class I and II HDAC inhibitor trichostatin A (TSA) decreases the expression of genes driving the extracellular signal-regulated kinase (ERK)1/2 oncogenic pathway (Mazzio and Soliman, 2018).
Valproic acid (VPA), an inhibitor of HDACs, binds to HDAC2 and enhances sensitivity to anti-cancer drugs (Kalal et al., 2019). HDAC2 binds to cancer/testis antigens, such as CAGE, and leads to multi-drug resistance by decreasing p53 expression in melanoma cells (Kim et al., 2010, Figure 3A). HDAC5 confers resistance to tamoxifen by inducing deacetylation and nuclear localization of SOX9 (Xue et al., 2019). HDAC6 binds to tubulin β3 and confers resistance to anti-cancer drugs in Malme3MR cells (Kim et al., 2015). Malme3MR cells show a low expression level of HDAC3 compared to parental anti-cancer drug-sensitive melanoma cells (Malme3M). Overexpression of HDAC3 enhances sensitivity to anti-cancer drugs by disrupting the interaction between HDAC6 and tubulin β3 (Kim et al., 2015, Figure 3B). HDAC3 decreases the expression of tubulin β3 by binding to its promoter sequences (Kim et al., 2015). HDAC3 suppresses the angiogenic potential of Malme3MR cells by decreasing the expression levels of plasminogen activator inhibitor-1 (PAI-1) and vascular endothelial growth factor (VEGF) (Figure 3B). HDAC3 forms a negative feedback loop with miR-326 and enhances sensitivity to anti-cancer drugs in vitro and in vivo. Thus, increasing HDAC3 expression may overcome resistance to anti-cancer drugs, including BRAF and MEK inhibitors. The CAGE-derived 269GTGKT273 peptide (residues 269-273) binds to CAGE and enhances sensitivity to anti-cancer drugs in Malme3MR cells (Kim et al., 2017). CAGE interacts with EGFR and human epidermal growth factor receptor 2 (HER2) to confer resistance to gefitinib and trastuzumab in Malme3MR cells (Kim et al., 2016, Figure 3C). Thus, HDAC2-binding of CAGE can regulate the response to BRAF/MEK inhibitors. Table 1 shows the roles of HDACs in cancer cell proliferation, angiogenic potential, and metastasis.
The HDAC inhibitors vorinostat and valproic acid (VPA) decrease the migration potential of BRAF V600E mutant melanoma cells by increasing the expression of plasma membrane Ca²⁺-ATPase 4b (PMCA4b) (Hegedus et al., 2017). VPA increases acetylation of lysine residues 9, 18, 23, and 27 of histone H3 at the promoter region of tissue-type plasminogen activator (Larsson et al., 2012). Vorinostat induces H3K9 acetylation to exert anti-cancer effects in urothelial carcinoma cells (Eto et al., 2019), and decreases the tumorigenic potential of drug-resistant melanoma cells (Wang et al., 2018). The HDAC inhibitor panobinostat decreases PI3 kinase activity and increases the expression levels of apoptotic proteins such as BIM and NADPH oxidase activator (NOXA) (Gallagher et al., 2018). Panobinostat increases the acetylation of STAT3 at lysine 685 (Gupta et al., 2012). MS-275, an inhibitor of class I HDACs, increases H3K27ac and HDAC7 expression in breast cancer cells (Caslini et al., 2019). The class IIa-specific inhibitor MC-1568 increases the expression of Rb protein and the level of H3K27 at the Rb promoter (Rajan et al., 2018). The HDAC6-specific inhibitor ACY241 decreases the number of Treg cells (CD4+CD25+FoxP3+), but increases the number of activated CD8+ T cells by activating AKT signaling to induce anti-cancer effects against multiple myeloma (Bae et al., 2018). HDAC6-specific inhibitors (tubastatin A and nexturastat) suppress melanoma cell proliferation by increasing the expression levels of tumor-associated antigens (TAAs) and human leukocyte antigen (HLA) class I (Woan et al., 2015). High levels of TAAs activate CD8+ T cells to suppress cancer progression (Qu et al., 2018). Tubastatin A increases acetylation of cystathionine γ-lyase (CSE) at lysine 73 (Chi et al., 2019). These reports suggest that HDAC inhibitors can regulate responses to anti-cancer drugs.
Class I HDAC inhibitors, such as VPA or MS-275, enhance the sensitivity of melanoma cells to the alkylating agents temozolomide, dacarbazine, and fotemustine by suppressing the double-strand break (DSB) repair pathway through decreased expression of RAD51 and Fanconi anemia complementation group D2 (FANCD2) (Krumm et al., 2016). The combination of trichostatin A (TSA) with etoposide increases the expression of p53 and reverses resistance to chemotherapy in melanoma cells (Monte et al., 2006). These reports imply a role for HDAC inhibitors in the response to BRAF/MEK inhibitors. The combination of a BRAF inhibitor, encorafenib, and an HDAC inhibitor, panobinostat, synergistically induces caspase-dependent apoptotic cell death by inhibiting PI3 kinase activity and decreasing the expression levels of anti-apoptotic proteins (Gallagher et al., 2018). Vorinostat enhances sensitivity to dabrafenib and trametinib by increasing the level of reactive oxygen species (ROS) in anti-cancer drug-resistant melanoma cells (Wang et al., 2018). Vorinostat enhances the efficacy of BRAF/MEK inhibitors in N-RAS and NF-1 mutant melanomas by suppressing DNA repair pathways (Maertens et al., 2019). The HDAC8 inhibitor PCI-34051 enhances sensitivity to BRAF inhibitors by increasing the acetylation of c-Jun at lysine 273. GPCR-mediated Yes-associated protein (YAP) activation and receptor tyrosine kinase (RTK)-driven AKT signaling confer resistance to MEK inhibition. The HDAC inhibitor panobinostat prevents MEK inhibition from activating YAP and AKT signaling (Faiao-Flores et al., 2019). These reports indicate that a combination of an HDAC inhibitor and a BRAF/MEK inhibitor may offer clinical benefits in patients with metastatic melanoma.
THE ROLE OF IMMUNE CHECKPOINT IN MELANOMA GROWTH AND ANTI-CANCER DRUG RESISTANCE
Cancer cells evade immune surveillance and progress by activating immune checkpoint pathways that suppress the anti-tumor immune responses of CTLs. Vemurafenib-resistant (VemR) cells display cross-resistance to melanoma antigen MART-specific CTLs and NK cells (Jazirehi et al., 2014). This indicates that a lack of immune surveillance is responsible for resistance to BRAF inhibitors. Understanding the mechanisms of immune evasion is necessary for overcoming resistance to targeted and immune therapy. Immune checkpoint molecules, such as PD-1 and PD-L1, promote cancer progression by activating MDSCs and pro-tumorigenic tumor-associated macrophages (TAMs or M2 macrophages), while inhibiting CTLs and NK cells. High PD-L1 expression is common in malignant melanomas. The expression levels of PD-L1 and PD-1 can predict the outcome of anti-PD-1 immune therapy in malignant melanoma (Ugurel et al., 2020). The BRAF V600E mutation leads to a high PD-L1 level in a MEK-dependent manner.
Treatment with the MEK inhibitor trametinib increases the expression of PD-L1 via STAT3 activation, which in turn enhances sensitivity to PD-L1 blockade (Kang et al., 2019, Figure 4B). Resistance to the MEK inhibitor BAY86-9766 results from increased expression of EGFR and PD-L1 (Napolitano et al., 2019). Vemurafenib resistance results from the increased expression of PD-L1 by YAP, an effector of Hippo signaling, in melanoma cells (Figure 4B). These reports suggest that immune checkpoint molecules can determine melanoma growth and the response to anti-cancer drugs.
HDAC INHIBITORS ACTIVATE IMMUNE SURVEILLANCE
The tumor microenvironment consists of cancer cells and stromal cells (for example, cancer-associated fibroblasts, endothelial cells, and innate and adaptive immune cells). Cancer-associated fibroblasts induce phenotypic switching of melanoma cells into a mesenchymal-like phenotype and activate PI3K signaling to confer resistance to BRAF inhibitors (Seip et al., 2016). Therefore, cellular interactions within the tumor microenvironment may regulate the response to anti-cancer drugs. PD-1/PD-L1 interactions lead to immune evasion (tumor tolerance) by inactivating CD8+ T cells (Figure 5A). MDSCs interact with CD8+ T cells via PD-L1 and inactivate CD8+ T cells by secreting TGF-β and IL-10 (Fleming et al., 2018, Figure 5A). TAMs, which are activated by IFN-γ released by CD4+ T helper cells, inactivate CD8+ T cells (Figure 5A). Melanoma cells activate MDSCs, but inactivate CD8+ T cells via PD-L1 (Figure 5A). Specific depletion of pro-tumorigenic CD163+ M2 macrophages (TAMs) leads to infiltration of CTLs and tumor regression (Etzerodt et al., 2019).
The combination of DNA methyltransferase (DNMT) and histone deacetylase inhibitors decreases the number of MDSCs through type I IFN signaling and activates CD8+ T and NK cell signaling (Stone et al., 2017). This implies that epigenetic modifications regulate interactions between cancer cells and immune cells. HDAC6-selective inhibitors (ricolinostat and citarinostat) enhance the anti-tumor effects of CTLs in melanoma patients by decreasing the expression of Forkhead box P3 (FOXP3) to suppress the functions of regulatory T cells (Laino et al., 2019, Figure 5B). The HDAC6 inhibitor ACY241 enhances the anti-tumor effects of antigen-specific CD8+ T cells by activating the AKT/mTOR/p65 pathways in solid tumors (Bae et al., 2018, Figure 5B). A combination of the HDAC inhibitor sodium butyrate and vemurafenib increases the expression of NK cell receptor (NKG2D) ligands to enhance recognition of vemurafenib-treated melanoma cells by NK cells (Lopez-Cobo et al., 2018). MS-275 induces anti-tumorigenic M1 macrophage polarization through the IFN-γ receptor/STAT1 signaling pathway, inhibits the function of MDSCs, and eliminates antigen-negative cancer cells in a caspase-dependent manner (Nguyen et al., 2018, Figure 5B). The HDAC inhibitor vorinostat increases the expression levels of HLA class I and II molecules on the cell surface to activate CTLs. These reports suggest that HDAC inhibitors may activate immune surveillance mechanisms to suppress melanoma growth and enhance sensitivity to immune checkpoint inhibitors.
HDAC INHIBITORS ENHANCE THE EFFICACY OF IMMUNE CHECKPOINT INHIBITORS
Immune checkpoint inhibitors, such as the anti-cytotoxic T-lymphocyte-associated protein-4 (CTLA-4) antibody ipilimumab and the anti-PD-L1 antibodies atezolizumab, durvalumab, and avelumab, have shown some clinical benefits in the treatment of patients with advanced-stage metastatic melanoma. The overall response to atezolizumab was 30% among 43 melanoma patients in a phase I clinical trial (Hamid et al., 2019). Anti-PD-1 antibodies, such as nivolumab and pembrolizumab, are also widely used to treat advanced melanoma (Fujimura et al., 2019).
Epigenetic modifications regulate the expression of genes involved in immune surveillance. Bromodomain and extra-terminal region (BET) proteins recognize acetylated lysines of histones and non-histone proteins (Rajendran et al., 2019). BET inhibitors suppress melanoma growth by decreasing the expression of PD-L1 while activating CD8+ T cells (Erkes et al., 2019). HDAC6 increases the expression of PD-L1 through STAT3 signaling, and selective inhibition of HDAC6 suppresses cancer progression in vivo (Lienlaf et al., 2016, Figure 5C). Inhibition of HDAC6 by MPT0G612 prevents IFN-γ from increasing the expression of PD-L1 and induces apoptosis by suppressing autophagy (Chen et al., 2019, Figure 5C). RGFP966 increases the expression of PD-L1 in dendritic cells, and the combination of RGFP966 with an anti-PD-L1 antibody suppresses murine lymphoma growth (Deng et al., 2019, Figure 5C).
The effect of immune checkpoint blockade is compromised by activation of MDSCs. A combination of the HDAC inhibitor VPA and an anti-PD-L1 antibody inhibits the functioning of MDSCs by decreasing the expression levels of IL-10, IL-6, and arginase I (ARG1) while activating CD8+ T cells (Adeshakin et al., 2020, Figure 5D). The HDAC6 inhibitor nexturastat A improves the efficacy of anti-PD-1 antibody by decreasing the number of pro-tumorigenic M2 macrophages (TAMs) while increasing the number of tumor-infiltrating NK cells and CD8+ T cells (Knox et al., 2019, Figure 5D). PD-1 blockade increases the expression of PD-L1 via pro-inflammatory cytokines such as IFN-γ (Knox et al., 2019). Nexturastat A prevents anti-PD-1 antibody from increasing the expression of PD-L1 (Knox et al., 2019). These reports indicate that HDAC inhibitors enhance the responses to immune checkpoint inhibitors by activating immune surveillance.
CONCLUSION AND PERSPECTIVES
To better understand the mechanisms of resistance to BRAF/MEK inhibitors in melanoma, identification of molecular signatures associated with resistance is necessary. Establishment of melanoma cell lines that are resistant to these inhibitors will make it possible to identify molecular signatures that may serve as targets for the development of anti-melanoma therapies. MicroRNAs (miRNAs) are small non-coding RNAs that play important roles in cellular proliferation, anti-cancer drug resistance, and cancer progression (Kim et al., 2017). miR-22 directly binds to the 3′ UTR of HDAC6 and suppresses cervical cancer cell proliferation (Wongjampa et al., 2018). miRNAs that target specific HDACs can overcome resistance to targeted and immune therapy. Downregulation of miR-589 promotes cancer malignancy by increasing the PD-L1 expression level (Liu et al., 2017). miR-146a, which is increased in metastatic melanoma, induces immune evasion of melanoma cells (Mastroianni et al., 2019). The combination of a miR-146a inhibitor and anti-PD-L1 improves survival in a mouse model of melanoma (Mastroianni et al., 2019). It is necessary to identify miRNAs that bind to the 3′ UTR of PD-L1 and/or PD-1. These miRNAs can be developed as anti-melanoma therapies in combination with HDAC inhibitors and immune checkpoint inhibitors.
Epigenetic modifications regulate cancer progression and anti-cancer drug resistance. Epigenetic modifications are reversible and dynamic. Thus, targeting HDACs has emerged as an attractive strategy for the treatment of various cancers. Reportedly, HDACs regulate the expression levels of immune checkpoint molecules. Thus, targeting HDACs may prove to be an effective strategy to overcome resistance to immune checkpoint blockade.
The FDA has approved four HDAC inhibitors for use in cancer patients: vorinostat (hydroxamic acid family), romidepsin (cyclic peptide family), belinostat (hydroxamic acid family), and panobinostat (hydroxamic acid family). These inhibitors have been approved for the treatment of cutaneous T-cell lymphoma and peripheral T-cell lymphoma. In a phase II clinical trial, some patients with advanced melanoma displayed an early response to vorinostat. However, the disease state in most of these patients was stable (Haas et al., 2014). Vorinostat therapy has many side effects, including fatigue, nausea, and lymphopenia (Haas et al., 2014). Vorinostat and the proteasome inhibitor marizomib have synergistic effects when used together in cancer cell lines derived from melanoma patients and are well-tolerated by melanoma patients (Millward et al., 2012). The combination of belinostat (an inhibitor of class I and II HDACs) with cisplatin and etoposide led to hematologic toxicity in a phase I clinical trial of advanced small cell lung cancer patients (Balasubramaniam et al., 2018). The combination of romidepsin and the DNA methyltransferase I inhibitor 5-aza-deoxycytidine displayed dose-limiting toxicity, including grade 4 thrombocytopenia, grade 4 neutropenia, and pleural effusion (O'Connor et al., 2019). The overall response rate to a combination of romidepsin and 5-aza-deoxycytidine in T-cell lymphoma patients was 55% (O'Connor et al., 2019). A combination of romidepsin and the BET inhibitor I-BET151 increases the expression of IL-6 and the number of antigen-specific CD8+ cells during vaccination for the treatment of melanoma (Badamchi-Zadeh et al., 2018). Panobinostat (a pan-deacetylase inhibitor) showed a very low response rate and highly toxic effects in a phase I trial of patients with metastatic melanoma (Ibrahim et al., 2016).
Panobinostat treatment was associated with high rates of nausea, vomiting, and fatigue in a phase I trial of metastatic melanoma patients (Ibrahim et al., 2016). The combination of panobinostat and the proteasome inhibitor carfilzomib had adverse effects, including thrombocytopenia (41%), fatigue (17%), and nausea/vomiting (12%), in a phase I trial of 32 patients with multiple myeloma (Kaufman et al., 2019). The objective response rate (ORR) and clinical benefit rate were 63% and 68%, respectively, in that same trial (Kaufman et al., 2019). Quisinostat, a hydroxamate-based HDAC inhibitor, targets both class I and II HDACs. According to the results of a phase I clinical trial, quisinostat shows a strong anti-tumor effect and is well-tolerated in metastatic melanoma patients (Venugopal et al., 2013). Table 2 describes clinical trials of HDAC inhibitors in various cancers, including melanoma.
To date, there have been no successful clinical trials involving a combination of HDAC inhibitors and immune checkpoint inhibitors. The HDAC-selective inhibitors that are currently in use have off-target effects. To overcome these off-target effects, it is necessary to design HDAC-specific inhibitors based on the structure of each HDAC. Identification of proteins that interact with individual HDACs may make it possible to devise new anti-melanoma therapies. We previously reported that a CAGE-binding peptide prevents CAGE from binding to GSK3β and enhances sensitivity to anti-cancer drugs (Kim et al., 2017). Peptides that bind to each HDAC and prevent interactions between each HDAC and its binding partner may circumvent off-target effects and enhance sensitivity to targeted and immune therapies.
Due to tumor heterogeneity and plasticity, combination therapy is required for the treatment of cancers, including melanomas. HDACs play major roles in the regulation of immune checkpoint molecules, cancer cell proliferation, and activation of oncogenic signaling pathways. It is reasonable to conclude that HDAC inhibitors in combination with targeted therapies and immune therapies can be employed as anti-melanoma therapies.
AUTHOR CONTRIBUTIONS
DJ wrote the manuscript. MY made the figures and tables. HJ helped in editing. YK and HJ provided intellectual input for the manuscript.
FUNDING
This work was supported by National Research Foundation grants (2020R1A2C1006996, 2017M3A9G7072417, and 2018R1D1A1B07043498) and a grant from the BK21 Plus Program.
Optic disc drusen and scleral canal size – protocol for a systematic review and meta-analysis
Background: Around one in forty patients are diagnosed with optic disc drusen (ODD) during their lifetime. Complications of these acellular deposits range from asymptomatic visual field deficits to artery occlusion and subsequent blindness. Still, the pathogenesis of their emergence remains controversial. In particular, it was suggested 50 years ago that a narrow disc and scleral canal is one factor leading to axoplasmic flow disturbance, which induces ODD formation. However, this hypothesis is still debated today. To evaluate the basis of this theory, we will conduct a systematic review and meta-analysis of studies evaluating the scleral canal size in patients with ODD and in healthy subjects.

Methods: We will search MEDLINE via PubMed, Cochrane, and EMBASE electronic databases to identify articles published before November 29, 2022 that measure the scleral canal size in patients with ODD and in healthy subjects. In addition, grey literature will be searched. The meta-analysis will include studies that include patients with a clinical or imaging diagnosis of ODD and healthy subjects. Additionally, we will perform a subgroup analysis to compare patients with buried ODD and patients with visible ODD. Extracted data from included studies will be presented descriptively, and effect sizes will be computed based on the recommendations from the Cochrane Collaboration handbook.

Discussion: The hypothesis that a narrow scleral canal is a risk factor of ODD has long been debated, and this systematic review and meta-analysis should disentangle the different views. Understanding the underlying factors driving the development of ODD should help us focus on patients at risk and develop strategies to prevent advanced stages of the disease in these patients. Besides, focusing on patients with small scleral canals should help us derive associated factors and provide a better understanding of the pathology.
Systematic review registration https://www.crd.york.ac.uk/prospero/display_record.php?ID=CRD42022375110.
Introduction
Optic disc drusen (ODD) are acellular deposits that are thought to result from axonal disintegration following axoplasmic flow disruption in the optic nerve head (1). The reported prevalence in adults varies from 0.2% (2) to around 2.0% (3, 4). Only 0.4% of children are thought to be affected. The diagnosis is often made incidentally in children with pseudo-papilloedema (5, 6) or in adults with visible drusen overlying the border of the disc. However, more than half of the patients have visual field deficits (blind spot enlargement, field constriction) due to retinal nerve fiber layer atrophy (7), and a small number of patients with large ODD will develop dramatic complications, such as choroidal neovascularization, central artery occlusion or anterior ischemic neuropathy (8)(9)(10)(11)(12). Understanding underlying risk factors could allow clinicians to screen patients at risk and undertake a more specific follow-up to evaluate the evolution of the ODD and their consequences.
It has long been proposed that ODD are more likely to emerge in patients with a narrow scleral canal, as the latter is the location of increased axonal mechanical constraints. Several studies have been undertaken to test this hypothesis, but with diverging results (13)(14)(15). However, several factors (including the location of the ODD, the age of the patients, and the instrument used for measurement) are likely to influence the outcome. Therefore, the association between the presence of ODD and the size of the disc and scleral canal is worth exploring in a systematic way.
The anterior opening of the optic nerve scleral canal is, by definition, the anatomic entrance to the scleral canal at the level of the sclera. It is mostly evaluated using either fundus pictures, where it corresponds to the limits of the disc, or optical coherence tomography (OCT). In most studies, measurements at the level of the Bruch's membrane opening (BMO) are considered as proxies of the measurements at the level of the anterior opening of the optic nerve scleral canal (16, 17). Indeed, the BMO is well defined on OCT (14, 15, 18, 19) and seems to remain stable over time and conditions (17, 20). High-resolution enhanced depth imaging spectral-domain OCT (EDI SD-OCT) and swept-source OCT (SS-OCT), in particular, provide greater penetration and a better characterization of deep structures, with fewer artefacts induced by the drusen themselves (21). EDI SD-OCT with scan averaging is the ODD diagnostic modality recommended by the Optic Disc Drusen Studies Consortium (22). It has proven equivalent to SS-OCT in that regard (21).
This systematic review and meta-analysis will thus aim at evaluating the mean difference of the scleral canal size at the level of the BMO between patients with ODD and healthy controls, with a secondary focus on patients with buried ODD versus patients with visible ODD.
Two main objectives will be evaluated:
• Mean difference of the scleral canal size at the level of the BMO using fundus pictures between patients with ODD and healthy controls.
• Mean difference of the scleral canal size at the level of the BMO using OCT (SD-EDI or SS) between patients with ODD and healthy controls.
Because we expect that patients with buried ODD and patients with visible ODD might differ, we will also undergo a subgroup analysis and compute the following outcomes:
• Mean difference of the scleral canal size at the level of the BMO using fundus pictures between patients with buried ODD and patients with visible ODD.
• Mean difference of the scleral canal size at the level of the BMO using OCT (SD-EDI or SS) between patients with buried ODD and patients with visible ODD.
Methods/design
The literature search and analysis will follow the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) (23) (see Supplementary File 1) and Meta-analysis of Observational Studies in Epidemiology (MOOSE) (24) guidelines.
Search strategy
We will search MEDLINE via PubMed, Cochrane, and EMBASE electronic databases to identify articles published before November 29, 2022 that measure the mean difference of the scleral canal size at the level of the BMO between patients with ODD and healthy controls or between patients with buried ODD and patients with visible ODD. In addition, grey literature will be searched in Google Scholar, Greylit.org, the World Health Organization Clinical Trials Search Portal, ClinicalTrials.gov and the European Union Clinical Trials Register. All reference lists and bibliographies of included studies will be reviewed for potentially relevant studies that could be missed by this literature search.
The search will involve the following MeSH keywords: optic AND (disk OR disc OR nerve) AND drusen AND (canal OR area OR size OR measure OR crowded OR small).
Inclusion criteria
Randomized controlled and non-randomized controlled trials, as well as observational studies, will be eligible for inclusion. Inclusion criteria will be patients with a clinical or imaging (autofluorescence, B-scan ultrasound, OCT, CT scan) diagnosis of ODD.
Exclusion criteria
Articles with previously published data (review, meta-analysis, follow-up study) and case reports will be excluded. (Abbreviations: EDI, enhanced depth imaging; HS, healthy subject; OCT, optical coherence tomography; ODD, optic disc drusen; SD-OCT, spectral-domain OCT; SS-OCT, swept-source OCT; TD-OCT, time-domain OCT.) We will exclude
articles of studies that do not include people with ODD, that do not quantify the size of the scleral canal, that do not have a control group (either HS for patients with ODD or visible ODD for patients with buried ODD), or that include only syndromic ODD (ODD associated with a known predisposing syndrome, such as pseudoxanthoma elasticum, retinitis pigmentosa, Usher syndrome, Down syndrome, Alagille syndrome, or Noonan syndrome).
We will exclude from the meta-analysis (but include in the systematic review and the sensitivity analysis) studies relying on time-domain OCT (TD-OCT) or non-EDI SD-OCT for performing the measurements of the scleral canal size at the level of the BMO. Likewise, for the second main objective and the subgroup analysis (measurements based on OCT), only the studies relying on gold-standard, state-of-the-art OCT (EDI SD-OCT or SS-OCT) to exclude ODD and define a normal optic nerve according to the Copenhagen Consortium (15) will be included. Articles that do not provide appropriate data for pooling the outcomes despite authors being contacted for missing material will also be excluded. Data (reported or obtained from one of the authors) will be considered sufficient in one of the three following situations: sample sizes, means and standard deviations for both groups considered; sample sizes, medians and all four quartiles for both groups considered; raw values for every patient for both groups considered. Potentially eligible studies will be screened for eligibility by AVJ. We will import articles to Zotero, and all articles will be reviewed (title, abstract and main text when needed) to discard those that do not meet the criteria. Data of included papers will then be extracted and the studies will be assessed for risk of bias.
Risk of bias appraisal
We will assess the quality of included studies through a domain-based quality assessment grid adapted from the National Institutes of Health quality assessment tool for case-control studies (25, 26). The assessment will be performed independently by two review authors (AVJ and MR), each blinded to the score given by the other. They will later discuss discrepancies until they reach consensus. If no consensus is reached, a third author (DBG) will arbitrate. Publication risk of bias will be characterized using Egger's statistical test and visual inspection of the funnel plot, which represents the estimated effect size (horizontal axis) versus its standard error (vertical axis). Asymmetry of the inverted funnel shape favors publication bias. The data extraction tables will be pilot-tested and refined before extraction.
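As an illustration of the asymmetry check, Egger's test can be framed as an ordinary-least-squares regression of the standardized effect (effect/SE) on precision (1/SE); an intercept far from zero suggests funnel-plot asymmetry. The sketch below is illustrative only (the function name is ours, and no significance test is computed):

```python
def egger_intercept(effects, ses):
    """Egger's regression sketch: fit (effect/se) = slope*(1/se) + intercept
    by ordinary least squares and return the intercept.  An intercept far
    from zero suggests funnel-plot asymmetry (publication bias)."""
    ys = [e / s for e, s in zip(effects, ses)]  # standardized effects
    xs = [1 / s for s in ses]                   # precisions
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return my - slope * mx
```

With identical true effects across studies (no small-study effect), the points fall on a line through the origin and the intercept is essentially zero; in practice the intercept would be tested against zero with a t-test.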
Data extraction and analysis
The means and standard deviations will be extracted when available. If the results are reported as medians and IQRs, we will search the protocol (if available) and methods to determine whether the data were skewed or whether this was simply a preference of the authors, with no test of normality even though the sample size was large enough to expect a Gaussian distribution. If the choice is not explained and the sample size is above 50, we will assume a normal distribution and apply the following transformation formulae: mean = median and SD = IQR/1.35. In any other case, we will use the formulae by Luo et al. (27) and Shi et al. (28):

mean ≈ w1 · (a + b)/2 + w2 · (q1 + q3)/2 + (1 − w1 − w2) · median, where w1 = 2.2/(2.2 + n^0.75) and w2 = 0.7 − 0.72 · n^(−0.55);

SD ≈ (b − a)/q1(n) + (q3 − q1)/q2(n), where q1(n) = (2 + 0.14 · n^0.6) · f^(−1)((n − 0.375)/(n + 0.25)), q2(n) = (2 + 2/(0.07 · n^0.6)) · f^(−1)((0.75n − 0.125)/(n + 0.25)), and f^(−1)(z) is the upper z-th percentile of the standard normal distribution; a is the minimum value, q1 the first quartile, q3 the third quartile, and b the maximum value.
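The transformation formulae above can be expressed directly in code. The Python below is a hedged sketch under the assumption that the full five-number summary (min, q1, median, q3, max) is reported; the function names are ours:

```python
from statistics import NormalDist

def estimate_mean(a, q1, med, q3, b, n):
    """Luo et al. (2018): estimate the sample mean from a five-number summary."""
    w1 = 2.2 / (2.2 + n ** 0.75)
    w2 = 0.7 - 0.72 * n ** -0.55
    return w1 * (a + b) / 2 + w2 * (q1 + q3) / 2 + (1 - w1 - w2) * med

def estimate_sd(a, q1, q3, b, n):
    """Shi et al. (2020): estimate the sample SD from a five-number summary."""
    inv = NormalDist().inv_cdf  # quantile function of the standard normal
    t1 = (2 + 0.14 * n ** 0.6) * inv((n - 0.375) / (n + 0.25))
    t2 = (2 + 2 / (0.07 * n ** 0.6)) * inv((0.75 * n - 0.125) / (n + 0.25))
    return (b - a) / t1 + (q3 - q1) / t2
```

For a symmetric summary, the weighted mean collapses to the median, as expected.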
If neither one of those data is available, the raw data will be sought and retrieved.If none of this material is available, it will be requested from the corresponding author (he will be contacted up to three times via e-mail).If this latter cannot provide the information, the study will be excluded from the meta-analysis.
When data are not available in the main text, we will search Supplemental Materials for more detailed information.If data are only available by graphical representation, the assessors (AVJ and MR) will use Plot Digitizer to extract data from graphs: the final value will be the mean of these two extractions.
Strategy for data synthesis
Extracted data from included articles will be presented descriptively, and effect sizes will be computed based on the recommendations from the Cochrane Collaboration handbook and Cochrane Review Manager v5.3.
Our preliminary search suggests that the mean diameter and the total area of the optic disc are the two most common parameters used to describe optic disc size. Because the calculation of the mean diameter is more straightforward, we will report the mean diameter only. In cases where the mean diameter is not reported, we will transform the reported measure using the following formulae, which assume that the optic disc can be approximated by a circular disc (29):
- The reported measures are the maximal and minimal diameters: mean diameter = (maximal diameter + minimal diameter)/2;
- The reported measure is the horizontal diameter: mean diameter = horizontal diameter;
- The reported measure is the total area: mean diameter = 2 · sqrt(total area/π).
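The conversions above can be expressed compactly in code. This is a hedged Python sketch (function names are ours) of the circular-disc approximation:

```python
import math

def mean_diameter_from_axes(d_max, d_min):
    """Mean diameter as the average of the maximal and minimal diameters."""
    return (d_max + d_min) / 2

def mean_diameter_from_area(total_area):
    """Mean diameter of a circular disc with the given total area."""
    return 2 * math.sqrt(total_area / math.pi)
```

As a sanity check, a disc of area π has diameter 2.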
Extracted data will then be pooled to derive Hedges' standardized mean difference. We will apply a fixed-effects model when I², the percentage of variation across studies due to heterogeneity rather than chance, is low to moderate (I² < 50%) (30,31); otherwise, we will fit a random-effects model. An effect will be considered significant when its 95% confidence interval excludes the null value. The between-study variance, τ², will be estimated using the Restricted Maximum-Likelihood method (30).
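For reference, Hedges' g is the bias-corrected standardized mean difference. A minimal Python sketch (illustrative only; the actual computation will follow the Cochrane recommendations, and the function name is ours):

```python
import math

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Hedges' g: Cohen's d times the small-sample bias correction J."""
    df = n1 + n2 - 2
    s_pooled = math.sqrt(((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2) / df)
    d = (m1 - m2) / s_pooled
    j = 1 - 3 / (4 * df - 1)  # bias-correction factor
    return j * d
```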
We will use R v4.0.3 with the 'Metafor' package for the statistical analysis and the plots.
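The model-choice rule above (fixed-effects pooling when I² < 50%) can be illustrated with inverse-variance weighting. This Python sketch is for intuition only (the real analysis will use R's metafor); it computes the pooled effect, its standard error, and I² from Cochran's Q:

```python
import math

def pool_fixed(effects, variances):
    """Inverse-variance fixed-effect pooling with Cochran's Q and I^2 (in %)."""
    w = [1 / v for v in variances]
    pooled = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    se = math.sqrt(1 / sum(w))
    q = sum(wi * (yi - pooled) ** 2 for wi, yi in zip(w, effects))
    df = len(effects) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return pooled, se, i2
```

If the returned I² is 50% or above, a random-effects model (with τ² estimated by REML) would be fit instead.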
Sensitivity analysis
Two sensitivity analyses will be performed. First, we will explore the impact of the hypothesis that the optic disc can be approximated by a circular disc. To that end, we will analyse only studies that computed the mean diameter.
Second, we will explore the impact of choosing only studies with recent state-of-the-art OCT modalities.To do so, we will add studies using TD-OCT and non-EDI SD OCT to the analysis.
In both cases, reporting will be done in a summary table.
The overall quality of the evidence for each outcome will be evaluated by using the GRADE criteria following the Cochrane Collaboration recommendations if enough RCTs and interventional studies are included (32).
Discussion
Identifying the factors responsible for the emergence of ODD may help develop a better screening protocol and prevent dramatic complications through earlier diagnosis and care. We are not aware of any means to enlarge the scleral canal: it would therefore not be a modifiable risk factor. Neither are we able to predict the impact of widening the scleral canal. However, should this study support the association between a narrow scleral canal and the presence of ODD, it would allow defining a better population for studies evaluating the impact of modifying other potential risk factors or introducing preventive treatments. In that regard, the potential interest of lowering the intra-ocular pressure is still debated (33), and neuroprotective treatments are being developed, which might also prove useful to halt the progressive atrophy in patients with ODD (34)(35)(36).
Several observations support the hypothesis that a narrow scleral canal plays a central role in the formation of ODD. Genetic factors have been incriminated, which follow an irregular autosomal dominant pattern, and small optic discs have been observed in affected families (1). ODD are mainly found in Caucasians, who have a smaller optic disc than African and Asian people (37,38). ODD are more frequent in rod-cone dystrophies, in particular in Usher syndrome, where scleral canals have been found to be smaller than in other dystrophies (39). In healthy subjects, the optic disc size correlates with the axial length (40). It is therefore interesting to note that the prevalence of ODD in nanophthalmos and posterior microphthalmos is higher than in the general population (41)(42)(43). In nanophthalmos, the presence of ODD correlates with the axial length (41). Pseudoxanthoma elasticum is another disease associated with the presence of ODD (44). While, to our knowledge, no direct link has been established with the scleral canal size, it is remarkable that this pathology is characterized by ectopic mineralization of elastic fibers, in particular in Bruch's membrane, which then becomes rigid. We can suppose that its opening then becomes a zone of higher mechanical constraint for the nerve fibers.
Other hypotheses have been put forward: in particular, it has been proposed that ODD emerge from abnormal vasculature and branching, as higher frequencies of trifurcations and cilioretinal arteries have been observed in patients with ODD (45,46). Abnormal permeability and a deficient blood barrier would induce chronic ischemia and calcium deposition, leading to ODD formation. Still, an association has been found between a small scleral canal and vascular anomalies in ODD patients (47), and it is possible that abnormal vessels are a consequence of the higher constraints induced pre- and post-natally by a narrow canal.
We acknowledge several limitations to this study. Although we will adhere to the PRISMA guidelines and methodology, it is not possible to completely account for the limitations of the included studies. We expect moderate to high heterogeneity because of several variable factors, including patients' age, measurement methods, and magnification correction. However, these factors will be discussed in the narrative review, which will allow us to interpret the results accordingly. A subgroup analysis taking into account the expected difference between buried ODD and visible ODD might help explain part of the heterogeneity and the divergence observed in the literature. To limit the file drawer problem, which results in publication bias, grey literature will be searched in addition to traditional databases of published literature.
Figure 1: PRISMA flow chart of the review process.
2.6.1 Study review

Upon selecting articles for inclusion, all references will be imported into Microsoft® Excel (version 16.65) for data extraction. One assessor (AVJ) will extract and collate information. Another assessor (MR) will verify the extracted material from all included articles. The following data will be extracted (see Supplementary File 2):
- Study characteristics: authors, title, year of publication, inclusion and exclusion criteria, sample size;
- Population characteristics: percentage of buried versus visible drusen, age, spheric equivalent;
- Outcome measure characteristics: type of the parameter, means and standard deviations (or medians and interquartile ranges (IQR)), OCT type if appropriate, magnification correction formula if applied.
Match the Script, Adapt if Multilingual: Analyzing the Effect of Multilingual Pretraining on Cross-lingual Transferability
Pretrained multilingual models enable zero-shot learning even for unseen languages, and that performance can be further improved via adaptation prior to finetuning. However, it is unclear how the number of pretraining languages influences a model’s zero-shot learning for languages unseen during pretraining. To fill this gap, we ask the following research questions: (1) How does the number of pretraining languages influence zero-shot performance on unseen target languages? (2) Does the answer to that question change with model adaptation? (3) Do the findings for our first question change if the languages used for pretraining are all related? Our experiments on pretraining with related languages indicate that choosing a diverse set of languages is crucial. Without model adaptation, surprisingly, increasing the number of pretraining languages yields better results up to adding related languages, after which performance plateaus. In contrast, with model adaptation via continued pretraining, pretraining on a larger number of languages often gives further improvement, suggesting that model adaptation is crucial to exploit additional pretraining languages.
Introduction
Pretrained multilingual language models (Devlin et al., 2019; Conneau et al., 2020) are now a standard approach for cross-lingual transfer in natural language processing (NLP). However, there are multiple, potentially related issues with pretraining multilingual models. Conneau et al. (2020) find the "curse of multilinguality": for a fixed model size, zero-shot performance on target languages seen during pretraining increases with additional pretraining languages only until a certain point, after which performance decreases. Wang et al. (2020b) also report "negative interference", where monolingual models achieve better results than multilingual models, both on subsets of high- and low-resource languages. However, those findings are limited to target languages seen during pretraining.
Current multilingual models cover only a small subset of the world's languages. Furthermore, due to data sparsity, monolingual pretrained models are not likely to obtain good results for many low-resource languages. In those cases, multilingual models can zero-shot learn for unseen languages with above-chance performance, which can be further improved via model adaptation with target-language text (Wang et al., 2020a), even in limited amounts (Ebrahimi and Kann, 2021). However, it is poorly understood how the number of pretraining languages influences performance in those cases. Does the "curse of multilinguality" or "negative interference" also impact performance on unseen target languages? And, if we want a model to be applicable to as many unseen languages as possible, how many languages should it be trained on?
Specifically, we ask the following research questions: (1) How does pretraining on an increasing number of languages impact zero-shot performance on unseen target languages? (2) Does the effect of the number of pretraining languages change with model adaptation to target languages? (3) Does the answer to the first research question change if the pretraining languages are all related to each other?
We pretrain a variety of monolingual and multilingual models, which we then finetune on English and apply to three zero-shot cross-lingual downstream tasks in unseen target languages: part-of-speech (POS) tagging, named entity recognition (NER), and natural language inference (NLI). Experimental results suggest that choosing a diverse set of pretraining languages is crucial for effective transfer. Without model adaptation, increasing the number of pretraining languages improves accuracy on unrelated unseen target languages at first and plateaus thereafter. Last, with model adaptation, additional pretraining languages beyond English generally help.
We are aware of the intense computational cost of pretraining and its environmental impact (Strubell et al., 2019). Thus, our experiments in Section 4 are on a relatively small scale with a fixed computational budget for each model and on relatively simple NLP tasks (POS tagging, NER, and NLI), but validate our most central findings in Section 5 on large publicly available pretrained models.
Cross-lingual Transfer via Pretraining
Pretrained multilingual models are a straightforward cross-lingual transfer approach: a model pretrained on multiple languages is then fine-tuned on target-task data in the source language. Subsequently, the model is applied to target-task data in the target language. Most commonly, the target language is part of the model's pretraining data. However, cross-lingual transfer is possible even if this is not the case, though performance tends to be lower. This paper extends prior work, which explores how the number of pretraining languages affects the cross-lingual transfer abilities of pretrained models on seen target languages, to unseen target languages. We now transfer via pretrained multilingual models and introduce the models and methods vetted in our experiments.
Background and Methods
Pretrained Language Models Contextual representations such as ELMo (Peters et al., 2018) and BERT (Devlin et al., 2019) are not just useful for monolingual representations. Multilingual BERT (Devlin et al., 2019, mBERT), XLM (Lample and Conneau, 2019), and XLM-RoBERTa (Conneau et al., 2020, XLM-R) have surprisingly high cross-lingual transfer performance compared to the previous best practice: static cross-lingual word embeddings (Pires et al., 2019; Wu and Dredze, 2019). Multilingual models are also practical: why have hundreds of separate models, one for each language, when you could do better with just one? Furthermore, Wu and Dredze (2020) report that models pretrained on 100+ languages are better than bilingual or monolingual language models in zero-shot cross-lingual transfer.
Model Adaptation to Unseen Languages
Adapting pretrained multilingual models such as mBERT and XLM-R to unseen languages is one way to use such models beyond the languages covered during pretraining time. Several methods for adapting pretrained multilingual language models to unseen languages have been proposed, including continuing masked language model (MLM) training (Chau et al., 2020;Müller et al., 2020), optionally adding Adapter modules (Pfeiffer et al., 2020), or extending the vocabulary of the pretrained models (Artetxe et al., 2020;Wang et al., 2020a). However, such adaptation methods assume the existence of sufficient monolingual corpora in the target languages. Some spoken languages, dialects, or extinct languages lack monolingual corpora to conduct model adaptation, which motivates us to look into languages unseen during pretraining. We leave investigation on the effect of target language-specific processing, e.g., transliteration into Latin scripts (Muller et al., 2021), for future work.
Research Questions
A single pretrained model that can be applied to any language, including those unseen during pretraining, is both more efficient and more practical than pretraining one model per language. Moreover, it is the only practical option for unknown target languages or for languages without enough resources for pretraining. Thus, models that can be applied or at least easily adapted to unseen languages are an important research focus. This work addresses the following research questions (RQ), using English as the source language for finetuning.

RQ1: How does the number of pretraining languages influence zero-shot cross-lingual transfer of simple NLP tasks on unseen target languages?
We first explore how many languages a model should be pretrained on if the target language is unknown at test time or has too limited monolingual resources for model adaptation. On one hand, we hypothesize that increasing the number of pretraining languages will improve performance, as the model sees a more diverse set of scripts and linguistic phenomena. Also, the more pretraining languages, the better chance of having a related language to the target language. However, multilingual training can cause interference: other languages could distract from English, the finetuning source language, and thus, lower performance.
RQ2: How does the answer to RQ1 change with model adaptation to the target language?
This question is concerned with settings in which we have enough monolingual data to adapt a pretrained model to the target language. Like our hypothesis for RQ1, we expect that having seen more pretraining languages should make adaptation to unseen target languages easier. However, another possibility is that adapting the model makes any languages other than the finetuning source language unnecessary; performance stays the same or decreases when adding more pretraining languages.

RQ3: Do the answers to RQ1 change if all pretraining languages are related to each other?
We use a diverse set of pretraining languages when exploring RQ1, since we expect that to be maximally beneficial. However, the results might change depending on the exact languages. Thus, as a case study, we repeat all experiments using a set of closely related languages. On the one hand, we hypothesize that benefits due to adding more pretraining languages (if any) will be smaller with related languages, as we reduce the diversity of linguistic phenomena in the pretraining data. However, on the other hand, if English is all we use during fine-tuning, performance might increase with related languages, as this will approximate training on more English data more closely.
Experimental Setup
Pretraining Corpora All our models are pretrained on the CoNLL 2017 Wikipedia dump (Ginter et al., 2017). To use equal amounts of data for all pretraining languages, we downsample all Wikipedia datasets to an equal number of sequences. We standardize to the smallest corpus, Hindi. The resulting pretraining corpus size is around 200MB per language. 2 We hold out 1K sequences with around 512 tokens per sequence after preprocessing as a development set to track the models' performance during pretraining.
Corpora for Model Adaptation For model adaptation (RQ2), we select unseen target languages contained in both XNLI (Conneau et al., 2018b) and Universal Dependencies 2.5 (Nivre et al., 2019): Farsi (FA), Hebrew (HE), French (FR), Vietnamese (VI), Tamil (TA), and Bulgarian (BG). Model adaptation is typically done for low-resource languages not seen during pretraining because their monolingual corpora are too small (Wang et al., 2020a). Therefore, we use the Johns Hopkins University Bible corpus by McCarthy et al. (2020), following Ebrahimi and Kann (2021). 3

Tasks We evaluate our pretrained models on the following downstream tasks from the XTREME dataset (Hu et al., 2020): POS tagging and NLI. For the former, we select 29 languages from Universal Dependencies v2.5 (Nivre et al., 2019). For the latter, we use all fifteen languages in XNLI (Conneau et al., 2018b). We follow the default train, validation, and test splits in XTREME.

Models and Hyperparameters To cover all languages and facilitate comparability between all pretraining setups, we use XLM-R's vocabulary and the SentencePiece (Kudo and Richardson, 2018) tokenizer by Conneau et al. (2020). We use masked language modeling (MLM) as our pretraining objective and, like Devlin et al. (2019), mask 15% of the tokens. We pretrain all models for 150K steps, using AdamW (Loshchilov and Hutter, 2019) with a learning rate of 1 × 10−4 and a batch size of two on either an NVIDIA RTX2080Ti or a GTX1080Ti 12GB GPU, which took approximately four days per model. When pretraining, we preprocess sentences together to generate sequences of approximately 512 tokens. For continued pretraining, we use a learning rate of 2 × 10−5 and train for forty epochs, otherwise following the setup for pretraining. For finetuning, we use a learning rate of 2 × 10−5 and train for an additional ten epochs for both POS tagging and NER, and an additional five epochs for NLI, following Hu et al. (2020).
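As a toy illustration of the MLM objective described above (a hedged sketch, not the authors' code): sample roughly 15% of token positions and replace them with a mask token, keeping the original ids as labels. This simplified version always substitutes the mask token, omitting BERT's 80/10/10 corruption variant, and `MASK_ID` is a hypothetical placeholder id:

```python
import random

MASK_ID = 250001  # hypothetical mask-token id; the real tokenizer defines its own

def mask_tokens(token_ids, mask_prob=0.15, seed=0):
    """BERT-style masking: replace ~mask_prob of positions with MASK_ID.

    Returns the corrupted sequence and the labels (original id at masked
    positions, -100 elsewhere -- the ignore index used by common trainers).
    """
    rng = random.Random(seed)
    corrupted, labels = [], []
    for tok in token_ids:
        if rng.random() < mask_prob:
            corrupted.append(MASK_ID)
            labels.append(tok)
        else:
            corrupted.append(tok)
            labels.append(-100)
    return corrupted, labels
```

The loss is then computed only at positions whose label is not -100.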
Languages Table 1 shows the languages used in our experiments. English is part of the pretraining data of all models. It is also the finetuning source language for all tasks, following Hu et al. (2020). We use two different sets of pretraining languages: "Diverse (Div)" and "Related (Rel)" (Table 2). We mainly focus on pretraining on up to five languages, except for POS tagging where the trend is not clear and we further experiment on up to ten.
Results
We now present experimental results for each RQ.
Findings for RQ1
POS Tagging Figure 1 shows the POS tagging accuracy averaged over the 17 languages unseen during pretraining. On average, models pretrained on multiple languages have higher accuracy on unseen languages than the model pretrained exclusively on English, showing that the model benefits from a more diverse set of pretraining data. However, the average accuracy only increases up to six languages. This indicates that our initial hypothesis, "the more languages the better," might not be true. Figure 2 provides a more detailed picture, showing the accuracy for different numbers of pretraining languages for all seen and unseen target languages. As expected, accuracy jumps when a language itself is added as a pretraining language. Furthermore, accuracy rises if a pretraining language from the same language family as a target language is added: for example, the accuracy on Marathi goes up by 9.3% after adding Hindi during pretraining, and the accuracy on Bulgarian increases by 31.2% after adding Russian. This shows that related languages are indeed beneficial for transfer learning. Also, (partially) sharing the same script with a pretraining language (e.g., ES and ET, AR and FA) helps with zero-shot cross-lingual transfer even for languages which are not from the same family. But how important are the scripts compared to other features? To quantify this, we conduct a linear regression analysis on the POS tagging results. Table 3 shows the linear regression analysis results using typological features of target and pretraining languages. For the script and family features, we follow Xu et al. (2019) and encode them into binary values set to one if a language with the same script or from the same family is included as one of the pretraining languages. For the syntax and phonology features, we derive vectors from the URIEL database using lang2vec (Littell et al., 2017), following Lauscher et al. (2020).
We take the maximum cosine similarity between the target language and any of the pretraining languages. Table 3 further confirms that having a pretraining language which shares the same script contributes the most to positive cross-lingual transfer.
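The similarity feature described above can be sketched as follows (a hedged illustration with toy vectors; the actual feature vectors come from lang2vec, and the function names are ours):

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def max_similarity(target_vec, pretrain_vecs):
    """Regression feature: max cosine similarity between the target language
    and any of the pretraining languages."""
    return max(cosine(target_vec, p) for p in pretrain_vecs)
```

If any pretraining language has an identical typological vector to the target, the feature is 1.0.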
We sadly cannot give a definitive optimal number of pretraining languages. One consistent finding is that, for the large majority of languages, using only English yields the worst results for unseen languages. However, adding pretraining languages does not necessarily improve accuracy (Figure 1). This indicates that, while we want more than one pretraining language, using a smaller number than the 100 commonly used pretraining languages is likely sufficient unless we expect them to be closely related to one of the potential target languages.

Table 3: Regression analysis on the POS tagging accuracy with coefficients (Coef.), p-value, and 95% confidence interval (CI). A large coefficient with a low p-value indicates that the feature significantly contributes to better cross-lingual transfer, which shows that the same script is the most important feature.
NER Our NER results show a similar trend. Therefore, we only report the average performance in the main part of this paper (Figure 3); full details are available in Appendix A. For NER, transfer to unseen languages is more limited, likely due to the small subset of tokens which are labeled as entities when compared to POS tags.
NLI Our NLI results in Figure 4 show a similar trend: accuracy on unseen languages plateaus at a relatively small number of pretraining languages. Specifically, Div-4 has the highest accuracy for 8 target languages, while Div-5 is best only for two target languages. Accuracy again increases with related languages, such as an improvement of 3.7% accuracy for Bulgarian after adding Russian as a pretraining language. Full results are available in Appendix B.
Findings for RQ2
POS Tagging Figure 5a shows the POS tagging results for six languages after adaptation of the pretrained models via continued pretraining. As expected, accuracy is overall higher than in Figure 2. Importantly, there are accuracy gains in Farsi when adding Turkish (+9.8%) and in Hebrew when adding Greek (+7.7%), which are not observed before adapting the models. We investigate this further in Section 5.

NER The NER results in Figure 5b show similarities to POS tagging (e.g., improvement on Bulgarian after adding Russian). However, there is limited improvement on Farsi after adding Arabic, despite partially shared scripts between the two languages. This indicates that the effect of adding related pretraining languages is partially task-dependent.
Findings for RQ3
POS Tagging In contrast to RQ1, POS tagging accuracy changes for most languages are limited when increasing the number of pretraining languages (Figure 6). The unseen languages on which we observe gains belong to the Germanic, Romance, and Uralic language families, which are relatively close to English compared to the other language families. The accuracy on languages from other language families changes by < 10%, which is smaller than the change for a diverse set of pretraining languages. This indicates that models pretrained on similar languages struggle to transfer to unrelated languages.
NER F1 scores of EN, Rel-2, Rel-3, … are .219, .227, .236, and .237, respectively. Compared to Div-X, pretraining on related languages also improves up to adding five languages. However, these models bring a smaller improvement, similar to POS tagging.
NLI Figure 7 shows a similar trend for NLI: when adding related pretraining languages, accuracy on languages far from English either does not change much or decreases. In fact, for nine out of thirteen unseen target languages, Rel-5 is the worst.
More Pretraining Languages
Our main takeaway from the last section is that, when using more than one pretraining language, diversity is important. However, there are limitations in the experimental settings of Section 4. We assume the following: (1) relatively small pretraining corpora; (2) the target languages are included when building the model's vocabulary; (3) fixed computational resources; and (4) only up to ten pretraining languages. We now explore whether our findings for RQ1 and RQ2 hold without such limitations. For this, we use two publicly available pretrained XLM models (Lample and Conneau, 2019), which have been pretrained on full-size Wikipedia in 17 (XLM-17) and 100 (XLM-100) languages, and the XLM-R base model trained on the larger Common Crawl corpus (Conneau et al., 2020) in 100 languages. We conduct a case study on low-resource languages unseen by all models, including unseen vocabularies: Maltese (MT), Wolof (WO), Yoruba (YO), Erzya (MYV), and Northern Sami (SME). All pretraining languages used in Div-X are included in XLM-17 except for Finnish, and all 17 pretraining languages of XLM-17 are a subset of the pretraining languages of XLM-100. We report averages with standard deviations over three random seeds.
RQ1
For models without adaptation, accuracy does not improve with an increasing number of pretraining languages (Figure 8a). Indeed, XLM-17 and XLM-100 are on par even though the former uses 17 pretraining languages and the latter 100. One exception is Northern Sami (a Uralic language with Latin script): XLM-17 sees no Uralic languages during pretraining, while XLM-100 does. When further comparing Div-10 and XLM-17, the increase in accuracy from additional pretraining languages is limited. Erzya remains constant from five to 100 languages (except for XLM-R), even when increasing the pretraining corpus size from downsampled (Div-X) to full Wikipedia (XLM-17 and XLM-100).
RQ2 For the models with adaptation (Figure 8b), there is a significant gap between XLM-17 and XLM-100. This confirms our findings in the last section: more pretraining languages is beneficial if the pretrained models are adapted to the target languages. Thus, a possible explanation is that one or more of XLM-100's pretraining languages is similar to our target languages and such languages can only be exploited through continued pretraining (e.g., Ukrainian included in XLM-100 but not in Div-X). Therefore, having the model see more languages during pretraining is better when the models can be adapted to each target language.
Related Work
Static Cross-lingual Word Embeddings Static cross-lingual word embeddings (Mikolov et al., 2013;Conneau et al., 2018a) embed and align words from multiple languages for downstream NLP tasks (Lample et al., 2018;Gu et al., 2018), including a massive one trained on 50+ languages (Ammar et al., 2016). Static cross-lingual embedding methods can be classified into two groups: supervised and unsupervised. Supervised methods use bilingual lexica as the cross-lingual supervision signal. On the other hand, pretrained multilingual language models and unsupervised cross-lingual embeddings are similar because they do not use a bilingual lexicon. Lin et al. (2019) explore the selection of transfer language using both data-independent (e.g., typological) features, and data-dependent features (e.g., lexical overlap). Their work is on static supervised cross-lingual word embeddings, whereas this paper explores pretrained language models.
Analysis of Pretrained Multilingual Models on Seen Languages Starting from Pires et al. (2019), analysis of the cross-lingual transferability of pretrained multilingual language models has been a topic of interest. Pires et al. (2019) hypothesize that cross-lingual transfer occurs due to shared tokens across languages, but Artetxe et al. (2020) show that cross-lingual transfer can be successful even among languages without shared scripts. Other work investigates the relationship between zero-shot cross-lingual learning and typological features (Lauscher et al., 2020), encoding language-specific features (Libovický et al., 2020), and mBERT's multilinguality (Dufter and Schütze, 2020). However, the majority of analyses have either been limited to large public models (e.g., mBERT, XLM-R), to up to two pretraining languages (K et al., 2020; Wu and Dredze, 2020), or to target languages seen during pretraining. One exception is the concurrent work by de Vries et al. (2022) analyzing the choice of language for the task-specific training data on unseen languages. Here, we analyze the ability of models to benefit from an increasing number of pretraining languages.
Conclusion
This paper explores the effect which pretraining on different numbers of languages has on unseen target languages after finetuning on English. We find: (1) if not adapting the pretrained multilingual language models to target languages, a set of diverse pretraining languages which covers the script and family of unseen target languages (e.g., 17 languages used for XLM-17) is likely sufficient; and (2) if adapting the pretrained multilingual language model to target languages, then one should pretrain on as many languages as possible up to at least 100.
Future directions include analyzing the effect of multilingual pretraining from different perspectives such as different pretraining tasks and architectures, e.g., mT5 (Xue et al., 2021), and more complex tasks beyond classification or sequence tagging.
A NER Results
We show additional experimental results on NER in Figures 9 and 10.
B NLI Results
Tables 5 and 6 show the results without model adaptation, and Table 4 shows the full results with model adaptation.
C Notes on the Experimental Setup for Model Adaptation
Following are the additional notes on the setup of the model adaptation: • No vocabulary augmentation is conducted unlike Wang et al. (2020a). We use XLM-R's vocabulary throughout all experiments in this paper.
• The Bible is used instead of Wikipedia for the continued pretraining or model adaptation to minimize corpus size and content inconsistency across languages.

Figure 10: NER F1 score on diverse pretraining languages (EN, RU, ZH, AR, HI, ES, EL, FI, ID, TR) grouped by families of target languages, with Indo-European (IE) languages further divided into subgroups following XTREME. The accuracy gain is significant for seen pretraining languages, and also for languages from the same family as a pretraining language when it is added.

Table 6: NLI accuracy on the 13 unseen languages using the models pretrained on related languages (EN, DE, SV, NL, DA), incrementally added one language at a time up to five languages.
Multi-Media and Multi-Band Based Adaptation Layer Techniques for Underwater Sensor Networks
In the last few decades, underwater communication systems have been widely used for the development of navy, military, business, and safety applications. However, underwater communication systems face several challenging issues, such as limitations in bandwidth, propagation delay, 3D topology, media access control, routing, resource utilization, and power constraints. They also operate under severe channel conditions such as ambient noise, frequency selectivity, multi-path, and Doppler shifts. In order to collect and transmit data in effective ways, a multi-media/multi-band-based adaptation layer technology is proposed in this paper. The underwater communication scenario comprises Unmanned Underwater Vehicles (UUVs), surface gateways, sensor nodes, etc. The transmission of data proceeds from sensor nodes to a surface gateway in a hierarchical manner through multiple channels. In order to provide strong and reliable communication underwater, the adaptation layer uses a multi-band/multi-media approach for transferring data. Hence, existing techniques such as Orthogonal Frequency-Division Multiple Access (OFDMA), Frequency-Division Multiple Access (FDMA), or Orthogonal Frequency-Division Multiplexing (OFDM) are used for splitting the frequency band, and a medium selection mechanism is proposed to carry the signal through different media such as acoustic, Visible Light Communication (VLC), and Infrared (IR) signals underwater. The channel selection mechanism involves two phases: 1. finding the distance of near and far nodes using the Manhattan method, and 2. a medium selection and data transferring algorithm for choosing among the different media.
Introduction
In a constrained underwater environment, existing communication mechanisms rely on single-medium, single-band technology for transferring data wirelessly. This makes it difficult to support various types of applications underwater. In existing underwater wireless communication systems, it is hard to satisfy real-time performance and reliability requirements while maintaining connections with various heterogeneous networks beyond the application domain. To overcome this, a method for bundling underwater wireless media and underwater wireless bands which can adapt to the existing communication is proposed. Based on the characteristics of each medium, such as acoustic, optical, IR, Magnetic Field (MFAN), etc., an adaptation layer for underwater multi-media/multi-band communication is proposed.
The goals of the proposed scheme are to:
1. Increase the lifetime of sensor nodes
2. Increase the reliability of data transmission
3. Enable faster discovery of neighbor nodes
4. Reduce the transmission delay between nodes
5. Provide long-term connectivity between nodes
6. Provide a faster medium selection mechanism to transfer data
Underwater Communication Technology Overview
In this section, existing underwater technologies are described. Figure 1 depicts the architecture of the underwater environments of numerous technologies pertaining to communication.
Communication in such environments may involve several kinds of links, such as links to a buoy or ship, and links from terrestrial stations to satellites. It is also possible to exchange information using RF antennas set up on floating equipment and land stations. The exchange of data with underwater stations is made possible by using floating structures that contain communication devices. Inside water environments, it is possible to deploy various kinds of communication nodes, including AUVs, wired networks, and wireless systems in local areas. Some devices may be attached or anchored to the seafloor.
Radio Frequency (RF) Communication
From the physics perspective, at the frequency ranges used for satellite, TV, mobile, and radio communications, the conductivity of seawater is very high; thus, it strongly attenuates EM wave propagation. Due to this, establishing communication links at ultra and very high frequencies (UHF and VHF, respectively) over distances of more than 10 m in the ocean is unlikely. At lower frequencies, i.e., the very low and extremely low frequency ranges (VLF and ELF, respectively), EM attenuation is low enough to allow reliable communication over a few kilometers. However, these bands (3 kHz to 30 kHz and 3 Hz to 3 kHz, respectively) are not wide enough to allow transmission at elevated data rates.
Optical Communication
The main difference between optical and RF propagation in seawater is the behavior of the medium: seawater acts as a dielectric for optical propagation and as a conductor for RF. This phenomenon is explained by the plasma frequency; the medium behaves as either a conductor or a dielectric according to the frequency range, and at around 250 GHz seawater changes from conductor to dielectric. EM waves experience lower attenuation in dielectric media than in conducting ones. For propagation ranges restricted to tens of meters, higher data rates can be provided using optical technology. The Doppler effect and Doppler spread are minor in optical wireless communications, as the speed of light is about 4 to 5 orders of magnitude larger than the propagation speed of acoustic waves in fluids.
Acoustic Communication
As mentioned, RF and optical transmissions have narrow propagation ranges: the former is severely affected by heavy attenuation, leading to a short propagation distance, while the latter depends on the turbidity of the water. Acoustic communication, by contrast, can be applied over greater distances and is presently the leading technology for underwater wireless communication. A waveform's propagation speed depends on the medium's EM or mechanical properties. EM waves propagate through air at a speed close to that of light in a vacuum, i.e., about 4 to 5 orders of magnitude greater than the propagation speed of acoustic waves in fluids. This imposes tremendous constraints on the complete transmission process by means of acoustic waves. Indeed, in acoustic-based communications, the propagation-speed parameters play a very significant role; underwater, the sound-speed model is typically applied down to 1000 m.
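The paper does not give a concrete sound-speed model, but a widely used empirical approximation is Medwin's formula, which estimates sound speed from temperature, salinity, and depth. A minimal sketch (the formula itself is standard; the example inputs are illustrative):

```python
def sound_speed(temp_c, salinity_ppt, depth_m):
    """Medwin's empirical approximation of underwater sound speed (m/s).

    Valid roughly for 0-35 degC, 0-45 ppt, and depths up to ~1000 m,
    matching the modeling range mentioned in the text.
    """
    t = temp_c
    return (1449.2 + 4.6 * t - 0.055 * t**2 + 0.00029 * t**3
            + (1.34 - 0.010 * t) * (salinity_ppt - 35.0)
            + 0.016 * depth_m)

# Sound speed increases with depth: compare surface vs. 1000 m at 10 degC, 35 ppt.
c_surface = sound_speed(10.0, 35.0, 0.0)
c_deep = sound_speed(10.0, 35.0, 1000.0)
```

The depth term (0.016 m/s per meter) is what makes deep-water acoustic paths refract, which is one reason propagation modeling is usually limited to about 1000 m.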
Visible Light Communication (VLC)
VLC is derived from optical communication. Its wavelength ranges from 450 nm to 550 nm, i.e., the blue-green spectrum. The communication distance is up to 100 m. Very high data rates, up to 500 Mbps, are achievable at very short range (about 1.5 m). VLC is well suited for one-to-one communication.
Magnetic Induction (MI)
This mode of communication is mainly used below the sea floor. The communication distance is approximately up to 10 m. The propagation speed is 3 × 10^8 m/s. The data rate is in kbps.
Limitation and Advantages of Underwater Communication Technology
In this section, the advantages and disadvantages of different underwater communication technologies, such as attenuation, speed, data rate, distance, etc., are described. Tables 1 and 2 show a comparison between the different communication technologies.
2. Sensor node deployment: In underwater networks, sensor nodes are sparsely deployed in different places. Long-range communication depends on the availability of connections between the nodes, so node deployment is a critical issue in MAC protocol design.
3. Time synchronization: In MAC protocols, the power cycling method relies on time synchronization. In order to handle timing certainty between nodes, time synchronization is necessary.
4. Power wastage: Power wastage in sensor nodes is caused by collisions while transmitting data, so MAC protocols must be designed to avoid collisions between nodes.
5. Other issues: The MAC protocol underwater also gives rise to other problems, such as making connections with centralized networks, high handshaking delays, collision-avoidance problems, etc.
MAC Layer Protocols
In this section, the MAC protocols designed and developed for underwater communication are described, and their advantages and disadvantages are noted.
Contention-Free MAC Protocol Design
In 2009, CDMA-B was developed, whose main purpose was to save energy; the near-far problem is one of the major issues affecting performance underwater [8]. In 2011, POCA-CDMA-MAC was developed, in which round-robin methods are used to receive packets from neighboring nodes; its disadvantage is that all nodes send packets in the same time interval [9]. In [10], the PLAN-MAC protocol was developed for long-latency access networks. In this approach, a contention-free algorithm is used to spread the codes, i.e., each node gets a unique code. The Staggered TDMA Underwater MAC Protocol (STUMP) [11] was developed in 2009; it needs no synchronization between nodes to avoid overlap during transmissions, but its disadvantage is the use of a large number of time slots. ER-MAC [12] was developed in 2008; it is an energy-efficient protocol, but it is not suitable for multi-hop communication. WA-TDMA [13] was developed in 2009; a sleep-awake method is used for each node to avoid energy wastage, and slot allocation is one of its major issues. ACMENet [14] was developed in 2009; it likewise uses a sleep-awake method for each node to avoid energy wastage, and slot allocation is its main issue. The Dynamic Slot Scheduling Strategy (DSSS) [15] was developed in 2011 for the best usage of channels in underwater communication; the number of transmission pairs is increased to avoid collisions and allow parallel data transmissions, but synchronization is an essential requirement. UW-FLASHR [16] was developed in 2008 and does not require tight synchronization; there is a time gap during the transmission of data. ST-MAC [17] was developed in 2009; it explicitly models the delays between transmission links and casts scheduling as a vertex-coloring problem. OFDMA [18] was developed in 2009 with a centralized topology. This protocol is used to solve the hidden terminal problem, but a large number of users can affect system performance. UW-OFDMAFC [19], developed in 2009, also uses a centralized topology to solve the hidden terminal problem, with the same limitation.
Contention-Based MAC Protocol Design
S-Aloha [20] was developed in 2006. In underwater acoustic communication, the propagation delay is high, so there is no coordination between nodes. Based on an analysis, the system performance is not much different from the Aloha and S-Aloha systems. T-Lohi [21] was developed in 2007; this mechanism exploits space-time uncertainty problems and high latency problems to detect collisions. The power consumption is low for wake-up, and this protocol gives good throughput. CUMAC [22] was developed in 2012; it is considered a cost-effective technique, thanks to its use of only one transmitter to transmit the data. The hidden terminal problem is still not solved in this approach. MACA-MN [23] was developed in 2008. This approach solves the hidden terminal problem. It can increase the packet delivery rate compared to MACA. R-MAC [24] was developed in 2007; in it, data and control packets are scheduled on both sides such as sender and receiver. UMIMO-MAC [25] was developed in 2011. UMIMO-MAC is designed to: (1) adaptively leverage the tradeoff between multiplexing and diversity gain, (2) select suitable transmit power to reduce energy consumption, and (3) efficiently exploit the UW channel and minimize the propagation delay underwater. Channel Stealing MAC (CS-MAC) was developed in 2011 [26] to solve the hidden and exposed terminal problems. This protocol is satisfactory regarding channel utilization and delay in transmission. MACA adaptation for underwater network (MACA-U) [27] was developed in 2008. In this approach the MACA protocol was tested for underwater communication, focusing on areas such as transaction rules, forwarding packets, and back-off methods. The performance evaluation was good in multi-hop underwater communication. Contention-based Parallel rEservation MAC (COPE-MAC) [28] was developed in 2010 for underwater acoustic communication. In this approach, two techniques, parallel reservation and cyber carrier sensing, are introduced. 
The advantage of this protocol is its ability to increase throughput; furthermore, it scales well to large networks. Multiple-rendezvous Multichannel MAC (MM-MAC) [29] was developed in 2010. In this mechanism, a cyclic quorum approach is used to reduce the probability of collisions in underwater communication; it increased performance in multi-hop underwater wireless sensor networks. The Receiver Initiated Packet Train (RIPT) protocol was developed in 2008 [30] to address propagation delays in underwater channels. Its advantage over other protocols is its throughput level, thanks to a minimal collision rate.
Hybrid MAC Protocol Design
Hybrid Spatial Reuse (HSR-TDMA) was proposed in 2010. This technique has been applied in underwater ad-hoc networks which use the spatial reuse methods to improve the throughput level. The experimental result showed that the availability of underwater nodes increased to transmit the data [31]. Hybrid Medium Access Control Protocol (H-MAC) was developed in 2010. This protocol was proposed to improve the power efficiency in traffic, quality of service, channel utilization, etc. [32]. Pattern-MAC (P-MAC) was developed in 2005. The sleep-wake schedule of underwater nodes was adaptively determined based on the traffic condition and the neighbor nodes [33]. UW-MAC, also known as CDMA-based energy control MAC, was proposed in 2010. This protocol focuses on delay, network throughput, and increased network lifetime [34].
Routing Layer Protocols
In this section, the routing protocols designed and developed for underwater communication are described and the advantages and disadvantages of routing protocols are noted.
Routing Protocols Including Localization Techniques
SEANAR [35] was developed in 2010; it acts as a power-efficient routing protocol for underwater communication and shares topology information, along with other information, with neighboring nodes. REBAR [36] was developed in 2008. In this approach, a cylindrical path is created between source and destination; the method is used for energy balancing in underwater routing. Its major disadvantage is the difficulty of finding the position of a node due to node mobility. A Depth-Adaptive Routing Protocol (DARP) [37] was developed in 2012; this approach is based on the depth of the water and the change in signal speed. Tests show that when the depth is under 1000 m, the signal strength is good and communication is faster, and the results show that it reduces end-to-end delay. Lifetime-Extended Vector-Based Forwarding (LE-VBF) [38] was developed in 2012; this approach is used for energy saving. The Mobicast [39] routing protocol, also known as mobile geocast, was developed in 2012 to improve data collection efficiency. This approach comprises two phases: in the first phase, it collects the data inside the 3-D ZOR; in the second, the nodes in the 3-D ZOR are woken up. Because of this approach, power can be saved by activating only the nodes inside the 3-D ZOR. HH-VBF [40] was developed in 2008; this protocol is used to reduce the data load while routing through VBF, and computational complexity is its major issue. Vector-Based Void Avoidance (VBVA) [41] was developed in 2009 and uses a vector-shift routing strategy. One of its advantages is a high packet delivery rate; its major disadvantages are high power consumption and delays in data delivery. The protocol in [42] was developed in 2008. In it, depth-based sensor nodes are used to send data from bottom to top. Its throughput rate is high, but so is its power consumption.
Q-ERP [43] was developed in 2017. The major advantages of this protocol are its high packet delivery rate, power efficiency, and absence of delays in data delivery. Adaptive Mobility of Courier nodes in the Threshold-optimized DBR Protocol (AMCTD) [44] was developed in 2013; its main advantage is an improved lifetime of the underwater network. The information-carrying based routing protocol (ICRP) [45] was developed in 2007. Its main advantages are scalability and power efficiency; a low data delivery rate is among its major disadvantages. The multi-layered routing protocol (MRP) [46] was developed in 2014. This protocol uses two phases: 1. a layering phase, in which layers are formed towards the super node; and 2. a data-forwarding phase, in which the data is forwarded through the formed layers.
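Several of the protocols above route packets from bottom to top using node depth alone. A minimal sketch of such a next-hop rule (a hypothetical simplification in the spirit of depth-based routing, not any one published algorithm) is:

```python
def depth_based_next_hops(my_depth, neighbors, threshold=0.0):
    """Return neighbors eligible to forward a packet toward the surface.

    neighbors: list of (node_id, depth) pairs; smaller depth means
    closer to the surface. Only neighbors at least `threshold` meters
    shallower than the sender are candidates, which suppresses
    redundant forwarding between nodes at nearly equal depths.
    """
    return [nid for nid, depth in neighbors if my_depth - depth > threshold]

# A node at 120 m depth considers three neighbors; only "a" is
# sufficiently shallower to be a forwarding candidate.
candidates = depth_based_next_hops(
    120.0, [("a", 80.0), ("b", 130.0), ("c", 119.5)], threshold=5.0)
```

The depth threshold is the same trade-off the survey notes for these protocols: a larger threshold saves energy by reducing redundant forwarders, at the cost of possibly finding no eligible next hop.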
Transport Layer Protocol
In this section, the transport layer protocols designed and developed for underwater communication are described and their advantages and disadvantages are noted.
Multi-path and network coding (MPNC) was developed in 2015; it is considered a reliable protocol for underwater acoustic communication [47]. Twin-path and network coding (TPNC) was developed in 2015; its data delivery ratio is the same as that of MPNC, with lower power consumption [47]. The erasure-code-based multi-hop reliable data transfer scheme (ECRDT) was developed in 2017 for underwater wireless sensor networks; this approach uses a packet-level forward error correction method within end-to-end codes [48]. The adaptive RTT-driven transport layer flow and error control protocol (ARTFEC) was developed in 2014; data flow control techniques are used to adapt to the various characteristics of the acoustic channel, and both reliability and the data transmission rate are high [49]. Segmented data reliable transfer (SDRT) was developed in 2010; this protocol transfers packets in blocks, improves the utilization of underwater channels, and has moderate power consumption [50].
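Packet-level forward error correction of the kind ECRDT relies on can be illustrated with a toy XOR-parity code (a deliberately simplified sketch; real erasure codes such as those in [48] are more powerful): one parity packet per block lets the receiver rebuild any single lost packet without retransmission.

```python
from functools import reduce

def xor_bytes(a, b):
    """XOR two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

def make_parity(block):
    """XOR all equal-length data packets in a block into one parity packet."""
    return reduce(xor_bytes, block)

def recover(received, parity):
    """Rebuild the single missing packet; received maps index -> packet."""
    return reduce(xor_bytes, list(received.values()) + [parity])

block = [b"pkt0", b"pkt1", b"pkt2"]
parity = make_parity(block)
# Packet 1 is lost in transit; the receiver regenerates it locally.
rebuilt = recover({0: block[0], 2: block[2]}, parity)
```

Avoiding retransmissions matters especially underwater, where the long acoustic round-trip time makes ARQ-style recovery expensive.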
Multi-Band Underwater Communication
A multi-band OFDM scheme for converting the sound signal is used here; for low-SNR communication, a dedicated transmitter and receiver are used. Multi-band OFDM techniques can reduce receiver-side complexity compared to single-band OFDM techniques. The proposed scheme was tested at sea, covering a total bandwidth of 3.6 kHz with a 16 sub-band system, data rates of 4.2 and 78 bits/s, and a range of 52 km. The limitation of OFDM signals was clearly revealed at 78 bit/s, while at 4.2 bit/s the performance level was low because of signal failures and synchronization problems [51]. The degradation of one band affects all the other bands; thus, the performance of a multi-band system can be worse overall. In order to solve this problem, the error rate of each band is analyzed on the receiving side, a threshold is set, and smaller weights are allocated to the inferior bands; an algorithm is used to set the threshold on preamble error rates. In this experiment, the performance level increased as the number of bands increased [52]. A dynamic bandwidth scheduling algorithm was proposed to attain multi-condition bandwidth in optical networks using OFDM-PON [53]. For a 21-inch autonomous underwater vehicle used by the navy, acoustic communication with multiple data rates and two frequency bands was developed. This system includes high- and mid-frequency modems, i.e., 25 kHz and 3 kHz, with data rates of 80 bps to 5000 bps respectively, to increase the reliability of the system [54]. A normalized matched-filter approach was proposed based on frequency-domain processing. The performance on a diver's signal was evaluated in the Hudson River; the frequency bands with the highest SNRs were used [55].
Multi-Media Underwater Communication
MC-UWMAC was designed for underwater acoustic communication [56]. This protocol is based on low power consumption and multiple channels, and collision-free communication is guaranteed. A MAC protocol was also proposed to cope with noise sources around the region covered by the network [57]: Noise-Aware MAC (NA-MAC) improves the ability of nodes and was developed with a multi-band modem that switches frequency bands upon detecting increases of in-band noise. NA-MAC operates in four stages:
1. START: In the starting stage, the default band can be used by the nodes.
2. ALERT: In this stage, the nodes become aware of increases in noise, so that they may change to a new frequency band.
3. TRACKING: In this stage, the nodes exchange information with neighboring nodes using PREQ (noise level request packets) and PRES (noise level reply packets) to get updates about the noise level.
4. NEW-CHANNEL: The node changes the current band and transmits through a new band.
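NA-MAC's four stages (START, ALERT, TRACKING, NEW-CHANNEL) can be sketched as a small state machine. The threshold value and transition rules below are illustrative assumptions, not parameters from [57]:

```python
# Hypothetical simplification of NA-MAC's four stages as a state machine.
ALERT_THRESHOLD = 0.6  # assumed normalized in-band noise level triggering ALERT

class NaMacNode:
    def __init__(self, bands):
        self.bands = bands
        self.band = bands[0]      # START: nodes use the default band
        self.state = "START"

    def on_noise_sample(self, level):
        """Rising in-band noise moves the node into the ALERT stage."""
        if self.state in ("START", "NEW-CHANNEL") and level > ALERT_THRESHOLD:
            self.state = "ALERT"

    def on_neighbor_reports(self, reports):
        """TRACKING: use PREQ/PRES-style reports to pick the quietest band."""
        if self.state != "ALERT":
            return
        self.state = "TRACKING"
        best = min(self.bands, key=lambda b: reports.get(b, 1.0))
        if best != self.band:
            self.band = best              # NEW-CHANNEL: switch and transmit
            self.state = "NEW-CHANNEL"

node = NaMacNode(["25kHz", "3kHz"])
node.on_noise_sample(0.8)                              # noise rises -> ALERT
node.on_neighbor_reports({"25kHz": 0.9, "3kHz": 0.2})  # neighbors report noise
```

After these two events the sketch node has switched to the quieter 3 kHz band and entered the NEW-CHANNEL stage.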
Multi-Band Techniques for Adaptation Layer
In this multi-band technique, existing methods such as OFDM, FDMA, or OFDMA can be used to divide the frequency band in underwater communication. As shown in [58], OFDM has many advantages for underwater communication schemes. The limited underwater acoustic bandwidth can be utilized efficiently, since OFDM makes effective use of the spectrum by allowing overlap between sub-carriers. The introduction of a guard time with cyclic prefixes significantly reduces inter-symbol and inter-carrier interference; hence, the modulation scheme is robust against ISI and ICI. By dividing the wideband frequency-selective channel into narrowband flat-fading sub-channels, OFDM is also robust against frequency-selective fading. It is computationally efficient thanks to the IFFT and FFT methods, which implement the modulation and demodulation functions respectively. Beyond the basic OFDM scheme, the performance of underwater communication can be further improved in the following ways: appropriate pilot carriers can correct effects due to channel distortion, and Forward Error Correction (FEC) coding with interleaving can expand reliability and performance by significantly reducing the Bit Error Rate (BER).
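The IFFT/FFT round trip with a cyclic prefix can be sketched end to end (an illustrative toy over an ideal channel; a real acoustic modem would add pilots, FEC, and equalization, and would use an FFT rather than the direct DFT written out here for clarity):

```python
import cmath

def idft(symbols):
    """Inverse DFT: maps subcarrier symbols to time-domain samples (modulation)."""
    n = len(symbols)
    return [sum(s * cmath.exp(2j * cmath.pi * k * t / n)
                for k, s in enumerate(symbols)) / n for t in range(n)]

def dft(samples):
    """DFT: recovers subcarrier symbols from time-domain samples (demodulation)."""
    n = len(samples)
    return [sum(x * cmath.exp(-2j * cmath.pi * k * t / n)
                for t, x in enumerate(samples)) for k in range(n)]

bits = [1, 0, 1, 1, 0, 0, 1, 0]
symbols = [2 * b - 1 for b in bits]          # BPSK mapping: 0 -> -1, 1 -> +1
cp = 2                                       # cyclic-prefix length (guard time)

time_domain = idft(symbols)
frame = time_domain[-cp:] + time_domain      # prepend cyclic prefix

received = frame[cp:]                        # receiver strips the prefix
rx_bits = [1 if s.real > 0 else 0 for s in dft(received)]
```

Over a dispersive underwater channel, the stripped prefix is what absorbs multi-path echoes shorter than the guard time, which is why the text calls the scheme robust against ISI.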
The OFDMA-based MAC protocol is constructed on the OFDMA technology, which splits an accessible channel into a several orthogonal sub-channels, called "subcarriers". We use this technology to allow concurrent sessions through subcarrier sharing among the nodes that are in communication with each other. Each nearby pair of nodes uses a subcarrier or a pair of subcarriers to send data. This used set of subcarriers is, therefore, kept for the pair until they relinquish it clearly. The timeline is separated into slots, each of length Ts, where transmissions start only at the beginning of each time slot. Synchronization is assumed to be done through a one-hop transmission from the base station. The objective is to achieve optimal sharing of the available subcarriers; optimal here means the best distribution of the available subcarriers among network nodes which results in the minimum transmission power consumption subject to a minimum required throughput level [58]. Figure 2 shows the scheme for OFDM-based underwater communication. This method can be applied to our multi-band techniques for splitting the frequency band.
The parameters used for OFDM-based communication are bandwidth, carrier spacing, carrier frequency, sampling frequency, cyclic prefix duration, symbol duration, total symbol duration, subcarriers, etc.
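These parameters are not independent: given the bandwidth and the number of subcarriers, the timing parameters follow directly. A small helper shows the relationships (the example numbers echo the 3.6 kHz / 16 sub-band configuration mentioned earlier, but the cyclic-prefix fraction is an illustrative assumption):

```python
def ofdm_params(bandwidth_hz, n_subcarriers, cp_fraction=0.25):
    """Derive basic OFDM timing parameters from bandwidth and subcarrier count."""
    spacing = bandwidth_hz / n_subcarriers   # subcarrier spacing (Hz)
    t_symbol = 1.0 / spacing                 # useful symbol duration (s)
    t_cp = cp_fraction * t_symbol            # cyclic-prefix duration (s)
    return {"spacing_hz": spacing,
            "t_symbol_s": t_symbol,
            "t_cp_s": t_cp,
            "t_total_s": t_symbol + t_cp}    # total symbol duration (s)

# e.g. a 3.6 kHz acoustic band split into 16 subcarriers
p = ofdm_params(3600.0, 16)
```

The narrow acoustic bandwidth forces long symbol durations (milliseconds rather than microseconds), which is why underwater OFDM data rates are so much lower than their RF counterparts.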
Protocol Stack of Multi-Band and Multi-Media Techniques
Existing underwater communication consists of single-medium and single-band technology for transferring data wirelessly, which makes it difficult to support various types of applications underwater. Figure 3 shows the protocol stack of multi-band and multi-media underwater communication. The protocol stack consists of different layers, i.e., the physical, data-link, network, transport, and application layers. The adaptation layer is an extension of the data-link layer which contains the multi-band and multi-media technology. The multi-band approach is used to split the bandwidth into different frequencies, while the multi-media approach is used to share the data through different channels such as acoustic, visible light, infrared, etc. Together, the multi-band and multi-media approaches provide reliable data transmission in underwater communication.
Figure 4 shows the adaptation layer mechanism. In this case, the MAC layer is extended to create the adaptation layer, which is in turn divided into two techniques, known as the multi-band and multi-media techniques. A description of the multi-band/multi-media techniques is given below.
Modem Design of Proposed Scheme
In this section, the components used for designing the adaptation layer are shown. The basic components of underwater multi-media/multi-band communication links are shown in Figure 5. The main component of the modem hardware is the transmitter, which contains a modem controller, medium switch controller, power controller, modulator, frequency band splitter, etc. The medium switch controller (MSC) controls the switch to select the type of channel, e.g., VLC, IR, or ultrasonic. The modulator then encodes the desired information in light or acoustic form. The acoustic Tx is the acoustic transmitter used to transmit the acoustic signal through the underwater channel, while the VLC Tx and IR Tx transmit optical signals. At the receiver end, a detector receives the acoustic or optical signal, and the received signal is demodulated to recover the original data bits.
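The medium switch controller's choice among the IR, VLC, and acoustic transmitters can be sketched as a simple rule table. The range and turbidity cutoffs below are illustrative assumptions drawn from the technology overview earlier (optical links only at short range in clear water, acoustic as the long-range fallback), not the paper's actual selection algorithm:

```python
def select_medium(distance_m, turbidity):
    """Pick a transmit medium from link range and water clarity.

    turbidity: 0.0 (clear) to 1.0 (opaque). Optical media (IR, VLC)
    only make sense at short range in reasonably clear water;
    acoustic is the long-range fallback.
    """
    if turbidity < 0.3 and distance_m <= 10.0:
        return "IR"        # very short, very clear water: highest rate
    if turbidity < 0.5 and distance_m <= 100.0:
        return "VLC"       # blue-green spectrum, up to ~100 m
    return "ACOUSTIC"      # leading long-range underwater technology

choice_near = select_medium(5.0, 0.1)    # short clear-water link
choice_far = select_medium(800.0, 0.1)   # long link, optical out of range
```

In the actual modem the MSC would also consult power budgets and the multi-band noise state before driving the switch, as described in the preceding sections.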
Medium Selection Mechanism
This section describes the multi-media technique in the adaptation layer, which selects the appropriate medium (acoustic, VLC, or IR) for each transmission. Figure 6 shows the communication scheme from the physical layer to the routing layer and the placement of the medium selection mechanism inside the MAC layer. The first phase finds the distance between nearby and distant nodes in two steps: (a) the received signal strength indicator (RSSI) method measures the signal strength between nodes, and (b) the Manhattan method converts these measurements into distances between nearby and far-away nodes.
Received Signal Strength Calculation
RSSI is used to quantify the signal loss between nodes in an underwater sensor network. A log-distance propagation model gives the path loss between nodes:

Path Loss(d) = Path Loss(d0) + 10 n log10(d/d0) + Xσ (1)

In Equation (1), d is the distance between the nodes and n is the path loss exponent, estimated from the decrease in RSSI. Xσ is a Gaussian random variable (set to 0 here) that models the variation of the received power at a given distance. The reference distance d0 is 1 m, and Path Loss(d0) is the reference power.

Let R be the received signal strength at the reference distance d0 between the transmitter and receiver nodes:

R = Pr − Path Loss(d0) (2)

In Equation (2), Pr is the transmit power of the node and Path Loss(d0) is the reference power in dB. Combining Equations (1) and (2), the RSSI at distance d is

RSSI(d) = R − 10 n log10(d/d0) (3)
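The log-distance relationship in Equations (1)–(3) can also be inverted to estimate distance from a measured RSSI. The sketch below implements the standard log-distance model as stated above; the parameter names (pr_dbm, pl_d0) are my own, and it is an illustration, not the authors' implementation.

```python
import math

def path_loss_db(d, n, pl_d0, x_sigma=0.0, d0=1.0):
    """Equation (1): log-distance path loss at distance d (in dB)."""
    return pl_d0 + 10.0 * n * math.log10(d / d0) + x_sigma

def rssi_at(d, pr_dbm, n, pl_d0, d0=1.0):
    """Equations (2)-(3): transmit power minus path loss at distance d."""
    return pr_dbm - path_loss_db(d, n, pl_d0, d0=d0)

def estimate_distance(rssi_dbm, pr_dbm, n, pl_d0, d0=1.0):
    """Invert the model to recover distance from a measured RSSI."""
    return d0 * 10.0 ** ((pr_dbm - pl_d0 - rssi_dbm) / (10.0 * n))

# Round trip: the distance recovered from a simulated RSSI matches the input.
r = rssi_at(8.0, pr_dbm=0.0, n=2.0, pl_d0=40.0)
assert abs(estimate_distance(r, 0.0, 2.0, 40.0) - 8.0) < 1e-9
```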
Distance Estimation of Nearby Nodes Using the Manhattan Method
In [59,60], the Manhattan method was used to find the distance between near and far nodes. A referential set must first be allocated. Let Refab be the referential set for a pair of nodes (a, b), where a is a nearby node of b, and let Na and Nb denote the sets of nodes near a and b, respectively. The referential set is then

Refab = Na ∪ Nb ∪ {a} ∪ {b}

Based on Refab, two vectors Va and Vb are generated for nodes a and b, each containing an RSSI value in dB for every node in Refab. A node's RSSI to itself is set to 0, i.e., RSSI(a, a) = 0 in Va. If a node d is far from node a, RSSI(a, d) is set to −100.

For two vectors (a1, a2, a3, …) and (b1, b2, b3, …) with the same number of elements n, the Manhattan distance Mdis is

Mdis = Σi |ai − bi| (4)

Figure 7 shows an example graph constructed using Manhattan distances. Suppose the referential set Refab of NS1 and NS2 is (S1, S2, S3, S4, S5), with RSSI vectors VS1 = [0, −55, −65, −60, −100] and VS2 = [−60, 0, −50, −75, −70]. The distance between the nearby nodes S1 and S2 is then

Mdis(S1, S2) = |0 − (−60)| + |−55 − 0| + |−65 − (−50)| + |−60 − (−75)| + |−100 − (−70)| = 175 (5)
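Equation (4) and the worked example can be checked in a few lines; the function below reproduces the Mdis(S1, S2) = 175 result from the vectors given in the text.

```python
def manhattan_distance(va, vb):
    """Equation (4): sum of absolute differences of paired RSSI entries."""
    assert len(va) == len(vb), "vectors must cover the same referential set"
    return sum(abs(a - b) for a, b in zip(va, vb))

# Worked example from the text (Equation (5)): VS1 and VS2 over (S1..S5).
v_s1 = [0, -55, -65, -60, -100]
v_s2 = [-60, 0, -50, -75, -70]
assert manhattan_distance(v_s1, v_s2) == 175
```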
Distance Estimation of Far-Away Nodes Using the Manhattan Method
Using the RSSI values, the distances between nearby nodes can be calculated; for instance, Mdis(S1, S2) = 175 implies Mdis(S2, S1) = 175. The remaining nearby-node distances, such as Mdis(S2, S4), Mdis(S1, S3), Mdis(S2, S5), Mdis(S1, S4), Mdis(S3, S4), Mdis(S2, S3), and Mdis(S3, S5), are calculated in the same way. Figure 8 shows the method for finding the distance to far-away nodes: once the Mdis values of nearby nodes are known, a shortest path method is applied over them.
Figure 9 shows a flowchart of the medium selection mechanism. First, each node obtains the RSSI values and calculates the distances to its nearby nodes. If data = 1, there is data available to send; if data = 0, there is none. When data is available, a suitable medium is selected based on the distance to the neighboring node: if the distance is less than 5 m, the IR medium is used; if it is between 5 and 20 m, the VLC medium is used; and if it is greater than 20 m, the acoustic medium is used. The pseudo code of the medium selection mechanism is given in Algorithm 1.
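Algorithm 1 itself is not reproduced here, but the flowchart's threshold logic can be sketched as follows. Distances are in metres, and the function and constant names are illustrative, not taken from the paper.

```python
# Thresholds from the flowchart description: <5 m -> IR, 5-20 m -> VLC,
# >20 m -> acoustic. Names are illustrative.
IR_MAX_M = 5.0
VLC_MAX_M = 20.0

def select_medium(distance_m, has_data=True):
    """Pick a transmission medium from the estimated neighbour distance."""
    if not has_data:
        return None                 # data = 0: nothing to send
    if distance_m < IR_MAX_M:
        return "IR"
    if distance_m <= VLC_MAX_M:
        return "VLC"
    return "acoustic"

assert select_medium(4) == "IR"
assert select_medium(12) == "VLC"
assert select_medium(30) == "acoustic"
assert select_medium(12, has_data=False) is None
```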
Scenario for Medium Selection Mechanism
In Figure 10, two scenarios in which the multi-media approach can be used are presented: (1) tsunami monitoring and (2) diver network monitoring. In both scenarios, the data must reach the control room quickly and reliably, so distance is used as the criterion for selecting the medium. The distance between nearby nodes is calculated using RSSI values and the Manhattan method. Table 3 shows the RSSI values (in dB) received from nearby nodes. For example, S1 receives −33 dB from node S4, one of its nearby nodes, so the corresponding distance is 4 m according to the Manhattan method.
Figure 10. Surveillance using the multi-media mechanism.
Table 3. RSSI values of nearby nodes based on distance (nodes S1-S13 and sink node SN).
Figure 11 shows the multi-media method applied to reliable and fast underwater routing, with source-to-destination routes computed using the Manhattan method. In this approach, multiple media can be used for communication between nodes. In the working scenario there are sensor nodes S1 to S13, with SN as the sink node; S1, S2, and S3 are the source nodes that transmit data, each choosing the most efficient route. For example, S1 transmits its data along the route S1-S5-S6-S10-SN obtained with the Manhattan approach. Different media are then used for communication between sensor nodes based on Sections 7.4.2.1 and 7.4.2.2.
Figure 11. Source to sink node routing using the medium selection mechanism.
Figure 12 shows the modem design and tests done using the multi-media approach.
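The source-to-sink routing step, a shortest path over the graph of Manhattan distances, can be sketched with Dijkstra's algorithm. Apart from Mdis(S1, S2) = 175 from the worked example, the edge weights below are hypothetical; with these weights the route S1-S5-S6-S10-SN described in the text emerges.

```python
import heapq

def shortest_path(graph, src, dst):
    """Dijkstra over a dict-of-dicts of Manhattan distances."""
    dist = {src: 0}
    prev = {}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    # Walk back from the sink to reconstruct the route.
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path)), dist[dst]

# Hypothetical Mdis values; the real ones come from Table 3 / the RSSI survey.
g = {
    "S1": {"S5": 175, "S2": 175},
    "S2": {"S1": 175, "S5": 300},
    "S5": {"S1": 175, "S2": 300, "S6": 150},
    "S6": {"S5": 150, "S10": 120},
    "S10": {"S6": 120, "SN": 90},
    "SN": {"S10": 90},
}
path, cost = shortest_path(g, "S1", "SN")
assert path == ["S1", "S5", "S6", "S10", "SN"]
```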
Multi-Media Modem
In Figure 12, 'a' shows the modem setup and 'b' shows the operation performed using multi-media communication. The conceptual modem was developed using a combination of a Raspberry Pi 3+ and a BeagleBone Black, and was set up as a PC-to-PC link using acoustic and visible light communication. In this approach, an acoustic modem developed by [61] and the visible light (VL) scheme developed by [62] were combined. The experimental results show that this modem can send 30 bytes of data every 1000 ms, and 1000 transmissions were made over each of the acoustic and VL channels. Table 4 shows the conceptual modem's specifications.
Multi-Media/Multi-Band Modem Setup and Testing
Part 'a' of Figure 13 shows the modem setup for the multi-media/multi-band approach, and part 'b' shows the operation selection and working of the multi-media/multi-band device. The modem was built on a Xilinx Zynq board with the multi-media/multi-band techniques embedded in it, and is applicable to real-time applications such as diver network monitoring and tsunami monitoring, as described in Section 7.4.2.2. The modem was developed with: (1) two transducers supporting bands of 70 kHz and 140 kHz for acoustic communication; (2) a single band for infrared (IR) communication at wavelengths of 700 nm to 1 mm; and (3) visible light (VL) communication using the blue wavelength range of 450 to 485 nm. The modem thus combines multiple bands across different media: acoustic, VLC, and IR. The detailed specifications of the multi-band/multi-media modem are given in Table 5.
Multi-Media/Multi-Band Test Results
Based on the experimental setup described in Section 8.2, the first test was done with pure water in a tank 1 m high and 1.8 m wide; more than 20,000 packets were received at distances of 1 m and 1.5 m. The second test used salty water in a larger tank, 1 m high and 8 m wide; around 20,000 packets were received at distances of 4 m and 6 m. The final test was done in an open seabed with highly turbid water, constructed with a depth of 1 m and a width of 12 m; around 20,000 packets were collected at distances of 2, 4, 6, 8, 10, and 12 m. In these tests, the signal strength was collected only for VLC and IR. Figure 14 shows the signal strength received by the multi-media/multi-band modem in the pure water, small tank test: the VLC medium received 100 and 98.98 at 1 m and 1.5 m, respectively, and the IR medium received 100 at both 1 m and 1.5 m. Figure 15 shows the signal strength received in the salty water, large tank test: the VLC medium received 29.2 and 80.1 at 4 m and 6 m, respectively, while no IR signal strength was recorded. Figure 16 shows the signal strength received in the turbid open seabed test. In the 2 m case, the VLC medium received 100 at 1 m, 1.5 m, and 2 m, and IR received 100 at 1 m and 1.5 m. In the 4 m case, the VLC medium received 100, 100, 100, 95, 19, and 0.07 at 1 m, 1.5 m, 2 m, 2.3 m, 3 m, and 3.8 m, respectively, and IR received 100 at 1 m and 1.5 m.
Conclusions and Future Work
To develop strong and reliable communication systems for underwater sensor networks, this paper proposes adaptation layer techniques using a multi-band/multi-media method inside the MAC layer. For multi-band communication, existing MAC-layer protocols such as OFDM, OFDMA, or FDMA can be used to divide the frequency bands. A medium selection mechanism is proposed to carry data through multiple media, such as acoustic, VLC, or IR. The mechanism operates in two phases: (1) finding the distances to near and far nodes, and (2) selecting the medium and transferring the data. The Manhattan method is used for distance finding, and a new algorithm is proposed for medium selection. In the adaptation-layered multi-media/multi-band approach, the RSSI between nodes is the main input for distance estimation, since distance is the key property governing data transfer. The paper also considers scenarios such as diver network monitoring and tsunami monitoring based on multi-media communication technology. A multi-media/multi-band modem was developed with: (1) two transducers supporting bands of 70 kHz and 140 kHz for acoustic communication; (2) a single band for infrared (IR) communication at wavelengths of 700 nm to 1 mm; and (3) visible light (VL) communication using the blue wavelength range of 450 to 485 nm; the modem thus combines multiple bands across the acoustic, VLC, and IR media. Tests have so far been undertaken only for VLC and IR. In future work, red and green light will be considered for VL communication, and the multi-media/multi-band adaptation techniques will be improved by taking into account further properties of underwater environments, such as temperature, pressure, and pH.
Chinese medicine combined with calcipotriol betamethasone and calcipotriol ointment for Psoriasis vulgaris (CMCBCOP): study protocol for a randomized controlled trial
Background Psoriasis causes worldwide concern because of its high prevalence and its harmful, incurable nature. Topical therapy is a conventional treatment for psoriasis vulgaris. Chinese medicine (CM) has been commonly used in an integrative way for psoriasis patients for many years. Some CM therapies have shown therapeutic effects for psoriasis vulgaris (PV), including relieving symptoms and improving quality of life, and may reduce the relapse rate; however, explicit evidence has not yet been obtained. The purpose of the present trial is to examine the efficacy and safety of the YXBCM01 granule, a compound Chinese herbal medicine, combined with topical therapy for PV patients. Methods/Design Using an add-on design, the trial evaluates whether the YXBCM01 granule combined with topical therapy is more effective than topical therapy alone for the treatment of PV. The study is a double-blind, parallel, randomized controlled trial comparing the YXBCM01 granule (5.5 g twice daily) with a placebo. The duration of treatment is 12 weeks. A total of 600 participants will be randomly allocated into two groups, the YXBCM01 granule group and the placebo group, from 11 general or dermatological hospitals in China. Topical use of calcipotriol betamethasone for the first 4 weeks and calcipotriol ointment for the remaining 8 weeks will be the same standard therapy for both groups. Patients will be enrolled if they have a clinical diagnosis of PV; a psoriasis area and severity index (PASI) of more than 10 or body surface area (BSA) involvement of more than 10%, but PASI of less than 30 and BSA of less than 30%; are aged between 18 and 65 years; and provide signed informed consent. The primary outcome, relapse rate, is based on PASI assessed blindly during the treatment.
Secondary outcomes include: (i) relapse time interval, (ii) time to onset, (iii) rebound rate, (iv) PASI score, (v) cumulative consumption of medicine, (vi) the dermatology life quality index (DLQI), and (vii) the medical outcomes study (MOS) 36-item short-form health survey (SF-36). Analyses will follow intention-to-treat and per-protocol principles. Discussion To address whether the YXBCM01 granule provides durable remission for PV, this trial may offer a novel regimen for PV patients if the granule can decrease the relapse rate without additional adverse effects. Trial registration Chinese Clinical Trial Registry (http://cwww.chictr.org): ChiCTR-TRC-13003233, registered 26 May 2013.
Background
Psoriasis is an immunologically abnormal, chronic, proliferative skin disease determined by polygenic inheritance and induced by a number of environmental factors. The prevalence rates of psoriasis in Europe vary between 0.73% (in Scotland) and 2.9% (in Italy), and rates reported in the United States vary between 0.7% and 2.6%, showing worldwide geographic variation [1]. In China, the incidence of psoriasis has increased to 0.47% [2], an upward trend of 0.12% compared to 1984 [3]. Psoriasis is characterized by scaly, erythematous patches, papules, and plaques that are often severely pruritic. Symptoms such as poor self-esteem, sexual dysfunction, anxiety, depression, and suicidal ideation due to the appearance of the skin can significantly reduce patients' health-related quality of life. For instance, in China, 59.8% of people with psoriasis experience negative influences on quality of life [2]. Psoriasis has become a cause of increasing concern worldwide due to its high prevalence and its harmful, incurable nature.
Topical therapies remain the primary method of managing mild to moderate psoriasis, including topical corticosteroids, vitamin D analogues, sulfur ointment, laser therapy, dithranol, tazarotene, and coal tar [4,5]. In particular situations, plaque psoriasis, erythrodermic psoriasis, generalized pustular psoriasis, and refractory cases are treated with systematic therapy such as retinoic acid, methotrexate, glucocorticoid, and biologicals. However, most of the systematic therapies have serious side effects and are an expensive option, thus limiting their long-term use [6].
Over the years, Chinese medicine (CM) for the treatment of psoriasis has accumulated a wealth of clinical experience. Some CM therapies have shown long-lasting therapeutic effects in controlling psoriasis vulgaris (PV) with minimal side effects. CM can also alleviate symptoms effectively, reduce the recurrence rate, and control the condition with fewer side effects in psoriasis treatment [7][8][9]. A systematic review of CM for psoriasis treatment, including 32 randomized controlled trials worldwide reporting 5,179 patients with PV, suggested that certain CM interventions, or CM combined with conventional medicine, had promising results for PV [10]. Topical herbal formulae and herbal medicines added to conventional therapy for psoriasis appear to have potential benefits for symptom reduction and are also relatively safe [11,12]. However, there was previously no evidence supporting CM as an effective and safe therapy against PV relapse.
A Chinese herbal compound, the YXBCM01 formula, is theorized to have an effect on reducing PV's relapse rate based on Chinese medicine theory and clinical observations. The formula is composed of Radix Paeoniae Rubra (Chishao), Sarcandra Glabra (Jiujiecha), Rhizoma Smilacis Glabrae (Tufuling), and other herbs, and was developed by Professor Xuan Guowei, a well-known CM dermatologist in China. YXBCM01 has been the most commonly used formula to treat PV in Guangdong Provincial Hospital of Chinese Medicine (GPHCM) for over 20 years, and has been registered with the Guangdong Food and Drug Administration (hospital preparation approval number Z20080123). An observational study showed that two months of treatment with YXBCM01 for PV reduced Psoriasis Area and Severity Index (PASI) and Dermatology Life Quality Index (DLQI) scores; moreover, no adverse reaction was reported during the study period [13].
According to the German guidelines on the treatment of PV, topical therapy with a combination of calcipotriol and betamethasone for the first 4 weeks followed by calcipotriol ointment for the remaining 8 weeks is recommended, since it was significantly superior from a pharmacoeconomic standpoint when compared to the use of each therapy alone [13]. Therefore, a randomized, double-blind, placebo-controlled add-on study is designed to evaluate the efficacy of YXBCM01 concurrent with topical therapy of calcipotriol and betamethasone in reducing the relapse rate of PV. Results of this study will provide evidence regarding the value of YXBCM01 as an intervention for PV patients.
Design and setting
This is a multicenter, randomized controlled trial, which will be conducted in 11 general or dermatological hospitals in different provinces in China. The study aims to enroll 600 patients with PV over a three-year period. Participants are randomized in a 1:1 ratio to receive either oral YXBCM01 granules (5.5 g twice daily) or a placebo. After a 2- to 4-week run-in period for the screening of subjects and obtaining written informed consent, patients will be randomly allocated into two groups to undergo a treatment period of 12 weeks. A follow-up period of 12 weeks will continue until the study closes (see Figure 1 and Additional file 1). This trial was approved by the ethics committees of six sites and filed for archival management with the ethics committees of the five other sites (Additional file 2).
Participants
Patients with a clinical diagnosis of plaque psoriasis (as diagnosed by a dermatologist during the run-in period) can be enrolled into this study. A diagnosis of PV must comply with the guidelines of care for the management of psoriasis and psoriatic arthritis from the American Academy of Dermatology and the guideline for the treatment of psoriasis from the Psoriasis Study Group of the Chinese Medical Association [14,15].
Research assistants will introduce and discuss the trial with potential subjects using Mandarin or the local language. All potential subjects will be given a consent form and separate information sheets covering the main aspects of the trial. Patients will then be able to have an informed discussion with their family and participating consultant. Research assistants will obtain the signed consent form from patients willing to participate in the trial.
Recruitment procedure
Most participants will be recruited through posters in local newspapers and hospitals; a few patients may spontaneously contact trial centers.
Inclusion criteria
Patients will be enrolled if they have a clinical diagnosis of PV with a Psoriasis Area and Severity Index (PASI) of more than 10 or body surface area (BSA) involvement of more than 10%, but a PASI of less than 30 and BSA of less than 30%; are aged between 18 and 65 years; and provide signed informed consent.
Exclusion criteria
1. Those with guttate psoriasis, inverse psoriasis, or psoriasis exclusively involving the face
2. Those who are pregnant, lactating, or who plan to become pregnant within a year
3. Those with a Self-rating Anxiety Scale (SAS) score of > 50 or a Self-rating Depression Scale (SDS) score of > 53, or with other psychiatric disorders
4. Those with any uncontrolled cardiovascular, respiratory, digestive, urinary, or hematological disease
5. Those with a known cancer, infection, electrolyte imbalance, acid-base disturbance, or calcium metabolic disorder
6. Those with an abnormal serum calcium level (Ca2+ > 2.9 mmol/L or < 2 mmol/L)
7. Anyone allergic to any medicine or ingredient used in this study
8. Anyone currently enrolled in other clinical trials or who has participated in another trial within a month
9. Anyone who has received topical treatments (such as corticosteroids or retinoic acid) within 2 weeks, systemic therapy or phototherapy (ultraviolet radiation B, UVB; or psoralen combined with ultraviolet A, PUVA) within 4 weeks, or biological therapy within 12 weeks
10. Those with acute progression of psoriasis or an erythroderma tendency
11. Those who need systemic treatment as prescribed by doctors
Interventions
Participants in the experimental group will receive YXBCM01 granules, 5.5 g twice daily, for 12 weeks. Placebo granules, whose main ingredients are maltodextrin, lactose, and a natural edible pigment, and which are identical to the YXBCM01 granules in appearance, weight, and taste, will be given to patients in the placebo group at 5.5 g twice daily for 12 weeks. Sequential topical therapy is administered simultaneously in all eligible patients using a calcipotriol betamethasone ointment once daily (treatment area up to 30% BSA; one fingertip unit is recommended) in the first 4 weeks (maximum of 100 g weekly), followed by calcipotriol ointment once daily in the remaining 8 weeks (maximum of 100 g weekly), as recommended in the S3 guidelines for the treatment of psoriasis vulgaris [5]. The YXBCM01 granules and the matching placebo used in the trial are manufactured by Tianjiang Pharmaceutical Co., Ltd. (Jiangyin, Jiangsu Province, China), which meets the requirements of Good Manufacturing Practice (GMP). The calcipotriol betamethasone ointment (approval number H20080487) and calcipotriol ointment (approval number H20050125) are manufactured by LEO Laboratories Ltd (Ballerup, Denmark).
Rescue and concomitant treatment
Urea ointment is used as the basic concomitant treatment in the run-in and follow-up periods, according to the doctor's opinion. For patients with severe itching, cetirizine hydrochloride (one tablet per day) may be given as a rescue drug on the doctor's advice.
Randomization and blinding
A total of 600 participants will be enrolled from eligible patients at 11 research sites. A computer-generated randomization list, stratified by center with permuted block sizes, created with SAS 9.2 software (SAS Institute Inc., Cary, USA) and maintained by the Key Unit of Methodology in Clinical Research (KUMCR) of Guangdong Provincial Hospital of Chinese Medicine, will be used for randomization.
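The center-stratified permuted-block scheme described above can be sketched in Python. This is purely an illustrative re-implementation, not the SAS program actually used; the block size of 4 and the arm labels are assumptions.

```python
import random

def permuted_block_sequence(n, block_size=4, arms=("YXBCM01", "placebo"), seed=None):
    """Generate a 1:1 allocation sequence using permuted blocks.

    Each block contains every arm equally often and is shuffled, so the
    two groups stay balanced after every completed block.
    """
    assert block_size % len(arms) == 0
    rng = random.Random(seed)
    sequence = []
    while len(sequence) < n:
        block = list(arms) * (block_size // len(arms))
        rng.shuffle(block)
        sequence.extend(block)
    return sequence[:n]

def stratified_lists(n_per_center, centers, **kwargs):
    """One independent permuted-block list per center (stratum)."""
    return {c: permuted_block_sequence(n_per_center, **kwargs) for c in centers}

# Example: a separate list for each of the 11 sites (center counts assumed)
lists = stratified_lists(60, [f"site{i:02d}" for i in range(1, 12)])
```

Stratifying by center and permuting within small blocks keeps the arms balanced within every site even if recruitment stops early, which is why the protocol specifies this design.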
The participants will be randomly allocated to two different treating groups in a 1:1 ratio through the Interactive Web Response System for Chinese Medicine Trials (IWRS-CMT), which was a verified online randomization facility established by the KUMCR.
The randomization schedule will be concealed until interventions are all assigned and enrollment, follow-up, data collection, data cleaning, and analysis are complete. The participants, paramedics, investigators, outcome assessors, statisticians, and other staff will not know the treatment allocation, which will not be revealed until the end of the study.
The randomization list and blinding codes will be kept strictly confidential. Only the KUMCR staff will have access to the randomization list. Blinding will be ensured using a matched placebo granule identical in color, size, shape, and taste. The quality of the matched trial supplies, including contents, solubility, and bacterial contamination, will be controlled rigorously according to GMP standards, and tested and verified by researchers.
Outcome measures
The primary outcome measure in the trial is the relapse incidence rate in the treatment and follow-up periods. Relapse is defined as a loss of 50% of the PASI improvement from baseline in patients who have achieved treatment success (at least 50% improvement in PASI score from baseline) [16,17]. PASI will be assessed every week during the first 4 weeks and every 2 weeks throughout the rest of the treatment and follow-up periods. Meanwhile, patients will be required to report the emergence of new skin lesions at any time during the study period, and researchers will assess the PASI score on the same or closest day.
Secondary outcome measures include time to relapse, time to onset, rebound rate, PASI score, cumulative consumption of topical medicine, visual analogue scale (VAS), BSA, DLQI, and SF-36 (the MOS item short form health survey).
Time to relapse is defined as the time to loss of at least 50% of the PASI improvement for the first time. Time to onset is the time taken for the PASI score to decrease by more than 50% for the first time; treatment will be considered ineffective when the PASI score does not achieve 50% improvement throughout the treatment period. Rebound refers to a PASI score increasing to more than 125% of baseline, or the occurrence of new generalized pustular or erythrodermic psoriasis after improvement (PASI-50) during the study period. The VAS and BSA will be assessed every week during the first 4 weeks and every 2 weeks throughout the rest of the treatment period. The DLQI and SF-36 will be self-assessed by patients every 4 weeks throughout this trial.
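Under one reading of these definitions, the endpoint classifications can be expressed as simple functions. The thresholds below come from the protocol text, but treating the "improvement achieved" as the best (lowest) on-treatment PASI, and reading rebound as PASI exceeding 125% of baseline, are our assumptions.

```python
def pasi_improvement(baseline, current):
    """Fraction of the baseline PASI score that has resolved."""
    return (baseline - current) / baseline

def treatment_success(baseline, current):
    """PASI-50: at least 50% improvement from baseline."""
    return pasi_improvement(baseline, current) >= 0.50

def relapsed(baseline, best, current):
    """Relapse: after achieving PASI-50, loss of at least 50% of the
    improvement achieved (assumed measured at the best on-treatment PASI)."""
    if not treatment_success(baseline, best):
        return False
    achieved = baseline - best
    retained = baseline - current
    return retained < 0.50 * achieved

def rebound(baseline, current):
    """Rebound, assuming '125% above baseline' means PASI > 1.25 x baseline."""
    return current > 1.25 * baseline
```

For example, a patient with a baseline PASI of 20 who improves to 6 (70% improvement) and then worsens to 14 has kept only 6 of the 14 points gained, i.e. less than 50%, and so counts as relapsed.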
Safety assessments
Participants will be questioned about and asked to report all adverse events (AEs) at each visit point, and all AE reports will be recorded and assessed by the investigators. A blood test, urinalysis, hemagglutination test, and electrocardiograph examination will be performed before and after the treatment. Furthermore, calcium, kidney, and liver function tests will be reexamined at week 4 and week 12, respectively. All abnormal changes from baseline in laboratory tests will be evaluated.
Sample size calculation
Based on White et al.'s study [18], the relapse rate after 12 weeks of sequential topical therapy is 37.3%, versus 46.6% with placebo. We therefore assume a relapse rate at week 12 of 20% for the YXBCM01 granule combined with sequential topical therapy, and 37% for sequential topical therapy alone. Under this assumption, as calculated by PASS 11.0 software (NCSS, LLC, Kaysville, Utah, USA), a sample size of 239 per group achieves 90% power with a two-sided type I error of 5% to detect a superiority margin difference of 10% in this two-arm trial with equal allocation. Allowing for 15 to 20% loss to follow-up, the total sample size was adjusted to 300 per group.
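The dropout adjustment, together with a classical two-proportion sample-size formula for comparison, can be sketched as follows. Note that PASS's superiority-by-a-margin calculation uses a different effect size, so the simple formula here is illustrative and is not expected to reproduce the figure of 239.

```python
from math import ceil, sqrt
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.05, power=0.90):
    """Classical two-proportion sample size, with no superiority margin."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # two-sided type I error
    z_b = NormalDist().inv_cdf(power)           # desired power
    p_bar = (p1 + p2) / 2
    numerator = (z_a * sqrt(2 * p_bar * (1 - p_bar))
                 + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p1 - p2) ** 2)

def adjust_for_dropout(n, dropout):
    """Inflate n so that the expected number of completers is still n."""
    return ceil(n / (1 - dropout))

print(n_per_group(0.20, 0.37))        # 147 under this simple formula
print(adjust_for_dropout(239, 0.20))  # 299; the protocol rounds up to 300
```

The dropout inflation is the step that takes the PASS result of 239 per group to the protocol's enrollment target of 300 per group at the 20% loss-to-follow-up bound.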
Data management and quality control
The data collected in this trial comprise information recorded in case report forms and on the DLQI and SF-36 scales. After every visit is completed at each center, data will be entered using the double-entry method.
To ensure that outcome assessments are of a high standard in accordance with the trial protocol, physicians, assessors, and research assistants will attend a six-hour training workshop before the trial is conducted. They will also be provided with a written protocol and standard operating procedure documents. All the data at the different sites will be checked regularly by researchers from GPHCM and overseen by monitors. The monitoring tasks of the trial will be entrusted to the Guangdong International Clinical Research Center of Chinese Medicine (Guangzhou, China). The auditing and inspection of the trial will be performed by the Department of Science Research of GPHCM and the Office of the National Key Technology R&D Program for the Twelfth Five-year Plan of the Ministry of Science and Technology, China. The Data Monitoring Committee from GPHCM will assess the safety data and the critical efficacy outcomes.
Participants may withdraw from the study at any time for any reason. If any patients wish to withdraw, clinicians should ask if they would be willing to complete the assessments as according to the study schedule and write down their last time of taking the medicine. Incidences of patient loss to follow-up and withdrawal will be recorded and reported.
Statistical analysis
Data analysis will follow the statistical analysis plan for this trial. Data will be processed with PASW Statistics 18.0 (IBM SPSS Inc., Armonk, New York, USA) and SAS 9.2 software (SAS Institute Inc., Cary, USA). Two-tailed P values < 0.05 are considered statistically significant. Analyses will follow intention-to-treat and per-protocol principles. The baseline characteristics of patients in the two groups will be reported. The primary outcome, relapse rate, will be compared between the groups at 12 weeks and 24 weeks after treatment using the Chi-square test, with superiority between the two groups assessed by the 95% confidence interval method. The secondary outcomes will be summarized with frequency, mean, standard deviation, median, and range. At each time point, comparisons between the experimental group and the placebo group will be conducted using the t-test. To distinguish the treatment effect from the time effect, changes from baseline in the above outcomes will be tested using repeated-measures analysis of variance. Analysis of covariance or a logistic regression model will be used to analyze the site effect and impact factors. The covariables will include gender, age, concomitant drugs, disease course, BSA, and PASI at the baseline visit. Subgroup analyses will be performed based on the severity of disease and CM patterns. For timed endpoints such as time to relapse and time to onset, we will use Kaplan-Meier survival analysis followed by a multivariable Cox proportional hazards model to adjust for baseline covariates. To assess the impact of potential clustering of patients cared for by the same hospital, we will use generalized estimating equations (GEE) assuming an exchangeable correlation structure.
Safety will be evaluated by tabulations of adverse events, presented with descriptive statistics at baseline and follow-up visits for each group. The statistics will be organized by treatment phase and post-treatment phase as appropriate. The Chi-square test or Fisher's exact test will be used to compare the frequency of adverse events between the experimental and control groups. As cases are divided into different degrees of AE, the rank-sum test will be performed to analyze ordered categorical data between the two groups.
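For the small 2x2 tables that arise with rare adverse events, Fisher's exact test can be computed directly from the hypergeometric distribution. A minimal sketch, illustrated (purely as an example) with the relapse counts reported for the pilot study elsewhere in this protocol (2 of 7 versus 7 of 10):

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact test for a 2x2 table [[a, b], [c, d]].

    Sums the probabilities of all tables with the same margins that are
    no more likely than the observed one (the usual two-sided convention).
    """
    r1, r2 = a + b, c + d
    c1, n = a + c, a + b + c + d
    total = comb(n, c1)
    p_obs = comb(r1, a) * comb(r2, c1 - a) / total
    p = 0.0
    for x in range(max(0, c1 - r2), min(r1, c1) + 1):
        p_x = comb(r1, x) * comb(r2, c1 - x) / total
        if p_x <= p_obs * (1 + 1e-9):
            p += p_x
    return p

# Pilot relapse counts: 2/7 (YXBCM01) vs 7/10 (placebo)
p = fisher_exact_two_sided(2, 5, 7, 3)
print(round(p, 3))  # 0.153
```

With such small cell counts the exact test is preferred over the Chi-square approximation, which is exactly the situation the safety analysis anticipates for rare AEs.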
Discussion
Chinese herbs have been widely used in China and other Asian countries for years. Simultaneously, dozens of studies of Chinese herbs in vivo or in vitro have been conducted to identify their ingredients or pharmacological mechanisms. In a mouse model, oral administration of Sarcandra Glabra extract (5-caffeoylquinic acid, 3-caffeoylquinic acid, isofraxidin, and so on) could alleviate the stress-induced reduction in the number of lymphocytes, the CD4+ T/CD8+ T balance, and natural killer cell activity per spleen [19]. Niu et al. [20] found that arthritic mice treated with isofraxidin (IF) showed an obvious difference in serum tumor necrosis factor-α (TNF-α) compared with the lipopolysaccharide-stimulated group; IF may possess significant anti-inflammatory activities. Astilbin, isolated from Rhizoma Smilacis Glabrae, inhibited footpad swelling, arthritic incidence, and clinical scores without loss of body weight [21]. These previously reported pharmacological properties of Sarcandra Glabra extract and Rhizoma Smilacis Glabrae may at least partially explain the clinical benefits of the YXBCM01 formula.
The inhibitory effect of the YXBCM01 formula on epithelial cell mitosis in mice has also been shown [22]. Furthermore, its effect on proliferating cell nuclear antigen (PCNA) presentation and keratinocyte (KC) apoptosis was studied through lavage at different doses in mice, in which the high dose had a remarkable influence on PCNA inhibition and COLO-16 apoptosis [23]. Also, the YXBCM01 decoction was found to significantly inhibit the mitosis of mouse vaginal epithelium, promote the formation of granular layers in mouse tail-scale epidermis, and remarkably inhibit the growth of the human keratinocyte line HaCat, when compared with the saline control group [24]. Among the five main constituents of the YXBCM01 formula, isofraxidin was found to be the active constituent for its correlation with the pathogenesis of psoriasis, with astilbin as a helper constituent due to its relationship with the transportation of drug molecules [25].
Some studies have shown that CM agents are effective and safe in the treatment of psoriasis. A randomized trial was conducted to compare CM combined with an acitretin capsule against CM alone [8]; the adverse reactions of the acitretin capsule could be alleviated by adjusting the herbs used. In another randomized controlled trial, 78 patients were randomly assigned to two groups: oral CM decoction (30 patients) or compound amino-polypeptide tablet (28 patients) [26]. At 4 weeks, the total PASI response rate was significantly higher in the CM group (83.33%) than in the control group (64.28%). Quality of life improvement in the CM group was superior to that in the control group. Similarly, Gao and Xu [27] demonstrated that a compound amino-polypeptide tablet combined with a CM decoction (94.29%) had an advantage over amino-polypeptide (83.33%) or CM decoction alone (62.86%) in PASI reduction. The incidence of adverse events in the CM group was lower than that in the amino-polypeptide and integrated groups, as was the relapse rate. However, the research quality was low and the definition of relapse was not stated. Remission seems to be long-lasting and less costly compared with other treatments or biologic agents [28], but the relapse rate under CM treatment has not previously been evaluated in randomized controlled trials.
A pilot study is necessary and important to demonstrate the feasibility of this trial; one was performed to inform the design from August 2012 to July 2013 in Guangzhou, China. It used the same add-on design: a double-blind, randomized controlled trial comparing the YXBCM01 granule to placebo. The results showed a relapse rate in the YXBCM01 granule group of 28.6% (2 out of 7 patients) versus 70% (7 out of 10 patients) in the placebo group. As this result was consistent with our supposition for the sample size calculation, we revised some details in the study design after this pilot study, and it paved the way for patient recruitment and trial conduct.
As a randomized controlled trial of CM for psoriasis, it is unusual that both the CM granule and placebo groups receive topical therapy. We aim to conduct this trial to demonstrate both the efficacy and safety of the YXBCM01 formula for PV patients, and consider that a conscientiously performed trial and effective outcomes will benefit PV patients. If this trial provides high-quality evidence for the efficacy and safety of the YXBCM01 formula, it will provide clinically useful information for PV, especially for reducing relapse in PV patients.
Trial status
The pilot phase of the trial started in August 2012 in the Guangdong Provincial Hospital of Chinese Medicine, China and is still in progress.
Characterization of Antennal Sensilla and Immunolocalization of Odorant-Binding Proteins on Spotted Alfalfa Aphid, Therioaphis trifolii (Monell)
The spotted alfalfa aphid [Therioaphis trifolii (Monell), Homoptera, Drepanosiphidae] is a well-known destructive pest that can significantly reduce alfalfa yields. Herein, the morphology of antennal sensilla of T. trifolii has been examined by using scanning electron microscopy and the ultrastructure of sensilla stellate and placoidea was described by transmission electron microscopy. Stellate sensilla, placoid sensilla, and coeloconic sensilla were found on the 6th segment, and a single sensillum placoidea was located on the 5th segment. Placoid sensilla were also present on the 3rd antennal segment of alate and apterous aphids, and the number was similar between two morphs. Two types of trichoid sensilla and coeloconic sensilla were found on the antennae, respectively. The results of ultrastructure showed that stellate sensilla are innervated by three neurons, while placoid sensilla present three groups of neurons, equipped with 2–3 dendrites in each neuron group. Immunocytochemical localization of odorant-binding proteins (OBPs) was performed on ultrathin sections of sensilla stellate and placoidea, and we observed that the antiserum against OBP6 intensively labeled all placoid sensilla from both primary and secondary rhinaria. OBP7 and OBP8 could also be detected in placoid sensilla, but less strongly than OBP6. In addition, OBP6, OBP7, and OBP8 were densely labeled in stellate sensilla, suggesting OBP6, OBP7, and OBP8 may sense alarm pheromone germacrene A in T. trifolii.
INTRODUCTION
The spotted alfalfa/clover aphid, Therioaphis trifolii (Monell) (Homoptera: Aphididae), is a cosmopolitan pest of legumes, mainly in the tribes Trifoliae and Loteae (Blackman and Eastop, 2000). The spotted alfalfa aphid (SAA) damages plants directly by sucking the juices from the leaves and tender stems and indirectly by vectoring plant-pathogenic viruses, severely interfering with plant growth and affecting the quality and quantity of herbage produced (He and Zhang, 2006). Losses of alfalfa have been large, and control of the insect has become of great economic importance. According to reports, SAA severely inhibits seedling establishment and plant growth, particularly for hay, with an estimated 25% loss in production (He and Zhang, 2006).
Using insecticides to control aphid populations has become more difficult as aphids develop insecticide resistance (Tang et al., 2017; Zhang et al., 2017a; Hanson and Koch, 2018; Koch et al., 2018). Thus, there is considerable interest in developing eco-friendly pest-control methods, with the use of semiochemicals as a distinct possibility. Searching for environmentally safe prevention and control strategies is extremely important. The behaviors of insects, such as locating food sources, mating partners, and oviposition sites, choosing suitable hosts, and identifying predators, are frequently modulated or evoked by semiochemicals emitted by host plants or conspecifics (Zwiebel and Takken, 2004; Yoshizawa et al., 2011). The olfactory system plays a critical role in perceiving semiochemicals (Karg and Suckling, 1999; Field et al., 2000). Aphids, like other insects, use semiochemicals to direct much of their behavior (Fan et al., 2015; Zhang et al., 2017b; Song et al., 2018).
It is well known that the antenna is one of the primary organs that insects use to recognize semiochemicals and environmental odors. In aphids, the antennal olfactory sensilla have been divided into primary rhinaria, secondary rhinaria, and trichoid sensilla according to external morphology (Shambaugh et al., 1978; Zhang and Zhang, 2000). The primary rhinaria occur on the 5th and 6th segments of the antenna and include several sensillum types. The secondary rhinaria, which are sensilla placoidea, are in fact located between the 3rd and 5th segments. The literature reports that two different types of trichoid sensilla have been identified based on morphology (Bromley et al., 1980). Type I hairs are found along the whole length of the antenna as far as the primary rhinarium on the 6th segment, while type II hairs occur along the processus terminalis and on the tip of the antenna.
Aphids produce repellent droplets from the cornicles to alert nearby conspecifics to escape by walking away and dropping off the host plant, protecting native populations from natural enemies or other dangers. These secretions contain an alarm pheromone (Dixon, 1958; Dahl, 1971; Kislow and Edwards, 1972; Nault et al., 1973; Goff and Nault, 1974). The alarm pheromone plays an essential role in aphid behavior and has been applied to explore potential strategies for aphid population control (Li et al., 2017). Two primary alarm pheromones, (E)-β-farnesene and germacrene A, have been identified in aphids to date. (E)-β-Farnesene has been found in all studied species of the subfamilies Aphidinae and Chaitophorinae, while germacrene A was identified only within the genus Therioaphis of the subfamily Drepanosiphinae (Bowers et al., 1972; Nault and Bowers, 1974; Nishino et al., 1977). Early studies showed that aphids of the genus Therioaphis, such as T. trifolii, lack a response to (E)-β-farnesene. Although biochemical research on the olfactory system of aphids is rapidly progressing, information about aphids of the Drepanosiphinae is still scanty and fragmentary. In insects, semiochemicals and environmental odors enter the sensillum lymph via pores in the cuticle of the sensilla and are carried by odorant-binding proteins (OBPs), transported through the sensillum lymph, and finally reach the sensory dendrites, where they activate membrane-bound odorant receptors (ORs) (Brito et al., 2016). OBPs, usually 14-20 kDa, are abundantly expressed in the lymph of chemosensilla and regarded as solubilizers and carriers of hydrophobic pheromones, contributing to the discrimination of semiochemicals (Qiao et al., 2009; Pelletier et al., 2010; Swarup et al., 2011; Sun et al., 2012). Early studies reported that either or both of OBP3 and OBP7 might be involved in (E)-β-farnesene perception in most aphids, such as Rhopalosiphum padi and Acyrthosiphon pisum (Qiao et al., 2009; Fan et al., 2017; Zhang et al., 2017a).
Although a previous study reported the alarm pheromone of T. trifolii, which OBP is involved in the perception of germacrene A in T. trifolii is still unknown.
While biochemical research on the olfactory system of aphids is rapidly progressing, information at the anatomical level for the SAA T. trifolii (Homoptera, Drepanosiphidae) is still scanty and fragmentary. This study was conducted to investigate the function of the antennae in T. trifolii by studying the distribution and fine structure of chemosensilla, using both scanning and transmission electron microscopy, and mapping the expression of OBPs in such sensilla. Our research offers data on the candidate OBPs potentially involved in the perception of semiochemicals in the aphid T. trifolii, which will provide novel strategies for integrated aphid management. Herein, we report on the morphology and ultrastructural characterization of the different types of antennal sensilla in the SAA T. trifolii by scanning and transmission electron microscopy. In addition, the distribution and expression of OBP6, OBP7, and OBP8 in sensilla stellate and placoidea were investigated.
Insect Rearing
The spotted alfalfa aphid T. trifolii was reared on alfalfa (Medicago sativa) at 20-22°C and 60-70% relative humidity with a photoperiod of 16:8 h (light:dark) at the College of Grassland Science and Technology, China Agricultural University, Beijing, China.
Scanning Electron Microscopy
To confirm the number and types of sensilla on the antennae of T. trifolii, twenty alate and apterous adult aphids were used for scanning electron microscopy (SEM). The heads of all samples were carefully excised with fine forceps under a stereomicroscope. The heads were first kept in 70% ethanol for 48 h at room temperature and then cleaned in an ultrasonic bath (250 W) for 5 s in the same solution. After dehydration through an ethanol series (30, 50, 70, 80, 90-100%) (Bock, 1987), 3 min each, the specimens were dried in a critical point dryer (LEICA CPD 030, Wetzlar, Germany) for 1.5 h. The dried heads were mounted on holders and gold-sputtered in a Hitachi ion sputter coater (HITACHI ID-5, Tokyo, Japan), and the sensilla types were then identified and counted with a HITACHI S-4800 SEM (Japan). Pictures were adjusted only for brightness and contrast.
Transmission Electron Microscopy
The antennae used for transmission electron microscopy (TEM) were excised and prefixed for 2 days in paraformaldehyde (4%) and glutaraldehyde (2.5%) in 0.1 M phosphate-buffered saline (PBS, pH 7.2), postfixed for 1 h in 1% OsO4 in 0.1 M PBS (pH 7.2), and then dehydrated in an ethanol series (30, 50, 70, 80, 90, 95-100%), 3 min each. After being dehydrated with pure acetone three times for 10 min each, the samples were embedded in Epoxide resin 618 (Serva, Heidelberg, Germany) through 2:1, 1:1, and 1:2 mixtures of acetone and resin, and then kept in pure Epoxide resin 618 overnight. Polymerization was accomplished by heating from 30 to 60°C (5°C/6 h) and then at 60°C for 48 h in tightly closed gelatin capsules filled completely with the resin monomer. Ultrathin sections were cut with a diamond knife (Diatome, Bienne, Switzerland) on a Leica EM UC6 microtome (Wetzlar, Germany) and then mounted on Formvar-coated grids. The sections were observed with a HITACHI H-7500 TEM (Hitachi, Tokyo, Japan). Pictures were adjusted only for brightness and contrast.
Immunocytochemistry
The antennae used were prefixed in paraformaldehyde (4%) and glutaraldehyde (2%) in 0.1 M PBS (pH 7.4) and then dehydrated in an ethanol series. The samples were embedded in LR White resin, and ultrathin sections were cut and mounted on Formvar-coated grids. For immunocytochemistry, the grids were floated successively, each time for 5 min, on 25-µL droplets of the following solutions, following the procedure adapted from Steinbrecht et al. (1992). In brief, the grids with the sections were floated on solutions of PBG (PBS containing 50 mmol/L glycine) and PBGT (PBS containing 0.2% gelatin, 0.5% bovine serum albumin, and 0.02% Tween-20), twice for each solution, and then overnight at 4°C in primary antiserum (against OBP6, OBP7, or OBP8), or preimmune serum, in PBGT. After six washes with PBGT, sections were incubated for 1 h with a secondary antibody in PBGT (1:20) at room temperature, followed by two washes each with PBGT, PBS-glycine, and water. Each washing step was performed with 20-µL droplets for 5 min. Silver intensification (Danscher, 1981) was applied to increase the size of the gold granules, followed by 2% uranyl acetate to increase tissue contrast in TEM. Sections were then observed under a HITACHI H-7500 transmission electron microscope (Hitachi, Tokyo, Japan). Pictures were adjusted only for brightness and contrast.
In this study, the antisera against OBP6, OBP7, and OBP8 of A. pisum, kindly provided by Dr. Paolo Pelosi, University of Pisa (Qiao et al., 2009; Sun et al., 2012), were used as the primary antisera at a dilution of 1:1000. In the controls, the primary antiserum was replaced by serum from a healthy rabbit at the same dilution. The secondary antibody was anti-rabbit IgG coupled to 10-nm colloidal gold (AuroProbe TM EM, GAR G10, Amersham).
Image analysis was performed with ImageJ (developed at the United States National Institutes of Health).
RESULTS
The antennae of SAA T. trifolii are composed of three parts: a scape (Sc), a pedicel (Pe), and a long flagellum (F), with a total length of 1.8 mm (Figure 1A). The flagellum accounts for more than 80% of the length of the whole antenna and consists of four subunits, named F1-F4.
Four morphologically distinct types of sensilla were present on the entire surface of the SAA antennae: placoid sensilla, stellate sensilla, coeloconic sensilla, and trichoid sensilla. The primary rhinaria of T. trifolii were found on the 5th and 6th segments of the antennae. The primary rhinarium on the 6th segment consists of one large placoid sensillum, two stellate sensilla, and two to three coeloconic sensilla (Figures 1B,C), with numerous pores penetrating the surface of the former two types of sensilla (Figures 1D,E). The primary rhinarium on the 5th segment is a single placoid sensillum similar to that on the 6th segment (Figure 1F). Numerous secondary rhinaria (placoid sensilla) are located on the 3rd segment (Figures 1G,H), and the numbers of placoid sensilla were similar between alate and apterous morphs, ranging from 5 to 12 (Table 1 and Supplementary Figure 1). There are two to three coeloconic sensilla in total, classified into two types according to their terminal projections. Trichoid sensilla, found along the whole length of the antenna, are classified into two types according to their morphology. The number and distribution of the different sensilla on the antenna of T. trifolii are listed in Table 1 and Supplementary Figure 1. In addition, the expression of OBPs in the antennal placoid and stellate sensilla of the aphid T. trifolii was investigated using immunocytochemical methods.
Placoid Sensilla
The placoid sensilla of the SAA are flat oval plates set in a cavity. They constitute both the primary and secondary rhinaria of these aphids, but only the former are surrounded by a cuticular ridge (Figure 1F), whereas those of the secondary rhinaria are surrounded only by a few small microtrichia at the proximal edge of the cavity (Figure 1H).
The primary rhinaria on the 5th and 6th segments each contain a single large placoid sensillum (Figures 1B,F). The large placoid sensillum on the 6th segment is approximately 12 µm in diameter and similar to the one on the 5th segment. Many pores are located on the surface of these sensilla (Figure 1E) and perforate the outer cuticle (Figures 2A,B). The dendrites of the bipolar neurons within the placoid sensillum are clustered into three groups (Figure 2C), two of them containing three bipolar neurons (Figures 2C,D), while the third group has only two (Figures 2C,E). The bipolar neurons are enclosed in a dendritic sheath (Figure 2D). Each dendrite is subdivided into inner and outer segments by a short ciliary region, and each group of neurons is surrounded by a trichogen cell (Figure 2E). Both the single placoid sensillum on the 5th segment and the secondary rhinaria along the 3rd segment show internal structures similar to that of the placoid sensillum on the 6th segment.
Stellate Sensilla
Two stellate sensilla are present on the 6th antennal segment of T. trifolii as part of the primary rhinaria. These sensilla usually have six to eight branches and are surrounded by a fringed cuticular ridge (Figure 1B). Each branch is about 5 µm in length. As in the placoid sensilla, many pores are present on the surface of the branches (Figures 1D, 3A) and penetrate the outer cuticle (Figure 3B). The diameters of the pores are about 30 nm (Figure 1D). The dendrites of the bipolar neurons within these sensilla are clustered into one group containing three bipolar neurons (Figure 3C). The dendrites are enclosed by a dendritic sheath, which disappears at their distal ends.
When the dendrites enter the intercuticular space between the inner and outer cuticles, they separate into dendritic branches and turn toward the distal end of the sensillum, occupying the whole space (Figure 3C). The space beneath the pores is filled with the sensillum lymph in which the dendritic branches are located. The dendrite is also subdivided into inner and outer segments by a short ciliary region and is surrounded by the trichogen cell (Figure 3D).
Coeloconic Sensilla
Two to three coeloconic sensilla are present on the 6th segment of the SAA antenna, distributed as shown in Figures 1B,C. They are typical peg-in-pit sensilla, characterized externally by a round aperture (Figures 4A,B). The peg is 1.5 µm in height and 0.6 µm in diameter (Figures 4A,B). The peg terminates in many cuticular projections and exhibits a range of different shapes (Figures 4A,B). According to the terminal projections, the coeloconic sensilla were classified into two types. Type I pegs are usually characterized by a crown of six cuticular projections (Figure 4A), while type II pegs exhibit a more complicated morphology, ending with more, usually closely packed, cuticular projections (Figure 4B). These sensilla of the SAA are similar to those found in the peach aphid Myzus persicae (Ban et al., 2015).
Trichoid Sensilla
Two distinct types of trichoid sensilla are present on the antenna of the SAA. Type I trichoid sensilla are visible along the whole length of the antenna, with no pore on either the surface or the tip (Figures 4C,D). Type II hairs were found only on the tip of the antenna and are uniporous (Figures 4E,F). Depending on their external morphology, type I is divided into two subtypes, type IA (Figure 4C) and type IB (Figure 4D). Type IA hairs present a swollen tip with a diameter of 1.8 µm, and the base of the hair forms an oval-shaped plate inserted into a ring-shaped socket. Type IA hairs are approximately 7.5 µm long and 1.2 µm wide at the base (Figure 4C). Type IB hairs are similar to type IA except that they bear a sharp tip with a diameter of 0.4 µm (Figure 4D). Type IA hairs are present on the scape and pedicel, while type IB hairs are found from the 3rd to the 6th segment. Type II hairs are present on the tip of the antenna, which is crowned by five of these sensilla (Figure 4G). They are approximately 4-6 µm in length and 0.7-1.2 µm in width and are inserted into a ring-shaped socket. A pore is found at the tip of type II hairs, while there are no pores on the surface of these sensilla (Figures 4E-G).
Immunolabeling of OBPs
Immunocytochemical experiments were performed to investigate the cellular localization of OBP6, OBP7, and OBP8 in the SAA antennae. The results indicated that placoid and stellate sensilla are labeled by the OBP antisera, whereas trichoid and coeloconic sensilla are not. The distribution of the OBPs and the labeling density in the different placoid and stellate sensilla are summarized in Table 2.
The antiserum against OBP6 of T. trifolii specifically and strongly labeled the placoid sensilla on the 6th and 3rd segments. The gold granules are predominantly distributed between the outer and inner cuticles, with grain densities of 35 and 28 grains/µm² on the 6th and 3rd segments, respectively (Figure 5). The lymph of the placoid sensilla on the 3rd segment was also labeled specifically by the antiserum against OBP8 (Figure 5), with a grain density of 17 grains/µm². The antiserum against OBP7 showed relatively weak but significant staining specifically in the placoid sensilla of the 3rd antennal segment. Finally, very weak and barely significant labeling was observed with the antisera against the OBPs in the placoid sensilla of the 5th segment (data not shown). The stellate sensilla on the 6th segment were also labeled by the OBP6, OBP7, and OBP8 antisera, mainly in the branches and in the sensillum lymph surrounding the dendrites (Figure 6). (Labeling density scale used in Table 2: -, no significant labeling; +, 10-20 grains/µm²; ++, 20-30 grains/µm²; +++, more than 30 grains/µm².)
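The density thresholds behind the -/+/++/+++ scale used in Table 2 can be expressed as a small helper. This is an illustrative sketch: the function name and per-sensillum inputs are hypothetical, and only the thresholds and example densities come from the text.

```python
def labeling_category(grains, area_um2):
    """Map a gold-granule count over a measured area (in square µm)
    to the labeling-density symbol used in Table 2."""
    density = grains / area_um2
    if density < 10:
        return "-"    # no significant labeling
    if density < 20:
        return "+"    # 10-20 grains per square µm
    if density < 30:
        return "++"   # 20-30 grains per square µm
    return "+++"      # more than 30 grains per square µm

# Densities reported for the OBP6 antiserum (6th and 3rd segments)
# and the OBP8 antiserum (3rd segment):
print(labeling_category(35, 1.0))  # '+++'
print(labeling_category(28, 1.0))  # '++'
print(labeling_category(17, 1.0))  # '+'
```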
DISCUSSION
In this study, we investigated the structure, morphology, and distribution of the sensilla and the expression of three OBPs (OBP6, OBP7, and OBP8) on the antenna of the SAA T. trifolii.
Similar to those of other aphids, the antennae of the SAA T. trifolii contain six segments. Based on external morphology, the antennal sensilla of the SAA are classified into four types: placoid sensilla, stellate sensilla, coeloconic sensilla, and trichoid sensilla. One striking observation is the presence of stellate sensilla in the primary rhinaria on the 6th segment of this aphid. To the best of our knowledge, stellate sensilla have previously been identified only in aphid species of the subfamily Drepanosiphinae (Shambaugh et al., 1978). Compared to other aphids, these aphids present two stellate sensilla but lack two small placoid sensilla (Table 1), indicating that the stellate sensilla may be substitutes for the latter. The ultrastructure of these two sensillum types supports their similarity, in that both contain three bipolar neurons (Ban et al., 2015). Like placoid sensilla, stellate sensilla also have many pores on their surface (Shambaugh et al., 1978; Ban et al., 2015), suggesting a chemosensory function (Sun et al., 2013). As previously described, we also observed multiple pores on the surface of the outer cuticle of the placoid sensilla (Bromley et al., 1979), indicating that they are typical olfactory chemoreceptors (Steinbrecht, 1984). We found that the number of secondary rhinaria (placoid sensilla) on the antenna is very similar between alate and apterous T. trifolii (Table 1), whereas secondary rhinaria are seldom found in apterous morphs of other aphids such as M. persicae and A. pisum (Table 1; Shambaugh et al., 1978; Sun et al., 2013; Ban et al., 2015; De Biasio et al., 2015). In addition, the ultrastructure of the secondary rhinaria (placoid sensilla) is also very similar between alate and apterous morphs in T. trifolii; further studies are needed to confirm this phenomenon in other aphids. Coeloconic sensilla have been reported to be involved in thermo-/hygroreceptive functions in both Lepidoptera and Diptera (Sutcliffe, 1994). Similar to what was described in other aphid species (Bromley et al., 1980), two types of trichoid sensilla were found in our study, which could be involved in mechanosensing and/or contact chemoreception (Bromley et al., 1980; Sun et al., 2013; Ban et al., 2015). Type II trichoid sensilla are localized on the antennal tip, which is crowned by five blunt-tipped uniporous hairs, implying a gustatory function for these sensilla (De Biasio et al., 2015). The ultrastructure of the coeloconic and trichoid sensilla requires further study.

FIGURE 5 | Immunocytochemical localization of OBP6, OBP7, and OBP8 in placoid sensilla on the 3rd and 6th segments of the antennae of the spotted alfalfa aphid. The preimmune serum was used as control. The antiserum against OBP6 of T. trifolii specifically and strongly labeled the placoid sensilla on the 6th and 3rd segments. The lymph of the placoid sensilla on the 3rd segment was also labeled specifically by OBP8, but more weakly than by OBP6. The antiserum against OBP7 showed relatively weak but significant staining specifically in the placoid sensilla of the 3rd antennal segment. Dilution of the primary antibody was 1:1,000, and that of the secondary antibody (anti-rabbit IgG conjugated with 10 nm colloidal gold granules) was 1:20; the same below.

FIGURE 6 | Immunocytochemical localization of OBP6 (B), OBP7 (C), and OBP8 (D) in stellate sensilla on the antennae of the spotted alfalfa aphid. The preimmune serum was used as control (A). OBP6, OBP7, and OBP8 are densely labeled in the sensillum lymph of the stellate sensilla and appear to be colocalized in the same sensillum, while the preimmune serum shows no labeling at all. Dilution of the primary antibody was 1:1,000, and that of the secondary antibody (anti-rabbit IgG conjugated with 10 nm colloidal gold granules) was 1:20.
Immunocytochemistry has been used widely to study the location of OBPs in insects (Steinbrecht et al., 1995; Laue, 2000; Zhang et al., 2001, 2018; Zhu et al., 2016). Most studies have found OBPs in sensilla that have many pores on their surface (Steinbrecht et al., 1995; Laue, 2000; Zhang et al., 2001; Sun et al., 2013; Zhu et al., 2016). Here, we selected three OBPs to investigate their expression pattern in the antennal sensilla. The results showed high expression of OBP6, OBP7, and OBP8 in the antennal placoid and stellate sensilla of adults, supporting a chemosensory role for these proteins in detecting alarm pheromones, plant volatiles, or sex pheromone (Bromley et al., 1979; Sun et al., 2013; De Biasio et al., 2015). The alarm pheromone is released by aphids in the presence of danger and induces other individuals to immediately leave the host plant. The alarm pheromone (E)-β-farnesene has been found in all studied species of the subfamilies Aphidinae and Chaitophorinae, while germacrene A has been identified only within the genus Therioaphis of the subfamily Drepanosiphinae (Bowers et al., 1972; Nault and Bowers, 1974). Our results indicate that stellate sensilla might be involved in sensing the alarm pheromone germacrene A. An early study suggested that OBP7, together with OBP3, is involved in the perception of (E)-β-farnesene in both M. persicae and A. pisum (Sun et al., 2012; Zhang et al., 2017b). Our immunocytochemical results showed that placoid and stellate sensilla are strongly labeled by antibodies against OBP6 and significantly labeled by those against OBP7 and OBP8, suggesting that OBP6, OBP7, and OBP8 may sense the alarm pheromone germacrene A. OBP3 has been reported to bind with high affinity to (E)-β-farnesene, the only component of the alarm pheromone in M. persicae and A. pisum (Qiao et al., 2009; Sun et al., 2012).
Whether OBP3 is involved in perceiving germacrene A remains unclear and requires further work to decipher.
Further molecular studies are also necessary to clarify the function of the stellate sensilla in the spotted alfalfa aphid.
Overall, we identified the main sensilla types on the antenna of the spotted alfalfa aphid, whose alarm pheromone differs from that of other aphids. Using TEM, we clarified the ultrastructure of the stellate sensilla, which are absent in aphids outside the subfamily Drepanosiphinae. Our findings enrich the understanding of aphid antennal sensilla. In addition, our study provides insights for developing new strategies to control these worldwide pests by interfering with their chemical communication.
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/Supplementary Material, further inquiries can be directed to the corresponding author/s.
AUTHOR CONTRIBUTIONS
LB conceived the project and designed the research. LS, YL, and YS performed the research. LS, XW, and LB analyzed the data and wrote the manuscript. All authors reviewed and approved the manuscript for publication.
FUNDING
This work was supported by grants from the National Natural Science Foundation of China (NSFC 31971759, 31372364) and Beijing Agriculture Innovation Consortium (BAIC09-2020).
ACKNOWLEDGMENTS
We thank Paolo Pelosi (University of Pisa, Italy) for supplying the antisera used in this work. We also thank Hao Hong Jing (Institute of Agro-products processing Science and Technology, Chinese Academy of Agricultural Sciences) for help with electron microscopy and Rui Yang (College of Plant Science and Technology, Beijing Key Laboratory of New Technology in Agricultural Application, Beijing University of Agriculture) for technical assistance in SEM.
Exact Solutions for the Axial Couette Flow of a Fractional Maxwell Fluid in an Annulus
The velocity field and the adequate shear stress corresponding to the rotational flow of a fractional Maxwell fluid between two infinite coaxial circular cylinders are determined by applying the Laplace and finite Hankel transforms. The solutions that have been obtained are presented in terms of the generalized G_{a,b,c}(·, t) and R_{a,b}(·, t) functions. Moreover, these solutions satisfy both the governing differential equations and all imposed initial and boundary conditions. The corresponding solutions for ordinary Maxwell and Newtonian fluids are obtained as limiting cases of our general solutions. Finally, the influence of the material parameters on the velocity and shear stress of the fluid is analyzed by graphical illustrations.
Introduction
Due to their many technological applications, the flow analysis of non-Newtonian fluids is very important in fluid mechanics. The complex stress-strain relationship of non-Newtonian fluids makes their flow behavior difficult to study in various flow fields [1]. The study of non-Newtonian fluids has received much attention because of their practical applications. Non-Newtonian characteristics are displayed by a number of industrially important fluids, including polymers, molten plastics, pulps, microfluids, and foodstuffs. Exact analytic solutions for the flows of non-Newtonian fluids are important provided they correspond to physically realistic problems, and they can be used as checks against complicated numerical codes developed for much more complex flows. Many non-Newtonian models, such as differential-type, rate-type, and integral-type fluids, have been proposed in recent years. Among them, the rate-type fluid models have received special attention. The differential-type fluids do not predict stress relaxation, and they are not successful in describing the flows of some polymers.
The flow between rotating cylinders or through a rotating cylinder has applications in the food industry, being one of the most important and interesting problems of motion near rotating bodies. As early as 1886, Stokes [2] established an exact solution for the rotational oscillations of an infinite rod immersed in a linearly viscous fluid. Such motions have been intensively studied since G. I. Taylor (1923) reported the results of his famous investigations [3]. For Newtonian fluids, the velocity distribution for a fluid contained in a circular cylinder can be found in [4]. The first exact solutions corresponding to different motions of non-Newtonian fluids in cylindrical domains seem to be those of Ting [5], Srivastava [6], and Waters and King [7]. A lot of interest has also been given to unidirectional start-up pipe flows, which have significant practical and mathematical meaning. The purpose of this paper is to provide exact solutions for the velocity field and the shear stress corresponding to the motion of a fractional Maxwell fluid between two infinite circular cylinders. The Laplace and finite Hankel transforms are used to solve the problem, and the solutions obtained are presented in terms of the generalized G_{a,b,c}(·, t) and R_{a,b}(·, t) functions. The solutions for ordinary Maxwell and Newtonian fluids are obtained as limiting cases of our general solutions. Furthermore, the solutions for the motion between the cylinders when one of them is at rest are also obtained as special cases of our general results. At the end, the obtained solutions are discussed graphically for different values of time and the material parameters.
Basic Governing Equations
The constitutive equations of an incompressible Maxwell fluid with fractional calculus are given by [14]

T = -pI + S,   S + λ D^β S/Dt = μA,   (2.1)

where T is the Cauchy stress tensor, -pI denotes the indeterminate spherical stress, S is the extra-stress tensor, A = L + L^T with L the velocity gradient, μ is the dynamic viscosity of the fluid, λ is the material constant called the relaxation time, and D^β S/Dt is the fractional convected derivative. Here, w is the velocity vector, ∇ is the gradient operator, the superscript T denotes the transpose operation, and the Caputo fractional derivative operator D_t^β is defined as [26]

D_t^β f(t) = (1/Γ(1 - β)) ∫_0^t f'(τ)(t - τ)^(-β) dτ,   0 < β < 1,

where Γ(·) is the Gamma function, defined as Γ(z) = ∫_0^∞ e^(-t) t^(z-1) dt. In cylindrical coordinates (r, θ, z), the rotational flow velocity is given by

w = w(r, t) = w(r, t) e_θ,   (2.5)

where e_θ is the unit vector in the θ-direction. For such flows, the constraint of incompressibility is automatically satisfied. Since the velocity field w is independent of θ and z, we also assume that S depends only on r and t. Furthermore, if the fluid is assumed to be at rest at the moment t = 0, then

w(r, 0) = 0,   S(r, 0) = 0.   (2.6)
Equations 2.1, 2.5, and 2.6 imply S_rr = S_zz = S_θz = 0 [18], where τ(r, t) = S_rθ(r, t) is the nontrivial shear stress, which satisfies

(1 + λ D_t^β) τ(r, t) = μ (∂/∂r - 1/r) w(r, t).   (2.7)

In the absence of body forces and of a pressure gradient in the axial direction, the equations of motion lead to the relevant equation

ρ ∂w(r, t)/∂t = (∂/∂r + 2/r) τ(r, t),   (2.8)

where ρ is the constant density of the fluid. Eliminating τ between 2.7 and 2.8, we obtain the governing equation

(1 + λ D_t^β) ∂w(r, t)/∂t = ν (∂²/∂r² + (1/r) ∂/∂r - 1/r²) w(r, t),   (2.9)

where ν = μ/ρ is the kinematic viscosity of the fluid. In the following, the fractional partial differential equations 2.9 and 2.7, with appropriate initial and boundary conditions, will be solved by means of the Laplace and finite Hankel transforms. In order to avoid lengthy calculations of residues and contour integrals, the discrete inverse Laplace method will be used [13, 14].
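Because the governing equations hinge on the Caputo operator, its definition can be sanity-checked numerically. The sketch below (function names are ours, not the paper's) approximates the singular integral with a midpoint rule and compares it against the closed form D_t^β t² = 2t^(2-β)/Γ(3 - β).

```python
import math

def caputo(f_prime, t, beta, n=200_000):
    """Caputo derivative of order 0 < beta < 1:
    (1/Gamma(1-beta)) * integral_0^t f'(tau) * (t - tau)**(-beta) dtau.
    The substitution u = t - tau plus a midpoint rule keeps the
    integrable endpoint singularity away from the sample points."""
    h = t / n
    total = 0.0
    for k in range(n):
        u = (k + 0.5) * h
        total += f_prime(t - u) * u ** (-beta)
    return h * total / math.gamma(1.0 - beta)

# Closed form for f(t) = t**2: D^beta t^2 = 2 t^(2-beta) / Gamma(3-beta)
beta, t = 0.5, 1.0
exact = 2.0 * t ** (2.0 - beta) / math.gamma(3.0 - beta)
approx = caputo(lambda tau: 2.0 * tau, t, beta)
print(exact, approx)  # the two values agree closely
```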
Axial Couette Flow between Two Infinite Circular Cylinders
Let us consider an incompressible fractional Maxwell fluid at rest in the annular region between two coaxial circular cylinders of radii R₁ and R₂ (> R₁). At time t = 0⁺, both cylinders begin to rotate about their common axis. Owing to the shear, the fluid is gradually set in motion, its velocity being of the form 2.5. The governing equation is 2.9, while the appropriate initial and boundary conditions are

w(r, 0) = ∂w(r, 0)/∂t = 0,   r ∈ (R₁, R₂),   (3.1)
w(R₁, t) = R₁Ω₁t,   w(R₂, t) = R₂Ω₂t,   t ≥ 0,   (3.2)

where Ω₁ and Ω₂ are constants with dimensions T⁻².
Calculation of the Velocity Field
Applying the Laplace transform to 2.9, using the Laplace transform formula for sequential fractional derivatives [26], and keeping the initial and boundary conditions 3.1 and 3.2 in mind, we obtain an equation 3.3 for w̄(r, q), the Laplace transform of w(r, t), defined as

w̄(r, q) = ∫_0^∞ w(r, t) e^(-qt) dt,

where the image function w̄(r, q) has to satisfy the conditions 3.5. In the following, we denote by

w̄_H(r_n, q) = ∫_{R₁}^{R₂} r w̄(r, q) B(r, r_n) dr

the finite Hankel transform of w̄(r, q) [27], where B(r, r_n) = J₁(r r_n) Y₁(R₂ r_n) - J₁(R₂ r_n) Y₁(r r_n) and the r_n are the positive roots of the transcendental equation B(R₁, r) = 0, while J₁(·) and Y₁(·) are Bessel functions of the first and second kind of order one.
Multiplying both sides of 3.3 by rB(r, r_n), integrating with respect to r from R₁ to R₂, and taking into account the conditions 3.5 and the identity 3.8, we arrive at 3.9. Now, for a suitable presentation of the final results, we rewrite 3.9 in the equivalent form 3.10. Applying the inverse Hankel transform formula [27], we obtain the Laplace transform of the velocity field, w̄(r, q), in the form 3.12, and we write the last factor of 3.12 in the equivalent form 3.13.
Introducing 3.13 into 3.12, applying the discrete inverse Laplace transform, and using the known result [28, equation (97)]

L⁻¹{q^b / (q^a - d)^c} = G_{a,b,c}(d, t),

where the generalized G_{a,b,c}(·, ·) function is defined by

G_{a,b,c}(d, t) = Σ_{j=0}^∞ [d^j Γ(c + j) / (Γ(c) Γ(j + 1))] · t^((c+j)a - b - 1) / Γ((c + j)a - b),

we find the velocity field w(r, t) in the form 3.16.
Calculation of the Shear Stress
Applying the Laplace transform to 2.7, we find that

τ̄(r, q) = [μ / (1 + λq^β)] [∂w̄(r, q)/∂r - w̄(r, q)/r],   (3.17)

where ∂w̄(r, q)/∂r - w̄(r, q)/r is obtained from 3.12. Thus, 3.17 becomes 3.20.
Applying again the discrete inverse Laplace transform and using the known relation [28, equation (21)]

L⁻¹{q^b / (q^a - c)} = R_{a,b}(c, t),

where the generalized R_{a,b}(c, t) functions are defined by [28]

R_{a,b}(c, t) = Σ_{n=0}^∞ c^n t^((n+1)a - b - 1) / Γ((n + 1)a - b),

together with the expansion 3.23, we obtain the shear stress τ(r, t) in the form 3.24.
Classical Maxwell Fluid
Making β → 1 in 3.16 and 3.24, we obtain the velocity field and the shear stress of the ordinary Maxwell fluid. The expressions 4.1 and 4.2 can be written in a simplified form whose time-dependent factors are

(q²₁ₙ e^(q₂ₙt) - q²₂ₙ e^(q₁ₙt)) / (q₂ₙ - q₁ₙ)   for w_M(r, t),
(q₁ₙ e^(q₂ₙt) - q₂ₙ e^(q₁ₙt)) / (q₂ₙ - q₁ₙ)   for τ_M(r, t).   (4.4)
Newtonian Fluid
By now letting λ → 0 in 4.4, or β → 1 and λ → 0 in 3.16 and 3.24, and using the corresponding limits, we obtain the velocity field and the associated shear stress of a Newtonian fluid performing the same motion.
When the Inner Cylinder Is at Rest
Making Ω₁ = 0 and Ω₂ = Ω in 3.16 and 3.24, we obtain the velocity field and the shear stress for this special case.
When the Outer Cylinder Is at Rest
Making Ω₁ = Ω and Ω₂ = 0 in 3.16 and 3.24, we obtain the velocity field and the associated shear stress. In this case, the velocity behaves with respect to t as in Figure 1(a), but the effect of r is the opposite; more exactly, the velocity is decreasing with respect to r on the whole flow domain.
Conclusions
In this paper, the velocity w(r, t) and the shear stress τ(r, t) corresponding to the flow of an incompressible Maxwell fluid with fractional derivatives, in the annular region between two infinite coaxial circular cylinders, have been determined using the Laplace and finite Hankel transforms. The solutions that have been obtained, written in series form in terms of the generalized G_{a,b,c}(·, t) and R_{a,b}(·, t) functions, satisfy the governing equations and all imposed initial and boundary conditions. In the limiting cases β → 1, or β → 1 and λ → 0, the corresponding solutions for the ordinary Maxwell and Newtonian fluids are obtained. These solutions also satisfy the associated initial and boundary conditions 3.1 and 3.2, respectively. Moreover, the solutions for the motion between the cylinders when one of them is at rest are also obtained from our general results.
In order to reveal some relevant physical aspects of the obtained results, the diagrams of the velocity field w(r, t) and the shear stress τ(r, t) given by 3.16 and 3.24 have been drawn against r for different values of the time t and of the material parameters. Figures 2 and 3 show the profile of the fluid motion at different values of time when the inner and outer cylinders rotate with the same angular velocity in the same direction and in opposite directions, respectively. From these figures, one can clearly see that both the velocity and the shear stress, in absolute value, are increasing functions of t. From Figure 3(a), one can also observe that the fluid has zero velocity nearer to the inner cylinder.
In Figure 4, the influence of the relaxation time λ on the fluid motion is shown. As expected, both the velocity and the shear stress, in absolute value, are decreasing functions of λ. The effect of the fractional parameter β on the fluid motion is represented in Figure 5; both the velocity and the shear stress, in absolute value, increase with β. Finally, for comparison, the diagrams of w(r, t) and τ(r, t) corresponding to the three models (fractional Maxwell, ordinary Maxwell, and Newtonian) are depicted together in Figure 6 for the same values of the common material constants and time t. The Newtonian fluid is the swiftest, while the fractional Maxwell fluid is the slowest on the whole flow domain. It is worth mentioning that the units of the material constants are SI units in all the figures, and that the roots r_n have been approximated by nπ/(R₂ - R₁).
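The approximation r_n ≈ nπ/(R₂ - R₁) can be motivated with the large-argument forms of the Bessel functions: substituting J₁(x) ≈ √(2/(πx)) cos(x - 3π/4) and Y₁(x) ≈ √(2/(πx)) sin(x - 3π/4) into the cross-product collapses it to a multiple of sin((R₂ - R₁)r). The sketch below assumes the usual cross-product form of the transcendental function and uses invented radii for illustration.

```python
import math

R1, R2 = 1.0, 2.0  # illustrative radii (assumed values)

def B_asym(r):
    """Large-argument approximation of the cross-product
    J1(R1*r)*Y1(R2*r) - J1(R2*r)*Y1(R1*r): the asymptotic forms of
    J1 and Y1 reduce it to a multiple of sin((R2 - R1) * r)."""
    amp = 2.0 / (math.pi * r * math.sqrt(R1 * R2))
    return amp * math.sin((R2 - R1) * r)

def bisect_root(lo, hi, f, iters=80):
    """Plain bisection on a bracketed sign change of f."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# The positive zeros land at n*pi/(R2 - R1), matching the
# approximation used for the roots r_n in the figures:
for n in (1, 2, 3):
    guess = n * math.pi / (R2 - R1)
    root = bisect_root(guess - 0.5, guess + 0.5, B_asym)
    print(n, root, guess)
```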
Zhu et al. [8] studied the characteristics of the velocity field and the shear stress field for an ordinary Maxwell fluid, and Yang and Zhu [9] studied them for a fractional Maxwell fluid. In the last decade, the unidirectional flow of viscoelastic fluids with the fractional Maxwell model was studied by Tan et al. [10, 11] and Hayat et al. [12]. Tong et al. [13, 14] discussed the unsteady flow with a generalized Jeffrey model in an annular pipe. In the meantime, many papers regarding such motions have been published; the interested reader is referred, for instance, to [15-25] and the references therein.
Equations 4.1 and 4.2 correspond to an ordinary Maxwell fluid performing the same motion; of course, in view of the identities, they reduce to the simplified form 4.4. Equations 5.1 and 5.2 correspond to a fractional Maxwell fluid when the inner cylinder is at rest. Figure 1(a) shows the velocity profile corresponding to 5.1 for different values of time, when the inner cylinder is at rest; the velocity is an increasing function of t and of r on the whole flow domain.
|
Sequence Skill Acquisition and Off-Line Learning in Normal Aging
It is well known that certain cognitive abilities decline with age. The ability to form certain new declarative memories, particularly memories for facts and events, has been widely shown to decline with advancing age. In contrast, the effects of aging on the ability to form new procedural memories such as skills are less well known, though it appears that older adults are able to acquire some new procedural skills over practice. The current study examines the effects of normal aging on procedural memory more closely by comparing the effects of aging on the encoding, or acquisition, stage of procedural learning with its effects on the consolidation, or between-session, stage of procedural learning. Twelve older and 14 young participants completed a sequence-learning task (the Serial Reaction Time Task) over a practice session and at a re-test session 24 hours later. Older participants actually demonstrated more sequence skill during acquisition than the young. However, older participants failed to show skill improvement at re-test as the young participants did. Age thus appears to have a differential effect upon procedural learning stages such that older adults' skill acquisition remains relatively intact, in some cases even superior, compared to that of young adults, while their skill consolidation may be poorer than that of young adults. Although the effect of normal aging on procedural consolidation remains unclear, aging may actually enhance skill acquisition on some procedural tasks.
Introduction
Normal aging leads to declines in certain cognitive abilities while leaving other abilities intact. It is known that aging particularly impairs the formation of certain types of declarative memories, for instance, recall and recognition of new facts and events [1,2]. In contrast, the effect of aging on the ability to form new procedural memories such as motor skills has received less attention in the aging literature. Existing studies show that aging is accompanied by general declines in motor execution such as reaction time speed and accuracy [3]. However, older adults retain the ability to improve on certain motor tasks over an initial period of practice, or during encoding, the first stage of procedural memory. For instance, in a task of fine motor movement and manipulation of objects, older subjects improved their motor execution speed over practice [4]. Older adults have also shown comparable performance improvements to young adults during encoding of a motor sequence. Participants completed a version of the serial reaction time task (SRTT) in which they learned a sequence of finger movements using visual cues, and their performance was measured by response time. After performing the sequence over a series of practice blocks, older and young participants demonstrated comparable practice effects as indicated by speeded reaction times. In addition, both age groups demonstrated comparable sequence-specific learning as indicated by an increase in response times when switching from sequential to random finger movements [5,6].
Older participants thus appear to be able to learn certain procedural tasks as effectively as young adults during the encoding, or acquisition, phase of procedural learning, since they show similar improvements during initial training. However, further skill can potentially be obtained during the consolidation phase of procedural memory, or the stage following acquisition. Recent studies have shown that college-age subjects can continue to increase their level of skill on sequence tasks between practice sessions [7,8]. This between-session improvement, termed "off-line" learning, is one behavioral expression of procedural consolidation. Young adults continue to acquire skill on a sequence-learning task over a period of 12 waking hours without practice on the task [9]. We sought to examine the comparative effects of aging on procedural acquisition and on procedural consolidation as indicated by off-line learning on a task of procedural learning.
We tested a group of older and younger adults on the Serial Reaction Time Task (SRTT) in two testing sessions separated by 24 hours, spanning both wake and sleep. This task requires participants to respond via button-pressing to a series of dots that appear in one of four spatial locations on a computer screen. These spatial cues appear in blocks of trials with either a random or a sequential order. By comparing participants' reaction time on sequential versus random trials, sequence-specific learning can be assessed both within sessions (acquisition) and between sessions (off-line learning). We sought to compare the effects of normal aging on both acquisition and off-line learning of this procedural task.
Participants
Thirty-two healthy adults were recruited for this study. They included 10 female and 8 male young adults (n = 18) and 9 female and 5 male older adults (n = 14). Fourteen young adults (M = 20.4 years of age, SD = 1.6) and 12 older adults (M = 58.3 years of age, SD = 3.8) were included for analyses (N = 26). Four young and two older participants were excluded because they either generated unusable data (n = 3 young), showed outlying scores of more than three standard deviations away from the mean on the primary behavioral task (n = 1 young, n = 1 older), or did not perform the task properly (n = 1 older). All participants were right-handed according to their reports on the Edinburgh Handedness Questionnaire, and all participants reported being free of neurological and psychiatric illnesses. Young participants scored marginally but significantly better on the Mattis Dementia Rating Scale (M = 143.82/144, SD = 0.6) than older participants (M = 142.33/144, SD = 1.8), t(21) = −2.57, p < 0.05, although all participants scored within the normal range. (Three young participants did not complete the Mattis scale.) Older participants completed significantly more years of education (M = 19.3, SD = 3.9) than young participants (M = 12.3, SD = 1.1), t(24) = 6.44, p < .0001, likely because most of the young had not yet completed their college education. Older participants were recruited from the greater Boston area via fliers posted around the testing site as well as via online postings. Younger adults were recruited from local colleges (primarily Boston University). All subjects received $30 in cash as compensation. All subjects gave both written and verbal informed consent. The study was approved by the Committee on Clinical Investigations of Beth Israel Deaconess Medical Center, Boston, MA.
Procedure
All subjects performed the Serial Reaction Time Task (SRTT), a procedural sequence-learning measure [10]. Subjects sat in front of a computer screen with their right hand resting on a button box with four buttons in a horizontal array. Participants then saw blue dots appear one at a time in one of four horizontal positions across a white computer screen. Subjects were required to press the button that corresponded to the position of the dots as quickly and accurately as they could. Each dot disappeared as soon as participants pressed the correct corresponding button, and the interval between each correct response and the next stimulus was set to 400 milliseconds. We used an SRTT design similar to that used by Curran [11], in which random and sequence trials were present in each block. This allowed sequence-specific learning to be measured over each individual block of practice. Random trial orders were pre-determined by the investigators such that there were no repetitions (e.g., 1-4-2-2) and no triplets shared with the sequential trials. Random trials were therefore pseudorandom (though we will use the term "random" throughout the rest of the paper). Sequential trials followed a 12-item sequential order (2-3-1-4-3-2-4-1-3-4-2-1, with 1 corresponding to the left-most position and 4 to the right-most position).
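The two constraints on random trials (no immediate repetitions, no triplets shared with the 12-item sequence) can be sketched as a small rejection-sampling generator. This is our own illustration under those stated constraints, not the authors' actual stimulus code; the function names are hypothetical.

```python
import random

# The 12-item sequence used in the study (1 = left-most, 4 = right-most).
SEQUENCE = [2, 3, 1, 4, 3, 2, 4, 1, 3, 4, 2, 1]

def sequence_triplets(seq):
    """All three-item runs occurring in the repeating sequence,
    including runs that wrap around between repetitions."""
    doubled = seq + seq[:2]
    return {tuple(doubled[i:i + 3]) for i in range(len(seq))}

def make_pseudorandom(n, seq=SEQUENCE, rng=None):
    """Draw n trial positions such that no position immediately repeats
    and no three-trial run also occurs in the repeating sequence."""
    rng = rng or random.Random()
    banned = sequence_triplets(seq)
    trials = []
    while len(trials) < n:
        candidate = rng.choice([1, 2, 3, 4])
        if trials and candidate == trials[-1]:
            continue  # would be an immediate repetition (e.g. ...2-2)
        if len(trials) >= 2 and (trials[-2], trials[-1], candidate) in banned:
            continue  # would reproduce a triplet from the sequence
        trials.append(candidate)
    return trials
```

Because each ordered pair of positions rules out at most one continuation (the 12-item sequence contains each ordered pair only once), and the repetition ban rules out one more, at least two valid candidates always remain, so the loop terminates.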
The task began with 50 random trials, after which the 12-item sequence was introduced. This sequence repeated a set number of times before the dots would return to a random order. Participants were not informed of the existence of the sequence. Participants performed this task over three blocks during session 1, with a brief 1-2 minute rest between blocks, and a final block at session 2, 24 hours later. As shown in Figure 1B, each block began and ended with 50 random trials, with a series of sequence trials in the middle. The initial block contained 180 sequence trials (15 repetitions), the middle block contained 300 sequence trials (25 repetitions), and the final block of session one contained 180 sequence trials (15 repetitions). The fourth testing block completed at session 2, 24 hours later, contained 180 sequence trials.
After participants finished the fourth and final test block of the SRTT at the second testing session, they were immediately asked 1) if they had noticed the sequence and 2) if they could recall the sequence. In previous studies using the SRTT, off-line skill improvements were affected by participants' free recall of the sequence: those who recalled more than 8 items showed off-line improvements only over sleep, whereas those who recalled 4 items or fewer demonstrated off-line improvements over both wake and sleep [9]. To remove this possible impediment to off-line skill improvements, participants who recalled more than 4 items of the sequence were excluded from analysis (n = 2).
After completing the entire SRTT task, participants also completed a test of declarative memory, the California Verbal Learning Test (CVLT-16), to contrast with our primary measure of procedural learning. This test requires participants to learn a list of 16 words over five oral presentations of the list. Participants are tested on 1) their free recall of the list immediately after each of the five oral presentations, 2) their free recall of the list after a short and a long delay (about 5 minutes and 20 minutes, respectively), and 3) their recognition of the words from a list of target and foil words.
Skill Acquisition and Off-Line Improvement
Skill on the SRT task was defined as sequence-specific improvement demonstrated by declines in response time on sequence trials compared to the random trials that immediately followed. Only reaction times for correct responses were included in the analysis of skill. To measure skill on the SRTT, the mean reaction times of the last 50 sequential trials and of the 50 random trials that followed were contrasted at each of the four testing blocks (A, B, C and D) of the task [9,12]. The effect of outlier trials was reduced by replacing all reaction times that were more than three standard deviations away from the mean with the given testing block's mean reaction time [9]. This yielded a skill score for each block of the SRTT. To determine how much "off-line" learning (or "delta skill") participants displayed, the skill at the end of session 1 (the skill at the third testing block) was subtracted from the skill at session 2 (skill at the fourth testing block).
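The scoring procedure reduces to a few arithmetic steps. A minimal sketch follows (our own helper names, assuming reaction times in milliseconds; the text does not fully specify whether outlier replacement was applied per trial type or over the whole block, so here it is applied over the pooled trials):

```python
from statistics import mean, stdev

def clean_rts(rts):
    """Replace outlying reaction times (more than 3 SD from the block
    mean) with the block mean, as in the analysis described above."""
    m, s = mean(rts), stdev(rts)
    return [m if abs(rt - m) > 3 * s else rt for rt in rts]

def block_skill(last_sequence_rts, following_random_rts):
    """Skill for one block: mean RT of the random trials that follow
    minus mean RT of the last sequence trials (positive = faster on
    the sequence)."""
    rts = clean_rts(list(last_sequence_rts) + list(following_random_rts))
    n = len(last_sequence_rts)
    return mean(rts[n:]) - mean(rts[:n])

def delta_skill(skill_block3, skill_block4):
    """Off-line learning: skill at re-test (block 4, session 2) minus
    skill at the end of session 1 (block 3)."""
    return skill_block4 - skill_block3
```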
To examine any differences between young and older participants at session 1 and at re-testing, a two-way (Age Group: Young vs. Older) X (Testing Session: Session 1 vs. Session 2) mixed-factors ANOVA was performed with age group as the between-subjects factor, session as the within-subjects factor, and skill as the dependent variable. Older participants showed higher average skill than the young (Main Effect Age Group: F(1, 24) = 4.92, p < 0.05; Older Mean = 95.4 ± 10.8; Young Mean = 62.6 ± 10; all means reported ± SE). As shown in Figure 2, at session 1 older participants showed significantly more skill than young participants. In addition, based on our a priori hypotheses, we examined the change in skill between sessions for both young and older participants. Young participants showed an increase in skill from session 1 to session 2 (Mean Delta Skill Young = 36.8 ± 11.4, t(13) = −3.23, p < 0.01), whereas older participants' skill did not change from session 1 to session 2 (Mean Delta Skill Older = −4.5 ± 12.5, t(11) = 0.37, ns). Young participants' change in skill was significantly greater than that of the older participants, t(24) = −2.45, p < 0.05 (see Figure 2).
Skill Acquisition and Off-Line Improvement as Percentage
As expected, older participants had slower reaction times (M = 464.2 ± 30.1) than young participants (M = 393.9 ± 13.8) irrespective of sequence and random trials, t(24) = 2.23, p < 0.05.
To account for the possibility that older participants showed a greater difference in reaction times between sequence and random trials due to slower baseline reaction times, the percentage skill improvement was calculated across all four blocks of the task. Each participant's skill score for each testing block of the SRTT was divided by their average random reaction time for that block, and the result was multiplied by 100 to obtain the percentage by which their reaction times decreased during the sequence trials. A two-way (Age Group: Young vs. Older) X (Testing Session: Session 1 vs. Session 2) mixed-factors ANOVA was run using these scores, and similar results were found: at session 1, older participants showed higher percent skill than the young participants.

[Figure 1. The task was performed over four blocks, labeled test, training, test, and retest. The first three blocks of the task are completed during session 1, and the fourth block is completed during session 2. Each block begins and ends with 50 random trials (grey areas labeled "R") sandwiching 180 or 300 sequence trials (white areas labeled "S"). A subject's skill at any given block is measured by subtracting the mean of the last 50 sequence trials from the mean of the last 50 random trials. Skill at the end of session 1, or block 3, is shown. The change in skill from session 1 to session 2 ("delta skill" or "off-line learning") is found by subtracting skill at session 1 (Skill 1) from skill at session 2 (Skill 2). Error bars represent standard error of the mean. doi:10.1371/journal.pone.0006683.g001]
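The normalization is simply the skill score divided by the block's mean random-trial reaction time; a short sketch (using the reported group means purely as an illustration, not per-block data):

```python
def percent_skill(skill_ms, mean_random_rt_ms):
    """Express a block's skill score as a percentage of that block's
    mean random-trial reaction time, controlling for baseline speed."""
    return skill_ms / mean_random_rt_ms * 100

# Illustration with the reported group means:
older_pct = percent_skill(95.4, 464.2)   # roughly 20.6%
young_pct = percent_skill(62.6, 393.9)   # roughly 15.9%
```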
Accuracy
To assess for the possibility of a speed-accuracy trade-off, error rate was examined over random and sequential trials of the SRT task for young and older participants. Error rate was calculated as the percentage of incorrect responses made by each participant out of the total number of responses they made during either random or sequential trials. For both age groups, error rates were greater during random trials than during sequential trials (Main Effect Trial Type: F(1) = 22.29, p < 0.05; Mean Random = 6.05 ± 0.34, Mean Sequential = 4.15 ± 0.34). Error rates did not differ significantly by age group (Older Mean = 4.42 ± 0.92, Young Mean = 6.27 ± 0.85, F(1) = 2.18, ns), nor was there an interacting effect of age group and trial type on error rate (F(1) = 0.01, ns). Neither age group appears to have sacrificed speed for accuracy or vice versa.
Declarative Memory
Performance on the declarative memory task (CVLT) showed contrasting results to the implicit, procedural skill measure. Young participants, in contrast to their reduced skill measures, were better able to encode the list of words than older participants, correctly recalling more words over the five presentations of the 16-word list.
Discussion
Over a single practice session, older subjects acquired more skill on a sequence of finger movements than young subjects. This age discrepancy in skill is not attributable to the fact that older subjects are slower overall and thus have more opportunity to decrease their response times during the sequence trials, as expressing the skill as a percentage of baseline performance demonstrated the same results. The results also cannot be attributed to having selected older subjects with exceptional memory, as their scores on the declarative memory tasks were lower than those of the young.
As predicted, college-age subjects showed skill improvement over the 24-hour off-line period. The older participants showed no between-session improvement, but maintained their level of skill after 24 hours, which supports previous findings showing older adults' consistency of performance on motor tasks over long periods of time [4]. A ceiling effect could account for older adults' lack of off-line improvement, since older adults' initial skill was higher even than young adults' skill at re-test. Further investigation is needed to determine whether older adults can demonstrate enhancement of motor skills off-line.
The finding that older participants gained more skill than young participants at session one was unexpected, as previous studies have reported that older participants show magnitudes of sequence-specific learning that are, at most, equal to those of young participants over initial practice [5,6]. This discrepancy could be due to the current older sample being younger (55-70 years) than previous older samples (approx. 60-79 years [11]; approx. 65-80 years [5,6,13]). However, our sample may have been appropriate for examining normal aging separately from extraneous cognitive declines. Strict screening was applied to exclude subjects with either dementia or mild cognitive impairment, and older subjects were also matched closely to the young in terms of education. Furthermore, despite the younger age range, our older sample showed characteristically poorer declarative memory than the young adults as well as slower reaction times. The current sample of older adults may therefore be representative of normal aging in the absence of significant pathology.
The current demonstration of older adults' superior skill could be suggestive of possible interacting memory systems, particularly between the systems that support declarative memory and those that support procedural memory. Some studies have presented evidence for interacting memory systems by showing that disruption of one system can lead to enhancement in the other, and vice versa [14,15]. Such an interaction might predict that declines in declarative memory, such as those that occur with age, would lead to enhanced procedural memory. Even normal aging is associated with hippocampal atrophy and decreased activation in imaging studies [16]. Conversely, motor regions including primary motor cortex, premotor cortex, cerebellum and the supplementary motor area show compensatory increases in activation with normal aging [17]. Either declarative memory impairment or increased activation in motor networks could underlie our findings.
In summary, we found that older adults can actually acquire greater sequence skill during practice than college-age students. This difference could not be ascribed to older adults' slower overall reaction times or to selection of older adults with exceptional memory. As previously shown, the young showed off-line improvements between sessions, but these only brought the young up to comparable skill levels to the older adults. At least under certain circumstances, older adults can actually show greater acquisition of skill than young. The effect of aging on skill consolidation is unclear, yet the fact that participants maintained their skill levels after 24 hours suggests that their skill may stabilize over the off-line period even if it may not be enhanced as it is for the young.
SARS-CoV-2 Diagnostic Tests: Algorithm and Field Evaluation From the Near Patient Testing to the Automated Diagnostic Platform
Introduction: Since the first wave of COVID-19 in Europe, new diagnostic tools using antigen detection and rapid molecular techniques have been developed. Our objective was to elaborate a diagnostic algorithm combining antigen rapid diagnostic tests, automated antigen dosing and rapid molecular tests and to assess its performance under routine conditions. Methods: An analytical performance evaluation of four antigen rapid tests, one automated antigen dosing and one molecular point-of-care test was performed on samples sent to our laboratory for a SARS-CoV-2 reverse transcription PCR. We then established a diagnostic algorithm by approaching median viral loads in target populations and evaluated the limit of detection of each test using the PCR cycle threshold values. A field performance evaluation including a clinical validation and a user-friendliness assessment was then conducted on the antigen rapid tests in point-of-care settings (general practitioners and emergency rooms) for outpatients who were symptomatic for <7 days. Automated antigen dosing was trialed for the screening of asymptomatic inpatients. Results: Our diagnostic algorithm proposed to test recently symptomatic patients using rapid antigen tests, asymptomatic patients using automated tests, and patients requiring immediate admission using molecular point-of-care tests. Accordingly, the conventional reverse transcription PCR was kept as a second line tool. In this setting, antigen rapid tests yielded an overall sensitivity of 83.3% (not significantly different between the four assays) while the use of automated antigen dosing would have spared 93.5% of asymptomatic inpatient screening PCRs. Conclusion: Using tests not considered the “gold standard” for COVID-19 diagnosis on well-defined target populations allowed for the optimization of their intrinsic performances, widening the scale of our testing arsenal while sparing molecular resources for more seriously ill patients.
INTRODUCTION
At the time of writing (January 7, 2021), Belgium is emerging from a second wave of COVID-19 epidemic. The World Health Organization (WHO) recommended mass use of reverse transcription real-time PCR (RT-PCR) to detect active SARS-CoV-2 infections (1). However, the unprecedented high volume of samples reaching laboratories led to global scarcities of reagents and delays making prolonged containment measures less acceptable by the population (2). Since then, a new set of diagnostic tools have been developed, such as antigen detection immunoassays or molecular point-of-care tests. These tools could allow diversification of testing strategies and decrease shortages and overflows.
Thanks to their high sensitivity, ranging from 73.9 to 89.5% for high viral load samples [10^5-10^7 RNA copies/swab (3)], and their overall specificity (4,5), antigen-detection rapid diagnostic tests have been integrated into several countries' testing strategies (6-10). The Centers for Disease Control and Prevention (CDC) (11), the WHO (12) and the European Centre for Disease Prevention and Control (ECDC) (13) have issued guidelines for their use. However, practical considerations are still lacking (including the best target populations). Meanwhile, several manufacturers have developed molecular point-of-care tests, most of which additionally target influenza and/or RSV (14,15), while others offer a wider respiratory syndromic panel (16).
In addition, high throughput antigen-dosing systems based on chemiluminescence enzyme immunoassay represent an interesting alternative (17). This solution, recently deployed in German airports, is a striking example of delocalized laboratory medicine (18).
Following this expansion of available diagnostic tools, a deeper reflection has come to light on the best use of these various testing solutions according to their sensitivity, their turnaround time, the context in which the result will be used (patient vs. population-centered approach), the kinetics of the epidemic and the availability of reagents and consumables (19).
All of the above may partly explain the apparent confusion we are currently witnessing in the deployment of antigen rapid diagnostic tests and/or molecular point-of-care tests in most industrialized countries, either in terms of choosing the most appropriate diagnostic tests or the target population to apply these tests to. We would like to share here the results of evaluations we performed on four antigen rapid diagnostic tests, one automated antigen dosing assay and one molecular point-of-care test for the diagnosis of COVID-19, not only from an analytical "laboratory" point-of-view but also through their field implementation during the second Belgian COVID-19 wave. Using different techniques at different levels in a multistep, integrated, and adaptive diagnostic algorithm helped us to diversify and increase our overall testing capacity.
Population
LHUB-ULB (Laboratoire Hospitalier Universitaire de Bruxelles-Universitair Laboratorium Brussel) is a clinical laboratory serving five university hospitals (with a combined capacity of around 3,000 beds) as well as a network of general practitioners in Brussels, Belgium. LHUB-ULB's service area covers 700,000 inhabitants (20). From July to September 2020, patients undergoing a SARS-CoV-2 RT-PCR were retrospectively categorized through a structured algorithm into four categories according to the information provided on the orders: symptomatic outpatients, hospital admissions (symptomatic or not), asymptomatic high-risk contacts, or mandatory screenings. The median RT-PCR Ct values from these four groups were compared using the Tukey-Kramer method.
Symptomatic Cases Definition
We used the case definition provided by the Belgian national health institute (Sciensano) for COVID-19 (21). The acute apparition of one major symptom, the presence of two minor symptoms, or the aggravation of chronic respiratory symptoms without any other obvious cause was defined as a possible case (Supplementary Table 1). A confirmed case was a person with a SARS-CoV-2 positive sample.
Antigen Rapid Diagnostic Tests
Four lateral-flow immunoassays were evaluated: Panbio™ COVID-19 Ag Rapid Test Device (Abbott Rapid Diagnostics, Germany), BD Veritor™ SARS-CoV-2 (Becton, Dickinson and Company, USA), COVID-19 Ag Respi-Strip (Coris BioConcept, Belgium) and SARS-CoV-2 Rapid Antigen Test (SD Biosensor, Republic of Korea). Reading was performed by trained operators except for the BD Veritor™, for which an automated reader (BD Veritor™ System) was used.
An analytical performance study was performed using nasopharyngeal swabs. Swabs preserved in universal transport medium (UTM) were sent to our laboratory for a SARS-CoV-2 RT-PCR and then kept refrigerated overnight after the RT-PCR was performed. The four assays were performed at the same time by two trained operators. The amount of UTM engaged followed each manufacturer's recommendations for evaluation purposes, not for clinical use.
After the performance study, antigen rapid diagnostic tests were performed in point-of-care settings, either at a practice within our network of general practitioners or in the emergency room of the Saint-Pierre university hospital. Each possible COVID-19 outpatient who was within 7 days of symptom onset was offered an antigen rapid diagnostic test and informed that a negative result would require an additional sampling for RT-PCR, as recommended at the time (21). Each antigen rapid diagnostic test sampling and test procedure was performed according to the manufacturer's instructions (Supplementary Table 2).
The user-friendliness of each antigen rapid diagnostic test was assessed with a four-part questionnaire adapted from the Scandinavian evaluation of laboratory equipment for point-of-care testing SKUP/2008/114 evaluation (22).
Molecular Point-of-Care Test
To assess the analytical performance of the Cobas® Liat SARS-CoV-2 & Influenza A/B nucleic acid test (Roche Molecular Systems, USA), nasopharyngeal swabs that were sent to our laboratory for a SARS-CoV-2 RT-PCR and tested positive were kept refrigerated overnight before testing. In addition, frozen samples from February 2020, which had at that time undergone a Cobas® Liat Influenza A/B & RSV RT-PCR assay, were also tested.
Automated Antigen Dosing Assay
Antigen dosing was performed using the Lumipulse® G SARS-CoV-2 Ag (Fujirebio, Japan) assay, which expresses the dosage in pg/mL. For biosafety considerations, a viral-deactivation step (heating at 56 °C for 30 min) was added to the manufacturer's protocol (23).
The analytical performance study was performed on UTM swabs kept refrigerated overnight after a SARS-CoV-2 RT-PCR. All available positive samples were selected. Negative samples were randomly selected to obtain a positive/negative ratio around 2:1.
In the second part of the evaluation, we evaluated the Lumipulse® performance on UTM samples sent to our laboratory for SARS-CoV-2 RT-PCR from patients requiring scheduled hospital admission, COVID-19 contacts, and healthcare workers.
Gold Standard and Statistical Analysis
Analytical performance study of antigen rapid diagnostic tests, molecular point-of-care test and automated antigen dosing were carried out on three different sets of samples.
SARS-CoV-2 RT-PCR was considered the gold standard. Except for some antigen rapid diagnostic tests, for which negative results were controlled by various other RT-PCR protocols, samples underwent the RealTime SARS-CoV-2 assay (Abbott Molecular, USA) on our m2000 platform. As detection of both targeted genes (RdRp and N) is performed using the same fluorophore, the Ct values of this assay are observed up to 32 cycles and are not comparable with Ct values of other RT-PCR assays. Consequently, only the Ct values obtained using the RealTime SARS-CoV-2 assay were considered.
Statistical analyses and receiver operating characteristic (ROC) curves were performed using Analyse-it® for Microsoft Excel v3.80.
Antigen Rapid Diagnostic Tests
Ninety-nine UTM samples, including 61 positives (Ct ranging from 3.86/32 to 30.94/32), were selected. In this frame, the sensitivities of the antigen rapid diagnostic tests ranged from 36.1 to 49.2% (Table 1). The highest Ct detected by the antigen rapid diagnostic tests was 18.06/32. No false positive result was observed.
Molecular Point-of-Care Test
The agreement of the Cobas® Liat with the m2000 system for SARS-CoV-2 diagnosis was 90.9% (50/55) for positive samples. The Ct value correlation between instruments was good (R² = 0.931). The Cobas® Liat yielded positive results for all positive samples presenting a Ct value below 27.29/32, and detected samples with Ct values of up to 29.11/32. Eighteen of the 19 frozen influenza A positive samples and 5 of the 6 frozen influenza B positive samples yielded concordant positive results. Agreement for negative samples was 100% for each parameter.
Automated Antigen Dosing Assay
Two hundred fourteen samples were selected, including 136 positive samples. ROC curve analysis yielded an area under the curve (AUC) of 0.893 ± 0.021 (Supplementary Figure 1). The highest Youden index was at a threshold of 13.75 pg/mL (sensitivity 67.7%, specificity 97.1%). At a threshold set at 1.32 pg/mL [similar to a previous study (17) and to the manufacturer's proposed cut-off of 1.34 pg/mL (24)], sensitivity was 78.9% and specificity 73.9%. To exclude any false positives, the threshold had to be set at 20.27 pg/mL (sensitivity 63.9%). Finally, using a Ct < 20/32 as the judgement criterion, the AUC of the ROC curve was 0.984 ± 0.007 (Supplementary Figure 2).
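The threshold-selection procedure (ROC analysis plus Youden index) can be reproduced in a few lines. The sketch below uses our own function names and synthetic data, not the study dataset, and simply treats every observed dosage as a candidate cut-off:

```python
def roc_points(dosages, positives):
    """For each candidate cut-off, return (threshold, sensitivity,
    specificity), calling a sample positive when dosage >= threshold."""
    n_pos = sum(positives)
    n_neg = len(positives) - n_pos
    points = []
    for threshold in sorted(set(dosages)):
        tp = sum(1 for d, y in zip(dosages, positives) if y and d >= threshold)
        tn = sum(1 for d, y in zip(dosages, positives) if not y and d < threshold)
        points.append((threshold, tp / n_pos, tn / n_neg))
    return points

def best_youden(dosages, positives):
    """Cut-off maximizing Youden's J = sensitivity + specificity - 1."""
    return max(roc_points(dosages, positives), key=lambda p: p[1] + p[2] - 1)
```

On the study data this procedure is what yields the reported optimum of 13.75 pg/mL (sensitivity 67.7%, specificity 97.1%).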
Elaboration of the Diagnostic Algorithm
Following these results, we elaborated the algorithm described in Figure 2: whereas the diagnosis of outpatients was mainly based on point-of-care antigen rapid diagnostic tests, the hospital algorithm combined antigen rapid diagnostic tests, molecular point-of-care tests and conventional RT-PCR in an integrative diagnostic strategy. Four clinical situations were further identified: screening of asymptomatic patients, patients requiring immediate admission, and symptomatic outpatients with symptoms lasting for less or for more than 5 days.

User-friendliness ratings are summarized in Table 3. The Coris COVID-19 Ag Respi-Strip had a less satisfactory rating. The main practical issue was its readiness: its "strip-in-a-tube" format was considered by operators as non-practical and as a potential biosafety hazard when reading is difficult. Notably, SD Biosensor™ and Coris BioConcept did not provide any internal control in their kits. BD Veritor™ was the only kit offering nasal swabbing and automated reading.
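Figure 2 itself is not reproduced in this text, but the four clinical situations map onto a simple decision rule. The sketch below is our reading of the algorithm described above (in particular, routing ≥5-day symptomatic outpatients to conventional RT-PCR is inferred from the text, and confirmation of negative rapid tests by RT-PCR follows the Methods):

```python
def recommend_test(symptomatic, days_since_onset=None, immediate_admission=False):
    """First-line SARS-CoV-2 test suggested by the triage logic above."""
    if immediate_admission:
        # Patients requiring immediate admission: fastest molecular answer.
        return "molecular point-of-care test"
    if not symptomatic:
        # Asymptomatic screening (e.g. scheduled admissions).
        return "automated antigen dosing"
    if days_since_onset is not None and days_since_onset < 5:
        # Recently symptomatic outpatients: viral load is highest.
        return "antigen rapid diagnostic test (confirm negatives by RT-PCR)"
    # Later-presenting symptomatic patients: second-line molecular testing.
    return "conventional RT-PCR"
```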
Automated Antigen Dosing Assay
Two hundred seventy-nine patients (including 93 asymptomatic patients screened for a scheduled hospitalization) were tested. Their SARS-CoV-2 carriage status was categorized as "unlikely" if the dosage was below 1.32 pg/mL (n = 219, 78.5%), "possible" if between 1.32 and 20.27 pg/mL (n = 46, 16.5%), and "certain" if higher than 20.27 pg/mL (n = 14, 5.0%). All patients with "certain" results had a positive RT-PCR. Seven of the 46 patients (15.2%) with a "possible" result and five of the 219 (2.3%) with an "unlikely" result tested positive by RT-PCR (Table 4). Thus, the overall sensitivity for asymptomatic patients was 86.7% (13/15). Hence, using this assay for the pre-admission screening of these 93 patients would have spared 87 RT-PCRs (93.5%) at the cost of one missed low-positive (Ct = 26.04/32).
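The three-band interpretation and the "spared PCR" figure follow directly from the two cut-offs. A minimal sketch, assuming (as the 87/93 = 93.5% figure implies) that only "unlikely" results skip confirmatory RT-PCR:

```python
def carriage_category(dosage_pg_per_ml):
    """Classify a Lumipulse antigen dosage into the three bands used here."""
    if dosage_pg_per_ml > 20.27:
        return "certain"
    if dosage_pg_per_ml >= 1.32:
        return "possible"
    return "unlikely"

def spared_pcr_fraction(dosages):
    """Fraction of samples that would skip confirmatory RT-PCR, assuming
    only 'unlikely' results are reported without molecular confirmation."""
    spared = sum(1 for d in dosages if carriage_category(d) == "unlikely")
    return spared / len(dosages)
```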
DISCUSSION
In most industrialized countries, the large scale use of RT-PCR to detect active SARS-CoV-2 infections has shown limits in its capacity to broadly screen the population while providing timely and therefore meaningful results for optimized prevention and treatment. To fill this gap, SARS-CoV-2 antigen rapid diagnostic tests and molecular point-of-care tests are now considered as an adjunct to the RT-PCRs performed on large automated platforms (25).
Our results provide substantial evidence that no current antigen rapid diagnostic test is sensitive enough to be performed on UTM specimens (i.e., at the laboratory). During the first wave in Europe, we proposed a strategy combining antigen rapid diagnostic tests and RT-PCR, both performed in the laboratory (26). We stopped using antigen rapid diagnostic tests in the laboratory during the declining phase of the epidemic, not because of their low sensitivity [as stated by colleagues (27)], but because the proportion of samples from recently infected patients dropped, impairing these tests' usefulness (28). Regular follow-up of the positivity rate could allow adaptations of the antigen rapid diagnostic test strategy, as proposed by CDC (11) and ECDC (13). Here, we demonstrate the added value of antigen rapid diagnostic tests at the point-of-care level for symptomatic outpatients with <5 days of symptoms, thanks to their ease of use, rapid time-to-result, and low cost.
Our results show slightly lower sensitivity than previously reported (25). Indeed, part of the false negative results observed is likely due to variability in the adherence to protocol regarding sampling, incubation time and DSO. Sensitivity and specificity of such antigen rapid diagnostic tests strongly depend on their correct execution and reading, which are harder to achieve at the frontline where the expertise of personnel can vary, especially in this time of pandemic when staff turnover is higher than usual. This was confirmed by other recently published studies targeting the same population, with sensitivity ranging from 70.0 to 80.4% (29)(30)(31).
The absence of significant differences between the antigen rapid diagnostic tests' clinical performances highlights the need to assess their user-friendliness as a main criterion of choice. Our analysis underlined the need to consider very practical aspects such as opening caps while wearing gloves, ensuring biosafety outside a laboratory (see Figure 3) and instructions targeting non-laboratory operators, as recently discussed for low-resource settings (32). Besides, an immediate, in-person communication of a positive result likely allowed a stronger message and better adherence regarding quarantine, hygiene and contact-tracing than if done through virtual means, days after the consultation. The Cobas® Liat yielded impressive performance for a 20-min triplex molecular point-of-care test compared to our RT-PCR. However, invalid results were experienced with viscous samples. The addition of a molecular point-of-care test for patients attending the emergency room and needing hospitalization, regardless of the suspicion of COVID-19, allowed faster management of inpatients, avoiding the admission of asymptomatic SARS-CoV-2 carriers in "COVID-free" units, or the admission of SARS-CoV-2-negative patients in COVID-19 units pending their RT-PCR results. Furthermore, influenza and SARS-CoV-2 co-detection allows better surveillance at a time when the potential co-circulation of influenza and SARS-CoV-2 is still unknown. The costs of these molecular point-of-care tests remain high and their availability low. Hence, their use should be considered by targeting the best population with regard to the reduction of global costs related to isolation, use of protective equipment and prevention of nosocomial clusters.
In the present study, the Lumipulse® G SARS-CoV-2 Ag showed an overall good analytical performance compared to RT-PCR, more specifically to exclude negative and low-positive samples, using different criteria and cut-off values than the ones proposed by the manufacturer. These cut-offs need to be adapted and chosen with regard to the local epidemiology and the objectives of the screening. Our cut-off values diverged from the one proposed in a previous study (17). However, despite the fact that we added viral deactivation by heating, our results yielded a better AUC of the ROC curve. In case of limited access to RT-PCR, such a technique can allow testing people who would otherwise not be tested. Its higher throughput and sensitivity than antigen rapid diagnostic tests and its faster time-to-result than RT-PCR make it an interesting intermediary tool. Its low costs and its probable good assessment of infectiousness allow relevant periodic testing in terms of infection control. Therefore, using antigen dosing could be the best solution to repeatedly test high numbers of high-risk contacts while sparing RT-PCR resources. However, biosafety must be carefully considered and viral neutralization applied if needed; viscous samples may cause pipetting errors, and a specific interpretation algorithm should be elaborated.
Our study presents some limitations. We did not consider alternative specimens for SARS-CoV-2 detection such as saliva, the use of serology, or broad molecular "syndromic" respiratory panels that could be of use in larger diagnostic algorithms (33). The emergence of new variants should not impact the value of our algorithm, owing to the different targets of the assays. However, a careful follow-up of their performances over time should be implemented.
CONCLUSION
In conclusion, our study underlines the importance of shifting our attention from a narrow focus on the sole analytical performances of the available diagnostic tools (especially when these are similar) to an integrated approach taking into account (i) practical considerations such as time-to-result, field ease-of-use and availability of reagents, (ii) target populations, (iii) the intended use of the produced results, and (iv) the kinetics of the epidemic. Hence, we elaborated here a diagnostic algorithm based on these considerations to optimize the use of the newly extended arsenal of SARS-CoV-2 direct diagnostic tools, from the decentralized setting to the automated lab, to ensure clinical microbiologists enough ammunition for a reliable and meaningful COVID-19 diagnosis.
DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
ETHICS STATEMENT
Ethical review and approval was not required for the study on human participants in accordance with the local legislation and institutional requirements. Written informed consent from the participants' legal guardian/next of kin was not required to participate in this study in accordance with the national legislation and the institutional requirements. Written informed consent was obtained from the individual(s) for the publication of any potentially identifiable images or data included in this article.
AUTHOR CONTRIBUTIONS
NY, CDe, MD, F-ZB, FP, MW, and CDu did the investigations. NY and MH contributed to literature review and the writing of the initial draft. NY, CM, FP, HD, ND, M-LD, and SM contributed to manuscript revision, data compilation, and figure presentation. All authors provided critical review and commentary. NY, ND, FC, MB, and MH contributed to study design, manuscript preparation, literature review, revision, and project administration. All authors contributed to the article and approved the submitted version.
Pathways from maternal depressive symptoms to adolescent depressive symptoms: the unique contribution of irritability symptoms
Background The authors tested three possible pathways linking prenatal maternal depressive symptoms to adolescent depressive symptoms. These pathways went through childhood Irritability Symptoms, Anxiety/Depressive Symptoms or Conduct Problems. Method Data were collected from 3,963 mother–child pairs participating in the Avon Longitudinal Study of Parents and Children. Measures include maternal depressive symptoms (pre‐ and postnatal); toddler temperament (2 years); childhood (7–13 years) irritability symptoms, anxiety/depressive symptoms, conduct problems, and adolescent depressive symptoms (16 years). Results Irritability Symptoms: This pathway linked sequentially – prenatal maternal depressive symptoms, toddler temperament (high perceived intensity and low perceived adaptability), childhood irritability symptoms, and adolescent depressive symptoms. Anxiety/Depressive symptoms: This pathway linked sequentially – prenatal maternal depressive symptoms, toddler temperament (negative perceived mood), childhood anxiety/depressive symptoms, and adolescent depressive symptoms. Childhood conduct problems were not associated with adolescent depressive symptoms, above and beyond irritability symptoms and anxiety/depressive symptoms. Conclusions Results suggest evidence for two distinct developmental pathways to adolescent depressive symptoms that involve specific early and midchildhood features.
Introduction
Depression contributes significantly to the global burden of disease and affects people in all communities, with an onset that typically occurs in adolescence (Andrade et al., 2003; Kessler et al., 2005; Patel, Flisher, Hetrick, & McGorry, 2007). Indeed, those with adolescent-onset depression often go on to have recurrent episodes and significant impairment (e.g. Hammen, Brennan, and Keenan-Miller, 2008). As a result, research has sought to identify early family risk factors and child characteristics that can predict adolescent depressive symptoms, to enable early identification and mobilize preventative intervention measures that focus on early risk factors (e.g. Garber, 2006).
In this research, we tested for three distinct pathways defined by correlated but distinct child characteristics, linking a common family risk factor, prenatal maternal depressive symptoms, to an equifinal outcome of adolescent depressive symptoms. The first pathway that we tested goes through childhood Irritability Symptoms; the second, through childhood Anxiety/Depressive Symptoms; and the third, through childhood Conduct Problems.
Oppositional defiance in youth is a highly prevalent psychiatric condition with strong associations with a wide range of adult psychiatric illness, including both emotional (e.g. depression) and externalizing disorders (e.g. conduct disorder and callous-unemotional traits) (Angold, Costello, & Erkanli, 1999; Loeber, Green, Keenan, & Lahey, 1995; Maughan, Rowe, Messer, Goodman, & Meltzer, 2004). Partly because Oppositional Defiant Disorder (ODD) predicts such a wide range of adjustment difficulties in children, the DSM-5 (American Psychiatric Association, 2013) has suggested a distinction among irritable, headstrong, and hurtful ODD dimensions, as these dimensions appear to associate with distinct outcomes. Importantly, studies suggest that the ODD subdimension of irritability (i.e. has temper outbursts; touchy or easily annoyed; angry or resentful) predicts adolescent and young adult depressive symptoms (Leibenluft, Cohen, Gorrindo, Brook, & Pine, 2006; Leibenluft & Stoddard, 2013; Stringaris & Goodman, 2009a; Whelan, Stringaris, Maughan, & Barker, 2013). In addition, previous studies show predictive associations between adolescent depressive symptoms and other child characteristics such as anxiety/depressive symptoms (e.g. has many worries or often seems worried; often unhappy, depressed or tearful) and conduct problems (e.g. often fights with other children or bullies them; often lies or cheats) (Barker, Oliver, & Maughan, 2010; Goodman, 2001; Lahey, Loeber, Burke, & Rathouz, 2002; Stringaris, Lewis, & Maughan, 2014). Of importance, irritability symptoms are associated with child anxiety/depressive symptoms and conduct problems (Dougherty et al., 2013; Krieger et al., 2013; Stringaris & Goodman, 2009b) and, at present, we cannot be certain whether the association between irritability symptoms and adolescent depressive symptoms is better accounted for by these other, more well-established pathways of anxiety/depressive symptoms and conduct problems.
With regard to early family risk factors, maternal depressive symptoms (pre- and postnatal) are robust and well-researched risks for offspring depressive symptoms in adolescence (Pawlby, Hay, Sharp, Waters, & O'Keane, 2009; Pearson et al., 2013) and may act as a common antecedent of the three pathways outlined above (i.e. Irritability Symptoms, Anxiety/Depressive Symptoms, and Conduct Problems; Cents et al., 2013; Leis, Heron, Stuart, & Mendelson, 2014; Mars et al., 2012). Moreover, pre- and postnatal maternal depressive symptoms are associated with difficult (i.e. negative perceived mood, low perceived adaptability, and high perceived intensity/reactivity) early child temperament (Bruder-Costello et al., 2007), which in turn is associated with childhood anxiety/depressive symptoms and conduct problems (Barker & Maughan, 2009). Recently, Stringaris, Maughan, and Goodman (2010) reported that early temperamental dysregulation (emotionality and activity) predicted ODD diagnoses; however, the unique contribution of irritability symptoms was not examined. Yet, as irritability symptoms, anxiety/depressive symptoms, and conduct problems are highly comorbid, they may also share temperamental features.
Understanding whether there is a unique contribution of irritability symptoms toward adolescent depressive symptoms above anxiety/depressive symptoms and conduct problems may help refine risk-to-outcome associations and evidence-based interventions. In addition, as little is known about the unique or shared temperamental antecedents of irritability symptoms, anxiety/depressive symptoms, and conduct problems, we explored associations between these child characteristics and toddler temperament (high perceived intensity, low perceived adaptability, and negative perceived mood). The three pathways were tested within an autoregressive cross-lag model that allows us to test three possible equifinal pathways from the common family risk of maternal depressive symptoms toward adolescent depressive symptoms (e.g. Barker, Jaffee, Uher, & Maughan, 2011).
Sample
The Avon Longitudinal Study of Parents and Children (ALSPAC) was established to understand how genetic and environmental characteristics influence health and development in parents and children. All pregnant women residing in a defined area in the South West of England, with an expected date of delivery between 1st April 1991 and 31st December 1992, were eligible, and 13,761 women (contributing 13,867 pregnancies) were recruited. These women have been followed over the last 19-22 years. When compared with 1991 National Census Data, the ALSPAC sample was found to be similar to the UK population as a whole. Ethical approval for the study was obtained from the ALSPAC Law and Ethics Committee and the Local Research Ethics Committees. More detailed information on ALSPAC is available from the website: http://www.bris.ac.uk/alspac/.
Measures
Mothers completed questionnaires about their impressions of their own, and their children's, psychosocial wellbeing at multiple time points during pregnancy and their child's toddlerhood and childhood. The children reported their impressions of their own depressive symptoms at age 16.
Maternal depressive symptoms at 18 and 32 weeks prenatally and in the postnatal period at 8 weeks, 8 months, and 21 months were assessed with the Edinburgh Postnatal Depression Scale (EPDS), a widely used 10-item self-report questionnaire of symptoms experienced in the last 7 days, which has been shown to be valid both in and outside the postnatal period (Cox, Holden, & Sagovsky, 1987; Murray & Carothers, 1990) and has been used to identify pregnant women and mothers at risk of depressive symptoms. The EPDS was a reliable measure of maternal depressive symptoms in this sample: prenatal (α = .78) and postnatal (α = .82).
Toddler temperament measures of Negative Perceived Mood, Low Perceived Adaptability and High Perceived Intensity at 24 months were used. Mothers completed each question using a 6-point scale, from 'almost never' to 'almost always'; all three measures are Carey Infant Temperament subscales (Carey & McDevitt, 1978). The 'Negative perceived mood' subscale is a measure of general tone of affect (i.e. positive or negative). Example items are 'he/she is fussy on waking up and going to sleep (frowns, cries)' and 'he/she cries when left to play alone'. The 'Low perceived adaptability' subscale is a measure of responses to novel or altered situations. Example items are 'he/she resists changes in feeding schedule (1 hr or more) even after two tries' and 'he/she is still wary or frightened of strangers after 15 min'. The 'High perceived intensity' subscale is a measure of the level of energy with which an emotional response is made. Example items are 'he/she displays much feeling (vigorous laughing or crying) during nappy change or dressing' and 'he/she reacts strongly to strangers: laughing or crying'.
Irritability Symptoms at ages 8, 10, and 13 (mother and teacher reports) were derived from the Development and Well-Being Assessment (DAWBA), a well-validated measure developed for the British Child Mental Health surveys (Meltzer, Gatward, Goodman, & Ford, 2000). In addition to generating binary (yes/no) diagnostic indicators, DAWBA algorithms have been developed to generate six-level ordered-categorical measures of the probability of disorder for each of the individual items underlying the diagnoses, ranging from <0.1% to >70% (Goodman, Heiervang, Collishaw, & Goodman, 2011). Evaluated in two large-scale national samples, these DAWBA 'bands' functioned well as ordered-categorical measures, showed dose-response associations with mental health service contacts, and showed very similar associations with potential risk factors as clinician-rated diagnoses (Goodman et al., 2011).
The DAWBA asks about 9 separate symptoms of ODD. Each question was introduced with the stem 'over the last 6 months, and as compared with other children the same age, has s/he often…' followed by the specific clause.
Following the lead of Stringaris and Goodman (2009a), and subsequently the DSM-5 (American Psychiatric Association, 2013), irritability symptoms were defined by the following three symptoms: 'has temper outbursts', 'has been touchy or easily annoyed', and 'has been angry or resentful', across ages 8-13 years (α = .71). Anxiety/Depressive symptoms at ages 7, 10, and 12 years were measured by mother reports on the Strengths and Difficulties Questionnaire, a widely used screening instrument with well-established reliability and validity (Goodman, 1997, 2001; Van Widenfelt, Goedhart, Treffers, & Goodman, 2003); the anxiety/depressive symptoms scale has the following five items: 'often complains of headaches, stomach aches or sickness', 'has many worries or often seems worried', 'often unhappy, depressed or tearful', 'is nervous or clingy in new situations, easily loses confidence' and 'has many fears, is easily scared'. Items were coded on a 3-point scale ('not true', 'somewhat true', and 'certainly true'), across ages 7-12 years (α = .71). It should be noted that at age 7 the item 'has many fears, is easily scared' was not available in the dataset, but was included at ages 10 and 12.
Conduct Problems at ages 7, 10 and 12 years were measured by mother reports on the Strengths and Difficulties Questionnaire, a widely used screening instrument with well-established reliability and validity (Goodman, 1997, 2001), with the following four items: 'is generally obedient, usually does what adults request' (reverse coded), 'often fights with other children or bullies them', 'often lies or cheats', and 'steals from home, school, or elsewhere'. Items were coded on a 3-point scale ('not true', 'somewhat true', and 'certainly true'), across ages 7-12 years (α = .72). It should be noted that the SDQ was found to be at least as efficient at detecting externalizing problems as the Child Behavior Checklist (CBCL; Goodman & Scott, 1999) and associates with ICD-10 diagnoses of CD and ODD (Goodman, Renfrew, & Mullick, 2000). However, the temper-outburst item that is typically included in the SDQ conduct problems scale was removed in this study to avoid overlap with the irritability temper tantrum/outburst item.
Depressive symptoms at age 16 were derived from the adolescent-reported Mood and Feelings Questionnaire Short Form (SMFQ) (Messer et al., 1995). The SMFQ is a 13-item self-report questionnaire of symptoms experienced in the previous 2 weeks (coded on a 3-point scale: 'true', 'sometimes true', 'not true') with a range of 0-26 (α = .91). This scale has been found to have high reliability and validity, and the short form is made up of the items that best discriminated depressed and nondepressed children in field trials using structured psychiatric interviews (Costello & Angold, 1988).
Attrition and missing data. Participants with data for depressive symptoms at 16 years were selected for the analysis (n = 3,963). In a multiple regression model, we tested the extent to which risk factors common to irritability symptoms, anxiety/depressive symptoms, and conduct problems (see Tremblay, 2010) were associated with noninclusion in this study. Partner status (OR = 2.45; 95% CI = 2.00-2.99), low SES (OR = 1.68; 95% CI = 1.47-1.91), teen pregnancy (OR = 2.81; 95% CI = 2.53-3.12), and maternal education (OR = 2.57; 95% CI = 2.53-3.12) were all associated with noninclusion. We note that inclusion of these variables in the analysis, in conjunction with missing data replacement by full-information maximum likelihood, can help to minimize bias and maximize recoverability of 'true' scores (Little & Rubin, 2002).
Analysis
Using an autoregressive cross-lag model (ARCL), we tested three possible equifinal pathways from the common family risk of maternal depressive symptoms toward adolescent depressive symptoms (Figure 1). In this modeling approach, each variable in the model is regressed on all of the variables that precede it in time in order to examine developmental continuity and inter-relationships across the three hypothesized pathways. The Irritability Symptoms pathway predicts associations among pre-and postnatal maternal depressive symptoms, temperament, irritability symptoms and adolescent depressive symptoms; the Anxiety/Depressive Symptoms pathway predicts associations among pre-and postnatal maternal depressive symptoms, temperament, anxiety/ depressive symptoms and adolescent depressive symptoms; the Conduct Problems pathway predicts associations between pre-and postnatal maternal depressive symptoms, temperament, conduct problems and adolescent depressive symptoms.
Descriptive statistics
All study variables were significantly positively correlated (Table 1). For example, negative perceived mood, low perceived adaptability and high perceived intensity were all highly correlated. Moreover, childhood irritability symptoms, anxiety/depressive symptoms and conduct problems were highly correlated. Adolescent depressive symptoms were significantly associated with maternal depressive symptoms, toddler temperaments and child irritability symptoms, anxiety/depressive symptoms and conduct problems. We note that the highest correlation was between pre- and postnatal maternal depressive symptoms (r = .64), which indicated that these measures shared 41% common variance (i.e. 0.64² ≈ 0.41).
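The shared-variance figure quoted above is simply the square of the correlation coefficient:

```python
# Shared variance between two measures equals the squared Pearson correlation.
r = 0.64  # pre- vs. postnatal maternal depressive symptoms
shared_variance = r ** 2
print(f"{shared_variance:.0%}")  # 41%
```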
Examining three pathways to adolescent depressive symptoms
The ARCL model showed acceptable fit on three indices (χ²(121) = 471.978, p < .001; CFI = 0.964; TLI = 0.953; RMSEA = 0.027). R² values (the proportion of variance explained based on each variable's predictors) are reported for adolescent depressive symptoms (R² = .06), irritability (R² = .08), anxiety/depressive symptoms (R² = .12) and conduct problems (R² = .09). Figure 1 shows the significant path coefficients in the ARCL model. To begin with, we note that prenatal maternal depressive symptoms were associated with postnatal maternal depressive symptoms (β = .64). In addition, we highlight four main results. First, for the Irritability Symptoms pathway: postnatal maternal depressive symptoms were associated with low perceived adaptability (β = .11) and high perceived intensity (β = .11) in toddlerhood, which were associated with irritability symptoms at 8-13 years (β = .12 and β = .11, respectively), which, in turn, were associated with adolescent depressive symptoms (β = .11). Second, for the Anxiety/Depressive Symptoms pathway: postnatal maternal depressive symptoms were associated with toddler negative perceived mood (β = .15), which was associated with anxiety/depressive symptoms at 7-12 years (β = .16), which, in turn, was associated with adolescent depressive symptoms (β = .15). Third, for the Conduct Problems pathway: postnatal maternal depressive symptoms were associated with low perceived adaptability (β = .11) and high perceived intensity (β = .11), which were associated with conduct problems at 7-12 years (β = .19 and β = .06, respectively). Conduct problems at 7-12 years did not predict adolescent depressive symptoms above and beyond the other variables in the ARCL model.
Fourth, direct associations were found between prenatal depressive symptoms and both anxiety/depressive symptoms at 7-12 years and depressive symptoms at 16 years (β = .11 and β = .05, respectively). Direct associations were also found between postnatal depressive symptoms and irritability (β = .14), anxiety/depressive symptoms (β = .17), and conduct problems (β = .12).
Discussion
The present epidemiological study examined three distinct pathways linking a common antecedent, maternal depressive symptoms, to a shared equifinal outcome of adolescent depressive symptoms. The overall findings provide evidence for two distinct co-occurring pathways from maternal to adolescent depressive symptoms: an Irritability Symptoms pathway and an Anxiety/Depressive Symptoms pathway; however, a third Conduct Problems pathway was not found.
With regard to irritability symptoms (i.e. temper outbursts, being easily annoyed, and angry or resentful), unlike previous studies (e.g. Stringaris et al., 2010; Whelan et al., 2013), this study examined irritability symptoms while controlling for co-occurring anxiety/depressive symptoms and conduct problems. Study findings delineate the specific contribution of childhood irritability symptoms to adolescent depressive symptoms, alongside its earlier temperamental features. More specifically, we found a pathway from pre- and postnatal maternal depressive symptoms, to temperamental low perceived adaptability and high perceived intensity in toddlerhood, to childhood irritability symptoms, and ultimately to adolescent depressive symptoms.

Figure 1. Multivariate autoregressive cross-lagged model of longitudinal relationships between maternal depressive symptoms, early toddler temperament, irritability symptoms, anxiety/depressive symptoms, and an outcome of adolescent depressive symptoms. * = p < .05; Matdep prenatal = prenatal maternal depressive symptoms; Matdep postnatal = postnatal maternal depressive symptoms; Mood = negative perceived mood; Adapt = low perceived adaptability; Intens = high perceived intensity; Irrit = irritability symptoms at 8, 10 and 13 years collapsed; A/D = anxiety/depressive symptoms at 7, 10 and 12 years collapsed; CP = conduct problems at 7, 10 and 12 years collapsed; Dep16 = adolescent depressive symptoms. In this model, we controlled for risk factors common to irritability, anxiety/depressive symptoms, and conduct problems and associated with noninclusion in this study. The resulting population effect sizes are interpreted using Cohen (1988).

With regard to temperamental low perceived adaptability, studies have found that irritable children perform poorly in tasks of cognitive flexibility, thereby demonstrating deficits (Adleman et al., 2011; Leibenluft, 2011), that is, low perceived adaptability. It may therefore be that irritable children display early temperamental signs of cognitive and behavioral inflexibility (i.e. low perceived adaptability) which manifest as prodromal signs of irritability symptoms, which, if recognized early, may offer a treatment target. In addition, the association that we found between toddler high perceived intensity and childhood irritability symptoms may be explained by the fact that biological systems relevant to the regulation of arousal are functionally immature during pregnancy and birth and mature gradually during the toddler years (Glover, 2011); postnatal maternal depressive symptoms may impact adversely upon the development of these systems. During infancy, the child is dependent on parenting (Jaffee, 2007) to support the achievement of developmental milestones such as cognitive maturation and early social and emotional competence (Shonkoff, Boyce, & McEwen, 2009). However, the presence of depressive symptoms may compromise a mother's ability to provide the sensitive care needed to foster the development of the toddler's self-regulatory capabilities (Barker, 2013; Feldman et al., 2009; Goodman & Gotlib, 1999).
Childhood anxiety/depressive symptoms (worrying, being unhappy and tearful, and fearful) were found to uniquely contribute to adolescent depressive symptoms above and beyond childhood irritability symptoms and conduct problems, thereby providing a second pathway to depressive symptoms at 16 years. This second pathway linked pre- and postnatal maternal depressive symptoms with toddler temperamental negative perceived mood, childhood anxiety/depressive symptoms, and adolescent depressive symptoms. The associations between pre- and postnatal maternal depressive symptoms and child anxiety/depressive symptoms may be explained by two distinct but related processes. With regard to prenatal depressive symptoms, our findings appear congruent with research suggesting that depressive symptoms can lead to an intra-uterine environment not conducive to healthy fetal development (Weinstock, 2008), thereby increasing risk for abnormal child development, including, but not specific to, childhood anxiety/depressive symptoms (Barker et al., 2011; Glover, 2011). Additionally, as noted above with respect to irritability symptoms, postnatal depressive symptoms may negatively alter a mother's ability to provide the attentive and sensitive care needed to foster the development of the toddler's self-regulatory capabilities (Barker, 2013; Feldman et al., 2009; Goodman & Gotlib, 1999). This in turn could increase the risk of a toddler developing temperamental negative perceived mood and childhood anxiety/depressive symptoms, ultimately increasing the risk for the onset of adolescent depressive symptoms. Future research may want to examine more closely how specific symptoms of postnatal maternal depressive symptoms (e.g. as measured by the EPDS: anxious or worried vs. low laughter, humor) may align more as a risk for anxiety/depressive symptoms or irritability.
Third, conduct problems were not found to associate with adolescent depressive symptoms when childhood irritability symptoms and anxiety/depressive symptoms were controlled. In this study, at the bivariate level, the correlation between conduct problems and depressive symptoms was significant, albeit half the magnitude of the association between irritability symptoms and adolescent depressive symptoms. However, in the autoregressive crosslagged model, this association became nonsignificant. Possibly, conduct problems associate with adolescent depressive symptoms (e.g. Barker et al., 2010;Lahey et al., 2002) via irritability symptoms. Indeed, a recent study (Stringaris et al., 2014) highlighted that irritability symptoms shared genetic associations with childhood depressive symptoms and conduct problem symptoms. Prenatal Strengths of this study include large sample size, longitudinal focus, and inclusion of cross-informant predictions (i.e. mother and teacher reports of risks, child reports of adolescent depressive symptoms). However, the present results should be interpreted in light of a number of limitations. First, this study, as with previous studies, is correlational and not causative. Second, it should be noted that we relied on self-reports of mothers on a range of study variables including their own depressive symptoms and the child temperaments and irritability, anxiety/depressive symptoms and conduct problems. An important limitation is that the study almost exclusively (exception being adolescent depressive symptoms) relied on mothers' impressions of their ownand their child'spsychosocial wellbeing (e.g. 
hence 'perceived' temperament). Indeed, although studies suggest that depressed mothers can be as accurate as other informants about their children's behavior (Richters, 1992), ALSPAC does not have the capability of confirming or disconfirming potential bias associated with maternal depressive symptoms by comparing mother reports of their children to independent, validated criterion raters. Third, we do not have information on whether mothers in this study received treatment for depression. As treatment-induced reduction in maternal depressive symptoms is associated with improved adjustment in offspring (Pilowsky et al., 2008; Weissman et al., 2006), the present results may underestimate the effect of maternal depression on child wellbeing. Fourth, younger and more socially disadvantaged mothers were more likely to be lost to follow-up. As these predictors of attrition also predict child psychopathology, our sample is likely to under-represent the most severely affected children. Of note, an ALSPAC cohort study (Wolke et al., 2009) has shown that attrition affects the prevalence of DSM-IV disruptive behavior disorders (which include ODD); however, associations between risks and outcomes remained present, although conservative relative to the likely true effects. Fifth, although the ALSPAC sample represents a broad spectrum of SES backgrounds, it includes relatively low rates of ethnic minorities. The present results will need replication with more ethnically diverse and high-risk samples. Sixth, this study did not test for indirect effects between study variables, and as such future studies may wish to examine indirect effects. If indirect effects were found, they would suggest that successful intervention on maternal depressive symptoms could lead to lower adolescent depressive symptoms through higher perceived toddler temperamental adaptability and lower perceived intensity, and lower irritability symptoms.
In conclusion, we found that irritability symptoms contributed independently to adolescent depressive symptoms when co-occurring anxiety/depressive symptoms and conduct problems were controlled. Moreover, common risk factors of pre- and postnatal maternal depressive symptoms were associated with two equifinal pathways to adolescent depressive symptoms, based on temperamental features and child characteristics. First, we found an Irritability Symptoms pathway linked with toddler temperamental low perceived adaptability and high perceived intensity. Second, we also found an Anxiety/Depressive Symptoms pathway linked with toddler temperamental negative perceived mood. Thus, this study supports the existence of distinct developmental pathways to adolescent depressive symptoms while pinpointing important targets and windows of opportunity for prevention. We suggest that interventions addressing childhood irritability symptoms, as well as maternal depressive symptoms, toddler temperamental low perceived adaptability and high perceived intensity, and those that target childhood anxiety/depressive symptoms alongside toddler temperamental negative perceived mood, may be the most efficient manner to prevent the onset of adolescent depressive symptoms.
Key points
• Studies suggest that the ODD subdimension of irritability prospectively associates with adolescent and young adult depressive symptoms.
• This study supports the existence of a distinct Irritability Symptoms developmental pathway to adolescent depressive symptoms above and beyond co-occurring anxiety/depressive symptoms and conduct problems.
• Common risk factors - maternal depressive symptoms - were associated with two equifinal pathways to adolescent depressive symptoms, based on temperamental features and child characteristics. An Irritability Symptoms pathway linked with toddler temperamental low adaptability and high intensity; and an Anxiety/Depressive Symptoms pathway linked with toddler temperamental negative mood.
• We suggest that interventions addressing childhood irritability symptoms, as well as childhood anxiety/ depressive symptoms alongside toddler temperamental negative mood may be the most efficient manner to prevent the onset of adolescent depressive symptoms.
|
v3-fos-license
|
2020-11-26T09:02:46.979Z
|
2020-11-20T00:00:00.000
|
228864404
|
{
"extfieldsofstudy": [
"Physics",
"Materials Science"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://aip.scitation.org/doi/pdf/10.1063/5.0022033",
"pdf_hash": "82e259dc788e51a2f42f1ba6dec8e5a6da345306",
"pdf_src": "Arxiv",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2419",
"s2fieldsofstudy": [
"Physics"
],
"sha1": "65024768688bc1548831409a47e6efa751c18b2a",
"year": 2021
}
|
pes2o/s2orc
|
Magnetization Reversal Signatures of Hybrid and Pure Néel Skyrmions in Thin Film Multilayers
We report a study of the magnetization reversals and skyrmion configurations in two systems - Pt/Co/MgO and Ir/Fe/Co/Pt multilayers, where magnetic skyrmions are stabilized by a combination of dipolar and Dzyaloshinskii-Moriya interactions (DMI). First Order Reversal Curve (FORC) diagrams of low-DMI Pt/Co/MgO and high-DMI Ir/Fe/Co/Pt exhibit stark differences, which are identified by micromagnetic simulations to be indicative of hybrid and pure Néel skyrmions, respectively. Tracking the evolution of FORC features in multilayers with dipolar interactions and DMI, we find that the negative FORC valley, typically accompanying the positive FORC peak near saturation, disappears under both reduced dipolar interactions and enhanced DMI. As these conditions favor the formation of pure Néel skyrmions, we propose that the resultant FORC feature - a single positive FORC peak near saturation - can act as a fingerprint for pure Néel skyrmions in multilayers. Our study thus expands on the utility of FORC analysis as a tool for characterizing spin topology in multilayer thin films.
I. Introduction
Magnetic skyrmions have been realized in several material systems, most notably magnetic multilayer thin films which host nanoscale skyrmions at room temperature [1][2][3]. In such multilayers, the Dzyaloshinskii-Moriya interaction (DMI) arising at the ferromagnet (FM)/heavy-metal (HM) interfaces is paramount in stabilizing the Néel spin textures that these skyrmions possess [4,5]. However, the actual spin textures of these skyrmions have recently been proven to be more complex, owing to the competition between DMI and dipolar interactions between the thin film layers [6][7][8]. For most multilayer systems, skyrmions exhibit thickness-dependent magnetization profiles, where a central-layer Bloch texture is sandwiched between Néel textures of opposite chiralities from the topmost and bottommost layers [7]. These are known as hybrid skyrmions, whereas uniform Néel-texture skyrmions throughout the multilayer, realizable in a high-DMI environment, are known as pure Néel skyrmions [7].
Differences in the spin texture and chirality of skyrmions strongly influence their current-driven dynamics [7,[9][10][11], rendering the knowledge of their complete, three-dimensional spin textures crucial for spintronic material design. Distinguishing between hybrid and pure Néel skyrmions, however, requires sophisticated imaging methods, such as circular dichroism x-ray resonant magnetic scattering [7,8] or nitrogen-vacancy center magnetometry [12], in order to resolve the thickness dependence of the magnetic textures. These techniques may not always be readily available in most research facilities. On the other hand, the interplay of dipolar interactions and DMI, as well as other ubiquitous and tunable magnetic interactions in multilayers, directly affects the domain size, density, and the level of disorder in the skyrmion configuration [3]. These parameters consequently influence the magnetization reversal processes [13] and hysteretic behaviour of spin textures [14], which may provide indirect clues for inferring the inner complexity of three-dimensional skyrmions.
The ability to identify these subtle processes has been demonstrated by the First Order Reversal Curve (FORC) technique, which provides a magnetic fingerprint of the interactions and reversal processes occurring in magnetic materials [15,16]. Recent studies have begun utilizing FORC in skyrmion-hosting multilayers to study field-history control of the zero-field skyrmion population [17,18], while simultaneously revealing magnetic reversal mechanisms influenced by the skyrmion configuration [17]. Indeed, the variety of magnetic interactions and skyrmion configurations realizable in different thin film heterostructures offers a rich resource for FORC studies. Here we examine Pt/Co/MgO and Ir/Fe/Co/Pt multilayers: the former hosts larger (≈ 100 nm) skyrmions, suggesting that their stability results primarily from magnetic dipolar interactions [19], while the latter shows smaller 46 ± 12 nm skyrmions, indicating the more dominant role played by the DMI (D) in skyrmion formation [19]. FORC analysis and MFM imaging reveal distinct irreversibility features in these two material systems. Using micromagnetic simulations, we show these two multilayers stabilize hybrid and pure Néel skyrmions, respectively, which may account for their distinct FORC features. To support this hypothesis, we apply our analysis to Pt/Co/MgO samples with different numbers of layer repetitions, and also to Fe/Co multilayers with different ferromagnetic compositions. Again, we observe a correlation between FORC features and the relative strengths of dipolar interactions and DMI, which facilitate the transition from a hybrid to a pure Néel skyrmion texture [7]. This points towards a possible thermodynamic signature for high-D multilayers, which can stabilize pure Néel skyrmions.

II. Methods

The multilayer stacks were deposited on thermally oxidized silicon wafers at room temperature (numbers in parentheses refer to the layer thickness in nm).
Ir/Fe/Co/Pt samples were deposited using a Chiron ultra-high vacuum multi-source sputter tool, while Pt/Co/MgO samples were deposited using a Singulus Timaris ultra-high vacuum multi-target sputter tool. The base vacuum in each case is 1 × 10⁻⁸ Torr and sputtering is carried out in 1.5 × 10⁻³ Torr of argon gas. Magnetization measurements on these samples were performed using superconducting quantum interference device (SQUID) magnetometry, in a Quantum Design Magnetic Property Measurement System (MPMS), to obtain the saturation magnetization (M_S). Out-of-plane and in-plane hysteresis loops were also acquired to determine the uniaxial effective anisotropy values K_eff.
FORC measurements were then conducted on as-grown samples using a Vibrating Sample Magnetometer (VSM) at room temperature. Each FORC measurement consists of a two-part sequence: (1) the sample is first saturated at a positive field and then brought to a reversal field H_r; (2) the magnetization of the sample is measured starting from H_r and ending at 0, as the applied field H is reversed. Repeating the sequence for multiple values of H_r, we obtain a family of FORCs (Figure 1(a),(b)), used to compute the FORC distribution, defined as:

ρ(H_r, H) = −(1/2) ∂²M(H_r, H) / (∂H_r ∂H)

Plotting ρ as a density plot against H_r and H produces a FORC diagram (Figure 1(c),(g)), which quantifies the degree of magnetic irreversibility for the magnetic field histories of the measured sample. Each FORC diagram is complemented by MFM images, which capture the magnetic textures obtained by different field histories. The method used for acquiring and analyzing MFM images is similar to that described in Ref. [3].
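The mixed-derivative definition of the FORC distribution can be sketched numerically. The following is a minimal illustration, not the authors' analysis code: it assumes the reversal curves have been resampled onto a common field grid M[i, j] = M(H_r[i], H[j]), and the synthetic tanh data are invented for demonstration only.

```python
# Sketch: FORC distribution rho(Hr, H) = -(1/2) d^2 M / (dHr dH)
# from a family of first-order reversal curves stored as a 2D array.
import numpy as np

def forc_distribution(M, Hr, H):
    """Mixed second derivative of M(Hr, H), times -1/2."""
    dM_dH = np.gradient(M, H, axis=1)      # dM/dH along each reversal curve
    d2M = np.gradient(dM_dH, Hr, axis=0)   # then d/dHr across curves
    return -0.5 * d2M

# Toy check: a fully reversible magnetization M = tanh(H), identical for
# every reversal field, carries zero irreversibility (rho = 0 everywhere).
Hr = np.linspace(-1.0, 1.0, 50)
H = np.linspace(-1.0, 1.0, 60)
M = np.tanh(H)[None, :] * np.ones((Hr.size, 1))
rho = forc_distribution(M, Hr, H)
assert np.abs(rho).max() < 1e-8
```

In practice, measured curves are noisy, so the derivative is usually estimated with a local polynomial fit over a smoothing window rather than raw finite differences; the finite-difference version above conveys the definition.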
Micromagnetic computations were performed by means of a state-of-the-art micromagnetic solver, PETASPIN, which numerically integrates the Landau-Lifshitz-Gilbert (LLG) equation by applying the Adams-Bashforth time solver scheme [20]:

dm/dτ = −(m × h) + α_G (m × dm/dτ)

where α_G is the Gilbert damping, m is the normalized magnetization, τ is the dimensionless time, and h is the normalized effective magnetic field, which includes the exchange, interfacial DMI, uniaxial anisotropy, and Zeeman fields, as well as the magnetostatic field computed by solving the magnetostatic problem of the whole system [8,21].
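As a hedged illustration of this time-stepping (it is not the PETASPIN code), the sketch below integrates the dimensionless LLG equation, rewritten in its equivalent explicit Landau-Lifshitz form, with a two-step Adams-Bashforth scheme. The effective field is reduced to a bare Zeeman term; a real solver would add exchange, DMI, anisotropy, and magnetostatic contributions, and all names and parameter values here are our own.

```python
# Sketch: two-step Adams-Bashforth integration of the normalized LLG
# equation for a single macrospin in a constant field h.
import numpy as np

def llg_rhs(m, h, alpha):
    """Explicit (Landau-Lifshitz) form of dm/dtau."""
    mxh = np.cross(m, h)
    return -(mxh + alpha * np.cross(m, mxh)) / (1.0 + alpha**2)

def integrate(m0, h, alpha=0.5, dt=1e-3, steps=20000):
    m = m0 / np.linalg.norm(m0)
    f_prev = llg_rhs(m, h, alpha)
    m = m + dt * f_prev                        # Euler bootstrap step
    m /= np.linalg.norm(m)
    for _ in range(steps):
        f = llg_rhs(m, h, alpha)
        m = m + dt * (1.5 * f - 0.5 * f_prev)  # Adams-Bashforth 2
        m /= np.linalg.norm(m)                 # keep |m| = 1
        f_prev = f
    return m

# Damped precession relaxes m toward the field direction (+z here).
m = integrate(np.array([1.0, 0.0, 0.1]), h=np.array([0.0, 0.0, 1.0]))
assert m[2] > 0.99
```

Renormalizing m after each step keeps the unit-length constraint that a naive multistep update would otherwise violate; the second-order Adams-Bashforth combination 3/2 f_n − 1/2 f_{n−1} needs only one right-hand-side evaluation per step.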
III. Results and discussion

We first focus on the FORC diagram of Pt/Co/MgO, which has an estimated (Refs. [2,7]) D value of 0.5 mJ/m². For this sample, we observe large regions of irreversibility extending all the way from H_r = 0 mT to H_r ≈ −180 mT. The first feature is a wide, positive-valued ridge from H_r = 0 mT to H_r ≈ −125 mT (Figure 1(c)), coinciding with the transition from labyrinthine stripes to the skyrmion phase (Figure 1(d)-(f)), where approximately 100 nm-diameter skyrmions emerge in a disordered configuration at H = H_r = −110 mT (Figure 1(f)). Based on the interpretation of Ref. [17], we deduce that the large, positive-valued region of irreversibility for |H| ≤ |H_r| in this range corresponds to skyrmion and stripe mergers taking place as the applied field decreases. As |H_r| increases from 125 mT to 180 mT, a pair of irreversible regions consisting of a negative valley (blue) and a positive peak (red) emerges (Figure 1(c)). This familiar pair feature arises from the sign change in the second derivative of the magnetization as neighboring reversal curves diverge and then converge in the high-field regime (dashed circles in Figure 1(a)). The feature, which was observed near the out-of-plane saturation fields in FORC studies of other magnetic multilayers [15][16][17], signifies the onset of skyrmion annihilation as the applied field increases along the diagonal edge, followed by skyrmion and stripe nucleation as the field is reduced along the H axis.
While the negative-positive pair feature frequently appears in magnetic multilayers, including the Ir/Fe(x)/Co(y)/Pt stacks, it does not appear for Fe(0.4)/Co(0.4), where D = 2.1 mJ/m². No sign change in the second derivative of the magnetization is observed, and hence only a single positive peak is seen as the system approaches saturation, i.e. from H_r ≈ −150 mT to H_r ≈ −225 mT (Figure 1(g)). This feature is preceded by an elongated irreversible ridge extending from H_r ≈ −50 mT to H_r ≈ −150 mT (Figure 1(g)). Unlike the sprawling irreversible feature in Pt/Co/MgO, the irreversible ridge for Fe(0.4)/Co(0.4) is narrower and localized around the diagonal edge of the FORC diagram. This indicates the presence of a large population of skyrmions, whose short-range repulsive interaction precludes skyrmion merger, thus resulting in less irreversible activity taking place as the applied field reverses from H_r. Indeed, high-density skyrmions appear as early as H_r ≈ −50 mT (Figure 1(h)) and quickly transform into a dense array of small skyrmions (≈ 50 nm in diameter) as the applied field increases (Figure 1(i)).
This configuration stands in sharp contrast to the sparse array of larger skyrmions (≈ 100 nm in diameter) observed in Pt/Co/MgO. Due to their larger size, the latter are likely to be strongly stabilized by dipolar interactions, thus exhibiting hybrid magnetization profiles. The appearance of these hybrid spin textures may be linked to our observed differences in FORC features. To test this hypothesis, we performed micromagnetic simulations of the two systems and extracted their thickness-dependent spin textures. Figure 2 summarizes micromagnetic simulations for the two multilayers. In both cases, the skyrmion diameter is thickness-dependent, being larger in the middle layer and smaller in the external layers. This is attributed to the z-component of the magnetostatic field [7]. The size of the skyrmion is larger in the Pt/Co/MgO sample than in Fe(0.4)/Co(0.4), in qualitative agreement with experimental measurements. A crucial difference between the two cases lies in the thickness dependence of their respective spin textures. In Fe(0.4)/Co(0.4), the spin chirality is independent of the layer position and a pure Néel skyrmion is obtained in all the layers. This can be attributed to the strong DMI in Fe(0.4)/Co(0.4), which, by overcoming the magnetostatic field, dictates the skyrmion texture in all the layers, in agreement with previous theoretical results [7].
On the other hand, a skyrmion in Pt/Co/MgO exhibits a layer-dependent chirality (hybrid skyrmion), which gradually changes from Néel with an outward spin chirality at the bottom layer, to an intermediate skyrmion mixing Néel-outward and Bloch-clockwise chiralities in the middle layer, and eventually to a Néel skyrmion with inward chirality at the top layer. This is ascribed to the small DMI value in Pt/Co/MgO, thus allowing the magnetostatic field to be dominant. The small DMI only affects the position of the Bloch skyrmion, which is not located in the middle layer, as expected from the magnetostatic field, but is shifted upward to the 10th layer, consistent with previous findings [7,8].
Comparing our micromagnetic simulations with the FORC features in Fig. 1(c),(g), we found that the coexistence of a positive peak and a negative valley of irreversibility coincides with the stabilization of hybrid skyrmions, stabilized by a combination of DMI and dipolar interactions. On the other hand, the presence of a single positive peak coincides with the presence of pure Néel skyrmions, stabilized primarily by interfacial DMI. The distinct FORC features observed in Fig. 1 and the hybrid and pure Néel skyrmion textures suggested by micromagnetic simulations thus point to a potential correlation between FORC distribution features and the relative strengths of dipolar interactions and DMI, which influence the thickness-dependent skyrmion textures.
To investigate this correspondence, we track the evolution of FORC distributions and skyrmion diameters in Pt/Co/MgO multilayers with the dipolar interaction strength, by reducing the number of layer repetitions (N) progressively from 15 to 2. The results are encapsulated in Fig. 3, where the H and H_r axes of the FORC diagrams are normalized to the out-of-plane saturation field, determined as the field value at which irreversible features terminate. As the interlayer dipolar interaction weakens with reduced N, the FORC distributions transition from a negative-positive peak pair to a single positive peak. Correspondingly, the observed skyrmion diameter decreases from ≈ 105 nm (for N = 15) to ≈ 80 nm (for N = 4), reflecting a transition from a dipolar-dominant regime to a DMI-dominant regime of skyrmion stability [19]. These observations suggest the disappearance of the negative FORC valley correlates with a reduced dipolar interaction in the multilayer.
Likewise, we also track the evolution of FORC features with the increase in DMI, achieved by varying the Fe/Co compositions of the [Ir(1)/Fe(x)/Co(y)/Pt(1)]20 heterostructure. Raising the Fe/Co composition ratio while keeping their total thickness ≤ 1 nm effectively increases the DMI strength while also modifying other magnetic parameters. This results in a variation of the skyrmion size, density, and energetic stability, which can be correlated with key changes in the respective FORC diagrams. The gradual disappearance of the negative valley, the increase of the DMI strength, and the two-fold decrease in skyrmion diameter [3,17] (Figure 4(f)) again suggest a transition from a dipolar-dominant to a DMI-dominant regime of skyrmion stability [19], thus hinting at a transition from hybrid to pure Néel skyrmions.
To support this inference, we have performed additional micromagnetic simulations for samples Fe(0.2)/Co(0.8) and Fe(0.5)/Co(0.5), and compared them with the case of Fe(0.4)/Co(0.4). In Fe(0.2)/Co(0.8), with a DMI strength of 1.5 mJ/m² (Fig. 4(g)), we observe a hybrid skyrmion where the Bloch skyrmion is present in the 17th ferromagnetic layer. In contrast, the Bloch position for Pt/Co/MgO appears roughly at the center of the stack due to dominant dipolar interaction over DMI. In Fe(0.5)/Co(0.5) (D = 1.9 mJ/m²), no Bloch skyrmion is observed, and the 3D skyrmion profile is almost pure Néel, with outward chirality in all the layers except for the topmost layer, which hosts a Néel skyrmion with inward chirality (Fig. 4(h)). Eventually, the skyrmion profile achieves a complete pure Néel texture in all the layers in the case of Fe(0.4)/Co(0.4).
IV. Conclusion
In summary, we investigated the magnetization reversals and skyrmion configurations for Pt/Co/MgO and Ir/Fe/Co/Pt multilayers, using a combination of FORC measurements, MFM imaging, and micromagnetic simulations. Wide, sprawling FORC regions with a characteristic negative-valley/positive-peak pair are indicative of large, hybrid skyrmions in low-D Pt/Co/MgO. In contrast, a single positive FORC distribution peak is indicative of small, pure Néel skyrmions in high-D Fe(0.4)/Co(0.4). By reducing the number of film layer repetitions in Pt/Co/MgO and tuning the thicknesses of Fe and Co in Fe(x)/Co(y) multilayers, we observe a transition of FORC features from a negative-valley/positive-peak pair to a single positive peak in correspondence with a reduction in dipolar interactions and an increase in the DMI strength, respectively. Hence, we propose that the single positive FORC feature can be a useful fingerprint for pure Néel skyrmions in multilayer systems. In addition to providing an indicator for skyrmion spin chirality, the observed FORC features enable a robust assessment of the thermodynamic stability of skyrmions within a particular multilayer: the negative FORC valley vanishes as the stability rises. Whilst additional spin imaging techniques are desirable for microscopically resolving the multitude of complex spin topologies [7,8,12], FORC analysis can play an important role in the analysis of magnetic multilayers. Combining these techniques can efficiently address future challenges in designing and optimizing skyrmionic materials.
|
v3-fos-license
|
2017-05-24T05:28:05.615Z
|
2016-06-29T00:00:00.000
|
9975943
|
{
"extfieldsofstudy": [
"Biology",
"Medicine"
],
"oa_license": "CCBY",
"oa_status": "HYBRID",
"oa_url": "https://doi.org/10.15252/embj.201593106",
"pdf_hash": "5d6097eaf1172b4d155fec8751a889aae469481c",
"pdf_src": "PubMedCentral",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2421",
"s2fieldsofstudy": [
"Biology"
],
"sha1": "cabeaad56efb3371a72ac8e1f2a8c130611f14c1",
"year": 2016
}
|
pes2o/s2orc
|
Spatial control of lipid droplet proteins by the ERAD ubiquitin ligase Doa10
Abstract The endoplasmic reticulum (ER) plays a central role in the biogenesis of most membrane proteins. Among these are proteins localized to the surface of lipid droplets (LDs), fat storage organelles delimited by a phospholipid monolayer. The LD monolayer is often continuous with the membrane of the ER allowing certain membrane proteins to diffuse between the two organelles. In these connected organelles, how some proteins concentrate specifically at the surface of LDs is not known. Here, we show that the ERAD ubiquitin ligase Doa10 controls the levels of some LD proteins. Their degradation is dependent on the localization to the ER and appears independent of the folding state. Moreover, we show that by degrading the ER pool of these LD proteins, ERAD contributes to restrict their localization to LDs. The signals for LD targeting and Doa10‐mediated degradation overlap, indicating that these are competing events. This spatial control of protein localization is a novel function of ERAD that might contribute to generate functional diversity in a continuous membrane system.
1) The authors' favored model is one where Pgc1 inserts into the ER, then traffics to LDs, the failure of which triggers its ERAD via Doa10. This is a reasonable explanation of the observations, but it seems to me an alternative explanation is also plausible. In this alternative view, Pgc1 is normally directly inserted into LDs from the cytosol. If this fails (e.g., when LDs are absent or simply due to some inefficiency), Pgc1 is then degraded via cytosolic quality control. Note that cytosolic QC of proteins with hydrophobic regions can be mediated by Doa10 (see Metzger et al., 2008; PMID 18812321), which would explain why Doa10 is needed, and why one can see an ER localization pattern. I favor this model because it would also explain why Cdc48 is NOT needed, and why the cytosolic ligase Ubr1 also contributes to the degradation, observations that in the authors' model are a little puzzling. This alternate model should certainly be considered and discussed. If a straightforward experiment exists to discriminate the two views, it is worth including.
2) Related to point 1, the authors' experiment to demonstrate Pgc1 traffic from the ER to LDs in Fig. 3A would appear to have an alternative explanation. In short, Fig. 3A is not a pulse-chase type experiment and hence one cannot be convinced that the ER Pgc1 observed at time 0 moves to the LDs observed at time 3h. It is equally possible that the ER population was degraded over this time (completely consistent with degradation kinetics shown in Fig. 1) while at the same time newly synthesized Pgc1 populates the newly formed LDs. It should be possible to discriminate these possibilities. One way is if photo-activatable GFP is used: activate the ER population at time 0 and follow it during LD induction. There may be other ways to test this as well.
The above two points need to be clarified before publication. Note that either model is interesting and worth publishing. My proposed alternative model would perhaps be less novel, as earlier work has described the phenomenon of mislocalized proteins being degraded in a mammalian system (PMID 24981174 and 21743475), although the factors are different.
Referee #3: The manuscript by Ruggiano et al. reports a new function for the endoplasmic reticulum associated degradation (ERAD) pathway. ERAD is a protein quality control mechanism that eliminates misfolded proteins from the endoplasmic reticulum to maintain protein homeostasis. In budding yeast, it employs two major ubiquitin ligases, Hrd1p and Doa10p. The authors followed up on their previous proteomic study, which identified a few lipid droplet proteins as potential substrates of the ERAD ubiquitin ligase Doa10 because the expression of these proteins was elevated in Doa10 deletion yeast cells. In this study, the authors demonstrate that this is due to increased stability. They showed that the lipid droplet (LD) proteins Pgc1, Dga1, and Yeh1 are unstable proteins whose degradation depends on the 26S proteasome. The degradation of these proteins also requires Doa10 and the cognate ubiquitin-conjugating enzymes Ubc6p and Ubc7p, but the Cdc48p ATPase is dispensable. To characterize this process, they focused on Pgc1. They showed that Pgc1 is normally partitioned between the ER and lipid droplets and only the ER pool is subject to degradation by the ERAD pathway. They further show that the long transmembrane domain of Pgc1 is both necessary and sufficient for degradation by the proteasome. The study identifies a new class of substrates for the ERAD pathway (some lipid droplet proteins mislocalized to the ER membrane), and suggests that ERAD may have a quality control function to remove mislocalized LD proteins. Overall, the conclusions are well supported by the results. I only have a few relatively minor points for the authors to consider, which I believe will make the paper a stronger candidate for EMBO. 1. The authors tested the degradation of Pgc1 in Cdc48 and Npl4 mutant yeast strains and found no effect. They mentioned that the strains were characterized previously by testing a bona fide ERAD substrate, whose degradation is strongly inhibited by mutations in the Cdc48 gene.
However, that was done separately. It would be more convincing if the authors could put Erg1 together with Pgc1 in the same strain and show that one is affected but not the other. I am a little bit concerned that the negative data here might be due to some trivial experimental error, because in a paper cited by the authors, a mammalian LD protein was recently demonstrated to be an ERAD substrate, but the degradation requires p97, the mammalian homolog of Cdc48. 2. In Figure 4C, the authors wish to demonstrate that oleic acid feeding, a procedure that induces LD formation, may stabilize Pgc1. However, the difference is too small to be significant. It may be better to compare the stability of Pgc1 in the mutant yeast cells that do not have LDs (those shown in Figure 3A before induction) and those that have Dga1 expression induced (the LD induction condition shown in Figure 3A), or to perform the oleic acid feeding experiment in mutant cells that do not have any LDs to begin with.
Minor points: 1. On page 9, the authors mentioned Erg6 as "a well-characterized LD marker protein". When I searched PubMed, I could not find many papers showing this as an LD marker. Please give the reference. 2. On page 13, the authors mentioned that their study reveals "a function for the ERAD pathway that is distinct from its role in protein quality control". In a broad sense, the newly demonstrated ERAD function can still be considered quality control, as it removes LD proteins mislocalized to the ER, which is analogous to the cytosolic quality control pathway that deals with mislocalized ER membrane proteins. 3. Figure 1B, what do I and U stand for? Please explain. 4. Can standard deviation be calculated from two experimental datasets? It may be more appropriate to say that the error bars indicate the range of two experimental repeats. For key experiments (such as those shown in Figure 4B), it would be more convincing if the authors provided 3 repeats instead of 2, so a p value can be added to the graph. Some quantification graphs appear too busy. It may be better to separate the curves into 2 or more panels, so the comparison is more obvious. The FRAP experiments need to state the number of LDs analyzed in the figure legend.
Response to Reviewers
We were very pleased to see that the reviewers found our study interesting, technically excellent and were generally supportive of publication in the EMBO Journal.
We are now submitting a revised manuscript in which the issues raised have been addressed. In particular, this version includes two additional key results. First, we took advantage of live cell microscopy and photoconversion experiments to follow a population of Pgc1 molecules. The results demonstrate that Pgc1 traffics through the ER en route to LDs and argue against Pgc1 being independently targeted to the ER and LDs. Second, we show that the Cdc48 ATPase function is necessary for Doa10-mediated degradation of Pgc1 and Dga1, which are strongly stabilized in cells expressing the tight allele cdc48-6. Importantly, we show that in cdc48-6 cells, polyubiquitinated Pgc1 accumulates and partitions with ER membranes, as assayed by membrane flotation analysis. These results are consistent with the role of Cdc48 in Pgc1 membrane extraction. In the previous version of the manuscript, we showed that the kinetics of Pgc1 and Dga1 degradation was essentially indistinguishable between wt cells and cells expressing the cdc48-3 allele, even though the steady-state levels of both substrates were higher in the mutant cells (these data are still presented as a supplemental figure). Our interpretation of these results is that residual Cdc48 activity in cdc48-3 cells is sufficient for ER extraction of LD proteins that associate with the membrane through a hydrophobic hairpin. Perhaps membrane extraction of hairpin-containing proteins like Pgc1 and Dga1 may need only a single Cdc48 ATPase cycle, while extraction of polytopic ERAD substrates might require processive and/or multiple rounds of Cdc48 activity. The ATPases of the proteasome 19S regulatory particle have also been implicated in membrane extraction of some ERAD substrates. However, we did not find any convincing evidence for a role of 19S ATPases in Pgc1 and Dga1 degradation. Finally, the number of replicates was increased for all the experiments.
We believe that the new data, together with additional minor points listed below, make the paper significantly stronger.
Reviewer 1: 1-We agree with the reviewer that, in general, the concept of spatial segregation of proteins leading to membrane heterogeneity has been described in different contexts. However, to our knowledge, the INM and the LDs proteins (described here) are the only examples in which this segregation is facilitated by spatially restricted protein degradation. In both cases ERAD is involved suggesting that this pathway, besides its well-characterized function in degrading misfolded proteins, also plays important roles in determining ER architecture. Moreover our work expands the knowledge on the degradation of mislocalized proteins. Previous work showed that proteins that fail to target or that target the wrong membrane are detected as mislocalized (this work is now discussed and appropriate references included). We now show that cells also have the means to discriminate mislocalized proteins, even if these are in the same membrane, and raise the possibility that this spatial quality control is important in generating functional subdomains in a continuous membrane system, as is the case of the ER.
This manuscript describes a body of evidence that the ERAD ubiquitin ligase Doa10 is important for consolidating the LD-specific localization of certain LD resident proteins. This question of LD-specific localization is an interesting one because the phospholipid shell that encases LDs is contiguous with the cytoplasmic leaflet of the ER and many LD proteins are inserted into the ER prior to being sorted into LDs. The basic findings of this MS are that Doa10 plays an important ubiquitin- and proteasome-dependent function in degrading LD proteins in the ER
efficiently back into the ER. On the other hand, the fact that Doa10 is a polytopic membrane protein (with multiple luminal loops) precludes its diffusion into LDs. All together these data lead to a simple model in which the LD pool of Pgc1 is physically separated from the ERAD machinery and as such protected from degradation. This model is simply based on the physical separation of the substrate from its degradation machinery.
With the tools used in our study we cannot pinpoint the hairpin structural features. However, we postulate that hairpins that evolved to localize to LD monolayers (such as the ones of Pgc1 and Dga1) reveal some conformational instability while in the ER bilayer. As a consequence, they are recognized by Doa10 as quality control substrates. We agree with the reviewer that if and how ERAD recognizes misfolded substrates targeted to the LD surface by amphipathic helices is an interesting question. However, we believe it is outside of the scope of this study.

1-Analysis of Pgc1 membrane association by alkaline treatment indicated that it was stably inserted in membranes both in wt and doa10Δ cells. This was further supported by the membrane flotation analysis (presented now). Given the well-characterized role of Cdc48 in membrane extraction of ubiquitinated ERAD substrates, like the reviewer, we were puzzled by the lack of a phenotype in cdc48-3 mutant cells. Therefore, we decided to reevaluate the requirement of Cdc48 function using a different, presumably more stringent, Cdc48 allele (cdc48-6). Inactivation of Cdc48 function in cells expressing cdc48-6 strongly delayed the degradation of both Pgc1 and Dga1. Moreover, we detected increased levels of ubiquitinated Pgc1 associated with membranes in the cdc48-6 cells. Together these results indicate that Cdc48 function participates in the membrane extraction of ER-localized LD proteins during ERAD. As mentioned above, we interpret that residual Cdc48 activity in cdc48-3 is responsible for the discrepancy between the two alleles. This result suggests that, while extraction of polytopic ERAD substrates requires processive and/or multiple rounds of Cdc48 function, membrane extraction of hairpin-containing proteins like Pgc1 and Dga1 may need only a single or a few Cdc48 ATPase cycles.
2-We expressed Pgc1 fused to the green-to-red photoconvertible fluorescent molecule tdEOS and followed the fate of ER-localized EOS-Pgc1 upon induction of LD formation. We found that EOS-Pgc1 photoconverted at the ER concentrated at LDs. This is consistent with the idea that Pgc1 transits through the ER on its way to LDs and argues against the model that Pgc1 is independently targeted to ER and LDs. Together with point 1, this experiment strongly supports the model presented in the paper and we thank the reviewer for suggesting it. Finally, we discussed the similarities and differences of the protein spatial control of LD proteins and previously described degradation of mislocalized proteins.

In Figure 4C, the authors wish to demonstrate that oleate feeding, a procedure that induces LD formation, may stabilize Pgc1. However, the difference is too small to be significant. It may be better to compare the stability of Pgc1 in the mutant yeast cells that do not have LDs (those shown in Figure 3A before induction) and those that have Dga1 expression induced (LD induction condition shown in Figure 3A), or to perform the oleate feeding experiment in mutant cells that do not have any LDs to begin with.
Minor points: 1. On page 9, the authors mentioned Erg6 as "a well-characterized LD marker protein". When I searched PubMed, I could not find many papers showing this as an LD marker. Please give the reference.
2. On page 13, the authors mentioned that their study reveals "a function for the ERAD pathway that is distinct from its role in protein quality control". In a broad sense, the newly demonstrated ERAD function can still be considered as quality control, as it removes LD proteins mislocalized to the ER, which is analogous to the cytosolic quality control pathway that deals with mislocalized ER membrane proteins. 3. For some figures (Figure 1B, Figure 4B), it would be more convincing if the authors provided 3 repeats instead of 2, so that a p value can be added to the graph. Some quantification graphs appear too busy. It may be better to separate the curves into 2 or more panels, so the comparison is more obvious. The FRAP experiments need to state the number of LDs analyzed in the figure legend.
1-Please see our response to Reviewer 2, point 1. Also, we would like to clarify that in all the experiments with Cdc48 mutant alleles, the analysis of the control ERAD substrate Erg1 was always performed in the same cells. In all cases we used a previously described anti-Erg1 antibody to look at endogenous Erg1. 2-We agree with the reviewer that in the original manuscript there was only a marginal effect of the oleate feeding on the kinetics of degradation of Pgc1. Now, by performing longer oleate feeding, preventing triglyceride lipolysis, or both, we detect more pronounced delays in Pgc1 degradation. We note that the strength of the effect increases with the expansion of the LD surface, consistent with the notion that the LD pool of Pgc1 is protected from Doa10-dependent degradation. In agreement with this conclusion, the kinetics of degradation of Vma12-Ndc10C', a Doa10 substrate that does not localize to LDs, was unaffected under all the tested conditions.
Minor points: 1-References have been added. 2-We now reference and discuss our data in light of previous findings on the role of quality control pathways in degrading mislocalized proteins. 3-The labelling of Figure 1B

Thank you for submitting your revised manuscript for our consideration. It has now been seen once more by the original referees (see comments below), and I am happy to inform you that they are broadly in favor of publication, pending satisfactory minor revision.
I would therefore like to ask you to address referee #3's remaining concern and to provide a final version of your manuscript.
The authors have responded strongly and positively to the reviewers' criticisms. Two additional lines of experimentation are provided. The first is a photoconversion assay that in effect represents a pulse-chase experiment to demonstrate that Pgc1 loads into LDs from an ER pool rather than a cytosolic pool. These data are consistent with the ERAD machinery degrading Pgc1 in the ER when loading into LDs is inhibited. Second, the authors resolve what was a paradoxical result regarding the role of Cdc48 in the degradation reaction. Using a tighter allele, they now find Cdc48 is indeed required for Pgc1 degradation in the ER, therefore making this a canonical ERAD reaction. The demonstration that polyubiquitinated Pgc1 accumulates in the ER of cdc48-6 mutants supports that conclusion.
The technical quality of the work is high, it is clearly written, and most definitely deserves publication. This reviewer remains on the fence as to whether their refined definition of spatially controlled degradation as it relates to ERAD and LD cargo is of sufficient conceptual novelty to merit publication in EMBO J. This reviewer remains of the opinion that this quality work is better suited for a more specialized journal.
Referee #2: The authors have addressed the two main issues I had raised with additional experiments and discussion. The new results are convincing, and I am happy to recommend publication of this interesting study in EMBO journal.
Referee #3: The authors have addressed my criticisms adequately. There is only one minor issue remaining. In the previous version, the authors used the cdc48-3 allele and an Npl4 mutant strain. They found that inactivation of Cdc48 or its co-factor did not affect ERAD of Pgc1, but now in the revised version, they used a different Cdc48 mutant strain to reach the opposite conclusion. They argue that the two different Cdc48 mutant strains may not cause a similar degree of loss-of-function, but the readers are left wondering about the cause of this difference. It would be nice if the authors could sequence the Cdc48 gene in these two strains, which may allow them to provide a plausible explanation for such a difference. At minimum, the authors should provide the source and a reference for the cdc48-6 allele.
2nd Revision - authors' response 01 June 2016

We are submitting a final version of the manuscript in which we addressed the minor issue raised by reviewer #3. Previous studies describing the phenotypes of the CDC48 alleles used in our work are now referenced. Moreover, we followed the reviewer's suggestion to sequence the two alleles. The data show that cdc48-3 contains mutations only in the D1 ATPase domain (P257L and R387K), whereas in agreement with its tighter phenotype, cdc48-6 contains mutations in both D1 (P257L) and D2 (A540T) ATPase domains. These findings support our conclusion that membrane extraction of hairpin-containing LD proteins (like Pgc1 and Dga1) may require only residual Cdc48 activity. These results are now included in the final manuscript.
Accepted 02 June 2016
Thank you for submitting your revised manuscript to us. I appreciate the additional insight into the cdc48 alleles offered and I am happy to inform you that your manuscript has been accepted for publication in the EMBO Journal.
Congratulations!
|
v3-fos-license
|
2020-06-18T09:09:39.974Z
|
2020-01-01T00:00:00.000
|
226480531
|
{
"extfieldsofstudy": [
"Computer Science"
],
"oa_license": "CCBY",
"oa_status": "GOLD",
"oa_url": "https://doi.org/10.5267/j.jpm.2020.3.002",
"pdf_hash": "44b351aec7c3213a5bf9af337ed162eb2ec0325c",
"pdf_src": "MergedPDFExtraction",
"provenance": "20241012_202828_00062_s4wg2_01a5a148-7d10-4d24-836e-fe76b72970d6.zst:2422",
"s2fieldsofstudy": [
"Engineering"
],
"sha1": "9fbde873d4ac701dbbe82d2f77466ecfaa1bdbd9",
"year": 2020
}
|
pes2o/s2orc
|
Predicting project duration and cost, and selecting the best action plan using statistical methods for earned value management
Article history: Received: March 1, 2020; Received in revised format: March 7, 2020; Accepted: March 19, 2020; Available online: March 19, 2020

Nowadays, with the increasing number of projects in organizations, managers are keen to manage and control projects, and as projects become more complex and entail unforeseen risks, project management becomes even more critical. Using the earned value management method to monitor the current status of projects and to predict their future status has many advantages. Representing the status of the project and predicting project performance through various indicators are among the features of this method. Since these indicators are deterministic, the risk of the prediction increases as the number of risks in a project goes up. Therefore, statistical methods can be employed to estimate the statistical distribution of risks and notably boost prediction accuracy. The purpose of this paper is to present a method for predicting the duration and cost of a project and selecting the best action plan. Control charts are used along with the earned value management method to increase accuracy. This model also provides the possibility of selecting the best action plan to improve project performance. Moreover, this method can be applied in each project phase separately. Finally, a case study is used to investigate the validity of the proposed method. © 2020 by the authors; licensee Growing Science, Canada.
Introduction
In today's world, planning and controlling are essential issues in project management that have numerous effects on different aspects of projects, such as lowering project duration and cost (Ghorbani et al., 2019). Measuring project performance and determining project progress compared to its baseline has always been a concern of project managers. They are keen to find simple ways of representing project performance. One of the most effective project management methods is the earned value management method. This method was first used in 1960 as a method of financial analysis in the defense industry of the USA. In 1967, this method was used for financial control and analysis. Then, with the expansion of this method, a standard named ANSI EIA 748-A was published in the United States (Valle, 2006). Finally, this method was adopted as the most crucial method of project performance control in the project management standard in 2000. After that, the first and second versions of the practice standard for earned value management were issued by the American Project Management Institute in 2005 and 2011, respectively (Fleming & Koppelman, 2016). This method and its indicators provide the possibility of checking the current status of the project and its performance. The indicators of this method also facilitate the prediction of the future status of projects. Since earned value management is one of the crucial components of project management and control, a significant stream of research has been devoted to earned value management from various perspectives in recent years. Furthermore, multiple extensions of traditional EVM forecasting approaches have been proposed (Batselier & Vanhoucke, 2017). These studies concern the application of statistical methods in predicting project duration and cost using earned value management indices.
Among these studies, we can refer to Liu and Lin (2008), who set up individual control charts for CPI and CPI⁻¹ to evaluate project performance. In another study, Hunter et al. (2014) focused on the implementation of the earned value management method on the RBSP projects and analyzed the benefits of this method to provide insight into cost/benefit considerations regarding its implementation. Mortaji et al. (2015) used change point analysis to derive relatively reliable cost and performance indicators. They showed how that model can be used to obtain more accurate estimates of the final cost and duration of projects. Moreover, Chen et al. (2016) proposed a method for improving the predictive power of planned value and applied the model to four case projects. Baqerin et al. (2016) presented a model for estimating schedule performance. Their model concentrated on the recurring nature of the main activities to forecast schedule performance. Also, Khamooshi and Golafshani (2014) developed a method that decoupled schedule and cost performance measures. They also introduced new indicators with broader applications for measuring schedule performance.
In all of the aforementioned studies in the literature, the earned value management approach has been used to control and predict the status of projects on a deterministic basis, which can be inaccurate considering project risks. Besides, project phases are not considered in calculating the earned value management indicators, whereas project phases differ from each other. Moreover, these studies focused on estimation, and action plan selection was not considered. Thus, the purpose of this paper is to present a method for project duration and cost estimation and best action plan selection using statistical methods. This method provides a confidence interval for predicting project duration and cost in each phase. It also suggests the best action plan using expert judgment. Since the proposed method is applied in each phase separately, its accuracy is increased. This paper is structured as follows. In the second section, the concept of earned value management is described. The proposed method is presented in Section 3. The validity of the presented method is examined in the fourth section. Finally, conclusions of this research and suggestions for future research are delineated in Section 5.
Concepts and definitions
This section explains the concepts related to research, such as the method of earned value management and earned schedule.
Earned Value Management
Earned value management is a systematic way of integrating, measuring, and comparing trends of cost, time, and scope in a project (PMBOK, 2017). This method is one of the most common tools for evaluating project performance. It has three main parameters, which are discussed below (Valle, 2006): Planned Value (PV): The budgeted cost of planned work, i.e., the budget that should have been spent on activities according to the plan. Planned value is obtained by multiplying the planned percentage of project progress by the project budget.
Earned Value (EV): It is the value of work done. The earned value for each activity is obtained by multiplying the percentage of actual activity progress by the project budget.
Actual Cost (AC): It is the actual cost spent on the performed activities, including all fixed and variable costs.
Budget at Completion (BAC): It is the planned cost of the entire project. Budget at completion is equal to the planned value at the final stage of the project.
Based on the above indices, other indices can be calculated, which provide more accurate information about the current status of the project. Some essential indicators in the earned value management method are presented below.

Cost Variance (CV): This indicator shows the difference between the earned value and the actual cost, as calculated in Eq. (1):

CV = EV - AC (1)

Schedule Performance Index (SPI): This indicator is the ratio of the earned value to the planned value, as given in Eq. (4):

SPI = EV / PV (4)
In addition to these indicators, the earned value management method has some indicators to predict the future status of projects. This forecast relates to the schedule or cost performance of the project. One of the most important predictors in the earned value method is the estimated cost at completion (EAC(c)). This index is calculated as shown in Eq. (5):

EAC(c) = BAC / CPI (5)
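As a rough illustration of how these indices fit together, consider the following sketch (our own helper, not from the paper; the numeric values are invented):

```python
def evm_indices(pv, ev, ac, bac):
    """Basic earned value management indices (all inputs in one currency unit)."""
    cv = ev - ac          # cost variance, Eq. (1): CV = EV - AC
    cpi = ev / ac         # cost performance index: CPI = EV / AC
    spi = ev / pv         # schedule performance index, Eq. (4): SPI = EV / PV
    eac_c = bac / cpi     # estimated cost at completion, Eq. (5): EAC(c) = BAC / CPI
    return {"CV": cv, "CPI": cpi, "SPI": spi, "EAC(c)": eac_c}

# A project that has overspent (AC > EV) and is behind plan (EV < PV):
print(evm_indices(pv=100.0, ev=90.0, ac=120.0, bac=1000.0))
```

A CPI below one inflates EAC(c) above BAC, which is exactly the over-budget warning the method is designed to give.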
Earned Schedule
The earned schedule method is a part of the earned value method that has a different approach toward measuring schedule indices. This method was first introduced by Walter Lipke (2003). In this method, it was shown that the values of the schedule variance and the schedule performance index given by the earned value method will be zero and one, respectively, when projects are completed with delay. It means that although projects are not completed within the planned duration, the schedule indices show that the project has been finished on time. In other words, those two indices undergo sudden changes in their trends (Colin and Vanhoucke, 2016). This also happens after the end of projects. Moreover, since the schedule performance index is expressed in monetary terms, a method is needed to calculate project schedule indices in time units. Therefore, the earned schedule method is used to calculate the schedule variance and the schedule performance index. This method helps us calculate project schedule performance based on a time unit. The earned schedule can be calculated as depicted in Eq. (6) (Lipke, 2003):

ES = C + I (6)

where C is obtained by comparing the earned value with the planned values in each period (the number of complete periods for which the planned value does not exceed the earned value). The method of calculating the value of I is shown in Eq. (7):

I = (EV - PV_C) / (PV_(C+1) - PV_C) (7)

Using the earned schedule method, the time-related indices can be obtained in a new way. Schedule Variance (SV(t)) and Schedule Performance Index (SPI(t)) are obtained using Eqs. (8) and (9), respectively:

SV(t) = ES - AT (8)

SPI(t) = ES / AT (9)

where AT is the time of measurement of the earned value index. The calculation of the schedule variance and the schedule performance index is thus based on the horizontal axis (time) rather than the vertical axis (cost).
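The earned schedule computation above can be sketched as follows (a minimal illustration of Eqs. (6)-(9); function and variable names are ours):

```python
def earned_schedule(pv_series, ev, at):
    """Earned schedule indices (Lipke, 2003).

    pv_series: cumulative planned value at the end of each period,
    ev: current earned value, at: actual time in periods.
    Returns (ES, SV(t), SPI(t)).
    """
    # C = number of complete periods for which PV <= EV
    c = sum(1 for pv in pv_series if pv <= ev)
    # I = linear interpolation within the incomplete period, Eq. (7)
    if c == 0:
        i = ev / pv_series[0]
    elif c == len(pv_series):
        i = 0.0
    else:
        i = (ev - pv_series[c - 1]) / (pv_series[c] - pv_series[c - 1])
    es = c + i                       # Eq. (6): ES = C + I
    return es, es - at, es / at      # SV(t), Eq. (8); SPI(t), Eq. (9)

# Plan: cumulative PV of 100, 250, 450, 700 at the end of months 1-4.
# At month 3 the project has earned 350, i.e., work planned for month 2.5.
es, sv_t, spi_t = earned_schedule([100, 250, 450, 700], ev=350, at=3)
```

In this invented example ES = 2.5 months, so SV(t) = -0.5 and SPI(t) ≈ 0.83, i.e., the project is half a month behind schedule in time units rather than monetary ones.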
Similar to the estimated cost at completion, there is also an indicator named estimated duration at completion (EAC(t)), as shown in Eq. (10) (Martens and Vanhoucke, 2018):

EAC(t) = PD / SPI(t) (10)

where PD is the planned duration of the project.
The proposed method
In this section, the prediction method is presented. First, the relevant indicators are introduced, then the prediction of these indices is made by a control chart. Finally, the best action plan is selected.
Indicators selection
The control chart is one of the methods of quality control. Thus, by determining the mean and standard deviation of a variable, a control chart can be applied to it. The limit bars of the control chart vary depending on the rate of type-I error (α). Also, the probability distribution function of the variable must be determined. When the variables follow the Normal distribution, the accuracy of the control charts and their limits increases. This study aims to use indices for plotting control charts that indicate the status of the project and follow the Normal distribution. Since time and cost are identified as the two measurable factors in project management, indicators that represent these two factors are used. Lipke (2002) examined the probability distribution functions of the earned value indices as well as the earned schedule. Various tests were employed to investigate the normality of the indices, including the Kolmogorov-Smirnov test, the Chi-Square test, and the Anderson-Darling test. The indices examined in that study are CPI⁻¹, ln CPI⁻¹, SPI⁻¹, CV, and ln SPI(t)⁻¹. Finally, the results showed that the indices follow the Normal distribution. As ln SPI(t)⁻¹ and ln CPI⁻¹ represent the project performance in terms of time and cost, respectively, they are selected for drawing the control charts.
Drawing a control chart for indicators
In this step, the control charts are drawn for the selected indices. It should be mentioned that control limits are normally used for a process. An important feature of a process is that it performs a repetitive activity over a long time. So, there is no need to change the initial process control limits afterwards, because the process does what it is supposed to do in the long run once it starts. However, the nature of a project is somewhat different from a process. Due to the uncertain nature of projects, a fixed control limit cannot be used to evaluate the performance of a project during its lifecycle, and different control limits should be used depending on the project phase as well as the activities carried out there. Besides, projects have definite start and finish dates and are not repeated in the long run. So, a factor is needed to adjust the control limits. In another study, Lipke et al. (2009) explored the use of statistical methods on the earned value management indicators as well as the earned schedule indicators to predict project output. That study introduced adjustment factors for the schedule and cost performance indices; their calculation is shown in Eq. (11) and Eq. (12), respectively, where n is the time of index calculation and PD is the planned duration of the project. Given these considerations on using control charts for projects, the limits for controlling the schedule performance index and the cost performance index can be obtained using Eqs. (13) and (14), respectively.
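Since Eqs. (11)-(14) are not reproduced above, the following is only a generic sketch of such limits: it centers the chart on the sample mean and widens it by z standard deviations, with `af` standing in as a placeholder for the paper's period adjustment factor (not the paper's actual formula):

```python
import statistics

def control_limits(index_series, af=1.0, z=3.0):
    """LCL, CL, UCL for a Normally distributed index such as ln(CPI^-1).

    af is a placeholder for the period adjustment factor of Eqs. (11)-(12);
    z = 3 gives the conventional three-sigma limits.
    """
    cl = statistics.mean(index_series)        # center line
    sd = statistics.stdev(index_series)       # sample standard deviation
    return cl - z * sd * af, cl, cl + z * sd * af
```

An `af` below one tightens the limits as the project advances, reflecting the shrinking uncertainty toward completion.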
Predicting the duration and cost of projects
Using the upper and lower control limits for each indicator, one can make the most optimistic and most pessimistic forecasts for project duration and cost, and provide prediction intervals. Since the natural logarithm of the inverse of each index is used to plot the control charts, the highest index value, which corresponds to the best project performance, maps to the lower control limit of the chart. Similarly, the lowest index value corresponds to the upper control limit.
Predicting the duration and cost at completion
Using the upper and lower control limits calculated for ln SPI(t)⁻¹, an optimistic and a pessimistic prediction for the completion duration of the project are calculated using Eqs. (15) and (16); the resulting EAC(t) values represent the optimistic and pessimistic predictions for the completion duration of the project, respectively. Similarly, an optimistic and a pessimistic prediction for the completion cost of the project are calculated from the upper and lower control limits calculated for ln CPI⁻¹, as shown in Eqs. (17) and (18); the resulting EAC(c) values represent the optimistic and pessimistic predictions for the completion cost of the project, respectively.
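To make the mapping concrete, here is a hedged sketch (our own; symbols assumed, since Eqs. (15)-(18) are not reproduced above) of how chart limits translate into prediction intervals: a limit L on ln(index⁻¹) corresponds to an index value exp(-L), so the lower control limit yields the optimistic estimate and the upper control limit the pessimistic one:

```python
import math

def eac_intervals(pd_, bac, limits_t, limits_c):
    """Optimistic/pessimistic completion estimates from chart limits.

    pd_: planned duration, bac: budget at completion,
    limits_t: (LCL, UCL) of the ln(SPI(t)^-1) chart,
    limits_c: (LCL, UCL) of the ln(CPI^-1) chart.
    """
    lcl_t, ucl_t = limits_t
    lcl_c, ucl_c = limits_c
    # ln(index^-1) = L  =>  index = exp(-L); the LCL maps to the best performance.
    eac_t = (pd_ / math.exp(-lcl_t), pd_ / math.exp(-ucl_t))   # duration interval
    eac_c = (bac / math.exp(-lcl_c), bac / math.exp(-ucl_c))   # cost interval
    return {"EAC(t)": eac_t, "EAC(c)": eac_c}
```

With invented limits of (-0.1, 0.2) on the schedule chart, a 10-month plan maps to an interval of roughly 9.0 to 12.2 months.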
Selecting the best action plan
Based on the control limits calculation discussed in subsection 3.2, the selected indicators are categorized into four zones, named A, B, C, and D, as presented in Table 1. This table enables a clearer classification of project status.
The status of any project can be classified into six states (Kerzner, 2015). Accordingly, six states for project performance, namely "Perfect", "Good", "Normal", "Caution", "Bad", and "Critical", are defined, as shown in Table 2. Based on the defined states, the best action plan for each project phase is obtained using expert opinions.
Implementing the proposed method on a case study
In this section, the duration and cost forecasting method introduced above is implemented on a real project and the best action plan is selected. Hence, a case study is introduced, and after calculating the control limits for the indicators, the duration and cost of the project are estimated.
Introducing a Case Study
In this paper, information from a database introduced by Batselier and Vanhoucke (2015) is used as a case study. This database is available at (OR-AS, 2019). The selected project concerns the construction of a residential building in the Netherlands. A summary of the approved project information is shown in Table 3:
Calculating Control Limits
To calculate the control limits, current project information is needed. Project progress information at the end of the third month is depicted in Table 4:
Analyzing Results
To predict the project's duration and cost, optimistic and pessimistic predictions were calculated. The initial predictions for duration and cost of the project shown in Table 1 are within the prediction interval, which confirms the validity of the proposed method. Besides, a large percentage of the prediction interval for the project cost is less than the budget of the project at the project baseline. In other words, the centerline of the estimated cost of the project is less than the total budget of the project, which indicates the project is in good condition in terms of cost. Moreover, a large proportion of the predicted interval for duration is greater than the actual completion duration. It indicates that the project is likely to take longer than the projected duration.
Selecting the Best Action Plan
Information from a particular time track of the current phase of the project is obtained to select the best action plan. Hence, the information of the project 180 days after project commencement is shown in Table 5. Then, by calculating ln SPI(t)⁻¹ and ln CPI⁻¹ and comparing the values of these indicators with the control limits calculated in subsection 4.2, the current state of the project is determined; it is "Normal". To obtain accurate information about selecting the best action plan considering the current state of the project from multiple perspectives, 24 project management experts were interviewed. The best actions in each state gathered from the experts are shown in Table 6:
Conclusion
One of the most critical issues in projects is the measurement of project performance. Several studies regarding project duration and cost prediction have been conducted, and a vast number of them propose models to predict the future of the project. In all of the models proposed in the literature, the phase of the project is not considered, and a static approach is used to predict the final state of the project, whereas projects involve many risks and are unrepeatable by nature. Besides, these models do not provide any method to select the best action plan considering the current state of the project. Although the application of statistical methods in project management is not new, their simultaneous use for project duration and cost prediction and best action plan selection is novel. In this paper, a method for predicting the duration and cost of a project at completion is developed. This model provides an interval prediction of the future status of the project using the earned value management method. Moreover, this model can dynamically provide the best action plan based on the current state of the project. The earned value management method is used as one of the most popular ways to measure project performance and predict project duration and cost. There are three main advantages of the proposed method. First, since control charts for appropriate indicators are used, the accuracy of the model in predicting the duration and cost of the project is increased. Second, the model provides the possibility of selecting the best action plan to control and improve project performance. Third, the method is applied in each phase separately, which further increases the accuracy of the model. However, one of the main limitations of this research is that, since the best action plan is obtained using expert judgments, sensitivity analysis on the result is impossible.
Future research can overcome this limitation by developing a mathematical model that integrates forecasting the final state of the project with selecting the best action plan.
Seven Faces of A Fatwa: Organ Transplantation and Islam
A new fatwa was announced by the British National Health Service (NHS) in June 2019 to clarify the Islamic position on organ donation. Additionally, the NHS promotional material presents brief arguments for and against organ donation in Islam. To date, however, systematic research into the various fatwas on organ donation has been lacking. This article goes beyond the dichotomous positions mentioned by the NHS to explore and summarise seven conflicting views on the issue, extrapolated from an exhaustive reading of fatwas and research papers in various languages since 1925. Our discussion is circumscribed to allotransplant and confined to the gifting of organs by adult donors who were legally competent at the time of consent. The arguments examined include an analysis of the semantic portrayal of ownership in the Qur'an; considering the net benefit over the gross harm involved in organ donation; balancing the rights of the human body against the application of the rule of necessity; the difference between anthropophagy and organ transplantation; the understanding of death; and the conceptualisation of the soul. We argue that, given the absence of clear-cut direction from Muslim scripture, all seven positions are Islamic positions and people are at liberty to adopt any one position without theological guilt or moral culpability.
Introduction
In June 2019, the British National Health Service (NHS) announced a new independent legal opinion (henceforth fatwa) clarifying the Islamic position on organ donation (NHSBT 2019). The fatwa was published in the wake of 'The Organ Donation (Deemed Consent) Act 2019' receiving royal assent in March 2019 and scheduled to become law in April 2020 (Department of Health and Social Care 2019). The imminent change in law from express consent to an 'opt-out' system, where consent is deemed to be the default position, set off a flurry of action and campaigns in the Muslim community. Organisations such as the British Islamic Medical Association (BIMA), with funding from the NHS Blood and Transplant service, ran numerous workshops and webinars promoting organ donation in the Muslim community (BIMA 2019). On the other hand, there was a pushback from some quarters of the Muslim community accusing the government of meddling in Muslim affairs (Master 2019).
To provide potential Muslim donors with a choice, the NHS promotional material presents brief arguments for and against organ donation in Islam. What is the correct Islamic position on organ transplantation? Why are some Muslims campaigning in support of organ donation while others oppose it? In this article, we go beyond the two positions mentioned by the NHS and explore seven conflicting views on the issue. We demonstrate that the Islamic discussion on organ transplantation is highly technical and multi-faceted. The evidence provided for and against it is simultaneously opaque and porous, giving rise to a multitude of understandings. We posit that all seven positions are valid Islamic positions, expanding the range of choices hitherto offered by the NHS. We recommend that people consult with their families, imams, chaplains, doctors and significant others to arrive at a theologically informed decision about donating their organs, and the organs of their loved ones.
Methodology and Scope of the Article
The positions mentioned in this article were gleaned from an exhaustive reading of fatwas and research papers in Arabic, Urdu, and English. The initial search was conducted by inputting keywords related to organ transplantation into the 'Islamic Medical and Scientific Ethics Database' (IMSE Project), a collaborative effort between two Georgetown libraries, the Bioethics Research Library (Washington DC) and the Georgetown University Qatar Library (Doha) (Shabana et al. 2009). Certain Qur'anic concepts such as 'milkiyya', 'rūḥ' and 'nafs' were analysed using the ArabiCorpus tool, which was developed by Professor Dil Parkinson of Brigham Young University. The tool consists of over 174 million Arabic words, of which 77% are from newspapers, 28 million words from non-fiction literature, 9 million words from premodern literature and 1 million words of modern literature (Parkinson 2013). In explicating the seven positions, we confine ourselves to analysing the major arguments for each position, as a detailed analysis of all arguments would surpass the word count limit of this article. Moreover, our discussion is circumscribed to allotransplant, i.e., receiving from and donating to another human being. Autotransplant, xenotransplant, and donation for medical research are not the focus of this article, as these transplantations come with another set of ethical issues not discussed here (see Padela and Duivenbode 2018 for some of these issues).
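The frequency and collocation queries described above can be illustrated with a short sketch. The mini-corpus, its transliteration, and the list of word forms below are invented purely to show the mechanics; ArabiCorpus itself is a web interface, not a Python library.

```python
from collections import Counter

def collocates(tokens, targets, window=2):
    """Count words occurring within `window` tokens of any target form."""
    counts = Counter()
    for i, tok in enumerate(tokens):
        if tok in targets:
            lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
            for j in range(lo, hi):
                if j != i:
                    counts[tokens[j]] += 1
    return counts

# Invented toy corpus (transliterated), purely illustrative:
corpus = "la yamlik al-insan shay'an wa allah yamlik kulla shay'in".split()
forms_of_malaka = {"yamlik", "malik", "malaka"}  # hypothetical form list
print(collocates(corpus, forms_of_malaka, window=1))
```

A collocation profile of this kind is what underlies the article's later observation about which kinds of objects appear alongside 'yamlik' and its variant forms.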
In conducting our research, we have upheld certain conventions without challenging them. Thus, our discussion on live organ donation is confined to the gifting of non-vital organs, as there is a consensus on the impermissibility of donating vital organs by a living person (IFC 2003; IIFA 1988). Additionally, we automatically excluded the issue of donating male and female reproductive glands from both living and cadaver donors from our discussion; its impermissibility has not been challenged by anyone (Albar 1994). However, we do not exclude womb transplantation, since some scholars have conditionally permitted it (for transplantation into women only), arguing that the womb has no influence on the genetic makeup of a child (Shawqī 2018b). With respect to donor types, we have focused on adult donors who were legally competent at the time of consent. The arguments in this article do not extend to the minor or the legally incompetent person, as there are complicated ethical issues associated with them; a discussion which is beyond the scope of this article. Finally, there are other peripheral issues associated with organ transplantation, including directed organ donation, inter-faith organ donation, the status of organs of criminals, the issue of consent and deemed consent, and the ethics of organ reception (see Rispler-Chaim and Duguet 2018). These will be discussed in subsequent articles.
Preliminary Remarks
Before delving into the different positions on organ transplantation, some remarks on the Islamic sources employed in this discussion are in order. Organ transplantation is an issue that is conspicuous by its absence in Muslim scripture. Scholars discussing the topic creatively entertain what God would have wanted had He pronounced on the subject. The starting point of all discussions is silence. Scripture is drawn upon to explain related abstract topics such as ownership of the body (Sachedina 2011, p. 176), human dignity and the prohibition of mutilation (Ebrahim 1995, pp. 292-93; Sachedina 2011). These are abstract concepts which can be argued either way depending on who is interpreting them. Thus, the discussion on organ transplantation falls within the domain of 'legal discretion' (ijtihād) (Moosa 1998, p. 293), which is the reason why there is a plurality of opinions.
A fatwa is the product of ijtihād; it is a non-binding legal opinion provided by a specialist trained in Islamic law known as a muftī. Given the complex nature of technology and specialised knowledge, current practice in the Muslim world is to hold conferences which bring together a group of specialists, including muftīs, medical doctors, lawyers and other professionals, depending on the nature of the conference. The collective deliberations at such conferences have led to the birth of a novel mode of reasoning and a new way of arriving at religious verdicts known as ijtihād jamāʿī (collective legal reasoning) (Caeiro 2017; ʿAbdullāh 2010; Ghaly 2012a; IFC 2004). The resolutions arrived at in these conferences have more legal force than the fatwa of a lone muftī, since government legislators are present at those conferences. For example, the resolution passed by the International Islamic Fiqh Academy of Jeddah (IIFA) in 1988 led to the Saudi Government adopting it as its official position on organ transplantation. Ali AlBar writes in his exhaustive study on organ transplantation that, as a result of this law, for the period up to 1991, Saudi Arabia saw 823 successful kidney transplants: 352 of these were procured from patients whose deaths were determined using neurological criteria, and 471 were donations from living family members (Albar 1994).
The first Muslim discussion on organ transplantation at our disposal is by the Saudi scholar ʿAbd al-Raḥmān al-Saʿdī (d. 1965) and dates back to 1925. Al-Saʿdī stages his discussion as a debate between opponents and proponents of organ transplantation without mentioning which side he is on. Nevertheless, it is not difficult to distil his position from the article. The Saudi bioethicist Abdullah Aljoudi, in preparation for presenting his research as a poster at the Harvard bioethics conference 2018, quantified al-Saʿdī's fatwa. Aljoudi observes that out of the 1,476 words of the article, al-Saʿdī utilises 22.6% of the words describing the prohibition position; the bulk of his article (56.9%) is used to simultaneously respond to the opponents of organ transplantation as well as to argue in its favour (Aljoudi 2018). Interestingly, al-Saʿdī's discussion focuses on blood transfusion and corneal transplants. This focus is understandable given the state of transplant medicine during al-Saʿdī's time.
The early part of the 20th century saw a shift in the world of organ transplantation. Al-Saʿdī's discussion originated before the major breakthroughs were made. Thus, he provides a more generic guideline, which places trust in medical professionals and strongly encourages jurists and medical experts to collaborate on the matter. The discussion also highlights that as the medical field advances, what may once have been prohibited due to harm may be permitted due to greater potential benefits (Maravia 2019).
Innovation in medical treatment is viewed positively by al-Saʿdī. Accordingly, the benefits of transplantation to patients are expected to outweigh the harms brought to the donor. Al-Saʿdī's faith in the medical field may have been solidified by the two decades of successful corneal transplants since 1906, as well as the effective use of defibrillators in Europe. Likewise, subsequent decades saw the discovery of tissue typing and immunosuppressant drugs in the 1970s, ensuring more effective treatments. Al-Saʿdī, therefore, appears to have envisioned the trajectory of medical breakthroughs rather than feared the possible harms to living donors (Maravia 2019).
Earlier discussions and fatwas on organ transplantation focus primarily on individual organs and tissues rather than providing a fatwa for the entire body. The earliest discussions focused on blood transfusion (Al-Saʿdī [1925] 2011; Makhlūf 1951; Maʾmūn 1959a), followed by cornea tissue (Maʾmūn 1959b; Al-Harīdī 1966) and skin grafts (Al-Khāṭir 1973). Only in the late 1960s did a general discussion on organ transplantation take place, rather than discussions of individual organs (Shafīʿ [1967] 2010; Gād al-Haq 1979). Recent fatwas on organ transplantation delve into novel and non-routine transplants such as womb transplant (Shawqī 2018b) and mitochondrial DNA transplant (Shawqī 2018a). Despite the different foci of these fatwas and discussions, nearly all of them display the same concerns.
What can be gleaned from an exhaustive reading of these fatwas and discussions is that the following topics fare quite highly in them: (1) God's ownership of the human body, (2) human dignity, (3) necessity, (4) altruism and charity, (5) benefit and harm, and (6) a watertight definition of death.
Our research has revealed that there are seven main opinions on organ transplantation, in addition to some minor variations of these opinions. Below, we present these seven opinions; for each position, we provide the names of some advocates, its major arguments and responses to those arguments. We use a considerable amount of space engaging with positions one and two, as subsequent positions draw from the same pool of resources as these two.
Position 1: Organ Reception and Donation are both Forbidden
The first position can be deemed the default position on how a human being should be treated as far as bodily integrity is concerned (see Rashid 2018). Proponents of this position argue that the human body should be left naturally intact as far as possible, without any invasive intervention. This belief stems from the Islamic understanding of the 'primordial natural state' (fiṭra) enshrined in the verse of the Qur'an: 'This is the natural disposition God instilled in mankind-there is no altering God's creation' (Q. 30:30). For proponents of this position, organ transplantation in both iterations, reception and donation, is prohibited. This opinion was held by Muḥammad Shafīʿ (d. 1976), former chief muftī of Darul Uloom Deoband India (Shafīʿ [1967] 2010), Akhtar Rezā Khān (d. 2018) (Khān 1991), Muḥammad Mitwallī Al-Shaʿrāwī (d. 1998) (Al-Shaʿrāwī 1987), ʿAbdullāh Ṣiddīq al-Ghumārī (d. 1993) (2007) and ʿAbd al-Salām ʿAbd al-Raḥīm Al-Sukkarī (1988), to name a few. These scholars resort to four types of sources to argue their position: (1) scripture, (2) classical Islamic law, (3) society and (4) culture.
Two main arguments are made by invoking scripture: (1) God's ownership of the human body and (2) human dignity. The Qur'an clearly places the sovereignty of everything within God's domain: 'Exalted is He who holds all control in His hands; who has power over all things' (Q. 67:1). The Qur'an further singles out human beings as the property of God: 'Say, "I seek refuge with the Lord of people, the Master of people, the God of people"' (Q. 114:1-3). From such verses, it has been inferred that God is the true owner and master of the human body while humans act as mere stewards and agents for it. Stewardship implies that humans do not have unlimited freedom over their bodies (Sachedina 2011, p. 176). This freedom has to be bridled with accountability and responsibility, which includes a fair-use policy.
The Ownership Argument
By using the above verses as a springboard, scholars from this camp develop rational arguments to prove that organ transplantation is impermissible. The argument is that true ownership of a thing means that one has complete control and discretionary rights over that thing. Once the definition is established, the next question is whether it applies to human beings vis-à-vis their organs and limbs. To test this definition, scholars employ the case of voluntary and involuntary movements in the human body. Bakrū posits that there are certain movements and functions in the human body which are out of a person's control, such as breathing, the flowing of blood and vital fluids, and bowel movements (Bakrū 1992, p. 201). Furthermore, even voluntary movements are predicated on God willing them to occur, without which a person is not able to move an inch. By employing biological and theological reasoning, Bakrū concludes that since human beings fall short of the definition of ownership vis-à-vis their bodies, they do not have the right to transact with it.
Numerous responses have been given to the ownership argument. Firstly, a corpus-based analysis of the Arabic verb 'yamlik' (to own) and its derivatives in the Qur'an reveals that, contrary to Bakrū, one does not need to have full control over a thing to own it. (All Qur'an translations in this article are taken from Haleem, M. A. Abdel. 2005. The Qur'an. Oxford: Oxford University Press.) A frequency search of the verb 'yamlik' and its derivatives using the ArabiCorpus tool reveals that the verb 'yamlik' and its associate words denote owning the concrete, such as wealth, as well as having the abstract, such as a right or an ability. Moreover, in the Qur'an, collocates of 'yamlik' and its variant forms, mālik and malik, portray a distinction whereby humans are owners, whereas statues and structures cannot own even a qitmīr, which is the membrane of a date seed (Q. 35:13). The verse implies that humans are owners of various goods in life. On the other hand, God reserves complete control over mostly abstract concepts such as sustenance, benefit, harm, life and death.
The collocation analysis reveals that the Qur'an clearly accepts humans as being owners and possessors of material substances but makes humans conscious of the fact that they are not in absolute control. Thus, the semantic portrayal of ownership (milkiyya) in the Qur'an is antithetical to Bakrū's notion of one not being able to own what one has no control over. Rather, the very fact that one is not in control is the aim of the Qur'anic message, even while it establishes that humans have been allowed ownership. This ownership extends to things that God created Himself, including slaves, a concurrent issue during the time of revelation.
The fact that slavery was tolerated in Islam through Qur'anic sanction and Prophetic mandate is one of the strongest defences against the ownership argument. Islam did not institute slavery, but it certainly did not abolish it, although emancipation was seen as a highly recommended act of worship (Brown 2019). Clarification is required that the point of this rebuttal is to demonstrate that the ownership argument is not consistent; it should not be construed as an argument for the return of slavery.
Secondly, some have questioned whether the ownership argument really has any legal basis. The former grand muftī of Lebanon, Muḥammad Rashīd Qabbānī, argues that to explore the issue from the angle of ownership is incorrect, as no one disputes this fact (Qabbānī 2003). Qabbānī maintains that the issue needs to be tackled from the point of view of discretionary rights and not ownership. By employing the rights argument, one is able to arrive at a decision on the extent of discretion that humans have over their bodies. Qabbānī maintains that the human body is a site where both God and humans share a claim, and people's right over their bodies is privileged over God's right. While Qabbānī's argument does not neatly establish the permissibility of organ transplantation and donation, he manages to create a space to discuss bioethical issues related to the human body without having to discuss the ownership question (for a detailed commentary and translation of Qabbānī's discussion, see Ali 2019b).
For the Qatar-based Egyptian scholar Yūsuf al-Qaraḍāwī, human organs and limbs are similar to wealth, since both have been given to humans by God, and they therefore fall under the same rulings related to wealth. The only difference is that the restriction on donating organs is slightly stricter than that on donating wealth (Al-Qaraḍāwī 2009). Al-Qaraḍāwī's reading of the Prophetic statement 'Every good is charity (ṣadaqa)' goes beyond financial help and extends to any form of the 'good', one example of which is organ donation.
Finally, the ownership argument fails in the case of blood transfusion. With the exception of Khān, all other proponents acknowledge that blood transfusion is permissible. This contradiction is a methodological flaw in their argument, since blood, albeit regenerative, is nevertheless a part of the human body, which according to their argument should be forbidden, since one does not have complete control over one's own blood. These scholars respond that the permissibility of blood transfusion is based on the issue of selling human milk, which is permissible. This argument is problematic from numerous angles. First of all, it contradicts the original argument about control as a basis for the prohibition of organ transplantation. Secondly, blood is categorically mentioned as one of the forbidden and impure substances in the Qur'an, which these scholars believe does not have any curative value. Despite this notion, these scholars allow transfusion whereas they do not allow organ transplantation, about which the scripture is silent (Moosa 1998). Finally, the analogy with milk is an incorrect one, as milk is permissible and pure, while blood is impermissible and impure.
The Dignity Argument
Stemming from the argument that the human body is a trust from God, who is its true owner, is the issue of human dignity (karāma and ḥurma). Organ transplantation violates this dignity and is therefore impermissible. The Qur'an in numerous verses mentions that God has dignified and honoured the human being (Q. 17:70). Violation of this dignity is measured in two ways: (1) degradation (ihāna) and (2) mutilation (muthlā). While retrieving organs, which prolongs the funeral, is not in and of itself mutilation, since it delays what naturally should be done (i.e., burial), it is deemed to be an infraction of that dignity (Krawietz 2003). Furthermore, viewing the dying or dead person as a potential repository of organs relegates the value of the human to that of a means to an end.
The degradation (ihāna) intensifies when physical intervention into the body is involved. Any form of incision into the human body, dead or alive, without any physical benefit to the donor (iṣlāḥ al-badan), is deemed mutilation. By way of evidence, a conversation in the Qur'an between God and Satan regarding how the latter will lead people astray is presented. According to Qur'an 4:119, Satan announces to God that one of his major ploys to lead people astray from God's way is to seduce them into mutilating and changing the creation of God. The above verse, coupled with the Prophetic statement 'breaking the bones of the dead is like breaking the bones of the living', is the final nail in the coffin against organ transplantation (Ibn Mājah, bāb fī al-nahyy fī kasr ʿiẓām al-mayyit, cited in Al-Bassām 2003). For the proponents of the first position, organ transplantation is an evil anticipated by the Prophet and an instantiation of the self-fulfilling prophecy of the devil.
Scholars have responded that while the Qur'an declares that humans have dignity and are honoured, it has not laid down concrete guidance as to how this dignity is to be actualised. Therefore, it is left to society to decide how to define dignity (Raḥmānī 2010; Butt 2019). Al-Bassām (d. 2002) mentions that mutilation (muthlā) has a specific understanding in Arabic, relating particularly to the context of war. Mutilation in Arabia was used as a form of weapon employed to cause hurt to the living by desecrating their loved ones. Malignant intention is a prerequisite of mutilation. Al-Bassām argues that this understanding of mutilation cannot be transposed onto precise surgery carried out in a clinically sterile environment at the hands of a qualified surgeon for the sole purpose of saving someone else's life (Al-Bassām 2003). Furthermore, he argues that to deem organ transplantation an actualisation of Satan's prophecy is misplaced and an incorrect stretching of the meaning of the verse. A close reading of the Qur'an reveals that mutilation in this context relates to certain occult practices involving the cutting off of animal organs (especially those of the male-born of the five-year-old camel) to ward off evil from the rest of the flock (Al-Bassām 2003, p. 40).
Organ Transplantation in Secondary Literature
In addition to using generic verses regarding God's ownership and human dignity from Muslim scriptures, scholars of the first position draw upon the Islamic legal literature, which includes both abstract legal maxims (al-qawāʿid al-fiqhiyya) and legal precedents (furūʿ al-fiqh), in order to fortify their position. Legal maxims are a set of principles derived from scripture to which legal scholars (henceforth, fuqahāʾ) resort in newly arising situations in the absence of firm textual evidence (Rabb 2010). The maxims are an eclectic mix of categorical moral imperatives and utilitarian statements. Some of the legal maxims employed by advocates of the first position include: (1) harm is not to be removed by another harm, (2) harm is not to be removed by a similar harm, and (3) that which one cannot sell, one cannot gift (Al-Shinqīṭī 1994, p. 365).
These legal maxims are abstract guiding principles that do not fit neatly with the issue at hand. For example, the first maxim can easily be challenged by questioning (in the case of cadaver donation) whether any harm is inflicted on the donor at all. Moreover, should not the 'net benefit' of a transplant be considered over the 'gross harm' (if any) involved in the procedure? More will be said about this further down. The abstract nature of the legal maxims makes them harder to pin down neatly to any given case, as opposed to legal precedents. Advocates of the first position are on more solid ground when they employ precedent cases from the legal literature (furūʿ al-fiqh).
The precedent cases allow scholars to extend them to the ruling of organ transplantation by way of analogy (qiyās). A benefit of using this method is that it is based on precedents set by previous scholars; contemporary scholars are in good company, as they do not need to venture into uncharted territory. However, a problem with this approach is that, in their zeal to veer as closely to a text as possible, scholars may draw the wrong analogical reasoning from the precedents, resulting in an incorrect legal ruling for the issue at hand. Two precedent cases presented by the advocates will be explored here. The human dignity argument established from the Qur'an demands that human beings are not a means to an end. Based on this Qur'anic command, the medieval fuqahāʾ declared any therapeutic use of human teeth, hair and bones to be forbidden except for the owner of these items (Nizām al-Dīn 1991). The Syrian scholar al-Būṭī (d. 2013) argues that the examples adduced in the medieval law books relate to cosmetic enhancement of the human (taḥsīn) and are not to be confused with modern invasive life-saving technology, which falls under the degree of necessity (ḍarūra). Islam allows such exceptions to the law in the case of necessity only (Nyazee 2016, p. 185). Al-Būṭī explains that the examples are correct in that no one argues for the use of human remains for cosmetic enhancement, but they are not accurate legal precedents for organ transplantation (Al-Būṭī 1988). The ḍarūra argument has been labelled the 'breaker of all rules' argument (Brown 1999). However, one can question whether the relaxing of Shariah law in the presence of necessity is absolute. Are there situations where the necessity rule does not apply?
Cannibalism and Anthropophagy
Proponents of the first position believe that the 'necessity' rule should not be used recklessly. The rule fails in circumstances like the consumption of human flesh (anthropophagy) in times of dire necessity, such as in a famine where no other source of food is available. The logic of using this argument is that if it can be established that consuming human flesh, which is the ultimate aggression inflicted on the human body, is permissible in dire necessity, then organ procurement will a fortiori be permissible. However, medieval Muslim scholars were not unanimous on the issue of anthropophagy, which has led contemporary scholars to employ it to prove contradictory opinions. Shafīʿ and his colleagues from the Indian sub-continent stuck to the position of the Ḥanafī school of law, which argues that any form of anthropophagy is forbidden even in a life-threatening situation. Others accept the position of the Shāfiʿī school of thought, which is the most lenient on this issue. Al-Būṭī argues that the Shāfiʿī permissibility of anthropophagy is in line with the broader objectives of the Shariah (maqāṣid al-sharīʿah).
The anthropophagy argument features early on in the organ transplantation debate (Shafīʿ [1967] 2010). However, one may ask whether the analogy between consuming human flesh and procuring and transplanting an organ is correct and symmetrical. Can there be any parity between eating human flesh and transplanting human organs? In the case of the former, it is most likely that the person is found dead; his flesh is consumed, gnashed with the teeth, swallowed, digested and excreted. Anthropophagy stands in stark contrast to removing an organ in a sterile environment at the hands of professional surgeons and then, with equal care, transplanting it into a recipient, taking every care not to harm or perforate the organ in any way that renders it useless. The image conjured up by the first scenario is bloody and brutal; an image that vividly depicts mutilation in every sense of the word. One needs to ask whether the same revolting thoughts are conjured in the mind when talking about organ transplantation.
Social Ills
Organ transplantation is more or less accepted throughout the world as an effective life-saving technology. Why, then, are the scholars of the first position so adamant in forbidding it? Are they against the saving of life, which is a religious imperative? In addition to viewing the act of procuring, donating and transplanting an organ as a violation of religious sensibilities established from scripture and Islamic law, these scholars were wary of the negative effect that their fatwas would have on their society. There is a genuine fear on their part that, in the absence of government-supported transplant programmes in Muslim countries, fatwas on the permissibility of organ donation will legitimise the demand for organ harvesting, the supply of which will most likely come through illegal organ trafficking and black market organ trade (Shafīʿ [1967] 2010, pp. 55-59). Pakistan, at the time Shafīʿ wrote his fatwa, had no government-supported organ transplantation programme. Moazam maintains that transplantation is still being carried out in private transplantation centres, one of which was her fieldwork site (Moazam 2006).
Exploitation of the weak and poor for health tourism is a common problem in third-world countries. The public in Egypt was already aware of numerous scandals involving organ transplantation. The televangelical cleric al-Shaʿrāwī did not bring them anything new when he campaigned against organ transplantation. For the Egyptian people, al-Shaʿrāwī's fiery brimstone preaching confirmed their anxiety and suspicion regarding the efficacy of organ transplantation. Farmers in Egypt had already faced the repercussions of consuming crops treated with pesticides by government contractors in the form of mass renal failure. Furthermore, stories of children kidnapped from orphanages to service organ tourists, and of missing eyeballs of dead relatives preserved in state hospitals, left a very bitter taste in their mouths (Hamdy 2012).
Concomitant with the exploitation argument, some scholars are worried that allowing organ transplantation will lead to a slippery slope resulting in the complete annihilation of the human corpse. In Islamic legal theory, such an argument is known as 'blocking the means' (sadd al-dharāʾiʿ), which tends to look at the future rather than the present. Shams Pīrzādah from the Indian Fiqh Academy argues that allowing organ transplantation will set off a conveyor-belt motion which starts with organ donation, leads to organ trade, moves on to doctors using human bones and skin to make medicine, and ends up with doctors playing God (Qāsmī 1994, pp. 191-95). Others argue that the ultimate dignity of the human being is to deposit the decedent's body into the earth. If organ transplantation is allowed, there will come a point where potentially every limb, organ, bone and tissue of the human being can be put to use to manufacture mundane things like bags and soap, with nothing left to bury in the grave (Mawdūdī cited in Al-Sanbhālī 1987, p. 54). The legalisation of organ donation will thus ultimately result in a situation where the entire human corpse is put to use with nothing to bury.
Cultural Imperialism
An alternative argument against organ transplantation comes from the Moroccan scholar ʿAbdullāh bin Ṣiddīq al-Ghumārī (d. 1993). Where al-Shaʿrāwī emphasised how organ transplantation encroaches on God's sovereignty and Shafīʿ argued against human exploitation, for al-Ghumārī the issue boils down to cultural superiority. Al-Ghumārī sees in the permission of organ transplantation a self-fulfilling prophecy of the Prophet Muhammad. Al-Ghumārī writes, Organ transplantation is something which is prevalent among European doctors and Muslim doctors followed suit. This is a grave mistake because the religion of Islam honours the dead. … However, people hasten to blindly follow the Christians in everything that comes from them, bringing to truth the saying of the Prophet, "You will blindly follow the ways of the previous communities span by span, cubit by cubit" (Al-Ghumārī 2007). Some Egyptian scholars and the public also viewed invasive technological advancements as a form of Westernisation and individualisation of Egyptian society and an erosion of traditional, religious and cultural values (Hamdy 2008, 2012). Al-Sukkarī argues that many of the new technological and medical advances which have seeped into Muslim culture were manufactured by non-Muslims who have no understanding of Islamic principles and ethics; these include narcotics in medicine, organ transplantation, gender reassignment surgery, surrogacy, IVF treatment, milk banks, sperm banks and determining the sex of the foetus. Al-Sukkarī argues that since these advancements were not developed by Muslims, they lack an infusion of Islamic ethics, which renders them impermissible (Al-Sukkarī 1988, p. 121). Hamdy argues that to view the debate as a clash of civilisations is a misdiagnosis of the issue. Hamdy contends that as long as the issue remains misdiagnosed, legitimate worries about the exploitation of marginalised patients will remain unaddressed, which will further impede the establishment of a national transplant programme (Hamdy 2013, p. 149).
The foregoing was a discussion of the major arguments and evidence provided by the proponents of position one. There are other arguments associated with this position, including the body or soul feeling pain during organ procurement from a cadaver donor, anxiety over being resurrected with a missing organ or limb in front of God, negative traits of the donor being passed on to the recipient through transplantation, especially from a non-Muslim, and the question of whether an organ becomes impure once separated from the body and its ramifications on the recipient vis-à-vis performing acts of worship in this state. Unfortunately, space does not allow us to elaborate on all of these points.
Position 2: Organ Reception and Donation are both Permissible
Organ transplantation surgery is routine practice today throughout the world. The procedure is viewed as one of the best technological advancements for the betterment of society. Proponents of the second position conform to this understanding and have declared both organ reception and donation to be permissible in all iterations, living and dead, determined through circulatory and/or neurological criteria, with certain caveats. This is the opinion of the Islamic Organisation for Medical Sciences (IOMS)3 of Kuwait, which arrived at a resolution in its second conference on the topic of the beginning and end of life in Islam in 1985 (IOMS 1985, cited in IIFA). This was followed by the resolution arrived at by the International Islamic Fiqh Academy (IIFA) of Jeddah in its 3rd conference held in Amman, Jordan in 1986 (IIFA 1986) and again in 1988 in its 4th session in Jeddah, where death determined through neurological criteria was deemed to be Islamic death (IIFA 1988). It is also the opinion of eminent Muslim scholars such as the former rector of Al-Azhar University Sayyid al-Ṭanṭāwī (d. 2010) (Hamdy 2012, p. 48), Yūsuf Al-Qaraḍāwī (2009) and Khālid Ṣayfullāh Raḥmānī (2010). A similar fatwa was issued by Zaki Badawi in the UK in 1995 (which is being used by the NHS) (Badawi 1995). The resolution is also that of the European Council for Fatwa and Research (ECFR), declared in its 6th session in 2000, and is the opinion which is becoming the most popular in the Muslim world as transplant medicine advances and people become more aware of the need for and benefits of transplantation (ECFR 2000; Islamic Religious Council of Singapore 2015; The Ministry of Health Malaysia 2011).
As previously mentioned, the issue of organ transplantation falls within the domain of legal discretion (ijtihād), since there is nothing clear cut in Muslim scripture on the topic. Despite this, proponents of position two believe that the spirit of the Qur'an and hadith is conducive to organ transplantation and donation. These scholars arrive at this decision by joining numerous disparate themes found in the Qur'an and hadith together. These include the necessity to save one's life, the exhortation to save another's life, human dignity and honour, and charity.
Organ Reception
Receiving an organ in a life-threatening or life-enhancing situation is easily justified from multiple Qur'anic verses permitting the consumption of prohibited (ḥarām) ingredients in dire necessity. A typical example of such verses is found in the second chapter of the Qur'an, 'He has only forbidden you carrion, blood, pig's meat, and animals over which any name other than God's has been invoked. However, if anyone is forced to eat such things by hunger, rather than desire or excess, he commits no sin: God is most merciful and forgiving' (Q. 2:173).
While opponents of organ transplantation circumscribe this verse to food products only, the proponents find no reason not to extend it to all cases of dire necessity (Al-Yaḥyāwī 2016, p. 153). Hence, the proponents argue that such verses also extend to medical treatment using prohibited ingredients and methods (Al-Yaʿqūbī 1987). This is further exemplified through an incident involving one of the companions of the Prophet. ʿArfaja severely injured his nose in a battle. Per Arab medical practice at the time, he made a mould out of silver and fixed it in the place of his nose. After a while, it started to become putrid and the Prophet permitted him to make a mould out of gold (Abū Dāwūd, bāb mā jāʾ fī rabṭ al-asnān bi al-dhahab, cited in Al-Bassām 2003). Gold is a prohibited item of jewellery for men, but the Prophet allowed it for ʿArfaja due to his particular circumstance. Such guidance from the Qur'an and prophetic practice is further enshrined in legal maxims to facilitate scholars in arriving at decisions where the scripture is conspicuously silent, such as: necessity permits the prohibited; hardship facilitates ease; need (ḥāja) shares the same legal ruling as necessity (Abū Zayd 1988).
Organ Donation
Where the justification for receiving an organ is easily demonstrated from the Qur'an and hadith, the same cannot be said for organ donation. Here, the scholars employ numerous unrelated pieces of evidence organised logically, allowing them to arrive at the conclusion that organ donation is permissible. The first of these is the above verse read in tandem with the verse, 'Do not contribute to your destruction with your own hands' (Q. 2:195). These scholars argue that while it is necessary for a person in trouble to save themselves, it is a duty for others to facilitate this saving lest the troubled person perish. This is a collective duty (farḍ kifāya) where everyone will be sinful if no one carries it out (Al-Qaraḍāwī 2009, p. 38). Furthermore, to save a life is one of the objectives of the Shariah, which the Qur'an equates to saving the entire human race: 'If anyone saves a life, it is as if he saves the lives of all mankind' (Q. 5:32). Al-Qaraḍāwī quotes the saying of the Prophet, 'Whoever can benefit his brother, he should' (Ṣaḥīḥ Muslim, bāb al-salām) and the second Caliph Umar's recommendation to Muḥammad Ibn Maslama, 'If you have a thing which will benefit your brother and not harm you, why do you resist using it?' (Muwaṭṭā Mālik, kitāb al-aqḍiya, cited in Al-Qaraḍāwī 2009, p. 44).
Charity and altruism have also been invoked as further evidence and encouragement for organ donation. 'They give them preference over themselves, even if they too are poor: those who are saved from their own souls' greed are truly successful,' the Qur'an reads (Q. 59:9). This verse has led to the justification of numerous actions which otherwise would have been prohibited, such as a bystander putting themselves in the way of danger to protect a drowning or burning person (Qabbānī 2003, pp. 64-65).
The Greater-Good and Lesser-of-Two-Evils Argument
One of the main arguments put forward by proponents of organ donation is that the net benefit of organ donation to the recipient outweighs the gross harm incurred by the donor. These scholars argue that there is no such thing as absolute benefit or absolute harm, but a mixture of the two (Qabbānī 2003, p. 63). The legal ruling of permissibility or impermissibility will follow the preponderance of benefit or harm in any situation respectively. To illustrate this point, proponents use two precedent cases from medieval Islamic legal literature: the case of a deceased pregnant woman, where there is a high probability that the baby in her womb is alive; and the case of a deceased person who had devoured someone else's wealth and died. All of the schools of thought are of the opinion that if a pregnant woman dies, her baby is still alive in the womb and there is a high probability that the child will remain alive at the moment of extraction, it is permissible to open her womb and save the baby (Al-Yaʿqūbī 1987, pp. 80-88).
The ḥanafīs argue that if it is known for sure that the baby will live, it is obligatory to open the womb; otherwise it is permissible. An opinion from Mālik and the position of the ḥanbalī school is that it is not permissible to cut open the deceased's womb. A closer look at the ḥanbalī argument reveals that they believed, given the state of technology at that time, that it was never possible to save the child in such a situation; as a result, violating the corpse is futile (Al-Yaʿqūbī 1987, p. 60). Ibn Qudāmah writes, 'according to us (ḥanbalīs), it cannot be established whether the child is alive or not; even then, the child normally does not survive. Hence, it is not permissible to violate the real dignity of the deceased for a doubtful (mawhūm) matter' (Ibn Qudāmah cited in Al-Yaʿqūbī 1987, p. 60). Similar to the above, all schools of thought agree that when a person swallows another person's possession, such as jewellery, and then dies, it is permissible to exhume the corpse and extract the valuable by cutting open the abdomen (Al-Yaʿqūbī 1987, pp. 80-88).
Proving the permissibility of organ donation is difficult based on the above points, as they provide no explicit evidence that organ donation is permissible. Nevertheless, the point was to demonstrate that the dignity of the dead is not absolute (Al-Būṭī 1988, p. 197). Muslim scholars have allowed dignity to be violated to a degree for the sake of achieving a greater good. In the case of the deceased woman, the preservation of a new life surpasses the dignity of the mother's corpse. Similarly, because the deceased swallowed someone else's wealth, they automatically forfeited their right to bodily integrity. In both these cases, the principles of preservation of life and preservation of wealth are at play. Both of these are viewed as greater benefits than the harm caused to the deceased. Advocates of the second position extend the same analogy to organ donation. Advocates argue that while there is minimal harm to a living donor (which is ascertained after a thorough medical check-up), and hardly any harm to a dead donor, the benefit it brings to the recipient is life-saving or life-enhancing. However, does mere necessity warrant violation of the dignity of the donor, dead or alive? Do the donors have a right over their bodies? Does God have a right over the body of the donor?
Proponents of this position are aware of these questions and retort that the donor voluntarily forfeits their right over their body through their consent (Al-Qaradāghī 2011, p. 55). Without the consent of the donor, procuring their organs is not permissible irrespective of the life-threatening effect that this will have on the recipient. However, what about God's right? Since God has a claim over the human body, can the donor make that choice on behalf of God? ʿĀrif Al-Qaradāghī provides a formula for knowing whether God sanctions an action or not (in the absence of clear instructions from Him). Al-Qaradāghī posits that God is good and ultimately does things for the betterment of people. If, by comparing the harm and benefit of an action, the net harm is greater than its benefit, then God's consent ceases to exist in that thing and it is deemed to be prohibited. However, if the net benefit outweighs the harm, then it can be assumed that God is happy to sanction this action (Al-Qaradāghī 2011, p. 55). If the harm to the donor is greater than the benefit to the recipient, for example if as a result of donation the donor falls terminally ill, then organ donation is not permissible. Conversely, if the donated organ saves the life of the recipient or restores a basic function of the body with minimal harm to the donor, then organ donation is permissible. The benefits incurred from a living donor, although life-saving, are still less than the benefit gained from a cadaver donor. This imbalance is because, in a live donation, only organs which do not lethally harm the donor can be donated, such as one kidney, blood and some tissues. However, restricting donations to living donors alone only reduces the pool of organs available for transplant; all vital organs need to come from cadaver donors. However, what is death, and for the purpose of organ donation, how is death to be understood?
Death and Organ Donation
Death is understood as 'the irreversible loss of that which is essentially significant to its nature' (Veatch and Ross 2015, p. 54). To put it another way, at what point can death-related activities, such as the distribution of inheritance and the preparation for burial, be enacted? In Islam, death translates to the exiting of the soul from the human body (Encyclopedia of Islamic Jurisprudence 1988-2006, 39:248). The Qur'an describes this phenomenon in the following verse: 'He is the Supreme Master over His subjects. He sends out guardians to watch over you until, when death overtakes any of you, those sent by Us take his soul - they never fail in their duty' (Q. 6:61). Death from this point of view is a metaphysical phenomenon which cannot be empirically verified. Nevertheless, medieval Muslim scholars associated the flowing of essential fluids (blood and breath) in the body with the presence of the soul, and their loss with its exit. When in doubt, they opted for putrefaction: to leave the body until the stench of rotting flesh could be smelt (Ibn ʿĀbidīn 1992, 2:193). This is the common-sense understanding of death. In other words, certain physical criteria were observed as an indicator of the exiting of the soul. Two words are normally associated with the word 'soul': 'rūḥ' and 'nafs'. The word 'rūḥ', when referring to a material disembodied body, is translated as 'spirit'. There is only one occasion in the Qur'an (Q. 17:85) where the word 'rūḥ' can possibly refer to the human soul, and most commentators believed it to be so.
'[Prophet], they ask you about the rūḥ. Say, "The rūḥ is an order (ʾamr) of my Lord. You have only been given a little knowledge"' (Q. 17:85). If this is the case, then per Qur'anic instruction, it is not possible to define what the soul/rūḥ is. However, a linguistic concordance analysis reveals that every instance of the usage of the word 'rūḥ' in the Qur'an refers either to the angel Gabriel, or Jesus Christ, or revelation, or the Spirit that God breathed into Adam's mould. In fact, in addition to Q. 17:85, there are two other verses where the word 'ʾamr' is associated with the word 'rūḥ', and in both these verses the word 'rūḥ' refers either to the angel Gabriel (Q. 42:52) or revelation itself (Q. 40:15).
What the above analysis reveals is that it is highly unlikely that the word 'rūḥ' in Q. 17:85 refers to the 'human soul' which one can never know. On the other hand, the word 'nafs' has multiple meanings in the Qur'an, and its translation as 'soul' is not contextually appropriate in all its usages (Sachedina 2011, p. 148). Thus, it refers to the: self (Q. 2:9), human being (Q. 2:72), life (Q. 2:155), reflexive pronoun (Q. 2:187), inner disposition (Q. 2:235), soul (Q. 3:185), spirit (Q. 4:1), evil self (Q. 5:30), exiting of the soul (Q. 6:93), extraction of the soul at the time of death (Q. 39:42), self-reproaching soul (Q. 75:2), and the contented soul returning to God (Q. 89:27). The different usages of the word 'nafs' reveal that it refers not only to the 'soul' but to the human as an integrated being involving physical life, psychological disposition with its evil thoughts and self-reproach, and the spiritual soul which returns to God (Sachedina 2011, p. 148). This integrated capacity of personhood, viewed as a vital force, makes a human a human, and its absence is deemed the onset of death.
Organ Donation and Death Determined through Neurological Criteria
While death determined through circulatory criteria corresponds with a common-sense understanding of death, organs retrieved through such a determination of death are not always prime. This is due to the gradual destruction of the organ's cells as a result of oxygen deprivation. Technological advancement in intensive care techniques gave birth to the brain-based concept of death towards the end of the 19th century (Machado et al. 2007, p. 197). The concept merged with organ transplantation after the publication of the Harvard Ad Hoc Committee report in 1968 entitled 'A Definition of Irreversible Coma' (Veatch and Ross 2015, p. 52; Machado et al. 2007, p. 198). Henceforth, the success of transplantation improved with the refinement and development of the brain-based death concept (Machado et al. 2007, p. 198). Organ retrieval from brain-dead patients became the major and primary source of organs due to their quality, a result of the decedent being artificially ventilated and perfused. Advocates of whole-brain death (USA) or brain-stem death (UK) believe that the irreversible loss of vital brain functions is akin to the death of the organism. It should be understood that such criteria for death are unprecedented in human history. Prior to the invention of life-support machines, this situation would not have arisen: a terminal and lethal injury to the brain would have meant that all vital functions of the body, including breathing and heartbeat, would have ceased. Death determined using neurological criteria was only possible because of such 'new death-assaulting technologies' (Veatch and Ross 2015, p. 53).
The clinical diagnosis of brain death as actual death is more or less standard practice all around the world, although there are disparities in how one arrives at this diagnosis (Veatch and Ross 2015, pp. 52-63). Muslim scholars like al-Ṭanṭāwī argued that determining the onset of death falls outside the jurisdiction of Islamic scholars, and that physicians have full authority over this matter (Hamdy 2012, p. 48). In 1985, the IOMS in Kuwait recognised brain death as Islamic death, and the declarations of the IIFA in 1986 and 1988 led to transplantation centres opening up in Saudi Arabia. However, how did these scholars arrive at the decision that brain-based death is actual death?
The Rūḥ, the Brain and Death
As we have mentioned above, the soul is not an unknown entity that cannot be tracked. For scholars of position two, the soul is the vital force that animates the human being. Medieval Muslim scholars also recognised this function of the soul. Ibn al-Qayyim al-Jawziyya (d. 1350), the 14th-century ḥanbalī scholar of Damascus, asks what the rūḥ/soul is made up of. Is it the sum of disparate human body parts; is it soul and body; is it a combination of the four humours; or is it the circulation of blood? He asks if it is the soul that ascends to the brain, or a subtle matter which is born on the left side of the heart and circulates around the body through the veins, or an integral part of the heart (Al-Jawziyya 2011, p. 520). Ibn al-Qayyim opts for the definition that, It is a living, animated, subtle, heavenly illuminated mass (jism nūrānī ʿulwī ḥayy khafīf mutaḥarrik) which permeates the essence of the organs and circulates in them like the way water flows in a rose or oil in the olive or fire in the coal (Al-Jawziyya 2011, p. 521).
He argues that as long as the body parts are capable of being influenced by this subtle mass, the latter remains integrated with the organs and benefits them with sensation and voluntary movement. However, when the organs are destroyed because of the overpowering of a foreign object and are no longer capable of accepting the effects of the rūḥ, this is an indication that the rūḥ has departed and passed on to the realm of the soul.
For advocates of the second position, the above description of the soul's integrated functionality with the body corresponds with the brain's integrated relationship with the body. An irreversible loss of the brain's vital functions, for these advocates, is an indication that the soul has moved on and the person is no more. This soul-brain-body relationship did not go uncontested. For advocates of the next position, the brain-death criteria throw up more problems than they solve (see Padela and Basser 2012 for a detailed exploration of these issues).
Position 3: Organ Retrieval after Brain Death Not Allowed
While advocates of this position allow organ reception and donation from living and circulatory-death donors, they have serious reservations when it comes to allowing organs to be retrieved when the death of the donor was determined using neurological criteria. For these scholars, it creates a peculiar, betwixt-and-between situation where the patient is dead from one perspective and yet shows signs of the living from another, such as warmth, a heartbeat and breathing. Some argue that the prognosis of death has been confused with its diagnosis, and that the death of the organism is being conflated with the death of an organ. The fact that certain somatic activities, such as breathing, albeit mechanically assisted, are present is an indication of the presence of the soul in the body. Termination of life at that moment is tantamount to killing a dying yet living human being.
While an international conference convened by the Islamic Fiqh Council (IFC) of Mecca in 1985 declared cadaver organ retrieval to be permissible, it did not deal with the thorny issue of organ procurement from brain-dead patients (IFC 2003). In a later, unrelated conference held in October 1987, deliberating on the legal status of removing an artificial ventilation machine from a brain-dead patient, the conference resolved that while it is permissible for doctors to switch off the life-support machine in such a situation, the person will not be declared Islamically dead until complete cessation of heartbeat and breathing has taken place (IFC 2010, p. 231). This latter decision, although not directly related to the organ retrieval process, must be read in tandem with the former position on cadaver organ donation.
The issue of brain-dead organ retrieval is a deadlock borne out of competing worldviews and ontological understandings of what a human being is. While advocates of position two associate the soul and death with vital brain functions, proponents of the third position opt for a more traditional understanding of death: the complete cessation of vital fluids (breathing and the circulation of blood). Al-Būṭī calls this the common understanding of death which everyone recognises. In his conference discussion, al-Būṭī mentions that he does not dispute the medical diagnosis of death, but argues that death is a single occasion which is understood by all and not just by elite doctors. Al-Būṭī's barometer for ascertaining death is not a highly trained surgeon but the common man: death is what the average person understands it to be. Al-Būṭī writes, Death is "the complete separation of the soul from the body", or to put it differently for those who do not recognise the soul, "it is the complete cessation of life from the body." We do not think that there is anyone who will disagree with this understanding of death (Al-Būṭī 1988, pp. 205-6).
For al-Būṭī, the only Islamically reliable indicator of the onset of death is the weakening of the pulse and the cessation of the heartbeat. One can argue that this is not a correct Islamic indicator of death, since there is no association of the departure of the soul with the cessation of the heartbeat in Muslim scripture. Al-Būṭī further argues that, using the legal tool 'presumption of continuity' (istiṣḥāb al-aṣl), the continuity of the life of the imminently dying person is certain, while his death, depending on which criteria one uses to diagnose it, is uncertain. The certainty of life cannot be removed by the uncertainty of death determined using neurological criteria. For al-Būṭī, as long as the heartbeat remains, even if artificially sustained, the person is alive and no declaration of death can be pronounced (Al-Būṭī 1988, p. 208).
Finally, a quick word must be said about the recent NHS fatwa on organ donation. While it is clear that the author, Zubair Butt, does not support a brain-based diagnosis of death, his position on circulatory death can easily be misunderstood. At first glance, it seems that Butt is a supporter of organ retrieval from circulatory-death patients. However, on closer inspection, Butt is much more restrictive than appears to be the case. Butt introduces two concepts into his position, concepts which are not part of the Islamic discourse but taken from secular bioethicists such as Don Marquis, Miller and Truog (Veatch and Ross 2015, p. 44; Butt 2019, pp. 99-102). These two terms are 'permanence' and 'irreversibility'. 'Permanence' is the irreversible loss of circulatory functions due to legal or moral reasons; for example, the decedent willed not to be resuscitated after cardiac arrest even if it is medically possible to do so. 'Irreversibility' is what is known as medical or biological irreversibility: the point at which no amount of medical intervention will kick-start the heart. Butt writes, "While contemporary Muslim scholars have recognised cardiorespiratory arrest as a reliable sign of departure of the soul, they have also required it to be irreversible. This stipulation of irreversibility is to ensure that the soul has indeed departed and, while this stipulation is a recent introduction to the definition of death, it is arguable that it was always implied but had to be expressly stated only because we decided we would interfere with the body of the dying/deceased. Thus, DDCD (donation from circulatory death) is not permissible until the point of elective irreversibility has lapsed" (Butt 2019, p. 100).
On the above basis, for Butt, only tissue and cornea donation from the deceased are allowed, as these can be retrieved after the point of elective irreversibility has elapsed.
Variations to the 3rd Position
Some scholars advocated a third position between the living and the dead, which they called al-ḥayy fī ḥukm al-mayyit, living but legally dead (Al-Ashqar 1987, p. 671). This was the opinion of the late Jordanian scholar Muḥammad Sulaymān al-Ashqar (d. 2009), who argued that from one perspective we can treat a brain-dead person as living, and therefore some of the rules of the living will apply to him; for example, the distribution of his wealth to his inheritors and his wife's observance of the ʿidda period will only take place after complete cardiac arrest. However, from another angle, we may deem him to be dead and therefore treat him as we treat the dead, so that his organs can be procured and treatment can be withdrawn.
Precedents exist in Islamic law manuals for similar types of death, where a person has somatic activity but is nevertheless declared legally dead. Scholars discuss the case of the movement of a 'slain person' (madhbūḥ) who still has some semblance of biological life and yet, for legal reasons, is declared dead. Thus, these scholars argue that if the slain person's father were to die after him, the slain person would not inherit anything from him, for he is legally dead and the deceased do not inherit (Al-Ṭaḥṭāwī 1997, p. 597; Al-Ashqar 1987, p. 668). Unfortunately, al-Ashqar did not develop his ideas further; as a result, we do not know what criteria are being used to say the person is dead in respect of one law and alive in respect of another.
Position 4: Higher Brain Functioning and Organ Retrieval
Dr Rafaqat Rashid, a Muslim scholar and medical doctor from the UK, moves the debate concerning death to a slightly earlier point in time. Rashid argues that death is the permanent loss of the capacity for higher brain functioning, including the cessation of volition, sentience and voluntary action. This is when the rational soul has permanently lost its capacity to control the critical human and rational components of the body. Rashid views the functions of the soul described by Ibn al-Qayyim al-Jawziyya above and by other scholars like al-Ghazali as the soul's control over most of the conscious activities, which resembles the cerebral cortex's higher brain functions (Rashid forthcoming 19; Veatch and Ross 2015, pp. 88-100). Al-Ghazali argues that the soul is the primary integrator of the entire body's functions and that its departure is equivalent to the collapse of this integrated bodily functioning. This is exactly the function of the brain, or more specifically the cerebral cortex, vis-à-vis the body.
However, are there any criteria that will ascertain the permanent loss of higher brain functioning? Rashid accepts that while the cerebral cortex is the nearest instrument and implement of the rational soul, it is impossible to draw a clear distinction between a sentient person and a sentient non-person. Rashid concedes that, in the absence of a universally accurate anatomical criterion for a higher-brain formulation of death, the brain-stem death criteria are the closest and most accurate to employ (Rashid forthcoming 27). This understanding of the relationship between higher brain functioning, the soul and the body leads Rashid to conclude that legally (Islamically) it is permissible to retrieve the organs of a donor at this point. Rashid argues that the phenomenon of declaring someone dead is not the domain of philosophy, metaphysics or theology, but falls squarely within the realm of Islamic law. He arrives at this conclusion through a careful reading of some paradigm cases found in the classical Islamic law manuals. By way of example, the classical manuals state that the punishment of qisās (retribution for murder) can be meted out to a person who slit someone else's throat, on the basis that the victim has permanently lost all sentience, volition, sight, speech and voluntary movement, as long as the attack leads to the irreversible loss of all voluntary and involuntary movements (Rashid forthcoming 23). While Rashid accepts that it is permissible to retrieve organs at this point, no jurisdiction in the world allows organ retrieval based on the permanent loss of higher brain functioning (Veatch and Ross 2015, p. 98). The following two positions are slight variations of positions 1-3. We will mention them briefly to capture an accurate picture of the range of opinions available.
Position 5: Donation only Allowed from Living Donors
Proponents of the fifth position maintain that although receiving an organ is permitted, donating an organ is permissible only while alive. Post-mortem donation is not permissible. This opinion is held by a sizeable number of scholars from the Indian subcontinent and is also the resolution of the Indian Islamic Fiqh Academy held in 1989 (Raḥmānī 2010, 5:59).
Scholars advocating this position agree with the scholars of position one as far as it relates to the dignity afforded to the dead. Proponents of this position argue that the dead have sacrality (ḥurma), which demands that they are deposited in the state they died in. Any intervention is an affront to the dignity of the deceased and therefore impermissible. This group of scholars further, and erroneously, argues that since live organ donation fulfils the requirements of saving a life, turning to the dead is not necessary. Evidently, scholars from this group do not realise that their view seriously reduces the pool of organs available for donation to non-vital organs only, such as blood, bone marrow and certain tissues.
Position 6: Donation only Allowed from Cadaver Donors
The sixth opinion inverts the fifth position. Receiving an organ is permissible, but only for donations that are made post-mortem and not by a living donor. This opinion is held by Aḥmad Fahmī Abū Sunna (d. 2003) from the Islamic Fiqh Council of Mecca (Abū Sunna 2003) and Muhammad ʿAbd al-Raḥmān, former grand muftī of Cameroon (ʿAbdurRaḥmān 1988). Their arguments for receiving and donating organs are exactly the same as those of position two. Nevertheless, they restrict the procurement of organs to cadavers only and not the living.
However, for donating an organ, proponents of position six invoke legal maxims such as 'in the presence of two harms the least harmful must be chosen', as well as the maxim 'a minor harm is tolerated for the sake of a major gain'. Such maxims lead to the conclusion that only post-mortem organ donation is permissible. The argument uses the following logic: retrieving an organ from a cadaver infringes on the dignity of the deceased. The deceased has certain rights which must be protected. These include the right to bodily integrity and the right to a proper bathing, shrouding and a quick burial. Violating any of these rights is deemed to be harming the deceased. However, this harm is lower and more tolerable than the harm of the loss of a life which could have been saved. For advocates of this position, the harm inflicted on the cadaver will be tolerated and its dignity infringed for the sake of a higher purpose, i.e., saving the life of a dying person.
In contrast, however, Abū Sunna argues that the living has a right to a healthy life which is mandated by the Shariah, and the potential donor does not have the autonomy to violate this right. Abū Sunna believes that giving away a non-vital solid organ will eventually lead to the donor falling ill and cause further health complications. In this instance, there are two harms involved and one benefit: the harm inflicted on a healthy person, which will inevitably lead to his destruction; the harm faced by the person in need of the organ, which may lead to his death; and the benefit of a longer life should they receive an organ. Abū Sunna believes that the harm that will be inflicted on the healthy living being will be greater than the harm already faced by the dying human being. In this situation, therefore, the harm trumps the benefit and a live donation is not permissible.
A close reading of Abū Sunna's arguments reveals that his position is contingent upon a particular understanding of the state of transplant medicine in the Muslim world at the time of his writing, knowledge which he himself views as tentative. The advancements of transplant medicine, however, are overlooked. Critically engaging with Abū Sunna's beliefs may lead to an alternative perspective on live organ donation. Furthermore, since Abū Sunna's view rests on medical knowledge being tentative, should a greater degree of success and quality of health be assured for both the donor and the recipient, his view would need to be revisited.
Position 7: Suspended Judgment
A seventh position suspends judgment on the issue until further investigation. This opinion is held by the Pakistani scholar Muḥammad Ṭaqī ʿUthmānī, son of Muḥammad Shafīʿ. Despite his noncommitted view, ʿUthmānī allows people to take benefit from one of the permissive fatwas should a person require to do so (ʿUthmānī [1998] 2011; Al-Kawthari 2004).
There is a slight variation of the seventh position which is popular among some Muslims, but no serious scholar has entertained it. This opinion suggests that it is permissible to receive an organ due to necessity but not to donate one, because the necessity cannot be extended beyond one's self. The opinion is based on a narrow and individualistic understanding of 'necessity'. We have argued above (in position two) that necessity is a two-way process. Where a person is allowed to eat/utilize forbidden objects in order to save himself from destruction, it is equally a collective obligation (farḍ kifāya) on others to facilitate this for him lest he perish. Extended to the discussion on organ donation, this would mean that to donate is also a religious duty, since it fulfils the religious requirement of saving a person from perishing, which is a necessity (see Al-Yaʿqūbī 1987, p. 32 for a fuller discussion).
While this position is legally sound (by way of analogy, one is not required to make a donation of wealth even if able to do so, despite having once been a recipient of donation), it is morally despised and opens the Muslim community up to vulnerability. It has been exploited by non-Muslim politicians as a tool to argue against Muslim integration into European society (Ghaly 2012b). The Netherlands' media portrayed Muslims as a group that donated less than the national average and cited the religion of Islam itself as the main cause for the lack of donors. On the contrary, Muslims in the Netherlands were found not to deviate from the average national standpoint (Zwart and Hoffer 1998).
Conclusions
In the above detailed exploration of the seven positions, we demonstrated that the topic of organ transplantation does not admit a simple right (ḥalāl) or wrong (ḥarām) answer. The matter, from an Islamic point of view, is ijtihādī and, therefore, people are at liberty to choose whichever position suits their culture and belief systems. The absence of any mention of organ transplantation in the Islamic sources creates a space for exploring numerous options. These options are derived from a particular understanding of broader issues related to life, death, attitudes towards the dead and society, and one's approach to scripture and understanding Islamic law. What is really at play here is the tension between two competing objectives of the Shariah: the right to preservation of religion and the right to preservation of life (Opwis 2017). Those who do not allow organ transplantation do so because they believe that it violates the dictates of the Shariah vis-à-vis God's autonomy over his property, the dignity that Islam affords humans, the right to bodily integrity and the right not to be killed or be used as a means to an end. Conversely, the proponents of organ transplantation argue that the right to preservation of life is a weightier objective of the Shariah than the preservation of religion. Preservation of life is weightier because while one can express non-belief in dire necessity, there is no such substitute for life.
Thus, the proponents of position one focus more on the state of the donor and bodily integrity. More specifically, for the South Asian Deobandi and Barelwi scholars discussed above, bodily intervention is not a civil transaction (muʿāmala) bound by meaning and context where society can negotiate the best course of action. For them, the body is sacred and leaving it intact is a devotional imperative (ʾamr taʿabbudī), the rules of which will remain unchanged in perpetuity (Moosa 1998, p. 306). Furthermore, the body is the site of religious, social and cultural identity and order. With respect to protecting society from social disorder, society has always exercised an element of control over the body, most saliently through its purity laws (Douglas 2001). Moosa writes: 'The religious concern about "human dignity" in relation to transplants express anxieties about social integrity and the maintenance of social order. Any attempt to "dis(em)body" the cadaver through eviscerating surgery, may indeed signify the violation of a symbolic and social "order". This may be the cultural subtext that underlines the understanding of the Pakistani jurists' (Moosa 1998, p. 305).
In contrast to the above scholars, advocates of the second position privilege the need of the recipient over the sanctity of the donor. For them, death and the soul are empirical phenomena which can be tracked using technology. In contrast, supporters of positions one and three view death as a natural phenomenon where nature must take its course without any intervention. The soul and everything related to it is metaphysical and cannot be monitored through machines. Finally, for Rashid (position four), death is a legal phenomenon even if the body shows some semblance of biological life.
Lying beneath the scholarly ethico-legal arguments of the scholars discussed are broader assumptions regarding how these scholars view the human body. Bryan Turner (1996) argues that there are two ways that people conceptualise their bodies: embodiment and enselvement. When people make a distinction between themselves and their bodies by using phrases like 'having' or 'possessing' a body, they are embodying that body. For them, the body is external to themselves, something which they inhabit. In contrast, enselvement is when people identify themselves with their bodies. Is the body nothing more than a conglomeration of disparate interchangeable body parts, or is it integrated with the idea of personhood (see Haddow 2000; Haddow 2005; Haque 2008; and Rashid 2018 for more on the topic of personhood and its place in the organ donation debate)? Studies have shown that the more integrated body parts are with the idea of personhood, the more sacred they are considered and the less likely they are to be donated. The idea is in part based on how people view their 'body image', which may not necessarily have any relation to biological facticity but can be influenced by history, tradition, religion and custom (Haddow 2000; Ali 2019a). While none of the scholars discussed above reject the notion of the soul, it seems that advocates of positions one, three and five view the body through the prism of enselvement. For them, donating a body part is akin to donating the self, while scholars of the remaining positions do not confer the same amount of emotional attachment to the physical body once death has taken place (see Hamdy 2012, pp. 102-4 for al-Ṭanṭāwī's position).
We have mentioned above that the issue of organ transplantation and donation is an ijtihādī issue. On this basis, we opted for a legally pluralist approach to the issue. Hopefully, the detailed exploration of the different positions will allow people to make theologically informed decisions without feeling morally and theologically culpable for their choices. However, it must be acknowledged that people have their own understandings of bodily integrity, death and dying. Despite religion playing a big role in people's decision making, it is not the sole arbiter. Deciding to become an organ donor is a subjective and complex process involving various factors and the prioritising of values (some of them elaborated in this article). Understanding Muslim viewpoints on organ donation requires a thorough understanding of these factors and their importance to Muslims, whose decisions, based on these values, need to be respected.